Effective control of flows in geological formations is a key factor for exploiting resources that are highly important to society, such as groundwater, geothermal energy, geological storage of CO2, high-quality fossil fuels (gas and oil), and, potentially, large-scale storage of energy in the form of heat or gas. Today, 60% of the world's energy consumption is based on oil and natural gas resources. In addition, 19% is based on coal, which requires large-scale CO2 storage to be exploited safely without a large-scale impact on the climate. Geothermal energy is an important source of green energy, which will become even more valuable in the future as the supply of fossil fuels is expected to decrease. Gas storage is today an integrated part of the energy supply and provides reliable large-scale storage of energy. It makes it possible both to attenuate the volatility of energy prices and to ensure energy security. The use of all of the above resources will benefit from reliable control of the flow properties around the wells used to exploit them. Increased injectivity is particularly important for exploiting resources in tight formations or where high flow rates are required. For enhanced geothermal applications, rock fracturing is a prerequisite for economical exploitation. For CO2 injection, where large volumes of fluid have to be injected, high injectivity limits the increase in pressure near the well and simplifies the operation. Tight formations contain much of the hydrocarbon reserves. The exploitation of these formations has been a driving force for the technology of fracking, which is a more drastic well stimulation than the traditional ones. When increasing the injectivity of a well, it is of vital importance to be able to predict and control fracturing, in order to avoid unwanted fractures or even induced seismic events, which may cause environmental damage as well as the disruption of operations. The key to enabling high injectivity is to induce and control the fracturing process using the coupling between fluid flow, heat, and rock mechanics. The failure of the rock and the propagation of fractures depend on both global and local effects: the stress distribution is intrinsically global, while the failure criteria are local. It is therefore important to have flexible simulation tools that can cover large-scale features with complex geometry and, at the same time, include the specific fracture dynamics where the fracturing processes occur. Numerical methods for simulating fracturing can typically be classified as either continuum- or discontinuum-based methods. The modeling of fracturing in brittle materials like rock is particularly difficult. Fracture propagation is determined by the stress field in the vicinity of the fracture tip. As has been shown, failure happens when the global energy release is larger than the energy required by the fracturing process. The former depends on the global stress field, while the latter is associated with the energy needed at small scales to create a fracture. For brittle materials, where failure happens at very small scales, linear elasticity governs the behavior in most of the domain, but the solution of linear elasticity in the presence of fractures is singular near the fracture tip (see the literature for a general description). This introduces challenges for numerical calculations and often results in artificial grid dependence of the simulated dynamics.
From a physical point of view, such effects are removed when plasticity is introduced; however, the length scale of the plastic region may be prohibitively small to resolve numerically on the original model. Several techniques have been introduced to incorporate the singularity at the fracture tip explicitly into the numerical calculations, for example specific tip elements in the finite element method (FEM). In general, methods based on global energy arguments are less sensitive to the choice of numerical method than those that use estimates of the strength of the singularity. In the case of the fracturing of natural rock, the uncertainty in the model is large, small-scale heterogeneities are important, and several different fracturing mechanisms complicate the structure. Discrete modeling techniques have been very successful in this area, in particular when complex behaviors have to be simulated. An essential component for modeling fracturing is therefore the ability to account in a flexible manner for both large- and small-scale behaviors. This is reflected in the widespread use of tools based on analytical models for hydraulic fracturing (see the literature for a review). However, it becomes a challenge to incorporate the interaction of fractures and fine-scale features into those simulation tools. Because of their simplicity and flexibility for incorporating different fracturing mechanisms, discrete element methods (DEM), also called distinct element methods in their explicit variants, have been one of the main techniques used for hydraulic fracturing in commercial simulators. These methods exploit the ability to easily modify the interactions and connections between the discrete particles or elements. For continuum models, such behavior is more difficult to account for. However, the parameters in the DEM model are not directly related to physical macro-scale parameters, and there are restrictions on the range of parameters that can be simulated. In particular, it has been shown that only Poisson's ratios (in plane stress) smaller than a certain threshold can be considered. The modified discrete element method (MDEM) was introduced to remove this restriction; it also gives simpler relationships between the macro-parameters in the linear elastic domain, while keeping the advantages of DEM in the treatment of fractures.
in this work ,we show the connection between mdem and the recent development of virtual element methods ( vem ) .such approach provides a simple derivation of the mdem framework , and also highlight the discrepancy of the original dem model from linear elasticity .the linear version of the vem methods for elasticity can be used to extend first - order fem on simplex grids , which was the basis of the mdem method , to general polyhedral grids .we use the fact that both dem , mdem and vem share the same degrees of freedom in the case of simplex grids to derive smooth couplings between these methods .a similar approach has been followed for coupling fem with dem previously .the introduction of vem opens the possibility for flexible gridding on general polyhedral grids in the far - field region while keeping the dem / mdem flexibility in the near fracture domains .geological formations are typically the result of deposition and erosion processes , which lead to layered structures and faults .geometrical models using polyhedral grids , such as corner point grid , skua grid and cut - cell are natural in this context and correspond to grids used in the industry of flow modeling in reservoirs .our proposed method therefore may simplify the incorporation of fracture simulation in realistic subsurface applications .we study the methods for the standard equations of linear elasticity given by where is the cauchy stress tensor , the infinitesimal strain tensor and the displacement field .the linear operator is the fourth - order cauchy stiffness tensor . in kelvin notation , a three - dimensional symmetric tensor is represented as an element of with components ^{{t}}\ ] ] while a two - dimensional symmetric tensor is represented by a vector in given by ^{{t}}$ ] . using this notation be represented by a matrix and inner products of tensors correspond to the normal inner - product of vectors . for isotropic materials, we have the constitutive equations where and denote the lam constants .the elastic energy density is given by where we use the standard scalar product for matrices defined as for any two matrices .discrete element methods consist of modeling the mechanical behaviour of a continuum material by representing it as a set of particles , or discrete elements .the forces in the material are then modeled as interaction forces between the particles .there are several variants of the discrete element method . herewe will use the simple version introduced in where the particles are discs in 2d and spheres in 3d .we will also restrict the treatment to the linear case to compare with linear elasticity , but this is not a restriction of the method . for more in - depth presentation of different variantssee or and the references therein .the starting point of the dem methods has its background in the description of granular media .this has a long history starting from the description of the contact force by hertz and . in this fieldan important question was to study how the effective elastic modulus of the bulk was related to the microscopic description .in dem the basic ideas is to use a microscopic description to simulate the behavior of the bulk modulus . for a complete relation betweengeneral dem method and linear elasticity using shear forces , it is necessary to introduce the material laws for micropolar materials , see .this introduces an extra variable associated with local rotation , as illustrated in figure [ fig : dem_mdem ] . 
For an isotropic micropolar medium, the stress-strain relation involves an extra variable that describes the local rotation and represents the asymmetric part of the strain tensor, i.e., rigid-body rotations. It can also be written in terms of displacements. The state variables are the displacement and the local rotation, and the total elastic energy is defined accordingly. By computing the variation of the energy, we obtain the governing equations of the system, that is, the linear momentum conservation equation [eq:goveqmicro] and the angular momentum conservation equation. We will use this expression of the stress to compare with the DEM model, which includes local rotations. Let us consider two particles that are connected through a contact. For each particle, we denote its position. The degrees of freedom of the system are the displacement and the microrotation of each particle. Furthermore, we introduce the distance between the particles. We use the cross-product to represent the action of a rotation, so that the rotation given by a vector is the corresponding mapping. Our description of DEM follows the literature, with slight differences in notation. We introduce the normal and shear forces; for a given contact, the relative shear and normal displacements are defined accordingly. Note that in the case where two adjacent spheres roll one over the other without sliding, the shear term vanishes, so that it accounts only for the sliding part of the tangential component. Let us define the total force over a contact. Using the definition of the stress tensor, at the contact between two spheres, and assuming that there exists a non-zero contact surface, the force can be written in terms of the branch vector, which points in the normal direction. Cauchy's formula for the stress matrix, which is meant to invert this relation, involves the number of contact points, that is, the number of spheres in contact. Let us consider a linear deformation and split it into its symmetric and skew-symmetric parts. Since the skew-symmetric part corresponds to a rotation, and abusing the notation, we will write the same rotation operator indifferently in matrix or vector form. To proceed with the identification of the stress tensor, we assume small displacements, that is, the deformation tensors are small compared with the identity, and we also assume that the rotations are bounded by some constant. Hence, we obtain the following expression for the stress tensor,
\[
\sigma \;=\; \cdots \;+\; k_s\Bigl(\bigl[({E}\,I^m)\otimes I^m\bigr] + \bigl[\bigl(({R}-\theta)\times I^m\bigr)\otimes I^m\bigr]\Bigr),
\]
where the omitted part contains the normal-stiffness ($k_n$) contribution. To illustrate the restriction that this expression imposes on the parameters, we consider a square packing in 3D. In this case, there is a fixed number of contact points and, using the expressions above, we obtain the effective coefficients; note that we do not take the volume equal to the volume of the sphere but to the effective volume. In the expression above, the quantities must be seen as matrices and not as vectors. We can also identify the parameter of local rotation of the micropolar medium with the local rotation in the DEM model. This gives the corresponding Lamé coefficients; hence, we can conclude that, for square lattices, the model is only stable for a restricted range of parameters. However, this is not a restriction for simplex grids. Using the same approach as above, but now for regular simplices, it is shown in the literature that, since the stiffnesses are naturally positive, the Poisson's ratio is restricted in the 3D case.
For the 2D case, we obtain the same expression in the case of plane-strain boundary conditions, while in the case of plane stress we get a slightly different bound on the Poisson's ratio. These limitations on the physical parameters have been the main motivation for introducing MDEM. Comparing the expression above to the governing equations for a micropolar medium, we see that the conservation of torque is equivalent to the conservation of angular momentum. Indeed, for a square lattice, the requirement that the torque is zero corresponds exactly to the angular momentum conservation equation. This also highlights the need for introducing rotational degrees of freedom in the DEM method if shear forces are used. If not, one gets the non-physical effect that rigid rotations introduce forces. Notice that the method referred to here as DEM is a specific version of a lattice model where the edges of a simplex grid are used to calculate the forces and the normal force is independent of the rotation of the particles. The last statement can be understood as neglecting rolling resistance. We also notice that the introduction of angles has been made in the finite element literature for membrane problems. In this context, it is called the "drilling degree of freedom"; see the literature for a review. The motivation has been to remove the singularity of the stiffness matrix, and the angular degree of freedom has a stiffening effect on the structure. In fact, the recommended value for the free parameter associated with the non-symmetric part in the variational principle is the shear modulus. The degrees of freedom are exactly the same as in DEM. The motivation for introducing the MDEM method is twofold. First, in DEM, the relation between the macro-parameters and the micro-parameters is not simple. Secondly, given a configuration of particles, it is not possible to reproduce all the parameters associated with isotropic materials, as discussed above. The same type of restriction also holds for hexahedral and square grids. Thermodynamical considerations show that, for isotropic materials, the Poisson's ratio only has to lie within a broad admissible range. In this perspective, the restriction established above for DEM appears very restrictive. The ability to vary the mechanical properties even for this configuration introduces non-central forces between the particles, in this context called shear forces. As discussed above, this can only be done if extra local rotation variables are introduced. This has two disadvantages: first, it is more complicated, and secondly, the final system is equivalent to a micropolar medium and not a purely elastic medium. Restricting oneself to central forces may therefore be preferable in some cases, but one should remain aware that such an assumption comes with a very strong restriction on the material parameters.
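For concreteness, the bonded contact law with both normal and shear (non-central) forces described above can be sketched as follows. This is a minimal illustration in Python/NumPy for 2D disc particles; the function name, the linear stiffness parameters `k_n` and `k_s`, and the equal-radius treatment of rolling are our own assumptions, not the exact formulation of any particular DEM code.

```python
import numpy as np

def dem_contact_force_2d(x_i, x_j, u_i, u_j, theta_i, theta_j, k_n, k_s):
    """Bonded-contact force on particle i from particle j (2D sketch).

    x_*: reference particle centres, u_*: displacements,
    theta_*: particle rotations (scalars in 2D),
    k_n, k_s: normal and shear bond stiffnesses (assumed linear)."""
    branch = x_j - x_i                     # branch vector I^m
    L = np.linalg.norm(branch)
    n = branch / L                         # unit normal
    t = np.array([-n[1], n[0]])            # unit tangent
    du = u_j - u_i                         # relative displacement
    u_n = du @ n                           # normal (stretch) part
    # shear part = tangential slip minus the rolling contribution of the
    # particle rotations (equal radii L/2 assumed): pure rolling gives zero
    u_s = du @ t - 0.5 * L * (theta_i + theta_j)
    return k_n * u_n * n + k_s * u_s * t   # force on i; the opposite acts on j
```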
In the literature, the Cauchy relations are considered; these are known to be necessary for an elastic material in which only central forces are present, each atom or molecule is a center of symmetry, and the interaction forces are well approximated by a harmonic potential. It is shown that, for an isotropic material, the Cauchy relations fix the value of the Poisson's ratio. This very strong restriction makes it difficult to consider models based only on central forces. The basic idea of MDEM is to use an interaction region, instead of looking at the forces on each particle as the result of interactions with neighboring particles as in DEM, see Figure [fig:vem_dem]. The force at a particle is then given by the sum of the forces computed at the particle for each interaction region the particle belongs to. In the finite element setting, the interaction region corresponds to an element and a particle to a node. The calculation of the forces is equivalent to that of linear finite elements. The original derivation was based on an explicit representation of the geometry and calculation of the forces. Here, we will base our derivation on the variational form of linear elasticity. To simplify the derivation, we will use the fact that, for simplex grids, there exists a one-to-one mapping between non-rigid-body linear deformations and the lengths of the edges. By non-rigid-body linear deformations, we mean the quotient of the space of linear deformations by the space of rigid-body deformations (translations and rotations). Such a space is in bijection with the symmetric matrices, that is, the strain tensors. We use the notation introduced earlier, but with all tensors represented in Kelvin notation, in which the tensor inner product reduces to the ordinary inner product of vectors. For simplices, one can relate the non-zero strain states to the edge lengths; note that, to simplify the expressions, the notation differs slightly from that of the previous section. We write the energy of the element in terms of a symmetric positive-definite matrix to be determined. This tensor, which we will call the _MDEM stiffness tensor_ in this paper, depends on the material parameters. This formulation fulfills the requirement of linear elasticity that rigid motions do not contribute to the energy. The normal forces can be calculated as the generalized forces associated with the edge-extension variables; from this, we see that assuming that only central forces are present and that the shear forces are negligible is equivalent to requiring that the MDEM stiffness tensor is diagonal. We use the analogous definition of the stress, again exploiting the Kelvin notation. If we consider the energy of the same system for a linear elastic medium assuming constant stress and strain, which is the case for linear elements on simplex grids, the result involves the representation of the fourth-order stiffness tensor in Kelvin notation. Note that the stiffness is meant either as a tensor (in the first equality) or as a matrix acting on vectors written in Kelvin notation (in the second).
for the sake of simplicity, we will continue to do the same abuse of notations in the following .we see that one reproduces the energy of a linear elastic media if which gives that the difference between the matrix used in dem and the matrix needed to reproduce linear elasticity used in mdem is that the latter case normally is a full matrix .since dem methods solve newtons s equation with a dissipation term it will minimize this energy .the same is the case of standard galerkin discretization of linear fem on simplices , which by construction have the same energy functional as mdem .consequently , the only difference , if no fracture mechanism is present will be the method for computing the solution to the whole system of equations .the dem methods rewrite the equations in the form of newton laws with an artificial damping term and let time evolve to converge to the solution , see . for fem ,the linear equations are usually solved directly .the advantage of using the mdem formulation compared to fem is that it offers the flexibility to choose independently on each element if a force should be computed using linear elasticity or if a more traditional dem calculation should be used .the ability to associate the edge lengths to the non rigid body motions is only possible for simplex grids .an other important aspect to this derivation is that the degrees of freedom uniquely define all linear motions and no others .the importance of the last part will be more evident after comparison with the vem method .in contrast to fem , the virtual element method seeks to provide consistency up to the right polynomial order of the equation in the physical space .this is done by approximating the bilinear form only using the definition of the degrees of freedom , as described below .the fem framework on the other hand defines the assembly on reference elements , using a set of specific basis functions . this however has disadvantages for general grids where the mappings may be ill defined or complicated .vem avoids this problem by only working in physical space using virtual elements and not computing the galerkin approximation of the bilinear form exactly .this comes with a freedom in the definition of the method and a cost in accuracy measured in term of the energy norm . as the classical finite element method, the ve method starts from the linear elasticity equations written in the weak form of equation [ eq : lin_elast_cont ] , we have also introduced the symmetric gradient given by for any displacement .the fundamental idea in the ve method is to compute on each element an approximation of the bilinear form that , in addition of being symmetric , positive definite and coercive with respect to the non rigid - body motions , it is also exact for linear functions .the correspondence between mdem and vem we study here holds only for a first - order vem method .when higher order methods are used , the exactness must hold for polynomials of a given degree where the degree determines the order of the method .these methods were first introduced as mimetic finite element methods but later developed further under the name of virtual element methods ( see for discussions ) .the degrees of freedom are chosen as in the standard finite element methods to ensure the continuity at the boundaries and an element - wise assembly of the bilinear forms .we have followed the implementation described in . 
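Before turning to VEM, the identification above can be made concrete for a single 2D triangle. The sketch below (Python/NumPy) builds the linear map from strain to edge extensions and the full MDEM stiffness matrix that reproduces the linear-elastic energy; the Kelvin ordering [E_xx, E_yy, sqrt(2) E_xy] and the use of absolute edge-length changes are our own illustrative conventions, not necessarily those of the original implementation.

```python
import numpy as np

def edge_strain_map(p):
    """Map M from Kelvin strain [Exx, Eyy, sqrt(2)Exy] to the extensions of
    the three edges of a 2D triangle with node coordinates p (3x2 array)."""
    edges = [(0, 1), (1, 2), (2, 0)]
    M = np.zeros((3, 3))
    for m, (i, j) in enumerate(edges):
        l = p[j] - p[i]
        L = np.linalg.norm(l)
        n = l / L
        # extension of edge m under a uniform strain E: dL = L * n^T E n
        M[m] = L * np.array([n[0] ** 2, n[1] ** 2, np.sqrt(2) * n[0] * n[1]])
    return M

def mdem_stiffness(C_kelvin, p):
    """Full MDEM stiffness D over the edge extensions, chosen so that
    0.5 * dl^T D dl equals the elastic energy 0.5 * area * E^T C E."""
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
    Minv = np.linalg.inv(edge_strain_map(p))
    return Minv.T @ (area * C_kelvin) @ Minv
```

Requiring this matrix to be diagonal, as in plain DEM with only central forces, is in general incompatible with an arbitrary Cauchy stiffness, which is exactly the restriction discussed above.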
in a first - order ve method ,the projection operator into the space of linear displacement with respect to the energy norm has to be computed locally for each cell .the ve approach ensures that the projection operator can be computed exactly for each basis element .the projection operator is defined with respect to the metric induced by the bilinear form .the projection is self - adjoint so that we have the following pythagoras identity , for all displacement field and ( in order to keep this introduction simple , we do not state the requirements on regularity which is needed for the displacement fields ) . in , an explicit expression for is given so that we do not even have to compute the projection .indeed , we have where is the projection on the space of translations and pure rotations and the projection on the space of linear strain displacement . the spaces and are defined as then , the discrete bilinear form is defined as where is a symmetric positive matrix which is chosen such that remains coercive .note the similarities between and . since and are orthogonal and maps into the null space of ( rotations do not produce any change in the energy ), we have that the first term on the right - hand side of and can be simplified to the expression immediately guarantees the consistency of the method , as we get from that , for linear displacements , the discrete energy coincides with the exact energy .since the projection operator can be computed exactly for all elements in the basis - and in particular for the _ virtual _ basis elements for which we do not have explicit expressions - the local matrix can be written only in terms of the degrees of freedom of the method . in our casethe degrees of freedom of the method are the value of displacement at the node .let us denote a basis for these degrees of freedom .the matrix is given by in , is the projection operator from the values of node displacements to the space of constant shear strain and , which corresponds to a discretization of in , is a symmetric positive matrix which guarantees the positivity of .there is a large amount of freedom in the choice of but it has to scale correctly .we choose the same as in .the matrix in corresponds to the tensor rewritten in kelvin notations so that , in three dimensions , we have finally , the matrices are used to assemble the global matrix corresponding to . in this paper, we use the implementation available as open source through the matlab reservoir simulation toolbox ( mrst ) .the approach of splitting the calculation of the energy in terms of a consistent part block on one side and a higher order block one the other side was also used in the _ free formulation _ of finite elements . 
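As a compact illustration of the element-wise assembly just described, the following Python/NumPy sketch builds a first-order VEM local stiffness matrix for a 2D polygonal cell: a consistency part obtained from the exactly computable cell-average strain, plus a simple stabilization acting on the complement of the linear modes. The stabilization scaling `alpha * trace(Kc)/(2n)` is one common choice and is an assumption on our part, not necessarily the choice made in the MRST implementation referenced in the text.

```python
import numpy as np

def vem_local_stiffness_2d(p, C, alpha=1.0):
    """First-order VEM local stiffness for a 2D polygon (sketch).

    p: (n,2) vertex coordinates in counter-clockwise order.
    C: 3x3 Cauchy stiffness in Kelvin notation.
    alpha: stabilization scaling (an assumed, non-unique choice)."""
    n = p.shape[0]
    x, y = p[:, 0], p[:, 1]
    area = 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

    # Wc: nodal displacements -> cell-averaged Kelvin strain; exact for the
    # virtual space because only boundary (nodal) values enter the average.
    Wc = np.zeros((3, 2 * n))
    for i in range(n):
        nxt, prv = p[(i + 1) % n], p[(i - 1) % n]
        qx = 0.5 * (nxt[1] - prv[1])      # integrated outward normal, x-part
        qy = -0.5 * (nxt[0] - prv[0])     # integrated outward normal, y-part
        Wc[0, 2 * i] = qx / area
        Wc[1, 2 * i + 1] = qy / area
        Wc[2, 2 * i] = qy / (np.sqrt(2) * area)
        Wc[2, 2 * i + 1] = qx / (np.sqrt(2) * area)
    Kc = area * Wc.T @ C @ Wc             # consistency (projection) part

    # Nodal values of the six linear modes: translations, rotation, strains.
    xb, yb = x - x.mean(), y - y.mean()
    N = np.zeros((2 * n, 6))
    N[0::2, 0] = 1.0
    N[1::2, 1] = 1.0
    N[0::2, 2], N[1::2, 2] = yb, -xb                           # rigid rotation
    N[0::2, 3] = xb                                            # Exx mode
    N[1::2, 4] = yb                                            # Eyy mode
    N[0::2, 5], N[1::2, 5] = yb / np.sqrt(2), xb / np.sqrt(2)  # Exy mode
    P = N @ np.linalg.solve(N.T @ N, N.T)        # projection onto linear modes
    S = alpha * np.trace(Kc) / (2 * n) * (np.eye(2 * n) - P)   # stabilization
    return Kc + S
```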
in this casethe motivation was to find an alternative element formulation , for simplex grids the regularization term in the expression for the local stiffness matrix in equation [ eq : assembvem ] is zero because in this case the projection operator is equal to the identity .if we introduce the operator from the degrees of freedom for the element to the edge expansions , we can compare the two expression for the local energy , [ eq : ecomp ] and one easily identifies the operators and as the projection operator to the non - rigid body motions represented in kelvin notation type of symmetric strain .the degrees of freedom span exactly the space of linear displacement and do not excite any higher order modes with nonzero energy .an illustration of the different concepts is given in figure [ fig : vem_dem ] .we point that both dem and vem calculate the basic stiffness matrix in real space , contrary to most fem methods which do this on the reference element .when dealing with simplex grids the advantage of using the dem method within an explicit solving strategy ( often called distinct element method ) is that the calculation of the edge length extensions can be calculated for each edge , and only the matrices and the cauchy stiffness tensor are needed locally .these matrices only operate on the small space of non rigid motion with dimension while the operator works on the all the deformations which have dimension .the edge length can thus be seen as an efficient compact representation of the non rigid motions , which holds only on simplices .as we have seen , both mdem and vem can be derived from the calculation of the energy in each element .for the linear elastic part it is not necessary to introduce extra angular degrees of freedom .however , this may be needed for certain dem methods . in this case, we refer to the use of drilling elements in combination with the use of the free formulation of fem , which , as discussed earlier , shares some fundamental ideas with vem , such as _ energy orthogonality _( which corresponds ) , and _ rc - modes exactness _ ( whic ) and the freedom in choosing the stabilization term .we introduce a coupling with a fluid flow through the biot s equations .the biot s equations are given by [ eq : poroelast ] where denotes the fluid content .the fluid content depends on the storativity , the fluid pressure and on the rock volume change given by which is weighted by the biot - willis constant . in, denotes the permeability and the fluid viscosity . 
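For reference, the quasi-static Biot system referred to above can be written, under commonly used sign conventions (the exact conventions are an assumption on our part), with $s_c$ the storativity, $\alpha$ the Biot-Willis coefficient, $k$ the permeability and $\mu$ the fluid viscosity, as

\[
\begin{aligned}
-\nabla\cdot\bigl(\mathsf{C}:\boldsymbol{\epsilon}({\bm{u}}) - \alpha\,p\,\mathbf{I}\bigr) &= {\bm{f}},\\
\partial_t\bigl(s_c\,p + \alpha\,\nabla\cdot{\bm{u}}\bigr) \;-\; \nabla\cdot\Bigl(\frac{k}{\mu}\,\nabla p\Bigr) &= q,
\end{aligned}
\]

where the first equation is the momentum balance and the quantity inside the time derivative is the fluid content.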
For flow, and in particular if multiphase behavior is considered, the most successful methods have been based on finite volume methods. The basic time discretization using the two-point flux method or multi-point flux methods can be written as
\[
s_c\,\frac{p^{n+1}-p^{n}}{\Delta t} \;-\; \mathrm{div}_f\!\left[\frac{k}{\mu_v}\,\mathrm{grad}_p\!\left[p^{n+1}\right]\right] = q.
\]
Here, $\mathrm{grad}_p$ is a discrete gradient operator from cell pressures to faces, and $\mathrm{div}_f$ is the corresponding discrete divergence acting on face fluxes. The source term $q$ represents the injection of fluids; see the literature for more details on these discrete operators. Given an implicit time discretization, the coupling term in the Biot case requires a discrete divergence operator for the displacement field. Note that this discrete operator can be implemented exactly for first-order VEM. The semi-discrete equations are
\[
\begin{array}{rcl}
A\,{\bm{u}}^{n+1} - \alpha\,\mathrm{div}_d^{\,T}\!\left[p^{n+1}\right] & = & -f,\\[4pt]
\alpha\,\mathrm{div}_d\!\left[{\bm{u}}^{n+1}\right] + s_c\,p^{n+1} - \Delta t\,\mathrm{div}_f\!\left[\dfrac{k}{\mu_v}\,\mathrm{grad}_p\!\left[p^{n+1}\right]\right] & = & \alpha\,\mathrm{div}_d\!\left[{\bm{u}}^{n}\right] + s_c\,p^{n} + q.
\end{array}
\]
Here, $A$ is the system matrix of the mechanical system, $\mathrm{div}_d$ is the divergence operator acting on the nodal displacements and gives the volume expansion of a cell, and $\alpha$ is the Biot parameter, which depends on the ratio between the rock and fluid compressibility. In the context of MDEM, when the simulation of fracturing is the main purpose, we normally treat implicitly only the volume-expansion term in the transport equation for the fractured cells, where the expansion is also the largest. Except for this term, an explicit update of the pressure is used. This approximation also avoids problems due to small permeabilities, which can cause numerical locking and artificial oscillations in the fluid pressure. In the MDEM method, before an element is fractured, it behaves as in FEM, and the MDEM stiffness tensor is obtained from the Cauchy stiffness tensor through the relation established above. Depending on the physical situation, a fracturing criterion based on stress is used, for example Mohr-Coulomb. In the examples in this paper, we will use a simple tensile failure criterion formulated in terms of the tensile strength. After failure, we use a central-force model, where the forces are calculated individually for each edge using a diagonal stiffness matrix. If a fracture is closing, the effective force will in this case be as for DEM using only central forces. As for all methods trying to simulate fracturing, the critical point is how to avoid grid-dependent fracturing due to the singularity of the stress field near the fracture front. In this work, however, the main aim is to see how the far-field solution can be simulated using general grids, independently of the fracture modeling. The solution method in MDEM is chosen to be similar to the one used in DEM, that is, explicit time integration of Newton's laws.
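The coupled update described by the semi-discrete equations above can be sketched as a single block linear solve per time step. In the snippet below (Python/SciPy), `A`, `Ddiv` and `L` stand for the mechanics stiffness, the discrete divergence of nodal displacements and the TPFA pressure operator assembled elsewhere; the names, the sign conventions and the fully implicit treatment are our assumptions rather than the exact implementation used in MRST.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def biot_implicit_step(A, Ddiv, L, s_c, alpha, dt, u_n, p_n, f, q):
    """One implicit step of the linearized Biot system (sketch).

    A: mechanics system matrix, Ddiv: discrete divergence (cells x nodal dofs),
    L: discrete div_f[(k/mu) grad_p[.]] operator, s_c: storativity,
    alpha: Biot coefficient, dt: time step."""
    nu, nc = A.shape[0], L.shape[0]
    # [ A            -alpha*Ddiv^T ] [u^{n+1}]   [ -f                           ]
    # [ alpha*Ddiv    s_c*I - dt*L ] [p^{n+1}] = [ alpha*Ddiv u^n + s_c p^n + q ]
    sys = sp.bmat([[A, -alpha * Ddiv.T],
                   [alpha * Ddiv, s_c * sp.identity(nc) - dt * L]], format="csr")
    rhs = np.concatenate([-f, alpha * (Ddiv @ u_n) + s_c * p_n + q])
    sol = spla.spsolve(sys, rhs)
    return sol[:nu], sol[nu:]
```

Replacing the pressure row by an explicit update, except for the volume-expansion term in the fractured cells, gives the cheaper variant mentioned above.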
For the mechanical subproblem, a local artificial damping term, as found in the DEM literature, is often preferable in order to obtain fast convergence to the physical stationary state. This is not a physical damping mechanism, but it avoids large differences in local time-step restrictions. The advantage of this approach is that it is less sensitive to global changes than the traditional FEM approach, which computes the stationary state directly by solving the linearized equations. This is particularly important when discontinuous changes of the forces, due to changes in the medium, are present. For MDEM, this is the case at the onset of fracturing and when the contact properties of fractured cells change. The result in all cases is that the forces are discontinuous with respect to the degrees of freedom. Explicit methods have been shown to have advantages for such problems even if the main dynamics is globally elliptic, because the non-linearities in the problem impose stronger time-step size requirements than those needed for the explicit integration of the elliptic part. As the damping criterion depends on the concept of total nodal forces, it can also be used on the nodes connected with VEM-type force calculations. No other modification apart from the force calculations is needed. We demonstrate the features of the presented framework with two examples. First, we show how the effective parameters of linear elasticity in simple DEM with only normal forces depend on the particular choice of grid cells. Second, we use VEM, MDEM, and DEM on a general polyhedral grid to demonstrate how these can be combined within a uniform framework. When a fracture has occurred in a cell, but the whole system evolves in such a way that the fracture closes again, we should have forces normal to the fracture faces and, depending on the fracture model, forces along the fracture. Here, we choose to model this by an effective stiffness tensor. Indeed, we keep using the DEM model (and solver) after the fracture closes, meaning that the material parameters for the cell are given by the diagonal MDEM stiffness tensor defined above. From Section [subsec:connec], we know that it also corresponds to a unique Cauchy stiffness tensor. Let us study the effect of this choice and measure the difference between the original and this _post-fracturing_ stiffness tensor. If we denote the one-to-one transformation from the MDEM stiffness tensor to the Cauchy stiffness tensor, we can compute, for a given configuration, the difference between the two. We consider an equilateral triangle and an isotropic material with a given Young's modulus and Poisson's ratio. For this value and this shape, the MDEM stiffness matrix is diagonal. This reference triangle is plotted in yellow in Figure [fig:c_dem_effective_grid]. We keep the same Cauchy stiffness tensor but modify the shape of the triangle by translating one of its corners.
For each configuration that is obtained, we get a different post-fracturing MDEM stiffness tensor, and we plot the six components of the difference with respect to the original Cauchy stiffness tensor. We notice that the changes in two of the components are zero on the indicated line in the figure. This shows that, in this case, the effective model has biaxial symmetry, as expected. We also notice that there are quite strong changes in the effective parameters even for relatively small changes in the triangles. It should also be noted that a break of the edges along the x-axis, which in the MDEM fracture model results in putting one of the corresponding diagonal elements to zero, only changes one of the components. This is because this element only acts in the x direction. We use MRST to generate the unstructured grid presented in Figure [fig:vem_dem_fracture] and set up an example which combines the use of VEM for the general cell shapes with the use of MDEM for the triangular cells, which can easily be switched to a DEM model when a fracture is created. In the middle of the domain, within a given diameter, we have placed cells associated with a well. The total grid size, the permeability, and the porosity are fixed for the test case; the compressibility of the fluid is similar to that of water, and the injected volume corresponds to the pore volume of all well cells over one hour. The solution is shown after a given injection time. The initial condition was given by the mechanical solution with a prescribed force at the top and rolling boundary conditions everywhere else. The initial condition for the pressure is a constant pressure. The mechanical parameters of the background material are fixed, the well cells are assigned a different Young's modulus, and finally a tensile strength is prescribed. We observe that the fracture propagates in the direction such that the fracture plane (or line in 2D) is aligned with the maximum-stress plane (or line in 2D). We get a slight grid-orientation effect, since there is no way a planar fracture in the y direction can be obtained using the given triangular grid. The interface between the grids has large steps in cell sizes and includes hanging nodes, but no effects due to these features are observed as long as the fracture does not reach the interface.
Near the tip of the fracture, we observe oscillations of the cell stresses, which is a well-known problem for first-order triangular elements. However, the values associated with the nodes are better approximated, and patch-recovery techniques can be used to obtain better stress fields, as seen in Figure [fig:vem_dem_fracture_small]. Note that the dynamics of DEM or MDEM is associated with the sum of the forces from all elements around a node, not with individual cell stresses. In this paper we have shown how MDEM and VEM for linear elasticity share the same basic idea of projection onto the states of linear non-rigid motion, although with different representations: edge extensions for MDEM and a polynomial basis for VEM. Both are equivalent to linear FEM on simplices, but the viewpoint presented here gives a more direct way of seeing how they relate. Since both share the same degrees of freedom, except possibly for the angular degrees of freedom of MDEM/DEM, we can combine these methods with minimal implementation issues. This is used to simulate fracture growth, where the near-field region is described by a simplex grid, which is suited for DEM and MDEM, while a general polyhedral grid is used in the far-field region. The coupling between the grids, which can contain hanging nodes and significant changes in cell shapes and sizes, can be done without introducing large errors. We see this method as a valuable contribution to the flexible coupling of MDEM/DEM methods with traditional reservoir modeling grids.

This publication has been produced with support from the KPN project _Controlled Fracturing for Increased Recovery_. The authors acknowledge the following partners for their contributions: Lundin and the Research Council of Norway (244506/E30).

P. G. Bergan, M. K. Nygård, and R. O. Bjærum. Free formulation elements with drilling freedoms for stability analysis of shells, pages 164-182. Springer Berlin Heidelberg, Berlin, Heidelberg, 1990.

Emmanuel J. Gringarten, Guven Burc Arpat, Mohamed Aymen Haouesse, Anne Dutranois, Laurent Deny, Stanislas Jayr, Anne-Laure Tertois, Jean-Laurent Mallet, Andrea Bernal, and Long X. Nghiem. New grids for robust reservoir modeling. 2008.

Stein Krogstad, Knut-Andreas Lie, Olav Møyner, Halvor Møll Nilsen, Xavier Raynaud, Bård Skaflestad, et al. MRST-AD -- an open-source framework for rapid prototyping and evaluation of reservoir simulation problems. In _SPE Reservoir Simulation Symposium_. Society of Petroleum Engineers, 2015.

Knut-Andreas Lie, Stein Krogstad, Ingeborg Skjelkvåle Ligaarden, Jostein Roald Natvig, Halvor Nilsen, and Bård Skaflestad. Open-source MATLAB implementation of consistent discretisations on complex grids. 16:297-322, 2012.

Xavier Raynaud, Halvor Møll Nilsen, and Odd Andersen. Virtual element method for geomechanical simulations of reservoir models. In _ECMOR XV -- 15th European Conference on the Mathematics of Oil Recovery, Amsterdam, Netherlands_, 2016.
Simulation of fracturing processes in porous rocks can be divided into two main branches: (i) modeling the rock as a continuum enhanced with special features that account for fractures, or (ii) modeling the rock by a discrete (or discontinuous) approach that describes the material directly as a collection of separate blocks or particles, e.g., as in the discrete element method (DEM). In the modified discrete element (MDEM) method, the effective forces between virtual particles are modified in all regions without failing elements so that they reproduce the discretization of a first-order finite element method (FEM) for linear elasticity. This provides an expression of the virtual forces in terms of general Hooke's macro-parameters. Previously, MDEM has been formulated through an analogy with linear elements in FEM. We show the connection between MDEM and the virtual element method (VEM), which is a generalization of FEM to polyhedral grids. Unlike standard FEM, which computes strain-states in a reference space, MDEM and VEM compute stress-states directly in real space. This connection leads us to a new derivation of the MDEM method. Moreover, it gives the basis for coupling (M)DEM to domains where linear elasticity is described on polyhedral grids, which makes it easier to apply realistic boundary conditions in hydraulic-fracturing simulations. This approach also makes it possible to combine fine-scale (M)DEM behavior near the fracturing region with linear elasticity on complex reservoir grids in the far-field region without regridding. To demonstrate the simulation of hydraulic fracturing, the coupled (M)DEM-VEM method is implemented using the MATLAB Reservoir Simulation Toolbox (MRST) and linked to an industry-standard reservoir simulator. Similar approaches have been presented previously using standard FEM, but due to the similarities between the approaches of VEM and MDEM, our work provides a more uniform approach and extends these previous works to general polyhedral grids in the non-fracturing domain.
constraint - preserving boundary conditions is an active research topic in numerical relativity . during the first half of this decade ,many conditions have been proposed , adapted in each case to some specific 3 + 1 evolution formalism : fritelli - reula , kst , bssn - nor , or z4 .the focus changed suddenly after 2005 by the impact of a breakthrough : the first long term binary - black - hole simulation , which was achieved in a generalized - harmonic formalism .a series of constraint - preserving boundary conditions proposals in this framework started then , and continues today .we will retake in this paper the 3 + 1 approach to constraint - preserving boundary conditions , following the way opened very recently for the bssn case .more specifically , we will revisit the z4 case , not just because of its intrinsic relevance , but also for its relationship with other 3 + 1 formulations ( bssn , kst , see refs . for details ) . also , the close relationship between the z4 and the generalized - harmonic formulations suggest that our results could provide a different perspective in this other context .this was actually what happened with the current constraint - damping terms : first derived in the z4 context and then applied successfully in generalized - harmonic simulations .our results are both at the theoretical and the numerical level . in sectionii , we consider the first - order z4 formalism in normal coordinates ( zero shift ) for the harmonic slicing case . this case was known to be symmetric - hyperbolic for a particular choice of the parameter which controls the ordering of space derivatives .we extend this result to a range of this ordering parameter , by providing explicitly a positive - definite energy estimate .then we use this estimate for deriving algebraic constraint - preserving boundary conditions both for the energy and the normal momentum components . in section iiiwe consider the dynamical evolution of constraint violations ( subsidiary system ) .following standard methods , we transform algebraic boundary conditions of the subsidiary system into derivative boundary conditions for the main system . we introduce a new basis of dynamical fields in order to revise the constraint - preserving conditions proposed in refs . for the z4 formalism , including also a new coupling parameter which affects the propagation speeds of the ( modified ) incoming modes . in the case of the energy constraint, we get a closed subsystem for the principal part , allowing an analytical stability study at the continuum level which is presented in appendix b. a simple numerical implementation of the proposed conditions is given in section iv , where we test the stability in the linear regime , by considering small random - noise perturbations aroun flat space ( robust stability test ) .the results show the numerical stability of the proposed boundary conditions in this regime for many different combinations of the parameters .the space discretization scheme is the simplest one with the summation - by - parts ( sbp ) property . in this way we avoid masking the effect of our conditions ( at the continuum level ) with the effect of more advanced space - discretization algorithms , like fdoc devised to reduce the high frequency noise level in long - term simulations , which has recently been applied to the black - hole case . 
for a comparison ,we run also with periodic boundary conditions , where the noise level keeps constant .the proposed boundary conditions produce instead a very effective decreasing of ( the cumulated effect of ) energy and momentum constraint violations . in the case of cartesian - like grids, we also compare the standard a la olsson treatment , with a modified numerical implementation which does not use the corner and vertex points , avoiding in this way some stability issues and providing much cleaner evidence of constraint preservation . in sectionv , we test the non - linear regime with the gowdy waves metric , one of the standard numerical relativity code tests , as we have done recently for the energy constraint case .we endorse in this way some recent claims ( by winicour and others ) that the current code cross - comparison efforts should be extended to the boundaries treatment .a convergence test is performed against this exact strong - field solution , showing the expected convergence rate ( second order for our simple sbp method ) .testing the proposed boundary conditions results into a stable and constraint - preserving behavior , in the sense that energy and momentum constraint violations remain similar or even smaller than the corresponding effects with exact ( periodic or reflection ) boundary conditions for the gowdy metric .we will consider here the z4 evolution system : more specifically , we will consider the first - order version in normal coordinates , as described in refs . . for further convenience, we will recombine the basic first - order fields in the following way : so that the new basis is .note that the vector can be recovered easily as with this new choice of basic dynamical fields , the principal part of the evolution system gets a very simple form in the harmonic slicing case : where the dots stand for non - principal contributions , and we have noted for short \,,\ ] ] where is a space - derivatives ordering parameter and round brackets denote index symmetrization .the first - order version of the z4 system is known to be symmetric - hyperbolic in normal coordinates with harmonic slicing , at least for the usual ordering .it follows from ( [ evol a]-[evol e ] ) that this result can be extended to the following range of the ordering parameter which covers the symmetric ordering case ( ) .the corresponding symmetrizer , or energy estimate , can be written as : where we have noted for short allowing for ( [ evol a]-[evol e ] ) , we get and the divergence theorem can be used in order to complete the proof . the positivity proof for for the interval ( [ range ] )is given in appendix a. we can consider now some specific space surface , in order to identify the constraint modes by looking at the evolution equations for and in the system ( [ evol a]-[evol e ] ) .it follows from ( [ evol b ] , [ evol c ] ) that the energy - constraint modes are given by the pair with propagation speed ( the index meaning the projection along the unit normal ) . 
also , allowing for ( [ zfrom mu],[evol e ] ) , we can easily recover the evolution equation for , namely so that we can identify the momentum - constraint modes with the three pairs , with propagation speed , note that , allowing for ( [ pimu a ] ) , the normal component does correspond with the transverse - trace component of the extrinsic curvature .we give for completeness the remaining modes , the fully tangent ones , with propagation speed , ( capital indices denote a projection tangent to the surface ) , and the standing modes ( zero propagation speed ) : we can take advantage of the positive - definite energy estimate ( [ eestimate ] ) in order to derive suitable algebraic boundary conditions .we can integrate ( [ div term ] ) in space and , by applying the divergence theorem , we get a positivity condition for the boundary terms , namely where stands for the boundary surface ( being here its outward normal ) .the contribution of the fully tangent modes ( [ transverse pairs ] ) , independent of the energy and momentum sectors , is given by \,,\ ] ] so that the contribution of these modes to the boundary term in ( [ bound term ] ) will be non - negative if we impose the standard algebraic boundary - conditions : the case corresponding to maximal dissipation .a less strict condition is obtained by adding an inhomogeneous term , namely this can cause some growth of the energy estimate but , provided that the array consists of prescribed spacetime functions , the growth rate can be bounded in a suitable way so that a well - posed system can still be obtained ( see for instance refs . ) .this simple strategy , when applied to the energy and momentum modes ( [ energy pair ] , [ momentum pairs ] ) is not compatible with constraint preservation in the generic case ( see also ref . ) . for the energy sector ,constraint preservation is obtained only for the extreme case : which will reflect energy - constraint violations back into the evolution domain .these conditions would be then of a limited practical use in realistic simulations. a different approach can be obtained by realizing that the contribution to the boundary term in ( [ bound term ] ) would have the right sign if one uses the following logical gate condition : ( -gate in ref .it is clear that the boundary condition ( [ thetagate ] ) preserves the energy constraint , as it modifies just the values , by setting them to zero when the condition is fulfilled , without affecting any other dynamical field .the same strategy can work for normal components of the momentum modes ( [ momentum pairs ] ) , at least for the symmetric choice of the ordering parameter .allowing for ( [ lambda ] ) , one has so that a constraint - preserving ( reflection ) condition can be obtained in the extreme case as well . in the logical gate approach , the contribution of the modes ( [ znmode ] ) to the boundary term in ( [ bound term ] ) will have the right sign if one uses the condition ( case only ) : which clearly preserves the normal component of the momentum constraint . for the tangent momentum modes ( tangent to the boundary surface ) ,however , the contribution in ( [ bound term ] ) will be where is inhomogeneous in for any value of the ordering parameter .moreover , the inhomogeneous terms are not prescribed functions , but rather some combinations of dynamical fields . 
a different strategymust then be devised in this case , as we will see below .the time evolution of the energy - momentum constraints can be easily derived by taking the divergence of the z4 field equations ( [ z4 ] ) , that is we can write down the second order equation ( [ wavez ] ) as a first order system and impose then maximally dissipative boundary conditions on ( the first derivatives of ) the components . in this way, the boundaries will behave as one - way membranes for constraint - violating modes , at least for the ones propagating along the normal direction .the procedure can be illustrated with the energy - constraint , that is the time component of ( [ wavez ] ) : a first - order version can be obtained as usual by considering first - order derivatives as independent quantities , namely we can write then ( [ thetasub ] ) as the following first - order symmetric - hyperbolic system boundary conditions for ( the incoming modes of ) the subsidiary system can be enforced then in the standard way .we will consider here for simplicity the maximal dissipation condition , that is ( we assume that the boundary is on the right ) : now we can use it as a tool for setting up boundary conditions for the energy modes of the main evolution system .one can for instance enforce directly ( [ thetabound ] ) , as in ref . .we will rather use ( [ thetabound ] ) as a tool for getting ( derivative ) boundary conditions for the incoming energy mode of the evolution system ( [ evol a ] - [ evol e ] ) . to do this, we can use the evolution equation ( [ evol b ] ) for transforming ( [ thetabound ] ) into a convenient version of the energy constraint , namely : we can now use ( [ energy constraint ] ) in order to modify the evolution equation of the incoming energy mode , that is : the whole process is equivalent to the simple replacement : where is the solution of the advection equation ( [ thetabound ] ) .the choice corresponds to the standard recipe of trading space normal derivatives by time derivatives , in the incoming modes .this implies that the modified mode gets zero propagation speed along the given direction . in this case , allowing for ( [ e- modif ] ) , the time derivative of would actually vanish , modulo non - principal terms ; this amounts to freezing the incoming modes to their initial values ( maximal dissipation on the right - hand - side ) , which is a current practice in some numerical relativity codes .note however that constraint preservation requires using the right non - principal terms , that can be deduced from the full expression ( [ e- repl ] ) .the choice would imply instead that the modified mode gets the same positive speed ( ) than the outgoing one .we show in appendix b that this choice will lead to a weakly - hyperbolic ( ill - posed ) boundary system .our results confirm that is actually a safe choice , although other values in the interval lead also to a strongly hyperbolic system with non - negative speeds for all energy modes ( see appendix b for details ) .the same method can be applied to the momentum constraint modes , although in a less straightforward way . let us start from the evolution equation ( [ evol z ] ) for , andtake one extra time derivative .we get in this way which , after some cross - derivatives cancellations , leads to the space components of ( the principal part of ) the covariant equation ( [ wavez ] ) . a first - order version of ( [ zsub ] )can be obtained again by considering first - order derivatives as independent quantities . 
for the time derivative we will take the obvious choice the treatment of space derivatives , however , is complicated by the fact that we are dealing with a first - order formulation , so that there are additional ordering constraints to be allowed for . following refs . , we will define for further convenience }- \partial_{[\,k}\,d_{i ] } - ( 1-\zeta)\,\gamma^{rs}\,\partial_{[\,r}\,d_{k]\,is } + ( 1+\zeta)\,\gamma^{rs}\,\partial_{[\,r}\,d_{i]\,ks}\,,\ ] ] where we have noted for short .a closer look to ( [ zkiexplicit ] ) shows that is just the space derivative of , modulo ordering constraints . in the notation of this paper : } + ( 1+\zeta)~[\,\partial_r\,{\mu_{(ki)}}^r+\partial_{(k}\ , z_{i)}\ , ] + \cdots\ ] ] we can write now ( [ zsub ] ) in the first - order form which is a symmetric - hyperbolic first - order version of the momentum - constraint evolution system ( other versions could be obtained by playing with the ordering constraints in a different way ) . the vanishing of the incoming modes of this subsidiary system can be enforced now in the same way as for the energy constraint , namely : this is obviously a maximal dissipation constraint - preserving condition for the subsidiary system , which can be used for to get a derivative boundary condition for the main evolution system , as we did for the energy modes in the preceding subsection . to be more specific, we can use the evolution equation ( [ evol z ] ) for transforming ( [ zbound ] ) into a convenient version of the momentum constraint , that is + \cdots \label{momentum constraint}\end{aligned}\ ] ] and use it for modifying the evolution equation of the incoming momentum modes , namely : which amounts to the following replacement : where is the solution of the advection - like equation ( [ zbound ] ) . the choice would imply again that the modified modes get zero propagation speeds along the normal direction , whereas the choice would imply instead that the modified modes get the same positive speed ( ) than the outgoing ones .this result requires the extra ordering terms in ( [ zkiexplicit ] ) : this was actually the reason for including them .note that we can consider different values of the coupling parameter for the energy modes ( ) , and even for the normal and tangent momentum modes ( , respectively ) . for any value , the modified modes can be computed consistently from inside .the momentum system however is too complicated for a full hyperbolicity analysis , like the one we provide in appendix b for the energy sector .part of the complication comes from the coupling with the non - constraint modes , which require their own boundary conditions .let us remember at this point that the boundary conditions presented in this section are derivative , not algebraic .this means that , even in the symmetric hyperbolic cases , proving well - posedness is by no means trivial .for that reason , we will rather follow the approach of ref . , focusing in the stability of small perturbations around smooth solutions , which can be tested numerically .we start in the following section , by performing a robust stability test in order to check the numerical stability of high - frequency perturbations around the minkowsky metric . 
as a full set of boundary conditionsis required , even in this weak - field test , we supplement our conditions for the constraint - related modes with the freezing of the initial values of the incoming non - constraint modes ( maximal dissipation on the right - hand - side ) .let us test now the stability and performance of the proposed conditions in the linear weak - field regime , by considering a small perturbation of minkowski space - time , which is generated by taking random data both in the extrinsic curvature and in the constraint - violation quantities .in this way the initial data violate the energy - momentum constraints , but preserve all ordering constraints .the level of the random noise will be of the order , small enough to make sure that we will keep in the linear regime during the whole simulation ( robust stability test , see ref . for details ) .we will use the standard method of lines as a finite difference algorithm , so that space and time discretization will be treated separately .the time evolution will be dealt with a third - order runge - kutta algorithm .the time step is kept small enough to avoid an excess of numerical dissipation that could distort our results in long runs . for space discretization, we will consider a three - dimensional rectangular grid , evenly - spaced along every space direction , with a space resolution .we will use there the simplest centered , second - order - accurate , discretization scheme . at the points next to the boundary , where we can not use the required three - points stencil, we will switch to the standard first - order upwind ( outgoing ) scheme .this combination is the simplest one with the summation - by - parts ( sbp ) property . in this waywe expect that the theoretical properties derived from symmetric - hyperbolicity will show up in the simulations in a more transparent way .for the same reason , we avoid adding extra viscosity terms that could mask the effect of our conditions ( at the continuum level ) with the dissipative effects of the discretization algorithm . just to make sure, we run also with periodic boundary conditions , where the noise level keeps constant : any decrease of the constraint - violation level will then be due to the proposed conditions , not to the discretization scheme .let us be more specific about the boundary treatment . at boundary points ,we use the first - order upwind algorithm in order to get a prediction for every dynamical field .once we have got this prediction , we perform the characteristic decomposition along the direction normal to the boundary .the predicted values for the outgoing modes , for which the upwind algorithm is known to be stable , will be kept ( this includes the standing modes , with zero characteristic speed ) .the ( unstable ) incoming modes will be replaced instead by the values arising from our boundary conditions , as described in the preceding section .we start with simulations in which the proposed conditions are applied just to the face , whereas we keep periodic boundary conditions along the and directions . 
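Before the single-open-face results, the scheme just outlined can be condensed into a short sketch. This is a one-dimensional schematic analogue (a single advected field standing in for one characteristic mode), not the actual Z4 code: third-order Runge-Kutta in time, centered second-order differences in the interior, first-order one-sided differences next to the boundaries, and replacement of the incoming mode at the inflow boundary by its frozen initial value. Grid size, CFL factor and run length are illustrative.

```python
import numpy as np

# Schematic 1-D analogue of the scheme described in the text: method of
# lines, 3rd-order Runge-Kutta in time, centered 2nd-order differences in
# the interior, 1st-order one-sided differences next to the boundaries,
# and replacement of the incoming characteristic mode at the boundary.
# The field u stands in for one mode advected with speed c > 0, so x = 0
# is the inflow (incoming) side and x = L the outflow (outgoing) side.

nx, L, c = 101, 1.0, 1.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.25 * dx / c                          # small CFL factor (illustrative)

def rhs(u):
    du = np.empty_like(u)
    du[1:-1] = (u[2:] - u[:-2]) / (2 * dx)  # centered interior stencil
    du[0]  = (u[1] - u[0]) / dx             # one-sided prediction at x = 0
                                            # (overwritten by the BC below)
    du[-1] = (u[-1] - u[-2]) / dx           # 1st-order upwind at the outflow
    return -c * du

u = np.exp(-100.0 * (x - 0.5) ** 2)         # smooth initial data
u_frozen = u[0]                             # initial value of the incoming mode

for step in range(400):
    # 3rd-order strong-stability-preserving Runge-Kutta (Shu-Osher form)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u  = u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))
    # keep the predicted (outgoing) value at x = L; freeze the incoming
    # mode at x = 0 to its initial value, as described in the text
    u[0] = u_frozen
```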
in this way we can detect instabilities which are inherent to the proposed boundary conditions on smooth boundaries ( no corners ) , allowing at the same time for some non - trivial dynamics along at least one tangent direction ., with just one face open ( periodic boundaries are implemented along the transverse directions ) .the fully periodic boundaries result ( dashed lines ) is also included for comparison .we see some growing mode onset in the case , whereas the constraint - preserving case ( continuous line ) is very efficient at reducing the initial noise level.,width=377,height=226 ] we plot in fig .[ robusttheta ] the maximum norm of the energy - constraint - violating quantity for two different choices of the coupling parameter of the energy mode : .we can see that , after crossing times , the case starts showing the effect of the linear modes predicted by our hyperbolicity analysis in appendix b , by departing from the maximal dissipation pattern of decay .we plot for comparison the results obtained by applying periodic boundary conditions , so we can see how , for the choice , the proposed constraint - preserving conditions are extremely effective at draining out energy constraint violations .the rate of decay is actually the same as the one obtained by applying maximal dissipation conditions on the right - hand - side also to the energy modes , as expected from the analysis given in the previous section .in what follows , we will fix for this coupling parameter .components ( left and right panels , respectively ) . in both cases , the periodic boundaries results ( dashed lines )are included for comparison .the initial noise in the momentum constraint gets reduced very efficiently in both the and the cases , although there is a slight difference , more visible in the longitudinal case ( left panel).,title="fig:",width=302,height=226 ] components ( left and right panels , respectively ) . in both cases , the periodic boundaries results ( dashed lines )are included for comparison .the initial noise in the momentum constraint gets reduced very efficiently in both the and the cases , although there is a slight difference , more visible in the longitudinal case ( left panel).,title="fig:",width=302,height=226 ] we plot in fig .[ robust1 ] both the maximum norm of the longitudinal ( left panel ) and transverse components ( right panel ) of the momentum - constraint - violating vector for the choice of the coupling parameter of the momentum modes .we include again for comparison the results obtained by applying periodic boundary conditions , so we can see how the proposed constraint - preserving conditions are very effective at draining out energy constraint violations .the plots are slightly , sensitive to the ordering parameter . in the case , the rate of decay is actually the same as the one obtained by applying instead maximal dissipation conditions on the right - hand .- side for the momentum modes .the results are qualitatively the same for other components of and for other parameter combinations .( left panel ) , and ( right panel ) .the proposed boundary conditions are applied here to all faces , including corners and vertices .some amount of numerical dissipation has been added , so that the periodic boundaries plots ( dashed lines ) get a visible negative slope .the choice for the energy modes is still clearly stable ( left panel ) .the choice for the momentum modes ( right panel ) shows a growing mode onset . 
for comparison , a plot with the maximal dissipation results is also included in the right panel ( bottom line).,title="fig:",width=302,height=226 ] ( left panel ) , and ( right panel ) .the proposed boundary conditions are applied here to all faces , including corners and vertices .some amount of numerical dissipation has been added , so that the periodic boundaries plots ( dashed lines ) get a visible negative slope .the choice for the energy modes is still clearly stable ( left panel ) .the choice for the momentum modes ( right panel ) shows a growing mode onset . for comparison , a plot with the maximal dissipation resultsis also included in the right panel ( bottom line).,title="fig:",width=302,height=226 ] in order to perform a full test for cartesian - like grids , including corner and vertex points , we will repeat the same simulations , but this time with the proposed boundary conditions applied to all faces , not just to the ones . a standard treatment of corner points a la olsson , like the one presented in previous works , results into numerical instability issues .a simple cure is to add some extra dissipation at the interior points , at the price of masking the theoretical results , et the continuum level , with the numerical viscosity effects , as shown in fig .[ robustolsson ] .we can see there that opening all faces makes the effects to appear much faster . the expected instability of the choice of the energy coupling parameter , which was just an onset in fig .[ robusttheta ] , shows up manifestly here ( left panel ) .also , a growing mode onset is clearly visible for the choice of the momentum - constraint coupling parameter ( right panel ) .the case looks stable , although no strong conclusion can be drawn because of the added numerical dissipation .maximal dissipation results are also shown for comparison in the right panel ( bottom line ) .[ 0.3 ] we will present here an alternative numerical treatment . at boundary points , tangent derivatives are computed at the next - to - last layer .the corresponding stencil is shown in fig .[ stencil ] . in this waythe corner points are not required .this avoids the reported code stability issues , even without adding extra numerical dissipation terms .note that transverse derivatives are still computed using the standard three - point sbp algorithm , like in the smooth boundaries case .as every space derivative can be considered separately ( we are dealing with a first - order system ) the sbp property should still follow for our modified scheme . the price for the shift of the transverse derivatives to the next - to - last layer is getting just first - order accuracy at the boundary , but the longitudinal derivatives there were yet only first - order accurate anyway . 
( left panel ) , and ( right panel ) .the proposed boundary conditions are applied to all faces .corner points are avoided in the way shown in fig .[ stencil ] .no extra numerical dissipation has been added , so that the periodic boundaries plots ( dashed lines ) keep flat .the absence of extra dissipation clarifies the features shown in the previous figure.,title="fig:",width=302,height=226 ] ( left panel ) , and ( right panel ) .the proposed boundary conditions are applied to all faces .corner points are avoided in the way shown in fig .[ stencil ] .no extra numerical dissipation has been added , so that the periodic boundaries plots ( dashed lines ) keep flat .the absence of extra dissipation clarifies the features shown in the previous figure.,title="fig:",width=302,height=226 ] this discretization variant allows getting stable results , at least for the value of the ordering parameter .we plot in fig .[ robust3d ] the maximum norm of the constraint - violation quantities .we can see there that removing the extra numerical dissipation makes the features more transparent .the instability of the choice of the energy coupling parameter , appears now instantly .the downfall rate in the stable case , increased as the constraint violations are drained out in all three directions now , can be seen in a more unambiguous way . concerning the momentum constraint ( right panel ) , the standard ordering shows now clearly its unstable behavior , which was masked by the added dissipation in the standard treatment ( see fig .[ robustolsson ] ) . the centered ordering choice recovers instead the manifest stable behavior shown in single - face simulations ( see fig .[ robust1 ] ) , close to the maximal dissipation case ( left panel , bottom line ) .our results show the numerical stability of the proposed boundary conditions in the linear regime for suitable combinations of the coupling and/or ordering parameters .the proposed boundary conditions produce instead a very effective decreasing of ( the cumulated effect of ) energy and momentum constraint violations which compares with the one obtained by applying maximal dissipation boundary conditions to ( the right - hand - side of ) the constraint related modes .although the results of the preceding are encouraging , let us remark that we were just testing the linear regime around minkowsky spacetime .this is not enough , as high - frequency instabilities can appear in generic , strong field , situations ( see for instance ref . ) . 
in order to test the strong field regime, we will consider the gowdy solution , which describes a space - time containing plane polarized gravitational waves .this is one of the test cases that is used in numerical code cross - comparison with periodic boundary conditions .one of the advantages is that it allows for periodic and/or reflecting boundary conditions , which can be applied to the modes which are not in the energy - momentum constraint sector .a first proposal for this selective testing of the constraint - related modes has been presented recently .the gowdy line element can be written as where the quantities and are functions of and only and periodic in , that is , \end{aligned}\ ] ] so that the lapse function is constant in space at any time at which vanishes .let us now perform the following time coordinate transformation so that the expanding line element ( [ gowdy_line ] ) is seen in the new time coordinate as collapsing towards the singularity , which is approached only in the limit .this singularity avoidance property of the coordinate follows from the fact that the resulting slicing by surfaces is harmonic .we will launch our simulations in normal coordinates , starting with a constant lapse at ( ) .the discretization is performed like in the preceding section , but with a space resolution . allowing for the fact that the only non - trivial space dependence in the metric is through , the numerical grid is fitted to the range . in this way, the exact solution admits either periodic or reflection boundary conditions .we can use these exact boundary conditions as a comparison with the constraint - preserving ones that we are going to test . as the gowdy metric components depend on just one coordinate, we will apply here the proposed constraint - preserving conditions only to the faces , keeping periodic boundary conditions along the transverse directions .also , like in the preceding section , we show the results for the coupling parameters combination , although other choices of lead to similar results .is plotted at for two different resolutions ( left panel ) .we see second - order convergence in the interior region , decreasing up to first - order at points causally connected to the boundary , as expected from our sbp algorithm .we plot in the right panel the relative error of the metric component , evolved up to .the boundary - induced accuracy reduction is not even visible yet ., title="fig:",width=302,height=226 ] is plotted at for two different resolutions ( left panel ) .we see second - order convergence in the interior region , decreasing up to first - order at points causally connected to the boundary , as expected from our sbp algorithm .we plot in the right panel the relative error of the metric component , evolved up to .the boundary - induced accuracy reduction is not even visible yet ., title="fig:",width=302,height=226 ] we start with a simple convergence test . 
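As explained next, the exact solution is available pointwise, so relative errors can be formed directly and two resolutions are enough to estimate the convergence order. A minimal sketch of that bookkeeping (the error values and the refinement factor of two are illustrative, not those of the actual runs):

```python
import numpy as np

def convergence_order(err_coarse, err_fine, refinement=2.0):
    """Estimate the order p from errors at two resolutions.

    If the error behaves like E(h) ~ C * h**p, then
    p = log(E_coarse / E_fine) / log(refinement).
    """
    return np.log(err_coarse / err_fine) / np.log(refinement)

# illustrative numbers: with the exact solution known, each error is e.g.
# the maximum norm of (numerical - exact) at a given resolution
err_coarse = 4.0e-4      # grid spacing  dx
err_fine   = 1.0e-4      # grid spacing  dx / 2
print(convergence_order(err_coarse, err_fine))   # -> 2.0 (second order)
```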
as we know the exact solution ( [ gowdy_line ] ), we can directly compute the relative error of every simulation .then , only two different resolutions are required for checking convergence .we will take and for our gowdy wave simulations .we plot in fig .[ gowdy conv ] the energy - constraint - violation quantity at some early time ( left panel ) .we see the expected second - order accuracy at the interior points which are yet causally disconnected with the boundaries .we see just first - order accuracy at the boundary , plus a smooth transition zone .this accuracy reduction at boundaries is inherent to simple sbp algorithms , which require a lower - order discretization at boundary points .one could keep instead the accuracy level of the interior points by using more accurate predictions for boundary values , but at the price of loosing the sbp property . in our test case , however , this issue is not affecting the metric components , even at a much later time , as we can see in the right panel of fig .[ gowdy conv ] . and are plotted as indicators of the accumulated error due to energy - momentum constraint violations .reflection boundaries results are also plot for comparison ( continuous lines ) .dotted lines correspond to the proposed boundary conditions , whereas dashed lines correspond to the same conditions with the extra damping terms discussed in this section , with .,title="fig:",width=302,height=226 ] and profiles are plotted as indicators of the accumulated error due to energy - momentum constraint violations .reflection boundaries results are also plot for comparison ( continuous lines ) .dotted lines correspond to the proposed boundary conditions , whereas dashed lines correspond to the same conditions with the extra damping terms discussed in this section , with .,title="fig:",width=302,height=226 ] we show in fig .[ gwperiodic ] the results of the resolution simulation for the boundary profiles of and , indicating the accumulated amount of energy and momentum constraint violations ( left and right panels , respectively ) .we apply in both cases the proposed boundary conditions at the faces to the constraint - related modes , while keeping exact ( reflection ) boundary conditions for the other modes .we can see that the constraint - preserving conditions result , in this strong - field test , into an accumulated amount of constraint violations ( dotted lines ) that is similar or even slightly better than the one produced by the interior points treatment , which can be seen in simulations with ( exact ) reflection boundaries for all faces ( continuous lines ) .note that the reflection conditions anchor to zero at the boundary points , which is always more accurate in this test , although not very useful in more realistic cases .these results confirm that the proposed boundary conditions are indeed constraint - preserving , in the sense that their contribution to energy and momentum constraint violations keeps within the limits of the truncation error of the discretization algorithms , even in this strong field scenario .this good behavior can be further improved by introducing constraint - damping terms in the evolution equations for the boundary quantities ( [ thetabound ] , [ zbound ] ) that is the resulting values can then be used in the replacements ( [ e- repl ] ) and ( [ m- repl ] ) , respectively .we have included the corresponding results in fig .[ gwperiodic ] ( dashed lines ) .the amount of both energy and constraint violations becomes even lower than the one for 
the ( exact ) reflection boundaries simulations even for a small value of the damping parameter .the effect is specially visible in the energy constraint case ( left panel ) .the work presented in this paper revises and improves the previous results for the z4 case in many different ways . on the theoretical side ,we have proposed a new symmetrizer , which extends the parametric domain for symmetric hyperbolicity from the single value to the interval .we have identified in the process a new basis for the dynamical field space ( [ pimu a]-[pimu d ] ) which allows a clear - cut separation between the constraint - related modes and the remaining ones .regarding the boundary treatment , we have also generalized the way in which boundary conditions can by used for modifying the incoming modes , by introducing a new parameter which , at least for the momentum constraint modes , can depart from the standard value without affecting the stability of the results . on the numerical side, the use of the new basis definitely improves the stability of the previous z4 results . in the single face case , where we use periodic boundary conditions along transverse directions, we see that the linear modes previously reported in the robust stability test for the symmetric ordering case ( ) no longer show up .moreover , we have devised a simple finite - differences stencil for the prediction step at the boundaries which avoids the corner and vertex points even in cartesian - like grids , providing an interesting alternative to the standard ( olsson ) corners treatment .the proposed boundary conditions have been also tested in a strong field scenario , the gowdy waves metric , so that the effect of non - trivial metric coefficients can be seen in the simulation results .the convergence test in this non - linear regime provides strong evidence of numerical stability for some suitable parameter combinations .our simulations actually confirm that the proposed boundary conditions are constraint - preserving : the accumulated amount of energy - momentum constraint violations is similar or even better than the one generated by either periodic or reflection conditions , which are exact in the gowdy waves case .now it remains the question of how these interesting results can be extended to other 3 + 1 evolution formalisms and/or gauge conditions .let us remember that all our symmetric hyperbolicity results apply as usual just to the harmonic slicing , not to the 1+log class of slicing conditions which are customary in bssn black - hole simulations .there is no problem , however , in extending the proposed boundary conditions to this case : in our new basis the gauge sector is clearly separated from the constraint - related one , so that one can keep using the replacements ( [ e- repl ] , [ m- repl ] ) even in this non - harmonic case .the shift , however , introduces new couplings and would require a detailed case - by - case investigation : even the strong hyperbolicity of the system can depend on the specific choice of shift condition .concerning the extension from the z4 to the bssn formalism , the momentum constraint treatment can be derived from the simple condition which relates the additional bssn quantity with the space vector .the replacement ( [ m- repl ] ) can then be used for getting a suitable boundary condition in this context .the case of the energy constraint is more challenging , as the bssn formalism does not contain any supplementary quantity analogous to .one could follow , however , the line 
recently proposed in : a slight modification of the original bssn equations allows to include the new quantity , so that the correspondence with the z4 formalism is complete .the replacement ( [ e- repl ] ) can then be used directly in such context .a major challenge is posed by the fact that most bssn implementations are of second order in space .this has some advantages in this context , as the ordering constraints do not show up and this removes the main source of ambiguities in the constraint - violations evolution system . as a result ,the boundary conditions ( [ thetabound ] , [ zbound ] ) become simply advection equations so that we can expect a more effective constraint - violation draining rate . the problem , however , is that second - order implementations do not have the algebraic characteristic decomposition which is crucial in the first - order ones .the boundaries treatment takes quite different approaches in second - order formalisms , although the evolution equations for the constraint - related quantities are still of first order in the z4 case , even at the continuum level , and this suggests that the results presented here can be still helpful in this case .we are currently working in this direction .this work has been jointly supported by european union feder funds and by the spanish ministry of science and education ( projects fpa2007 - 60220 , csd2007 - 00042 and eci2007 - 29029-e ) . c. bona - casas acknowledges the support of the spanish ministry of science , under the fpu/2006 - 02226 fellowship .we have derived in section ii an generalized energy estimate for the z4 system , namely : where we noted in order to check the positivity of ( [ ae ] ) , let us consider the decomposition of the three - index tensor into its symmetric and antisymmetric parts , that is allowing for the identities , the rank - three terms contribution to can the be written as ( the dots stand for lower - rank components ) .it follows that a necessary condition for positivity is . we can now rewrite ( [ ae ] ) as allowing for ( [ amut ] ) , which implies in turn we see that we can rewrite again ( [ aebis ] ) as where it follows from the final expression ( [ aefinal ] ) that the energy estimate is positive definite in the whole interval note that for , that is , we recover the estimate given in ref .we can analyze the hyperbolicity of the boundary evolution system , by considering the characteristic matrix along a generic oblique direction , which is related to the normal direction by where we have taken the strong hyperbolicity requirement amounts to demand that the characteristic matrix is fully diagonalizable and has real eigenvalues ( propagation speeds ) for any value of the angle . 
In order to compute the characteristic matrix, we will consider the standard form of (the principal part of) the evolution system as follows where stands for the array of dynamical fields and is the array of fluxes along the direction. We will restrict ourselves here to the energy-modes subsystem, which consists of the fields , the index meaning here a projection along the direction orthogonal both to and . It is clear that the two components are eigenvectors of the characteristic matrix with zero propagation speed. The non-trivial fluxes are then: where we have allowed for the modified evolution equation ([e-modif]). We can see that one of the characteristic speeds is zero and the other two are given by the solutions of , which has real distinct solutions for . The degenerate case is not diagonalizable. It follows that the boundary evolution subsystem given by the above fluxes is strongly hyperbolic for and weakly hyperbolic for .
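The diagonalizability condition just invoked can also be checked numerically for specific parameter values. The sketch below is generic: it uses a placeholder 2x2 principal matrix depending on a coupling parameter a, not the actual Z4 fluxes, and simply tests for real eigenvalues and a complete (well-conditioned) eigenvector set.

```python
import numpy as np

def strongly_hyperbolic(A, tol=1e-10):
    """Real eigenvalues plus a complete, well-conditioned eigenvector set."""
    vals, vecs = np.linalg.eig(A)
    if np.max(np.abs(vals.imag)) > tol:
        return False                       # complex speeds: not hyperbolic
    # diagonalizable with a bounded transformation if the eigenvector
    # matrix is far from singular
    return np.linalg.cond(vecs) < 1.0 / tol

# placeholder characteristic matrix: speeds are +-sqrt(a), so the system
# is strongly hyperbolic for a > 0, degenerate (non-diagonalizable) at
# a = 0, and has complex speeds for a < 0
for a in (-0.5, 0.0, 0.5, 1.0):
    A = np.array([[0.0, 1.0],
                  [a,   0.0]])
    print(a, strongly_hyperbolic(A))
```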
A set of energy-momentum constraint-preserving boundary conditions is proposed for the first-order Z4 case. The stability of a simple numerical implementation is tested in the linear regime (robust stability test), both with the standard corner and vertex treatment and with a modified finite-difference stencil for boundary points which avoids corners and vertices even in Cartesian-like grids. Moreover, the proposed boundary conditions are tested in a strong-field scenario, the Gowdy waves metric, showing the expected rate of convergence. The accumulated amount of energy-momentum constraint violations is similar to, or even smaller than, the one generated by either periodic or reflection conditions, which are exact in the Gowdy waves case. As a side theoretical result, a new symmetrizer is explicitly given, which extends the parametric domain of symmetric hyperbolicity for the Z4 formalism. The application of these results to first-order BSSN-like formalisms is also considered.
combinatorial optimization appears in many important fields such as computer science , drug discovery and life - science , and information processing technology .one of the example of such problems is an ising problem to minimize the ising hamiltonian , which is a function of a spin configuration defined as where each spin takes binary values , a real number symmetric matrix denotes a coupling constant , and is the total number of spins . despite its simple statement , it belongs to the non - deterministic polynomial - time ( np)-hard class to find the ground state of the ising model on the three - dimensional lattice .similarly , a maximum cut ( max - cut ) problem in the graph theory is to find the size of the largest cut in a given undirected graph . here, a cut is a partition of the vertices into two disjoint subsets and the size of the cut is the total weight of edges with one vertex in and the other in .the size of the cut can be counted by assigning the binary spin values to express which subset the vertex belongs to : where is an ising hamiltonian defined in eq .( [ eq : ising ] ) with .it indicates that the max - cut problem is equivalent to the ising problem except for the constant factor .the max - cut problem belongs to the np - hard class in general , even though there are graph topologies which can be solved in polynomial time .many attempts have been made to approximately solve np - hard max - cut problems , but the probabilistically checkable proof ( pcp ) theorem states that no polynomial time algorithms can approximate max - cut problems better than .currently , the approximation ratio of achieved by the goemans - williamson algorithm ( gw ) based on semidefinite programming ( sdp ) is the best value for performance guarantee .this algorithm is a well - established benchmark to evaluate any new algorithms or computing methods . besides, there exist several heuristic algorithms to tackle these np - hard max - cut problems .the simulated annealing ( sa ) was designed by mimicking the thermal annealing procedure in metallurgy .a quantum annealing technique was also formulated and was shown to have competitive performance against sa .independently , novel algorithms which are superior either in its speed or its accuracy are proposed .we recently proposed a novel computing system to implement the np - hard ising problems using the criticality of laser and degenerate optical parametric oscillator ( dopo ) phase transition .the architecture of this machine is motivated by the principle of laser and dopo in which the mode with the minimum loss rate is most likely to be excited first .the energy of the ising hamiltonian can be mapped onto the total loss rate of the laser or dopo network .the selected oscillation mode in the laser or dopo network corresponds to the ground state of a given ising hamiltonian , while the gain accessible to all other possible modes is depleted due to the cross - gain saturation .this means that a mode with the lowest loss rate reaches a threshold condition first and clumps the gain at its loss rate , so that all the other modes with higher loss rates stay at sub - threshold conditions .moreover , the dopo is in the linear superposition of -phase state and -phase state at its oscillation threshold .the coupled dopos form quantum entanglement in spite of their inherent dissipative natures , so that some form of quantum parallel search could be embedded in a dopo network . 
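For reference, the equivalence between the two problems invoked above can be written out explicitly. This is the standard identity, independent of the sign convention chosen for the Hamiltonian (the convention itself is elided in the extracted equations). With \(\sigma_i=+1\) for vertices in one subset and \(\sigma_i=-1\) otherwise, an edge \((i,j)\) is cut exactly when \(\sigma_i\sigma_j=-1\), so
\[
\mathrm{CUT}(\sigma)\;=\;\sum_{i<j} w_{ij}\,\frac{1-\sigma_i\sigma_j}{2}
\;=\;\frac{1}{2}\sum_{i<j} w_{ij}\;-\;\frac{1}{2}\sum_{i<j} w_{ij}\,\sigma_i\sigma_j .
\]
Maximizing the cut is therefore the same as minimizing the Ising energy built from couplings proportional to the edge weights, up to the constant \(\tfrac{1}{2}\sum_{i<j}w_{ij}\).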
in this article , the validity of the cim for max - cut problems is tested against the representative approximation algorithms .the dopo signal pulse amplitudes in cim , which are interpreted as the solution , are described by the c - number stochastic differential equations ( csde ) as presented in section [ sec : mf ] .then we conduct numerical simulations for max - cut problems in section [ sec : maxcut ] with the number of vertices up to .it is , of course , difficult to compare the performance of the proposed system as a max - cut solver with the representative approximation algorithms which can be run on current digital computers mainly because the unit of `` clock '' can not be uniquely defined .thus we defined the feasible system clock which dominates the computing process in cim as mentioned later .moreover , here we evaluated the computational ability under either time or accuracy was fixed , while a preliminary benchmark study done in the previous paper is focused on the performance after physical convergence .a standard cim based on multiple - pulse dopo with all - optical mutual coupling circuits is shown in fig .[ fig : cim ] .the system starts with a pulsed master laser at a wavelength of . a second harmonic generation ( shg ) crystal produces the pulse trains at a wavelength of which in turn generate multiple dopo pulses at a wavelength of inside a fiber ring resonator .if the round trip time of the fiber ring resonator is properly adjusted to times the pump pulse interval , we can simultaneously generate independent dopo pulses inside the resonator .each of these pulses is either in -phase state or -phase state at well above the oscillation threshold and represents an ising spin of up or down . in order to implement an ising coupling in eq .( [ eq : ising ] ) , a part of each dopo pulse in the fiber ring resonator is picked - off and fed into an optical phase sensitive amplifier ( psa ) , followed by optical delay lines with intensity and phase modulators .using such optical delay lines , ( arbitrary ) -th pulse can be coupled to ( arbitrary ) -th pulse with a coupling coefficient .such an all - optical coupling scheme has been demonstrated for and cims . 
in sections[ sec : sdp ] and [ sec : sa ] , we assume a cim with a fiber length of ( or cavity round trip time of 10 ) and pulse spacing of 10 cm ( or pulse repetition frequency of ) , thus independent dopo pulses can be prepared for computation .the system clock frequency for the cim should be defined by the cavity circulation frequency ( inverse of cavity round trip time ) .one clock cycle ( round trip ) includes every elements of computation , such as parametric amplification , out - coupling port , and coherent feedback .thus the clock frequency of the cim assumed for the present benchmark study is 100 khz since the round trip time of 2 km fiber ring is .we fixed this system clock frequency , just like any digital computer has a fixed clock frequency and chose the appropriate pulse interval to pack the desired number of pulses in the fiber .the in - phase and quadrature - phase amplitudes of a single isolated dopo pulse obey the following c - number stochastic differential equations ( csde ) : the above csde are derived by expanding the dopo field density operator with the truncated wigner distribution functions .an alternative approach is to use two coherent states in the generalized ( off - diagonal ) -representation for the field density matrix .the two approaches by the truncated wigner function and the generalised p - representation are equivalent for highly dissipative systems such as ours .the pump field is adiabatically eliminated in ( [ eq:1 csde ] ) and ( [ eq:2 csde ] ) by assuming that the pump photon decay rate is much larger than the signal photon decay rate .the term is the dopo field amplitude at a normalized pump rate , and is the second order nonlinear coefficient associated with the degenerate optical parametric amplification .the variable is a normalized time , while is a real time in seconds .the term is the pump field amplitude and is the threshold pump field amplitude .finally , and are two independent gaussian noise processes that represent the incident vacuum fluctuations from the open port of the output coupler and the pump field fluctuation for in - phase and quadrature - phase components , respectively .the vacuum fluctuation of the signal channel contributes to the 1/2 term and the quantum noise of the pump field contributes to in the square - root bracket in ( [ eq:1 csde ] ) and ( [ eq:2 csde ] ) . when the -th signal pulse is incident upon the output coupler , the output - coupled field and remaining field inside a cavity are written as where is the power transmission coefficient of the output coupler and is the incident vacuum fluctuation from the open port of the coupler .the out - coupled field and the signal amplitude after psa can be from these out - coupled pulse stream , the intensity and phase modulators placed in the delay lines produce the mutual coupling pulse , which is actually added to the -th signal pulse by an injection coupler . here, is the effective coupling coefficient from the -th pulse to the -th pulse , determined by the transmission coefficient of the injection coupler . in the highly dissipative limit of a mutual coupling circuit , such as in our scheme, we can use the csde supplemented with the noisy coupling term . 
since the transmission coefficient of the injection coupler should be much smaller than one, we do not need to consider any additional noise in the injected feedback pulse .the csde ( [ eq:1 csde ] ) can be now rewritten to include the mutual coupling terms \ , dt \nonumber\\ + \frac{1}{a_\mathrm{s}}\sqrt{c_i^2+s_i^2+\frac{1}{2}}dw_i.\label{eq:6 coupler 1,2}\end{aligned}\ ] ] the summation in eq .( [ eq:6 coupler 1,2 ] ) represents the quantum measurement - feedback term including the measurement error given by eq .( [ eq:5 coupler 1 _ ] ) .the vacuum fluctuation coupled to the -th pulse in the output coupler is already taken into account in the last term of right - hand side of eq .( [ eq:6 coupler 1,2 ] ) together with the pump noise .we conducted the numerical simulation of the coupled csde ( [ eq:6 coupler 1,2 ] ) to evaluate the performance of the cim .the max - cut problem on cubic graphs , in which each vertex has exactly three edges , is called max - cut-3 problem and also belongs to np - hard class .the smallest simple max - cut-3 problem is defined on the complete graph with four vertices and six edges with identical weight , where anti - ferromagnetic couplings have frustration so that the ground states are highly degenerate .the solution to this problem are the set of two - by - two cuts , which contains six degenerate ground states of the ising hamiltonian , i.e. , . figure [ fig : maxcut4 ] shows the time evolution of when and .a correct solution spontaneously emerges after several tens of round trips .the statistics of obtaining different states against 1000 sessions of such a numerical simulation are shown in fig .[ fig:4mcut ] , in which six degenerate ground states appear with almost equal probabilities with no errors found .normalized dopo signal amplitudes as a function of normalized time ( in unit of cavity round trip numbers ) for a simple max - cut-3 problem .each color corresponds to the four different dopos indexed with .small window is enlarged to indicate the status of signal amplitude inside a cavity at three components ( as in fig .[ fig : cim ] ) ; a : opa gain medium ( ppln waveguide ) , b : out - coupler , and c : injection coupler of the mutual coupling pulse . the two flat regions between b and c and between c and a are the passive propagation in a fiber.,width=297 ] distribution of output spin configurations in 1000 trials of numerical simulations against a simple max - cut-3 problem of graph order .all trials were successful to find one of the six degenerate ground states.,width=297 ] if the interaction is not a standard two - body ising interaction type but rather a four - body interaction such as where , the coupled field into the -th pulse is no longer given by but by . in this case , the csde ( [ eq:6 coupler 1,2 ] ) can be rewritten to include the four - body coupling term \ , dt \nonumber\\+\frac{1}{a_\mathrm{s}}\sqrt{c_i^2+s_i^2+\frac{1}{2}}dw_i.\end{aligned}\ ] ] when the four - body coupling coefficient is ( multi - body anti - ferromagnetic coupling ) , there are eight degenerate ground states , i.e. , and their inverse spin configurations . 
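As an illustration of how coupled amplitude equations of this type are integrated in practice, the sketch below simulates the small MAX-CUT-3 instance discussed above with a deliberately simplified two-body model: one in-phase amplitude per pulse, linear loss, pump gain, cubic saturation, mutual injection and additive Gaussian noise. The normalization, pump rate, coupling strength and noise level are illustrative and do not reproduce the exact coefficients of eqs. ([eq:1 csde])-([eq:6 coupler 1,2]); the quadrature components, the measurement-feedback noise and the four-body variant introduced above are omitted. The four-body results themselves are discussed next.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy instance: MAX-CUT-3 on the complete graph K4 with unit weights
# (anti-ferromagnetic, frustrated), as in the example discussed above
N = 4
J = np.ones((N, N)) - np.eye(N)          # w_ij = 1 for i != j
xi = -0.1 * J                            # injection coupling favouring
                                         # anti-aligned spins (illustrative)

p, dt, noise = 1.1, 0.01, 1e-3           # pump rate, step, noise (illustrative)
c = 1e-3 * rng.standard_normal(N)        # in-phase amplitudes near zero

for step in range(5000):                 # discrete updates (loosely, round trips)
    drift = (-1.0 + p - c**2) * c + xi @ c          # loss/gain, saturation, feedback
    c += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(N)

spins = np.sign(c)                        # read out the Ising spins
cut = sum(J[i, j] * (1 - spins[i] * spins[j]) / 2
          for i in range(N) for j in range(i + 1, N))
print(spins, cut)                         # typically a 2-2 cut of size 4
```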
figure [ fig : amp4 ] shows the time evolution of when and .one of the eight degenerate ground states emerges spontaneously after several tens of round trips .the statistics of observing different states in 1000 independent sessions of the numerical simulation of eq .( [ eq:4bodycsde ] ) are shown in fig .[ fig:4body ] , in which eight degenerate ground states are obtained with no errors found .normalized dopo pulse amplitudes under the interaction between four - body ising coupling expressed by eq .( [ eq:4body]).,width=297 ] distribution of output spin configurations in 1000 trials of numerical simulation against a four - body ising model of .all trials were successful to find one of the eight degenerate ground states.,width=297 ] in this subsection , we will review the four representative approximation algorithms for max - cut problems .the goemans - williamson algorithm ( gw ) based on sdp has a -performance guarantee for np - hard max - cut problems .it achieves the optimal approximation ratio for max - cut problems under the assumptions of and the unique games conjecture .the sdp relaxation of the original max - cut problem is a vector - valued optimization problem as maximizing , where is a unit sphere in and ( or : number of vertices ) .there exist polynomial time algorithms to find the optimal solution of this relaxation problem ( with error ) , and its value is commonly called the sdp upper bound .a final solution to the original max - cut problem is obtained by projecting the solution vector sets to randomly chosen one - dimensional euclidean spaces ( i.e. , dividing the sphere by random hyperplanes ) .there are three types of computational complexities of the best - known algorithms for solving the sdp relaxation problem .if a graph with vertices and edges is regular , the sdp problem can be approximately solved in almost linear time as using the matrix multiplicative weights method , where represents the accuracy of the obtained solution . however , slower algorithms are required for general graphs .if the edge weights of the graph are all non - negative , the fastest algorithm runs in time based on the lagrangian relaxation method . for graphs with both positive and negative edge weights ,the sdp problem is commonly solved using the interior - point method , which scales as .besides , low rank formulation of sdp is effective when the graph is sparse . in our computational experiments ,the copl_sdp based on the interior point method was used as a solver for max - cut problems .the sdp upper bound and the solution were obtained using the following parameters : interior point method was used until the relative gap reached , where and are the objective functions of the primal and dual of the sdp problem , respectively .random projection onto one - dimensional space was executed times . for many practical applications , heuristic algorithms are more convenient to use , since the gw algorithm generally requires long computation time .metropolis et al .introduced a simple algorithm that can be used to provide an efficient simulation of a collection of atoms in equilibrium at a given temperature .kirkpatrick et al .applied the algorithm to optimization problems by replacing the energy of the atomic system to the cost function of optimization problems and using spin configurations , which is called the simulated annealing algorithm ( sa ) . in each step of this algorithm ,a system is given with a random spin flip and the resulting change in the energy is computed . 
if , the spin flip is always accepted , and the configuration with the flipped spin is used as the starting point of the next step . if , the spin is treated probabilistically , i.e. , the probability that the new configuration will be accepted is with a control parameter of system temperature .this choice of results in the system evolving into an equilibrium boltzmann distribution . repeating this procedure , with the temperature gradually lowered to zero in sufficiently long time , leads spins to convergence to the lowest energy state . in practical case , with the finite time, the annealing schedule affects the quality of output values . here in our numerical simulations ,the temperature was lowered according to the logarithmic function .note that 1 monte carlo step corresponds to trials of spin flip .sahni and gonzalez constructed a greedy algorithm for max - cut problems , which has 1/2-performance guarantee , and sg3 is a modified version of it . in this algorithm ,nodes are divided into two disjoint subsets sequentially . for each iterative process, the node with the maximum score is selected , and it is put into either set or so as to earn larger cuts . here , the score function of sg3 is defined as .it stops when all the edges are evaluated to calculate the score function , thus sg3 scales as .the power of breakout local search ( bls ) appears in the benchmark result for g - set graphs .it updated almost half of the best solutions in g - set with the specialized data structure for sorting and dedicated procedure to escape from local minima .the algorithm is combination of steepest descent and forced spin flipping : after being trapped by a local minima as a result of steepest descent procedure , three types of forced spin flipping ( single , pair , and random ) are probabilistically executed according to the vertex influence list ( i.e. , which vertex will increase the number of cut most when it s flipped ) on each subset of partition .these algorithms are coded in c / c++ and run on a single thread of a single core on a linux machine with two 6-core intel xeon x5650 ( 2.67 ghz ) processors and 94 gb ram .the cim is simulated based on the coupled csde ( [ eq:6 coupler 1,2 ] ) on the same machine .note that the computation time of cim does not mean the simulation time on the linux machine but corresponds to the actual evolution time of a physical cim .the performance of a cim with dopo network was tested on the np - hard max - cut problems on sparse graphs , so - called g - set .these test instances were randomly constructed using a machine - independent graph generator written by g. rinaldi , with the number of vertices ranging from 800 to 20000 , edge density from to , and topology from random , almost planar , to toroidal .the output cut values of running the cim , sa , gw , and the best known solutions so far we could find for some of g - set graphs are summarized in table [ opo_sdp ] .the results for cim are obtained in 50 ms , which correspond to the performance of an experimental system after 5000 dopo cavity round trips .the best result and ensemble average value for 100 trials are shown . here ,the parameters are set to be , and the coupling constant is normalized by the square root of the graph average degree . 
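Returning briefly to the annealing procedure described at the start of this subsection, it can be sketched compactly as follows. The acceptance rule is the standard Metropolis choice exp(-dE/T); the logarithmic cooling constants, the number of sweeps and the random test instance are illustrative, not the benchmark settings quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_annealing(J, n_sweeps=2000, t0=1.0):
    """Metropolis simulated annealing for H(s) = sum_{i<j} J_ij s_i s_j.

    Standard acceptance rule exp(-dE/T); temperature lowered on a
    logarithmic schedule T_k = t0 / log(2 + k) (illustrative constants).
    """
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    h = J @ s                               # local fields, updated incrementally
    for k in range(n_sweeps):
        T = t0 / np.log(2.0 + k)
        for _ in range(n):                  # one Monte Carlo step = n flip trials
            i = rng.integers(n)
            dE = -2.0 * s[i] * h[i]         # energy change of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
                h += 2.0 * s[i] * J[:, i]   # refresh the local fields
    return s

# small random weighted instance (illustrative), zero diagonal
n = 50
W = rng.normal(size=(n, n)); W = np.triu(W, 1); W = W + W.T
s = simulated_annealing(W)       # minimizing sum_{i<j} w_ij s_i s_j maximizes the cut
cut = 0.25 * (W.sum() - s @ W @ s)
print(cut)
```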
the hysteretic optimization method , in which the swinging and decaying zeeman term that flips the signal amplitude ( spin ) back and forth , is implemented four times after 10 ms initial free evolution .each hysteretic optimization takes 10 ms so that the total search takes 50 ms . the result of sais also obtained in 50 ms for each graph . for gw ,the computation time ranged between 2.3 s and s , depending on .the best outputs of the cim were better than gw but worse than sa , and cim found better cut against gw except for a toroidal graph ( g50 ) and a disconnected random graph ( g70 ) . [ cols="<,>,>,>,^,^,^,^,^,^",options="header " , ] as the size of optimization problems increases ,the average accuracy is important for practical applications .table [ opo_sdp ] shows that for all g - set graphs , the average accuracy in 100 trials is 0.94148 to the sdp upper bound , i.e. , the cim can find a cut value larger than 0.94148 of the optimal value for the max - cut problems on average , whereas the average accuracy of the gw is 0.93025 and that of the sa is 0.94692 .note that is always greater than or equal to the optimal value for each max - cut problem . in the previous section , the running time of cim and saare fixed to estimate the computational accuracy .these two algorithms explored the solutions as good as possible in 50 ms .although , if we finish the computation at a certain accuracy , more reasonable computation time can be defined . here , the gw solution was used as the mark of sufficient accuracy because it ensures the 87.856% of the ground states .the cim and sa then competed the computation time to reach the same values obtained by gw .the time and temperature scheduling parameters of the sa were set as follows : inverse temperature increased with logarithmic function .the number of spin flipping was optimized to be times for some , which requires the minimum computation time to achieve the same accuracy as with the gw .computational experiments were conducted on fully connected complete graphs , denoted by , where the number of vertices ranging from 40 to 20000 and the edges are randomly weighted .figure [ fig : energy ] shows the ising energy in eq .( [ eq : ising ] ) as a function of running time . both cim and sarun stochastically due to quantum and thermal noise , respectively , the ensemble average of energies are calculated as follows : for the cim , the energy of all 100 runs was averaged at each round trip . for the sa, the averaged energy was calculated at each point on the time axis with an interpolated value from real time sampling .the parameters for cim are chosen to be , ( for ) , and ( for ) . in fig .[ fig : energy ] ( a ) , where , the gw achieved an energy equal to in , while the cim and sa reached the same energy in and in . in fig .[ fig : energy ] ( b ) , the gw achieved an energy of in , while the cim reached the same energy in and the sa did so in .note that this result of sa and gw comes from a specific computer configuration as mentioned in sec .[ sec : algorithm ] .there is room for an improvement in the computation time in constant factor due to cases like using faster cpus or parallelized codes .similarly , the computation time of cim also depends on the system configuration and can be made faster when we use the higher clock frequency . 
in this sense , the ratios between time of cim and that of the other algorithms are arbitrary .thus we should study the computation time scaling as a function of the problem size .figure [ fig : time_scaling ] ( a ) shows the computation time versus problem size ( number of vertices ) . the computation time is defined as the cpu time to solve a given max - cut problem in complete graph for gw ; as the cpu time to reach the same accuracy as gw for sa , sg3 , and bls ; and as the time estimated by the ( number of round trips ) ( cavity round trip time ) to obtain the same accuracy as gw for cim .the preparation time needed to input into the computing system , i.e. , the graph i / o time , is not included . for complete graphs of ,the cim exhibits a problem - size independent computation time of less than if we assume the fixed cavity circulation frequency of 100 khz and pulse interval of 10 cm .this means the target accuracy is obtained in the constant number of round trips .it indicate that the computation time of cim is determined by the turn - on delay time of the dopo network oscillation , which in turn depends on the round trip time and the pump rate . in figure[ fig : time_scaling ] ( b ) , the computation time of the cim with different system clock frequency and pulse spacing are shown ( see the sec .[ sec : spec ] for the definition ) .since the solutions are obtained in a constant number of cavity round trips , the computation time is pulse spacing independent but linearly depends on the clock frequency , i.e. , cavity circulation frequency .the number of pulses accommodated in the fiber can be changed to vary the pulse spacing under the fixed clock frequency . on the other hand ,when the pulse spacing is fixed and the fiber length is varied , the maximum number of pulses should be increased in proportional to the fiber length .going back to the figure [ fig : time_scaling ] ( a ) , the time complexity for the gw is dominated by the interior - point method in the goemans - williamson algorithm .the sa seems to scale in , which indicates that it requires the number of spin flips to be proportional to ( i.e. , constant monte carlo steps ) to achieve the optimal performance .each spin flip costs a computation time proportional to the degree , where is equal to for all in the case of complete graphs .thus , the computation time scales as for the sa in the complete graphs .note that cim and sa did nt always reach the energy obtained by gw for the graph of , half of the 100 runs of stochastic algorithms were post - selected to reach that value .sg3 scales as in fig .[ fig : time_scaling ] ( a ) , but the values for are not shown because it did nt reach the accuracy reached by the gw solution .bls exhibits competitive performance against sa . besides, the dopo amplitudes in cim evolve as in fig .[ fig : tod ] ( a ) when .the distribution of computation time for 100 randomly weighted complete graphs of is also shown in fig .[ fig : tod ] ( b ) .computation time of coherent ising machine , simulated annealing algorithm , and goemans - williamson sdp algorithm on random graphs in g - set instances . the computation time for cim and sais defined as the time to reach the same accuracy achieved by gw ( without i / o time ) .note that the computation time of cim is evaluated by ( number of round trips ) ( 10 ) as in sec .[ sec : spec].,width=297 ] computation time for the random graphs in g - set instances is also studied . 
herethe subset of graphs in which max - cut problems can be solved in polynomial time ( i.e. , planar graphs , weakly bipartite graphs , positive weighted graphs without a long odd cycle , and graphs with integer edge weight bounded by and fixed genus ) are excluded .the execution time of cim is evaluated under the machine spec described in sec .[ sec : spec ] with , , and .again , the computation time of sa and cim is the actual time ( without graph file i / o ) to obtain the same accuracy of solution as gw .figure [ fig : gset ] shows the computation time as functions of the problem size .the computational cost of interior point method dominates the gw algorithm .( note that g - set contains graphs with both positive and negative edge weights so that we must use the slowest interior point method . )then the computation time is almost constant for both sa and cim .the computation time of sa with constant monte carlo step is expected to scale ( here for the random graphs in g - set , ) ) .the computation time of cim here is governed by a turn - on delay time of the dopo network to reach a steady state oscillation condition , which is constant for varying values of as mentioned above .the potential for solving np - hard problems using a cim was numerically studied by conducting computational experiments using the max - cut problems on sparse g - set graphs and fully connected complete graphs of order up to . with the normalized pump rate and coupling coefficient and ,the cim achieved a good approximation rate of 0.94148 on average and found better cut compared to the gw for 69 out of 71 graphs in g - set .the computation time for this sparse graph set , including few sessions of hysteretic optimization , is estimated as ms .the time scaling was also tested on complete graphs of number of vertices up to and number of edges up to .the results imply that cim achieves empirically constant time scaling in a fixed system clock frequency , i.e. , the fixed cavity circulation frequency ( fiber length ) , while sa , sg3 , and bls scale as and gw scales as .those results suggest that cim may find applications in high - speed computation for various combinatorial optimization problems , in particular for temporal networks .the present simulation results do not mean that the cim can get a reasonably accurate solution by a constant time for arbitrary large problem size .as mentioned already , in the cim based on a fiber ring resonator , the number of dopo pulses is determined by the the fiber length and the pulse spacing . in order to implement and dopo pulses in the fiber ring cavity , we must use a pulse repetition frequency to 2 ghz and 20 ghz , respectively .this is a challenge for both optical components and electronic components of cim , but certainly within a reach in current technologies .the authors would like to thank k. inoue , h. takesue , k. aihara , a. marandi , p. mcmahon , t. leleu , s. tamate , k. yan , z. wang , and k. takata for their useful discussions .this project is supported by the impact program of the japanese cabinet office .99 r. m. karp , in _ complexity of computer computations _ , edited by r. e. millera and j. w. thatcher ( plenum , new york , 1972 ) , pp .m. mzard , g. parisi , and m. virasoro , _ spin glass theory and beyond _ ( world scientific , singapore , 1987 ) .d. b. kitchen , h. decornez , j. r. furr , and j. bajorath , nat .drug discov .* 3 * , 935 ( 2004 ) .h. 
nishimori , _ statistical physics of spin glasses and information processing _( oxford university press , oxford , 2001 ) .f. barahona , j. phys .a * 15 * , 3241 ( 1982 ) .m. r. garey and d. s. johnson , _ computers and intractability : a guide to the theory of np - completeness _( freeman , san francisco , 1979 ) . g. i. orlova , y. g. dorfman , eng . cybern . * 10 * , 502 ( 1972 ) .f. hadlock , siam j. comput . * 4 * , 221 ( 1975 ) .m. grtschel and w. r. pulleyblank , oper .lett . , * 1 * , 23 ( 1981 ) .m. grtschel and g. l. nemhauser , math. program ., * 29 * , 28 ( 1984 ) .a. galluccio , m. loebl , and j. vondrk , math . program . , * 90 * , 273 ( 2001 ) .s. arora , c. lund , r. motwani , m. sudan , and m. szegedy , j. acm * 45 * , 501 ( 1998 ) .j. hstad , j. acm * 48 * , 798 ( 2001 ) .m. x. goemans and d. p. williamson , j. acm * 42 * , 1115 ( 1995 ) .s. kirkpatrick , c. d. gelatt jr . , and m. p. vecchi , science * 220 * , 671 ( 1983 ) .t. kadowaki and h. nishimori , phys . rev .e * 58 * , 5355 ( 1998 ) .g. e. santoro , r. martok , e. tosatti , and r. car , science * 295 * , 2427 ( 2002 ) .e. farhi , j. goldstone , s. gutmann , j. lapan , a. lundgren , and d. preda , science * 292 * , 472 ( 2001 ) . w. van dam , m. mosca , and u. v. vazirani , in _ proceedings of the 42nd ieee symposium on foundations of computer science _ ( ieee computer society , los alamitos , ca , 2001 ) , pp .d. aharonov , w. van dam , j. kempe , z. landau , s. lloyd , and o. regev , siam j. comput .* 37 * , 166 ( 2007 ) .s. sahni and t. gonzalez , j. acm * 23 * , 555 ( 1976 ) .s. kahruman , e. kolotoglu , s. butenko , and i. v. hicks , int . j. comput .eng . * 3 * , 211 ( 2007 ) .u. benlic and j .- k .hao , eng .* 26 * , 1162 ( 2013 ) . s. utsunomiya , k. takata , and y. yamamoto , opt .express * 19 * , 18091 ( 2011 ) .k. takata , s. utsunomiya , and y. yamamoto , new j. phys . * 14 * , 013052 ( 2012 ) .k. takata and y. yamamoto , phys .rev . a * 89 * , 032319 ( 2014 ) .s. utsunomiya , n. namekata , k. takata , d. akamatsu , s. inoue , and y. yamamoto , opt .express * 23 * , 6029 ( 2015 ) .z. wang , a. marandi , k. wen , r. l. byer , and y. yamamoto , phys . rev .a * 88 * , 063853 ( 2013 ) .a. marandi , z. wang , k. takata , r. l. byer , and y. yamamoto , nat .photonics * 8 * , 937 ( 2014 ) .p. d. drummond and c. w. gardiner , j. phys .a * 13 * , 2353 ( 1980 ) .k. takata , a. marandi and y. yamamoto , phys .a * 92 * , 043821 ( 2015 ) .k. takata , ph.d .dissertation , the university of tokyo , tokyo ( 2015 ) .p. kinsler and p. d. drummond , phys .a * 43 * , 6194 ( 1991 ) .p. d. drummond , k. j. mcneil , and d. f. walls , opt .acta * 28 * , 211 ( 1980 ) .e. halperin , d. livnat , and u. zwick , in _ proceedings of the thirteenth annual acm - siam symposium on discrete algorithms _( siam , 2002 ) pp . 506513. s. khot , g. kindler , e. mossel , and r. odonnell , siam j. comput . *37 * , 319 ( 2007 ) .s. arora and s. kale , in _ proceedings of the thirty - ninth annual acm symposium on theory of computing _( acm , 2007 ) pp .p. klein and h .-lu , in _ proceedings of the twenty - eighth annual acm symposium on theory of computing _( acm , 1996 ) pp .f. alizadeh , siam j. optim .* 5 * , 13 ( 1995 ) .m. yamashita , k. fujisawa , m. fukuda , k. kobayashi , k. nakata , and m. nakata , in _handbook on semidefinite , conic and polynomial optimization _ , edited by m. f. anjos and j. b. lasserre ( springer , 2012 ) pp .l. grippo , l. palagi , m. piacentini , v. piccialli , and g. rinaldi , math . 
program .* 136 * , 353 ( 2012 ) .k. fujisawa , t. endo , y. yasui , and h. sato , in _ proceedings of the 28th ieee international parallel & distributed processing symposium _( ieee , 2014 ) pp .11711180 . y. ye , `` computational optimization laboratory '' , http://web.stanford.edu/~yyye/col.html ( 1999 ) .s. j. benson , y. ye , and x. zhang , siam j. optim . * 10 * , 443 ( 2000 ) .n. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. h. teller , and e. teller , j.chem . phys . * 21 * , 1087 ( 1953 ) .b. hajek , math .. res . * 13 * , 311 ( 1988 ) . c. helmberg and f. rendl , siam j. optim . *10 * , 673 ( 2000 ) .t. ikuta , h. imai and y. yano , `` solving max - cut benchmark by optimization solvers '' ( in japanese ) , ( rims preprint , kyoto , japan , 2015 ) 1941 - 09 . g. zarnd , f. pazmandi , k. f. pal , and g. t. zimanyi , phys .89 * , 150201 ( 2002 ) .
combinatorial optimization problems are computationally hard in general , yet they are ubiquitous in modern life . a coherent ising machine ( cim ) based on a multiple - pulse degenerate optical parametric oscillator ( dopo ) is an alternative approach that solves such problems on a specialized physical computing system . to evaluate its potential performance , computational experiments are performed on maximum cut ( max - cut ) problems against traditional algorithms such as the semidefinite programming relaxation of goemans - williamson and the simulated annealing of kirkpatrick et al . the numerical results empirically suggest that an almost constant computation time is required to obtain reasonably accurate solutions of max - cut problems on a cim with the number of vertices up to and the number of edges up to .
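to make the classical baseline mentioned above concrete , the following minimal sketch runs simulated annealing on the spin ( ising ) formulation of max - cut , where the cut value is the total weight of edges whose endpoints carry opposite spins . the function names , the geometric cooling schedule , and all parameter values are illustrative assumptions and are not taken from the benchmark setup described above .

....
import math
import random

def maxcut_value(edges, spins):
    # cut value: total weight of edges whose endpoints carry opposite spins
    return sum(w for (i, j, w) in edges if spins[i] != spins[j])

def sa_maxcut(n, edges, sweeps=2000, t0=2.0, t1=0.01, seed=0):
    # plain metropolis annealing on the +/-1 spin formulation of max-cut;
    # the geometric cooling schedule and all parameters are illustrative only
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i, j, w in edges:
        adj[i].append((j, w))
        adj[j].append((i, w))
    best, best_val = list(spins), maxcut_value(edges, spins)
    for sweep in range(sweeps):
        t = t0 * (t1 / t0) ** (sweep / max(1, sweeps - 1))
        for _ in range(n):
            k = rng.randrange(n)
            # gain in cut value if spin k is flipped
            delta = sum(w * (1.0 if spins[k] == spins[j] else -1.0) for j, w in adj[k])
            if delta >= 0 or rng.random() < math.exp(delta / t):
                spins[k] = -spins[k]
        val = maxcut_value(edges, spins)
        if val > best_val:
            best, best_val = list(spins), val
    return best, best_val

# toy usage: a 4-cycle with unit weights has maximum cut 4
print(sa_maxcut(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])[1])
....

for sparse instances such as those in g - set , the same loop applies unchanged ; only the edge list and the annealing schedule would differ .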
astrophysical applications related to the physics of the early universe , as well as challenges posed by the physics programs at new heavy ion accelerators , have triggered a renewed interest in the understanding of real time processes in the context of quantum field theory . with the advent of new computer technology and the recent success of new computational schemes , non - equilibrium phenomena which have been previously studied only in the framework mean - field theory , are now being revisited , and more complex next to leading order approaches are being used in an attempt to clarify the role played by the rescattering mechanism , which is responsible for driving an out of equilibrium system back to equilibrium . of particular interest is the study of the dynamics of phase transitions and particle production following a relativistic heavy - ion collision .one way of approaching this study is based on solving schwinger dyson equations within the closed time path ( ctp ) formulation .this formalism has been recently shown to provide good approximations of the real time evolution of the system both in quantum mechanics and 1 + 1 dimensional classical field theory , where direct comparisons with exact calculations can be performed .the key element in carrying out such studies is related to the calculation of the two - point green function , which is solved for self - consistently with the equations of motion for the fields .the two - point green function gives rise to volterra - type integral or integro - differential equations . in the process of extending our study to encompass a higher number of spatial dimensions , i.e. 2 + 1 and 3 + 1 field theory , we are faced with the challenge of coping with constraints dictated both by storage and time - related computational limits .thus our interest in designing algorithms which feature spectral convergence in order to achieve convergence with minimum storage requirements .in addition , we also desire these algorithms to scale when ported to massively multiprocessor ( mpp ) machines , so that solutions can be obtained in a reasonable amount of time .algorithms for volterra integral and integro - differential equations usually start out at the lower end of the domain , , and march out from , building up the solution as they go .such methods are _ serial _ by nature , and are , in general , not suitable for parallel implementation on a mpp machine .even so , clever approaches to already existing methods can provide algorithms that take advantage of a parallel processing computer : shaw has shown recently that once the starting values of the approximation are obtained , one can design a _ global _approach where successive approximations of the solution over the entire domain ] .this is obviously a serial process and not a good candidate for parallelization .it can be observed however , that once the starting values are obtained , _ all _ approximations with can simultaneously be evaluated up to and including .after that , once a value of corresponding to a new step is established via the predictor - corrector method , all values with can also be evaluated simultaneously .this observation makes the following algorithm possible : 1 .find the starting values with 2 . + add contributions to corresponding to , where 3 . 1 .predict 2 .estimate from 3 .correct 4 . 
+ update by adding the contribution corresponding to the above numerical algorithm is implemented using the openmp style directives for the portland group s pgf77 fortran compiler , and reportedly shows good scalability on a shared - memory multiprocessor .the speedup of the finite difference method is best for a large number of grid points which , correspondingly , gives a better solution approximation .for example , with n=5120 and 4 processors the speedup is 3.86,a good measure of processor utilization .while the preceding algorithm performs well on a shared memory platform , it does not port easily to an mpp machine . before we comment on the efficiency of the algorithm ,let us make two general comments : firstly , we denote by and the time required to perform a floating - point operation and the time required to send a floating - point number , respectively .secondly , we will ignore for simplicity the effect of message sizes on communication costs , and assume throughout that the ratio is independent of .returning now , to our proposed algorithm , we remark that the communication cost for the corresponding implementation involves only the integral terms .even so , using the message - passing interface ( mpi ) protocol the communication cost is for the starting values and up to for the remainder of the algorithm which gives a total of .the total number of flops depends on the specific application but a reasonable measure is the number of function evaluations which is given by .the ratio of communication to computation approaches a _ constant _ value as gets larger .the communication overhead problem can be relaxed by employing a spectral method discussed in the following section , the improvement being especially significant for a multidimensional problem of the type required by our nonequilibrium quantum field theory calculations .consider the extrema of the chebyshev polynomial of the first kind of degree , .this set defines a non - uniform grid in the interval ] , as with eq .( [ eq : f_approx_b ] ) is exact at _ x _ equal to given by eq .( [ eq : tn_max ] ) . based on eq .( [ eq : f_approx_b ] ) , we can also approximate derivatives and integrals as and in matrix format , we have & \approx & \tilde s \\left [ f \right ] \> , \label{eq : beqn_b } \\ \left [ f'(x ) \right ] & \approx & \tilde d \\left [ f \right ] \> , \label{eq : bteqn_b}\end{aligned}\ ] ] the elements of the column matrix ] , so both linear and nonlinear equations are included .we determine the unknown function using a perturbation approach : we start with an initial guess of the solution that satisfies the initial condition , and write with being a variation obeying the initial condition hence , the original problem reduces to finding the perturbation , and improving the initial guess in a iterative fashion .we use the taylor expansion of ] , we can calculate in parallel for .the algorithm is as follows : 1 .calculate = [ y_0 ] + [ \epsilon] ] ; 3 ._ do _ : 1 .master to slave : send ; 2 . slave : compute ; 3 .slave to master : return . regarding the second step , i.e. 
solving the linear system of equations , the best choice is to use the machine specific subroutines , which generally outperform hand - coded solutions .when such subroutines are not available , as in the case of a linux based pc cluster for instance , one can use one of the mpi implementations available on the market .we shall see that the efficiency of the equation solver is critical to the success of the parallel implementation of the chebyshev - expansion approach . in order to illustrate this aspect we perform two calculations , first using a lu factorization algorithm , and secondly using an iterative biconjugate gradient algorithm . these are standard algorithms for solving systems of linear equations , but their impact on the general efficiency of the approach is quite different .figure [ fig : time ] depicts the average cpu time required to complete the calculation for the various methods .figure [ fig : conv ] illustrates the convergence of the two numerical methods .the spectral character of the method based on chebyshev polynomials allows for an excellent representation of the solution for .we base our findings on a criteria , where denotes the sum of all absolute departures of the calculated values from the exact ones , at the grid points . the number of iterations required to achieve the desired accuracy in the chebyshev case is depicted in fig .[ fig : iter ] .the number of iterations becomes flat for , and stays constant ( 17 iterations ) even for very large values of n. the higher number of iterations corresponding to the lower values of n , represents an indication of a insufficient number of chebyshev grid points : the exact solution can not be accurately represented as polynomial of degree n for $ ] .it is interesting to note that for , a reasonable lower domain for the representation of the solution using chebyshev polynomials , the reported cpu time is so small that for our test problem there is no real justification for porting the algorithm to a mpp machine .this situation will change for multi - dimensional problems such as those encountered in our nonequilibrium quantum field theory studies .the lu factorization algorithm is an algorithm of order and consequently , most of the cpu time is spent solving the linear system of equations ( see fig .[ fig : time_lu ] ) . as a consequence ,a parallel implementation of the lu algorithm is very difficult .figure [ fig : scale_lu ] shows how the average cpu time changes with the available number of processors .here we use a very simple mpi implementation of the lu algorithm as presented in reference .even though we could certainly achieve better performance by employing a sophisticated lu equation solver , the results are typical .since the actual size of the matrices involved is small , the communication overhead is overwhelming and the execution time does not scale with the number of processors .fortunately , even for dense matrices and small values of the number of grid points , one can achieve a good parallel efficiency . 
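as a minimal sketch of the spectral machinery discussed above , the following code builds the grid of extrema of the chebyshev polynomial of degree n and approximates the running integral of a sampled function by fitting and then integrating a chebyshev series , in the spirit of el - gendi 's method . it relies on numpy 's chebyshev module ; the helper names and the mapping of the time interval are assumptions for illustration and are not the implementation used in the paper .

....
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_grid(n, a=0.0, b=1.0):
    # extrema of T_n mapped from [-1, 1] to [a, b]: n+1 collocation points
    x = C.chebpts2(n + 1)                 # chebyshev points of the 2nd kind on [-1, 1]
    return 0.5 * (b - a) * (x + 1.0) + a

def spectral_integral(t, f):
    # approximate F(t_i) = integral of f from t[0] to t_i by fitting an
    # interpolating chebyshev series to the samples and integrating it term
    # by term (spectrally accurate for smooth integrands)
    n = len(t) - 1
    a, b = t[0], t[-1]
    x = 2.0 * (t - a) / (b - a) - 1.0     # map back to [-1, 1]
    coeffs = C.chebfit(x, f, n)           # degree-n series through the n+1 points
    icoeffs = C.chebint(coeffs, lbnd=-1.0, scl=0.5 * (b - a))  # scale for the mapping
    return C.chebval(x, icoeffs)

# quick check on a smooth integrand: the running integral of cos equals sin
t = cheb_grid(16, 0.0, 2.0)
err = np.max(np.abs(spectral_integral(t, np.cos(t)) - np.sin(t)))
print(f"max abs error on 17 points: {err:.2e}")   # close to machine precision
....

the same pair of ingredients , a collocation grid and a spectral integration rule , is what enters the matrices that the iterative solver discussed below has to handle .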
by employing an iterative method such as the iterative biconjugate gradient method, one can render the time required to solve the system of linear equations negligible compared with the time required to initialize the relevant matrices , which in turn is only slightly more expensive than the initialization process of the lu factorization algorithm .the initialization process can be parallelized using the algorithm presented above and the results are depicted in fig .[ fig : scale_cg ] .it appears that by using the biconjugate gradient method the efficiency of the parallel code has improved considerably . however , the average cpu time saturates to give an overall speedup of 3.5 .this can be understood by analyzing the computation and communication requirements for our particular problem .the calculation cost to initialize the matrices and is roughly given by the number of floating - point multiplications and additions , while the communication cost is given by . therefore , the ratio of communication to computation is as in the finite - difference case , this ratio approaches a _ constant _ value as gets larger and it becomes apparent that the communication overhead is still a problem . however , multi - dimensional applications such as those presented in require complicated matrix element calculation . in such cases , the process of initializing the matrices and is quite involved , and the ratio of the communication time relative to the computation time becomes favorable .in addition , the matrix becomes sparse and the size of the linear system of equations is substantially larger , thus one can also take advantage of existing parallel implementation of the iterative biconjugate gradient algorithm .such problems benefit heavily from an adequate parallelization of the code .we will discuss such an example in the following section .schwinger , bakshi , mahanthappa , and keldysh have established how to formulate an initial value problem in quantum field theory .the formalism is based on a generating functional , and the evolution of the density matrix requires both a forward evolution from zero to and a backward one from to zero .this involves both positive and negative time ordered operators in the evolution of the observable operators and the introduction of two currents into the path integral for the generating functional .time integrals are then replaced by integrals along the closed time path ( ctp ) in the complex time plane shown in fig .[ fig : ctp ] .we have using the ctp contour , the full closed time path green function for the two point functions is : in terms of the wightman functions , , where the ctp step function is defined by : for complete details of this formalism and various applications , we refer the reader to the original literature , and we confine ourselves to discussing how our chebyshev - expansion approach is applied to the computation of the two - point green function . for simplicitywe consider now the quantum mechanical limit of quantum field theory ( 0 + 1 dimensions ) . 
in this limit, we are generally faced with the problem of numerically finding the solution of equation here , the green functions , and , are symmetric in the sense that , and obey the additional condition the function obeys less stringent symmetries which is always the case when has the form where and satisfy ( [ eq : asym ] ) .we can further write eq .( [ eq : asym ] ) as or hence , a green function is fully determined by the component , with .thus , in order to obtain the solution of eq .( [ eq : dqeqn ] ) , we only need to solve we separate the real and the imaginary part of ( [ eq : dqbig0 ] ) and obtain the system of integral equations the above system of equations must be solved for .the two equations are independent , which allows us to solve first for the real part of , and then use this result to derive the imaginary part of . despite their somewhat unusual form ,the above equations are two - dimensional volterra - like integral equations and our general discussion regarding the chebyshev spectral method applies .we will perform a multi - step implementation of the formalism .let be the grid location corresponding to the collocation point of the interval labelled .then , the discrete correspondent of eq .( [ eq : dqbig0 ] ) is { \mathcal{r}e}\ { q_>(t_i , t_{k[=k_0(n-1 ) + k_1 ] } ) \ } { \mathcal{g}}_>(t_k , t_j ) \nonumber \\ & & - \sum_{k_1=1}^n [ 2 \tilde s_{i_1 k_1 } ] { \mathcal{r}e}\ { q_>(t_i , t_{k[=i_0(n-1)+k_1 ] } ) \ } \ { \mathcal{g}}_>(t_k , t_j ) \nonumber \\ & & + \sum_{k_0=0}^{j_0 - 1 } \sum_{k_1=1}^n [ 2 \tilde s_{n k_1 } ] q_>(t_i , t_{k[=k_0(n-1)+k_1 ] } ) { \mathcal{r}e}\ { { \mathcal{g}}_>(t_k , t_j ) \ } \nonumber \\ & & + \sum_{k_1=1}^n [ 2 \tilde s_{j_1 k_1 } ] q_>(t_i , t_{k[=j_0(n-1)+k_1 ] } ) { \mathcal{r}e}\ { { \mathcal{g}}_>(t_k , t_j ) \ } \ > , \nonumber \label{eq : cheby}\end{aligned}\ ] ] with .we will refer now to figs .[ fig : suma ] and [ fig : sumb ] .equation ( [ eq : cheby ] ) involves values of , for which . in such cases, we use the symmetry , which relates to the values the two - point function located in the domain of interest . for the time interval the size of the linear system of equations we need to solve is - \frac{1}{2}i_0(n-1 ) [ i_0(n-1 ) + 1 ] \\ & & = i_0 ( n-1)^2 + \frac{1}{2 } n(n-1 ) \>,\end{aligned}\ ] ] or of order . in practice, the value of is taken between 16 and 32 .tables [ tab : real ] and [ tab : imag ] summarize the number of floating - point operations performed in order to compute the non - vanishing matrix elements corresponding to a given and .we can now calculate the ratio of communication to computation time , by noticing that the numbers in the tables above get multiplied by n , corresponding to the number of collocation points in each time step and summing over the number of steps , i.e. we evaluate \ + \n \sum_{j_0=1}^{i_0 } \\bigl [ \textrm{if}\ j \le i_0 ( n-1 ) \bigr ] \>.\ ] ] in table [ tab : totals ] we summarize all relevant estimates regarding the computation cost for a fixed value of .in order to estimate the _ total _ communication and computation cost , respectively , these numbers must be multiplied by an additional factor of , corresponding to the number of possible values of in a time step .this factor is not relevant for estimating the communication overhead , but it must be remembered when one infers the sparsity of the corresponding system of equations . 
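the bookkeeping above can be made concrete with a small helper : only the triangle of the two - point function with the second time argument not exceeding the first is stored , the symmetry quoted earlier supplies the remaining values , and the size of the linear system grows with the step index . the indexing convention below is an illustrative assumption , not the code used for the actual calculation .

....
def unknowns_per_step(i0, n):
    # size of the linear system at time step i0 when each step carries n collocation
    # points and only the "lower triangle" of the two-point function is kept;
    # this reproduces the count i0*(n-1)**2 + n*(n-1)/2 quoted in the text
    return i0 * (n - 1) ** 2 + n * (n - 1) // 2

def triangle_index(i, j):
    # packed storage for the pair (t_i, t_j) with j <= i; values with j > i are
    # recovered from the symmetry relation mentioned in the text
    assert j <= i
    return i * (i + 1) // 2 + j

# with n between 16 and 32 collocation points per step, the system grows with i0
for n in (16, 32):
    print(n, [unknowns_per_step(i0, n) for i0 in range(1, 5)])
....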
to conclude , we observe that the communication to computation ratio approaches for large values of . therefore , for this problem the communication overhead is reduced substantially in the later stages of the calculation . in practice , this ratio is actually much better , as we compute the functions and on the fly , and this adds considerably to the computational effort . finally , the sparsity of the resulting systems of equations goes to for large values of and , which supports our choice of an iterative equation solver . we have presented a numerical method suitable for solving non - linear integral and integro - differential equations on a massively multiprocessor machine . our approach is essentially a standard perturbative one , where one calculates corrections to an initial guess of the solution . the initial guess is designed to satisfy the boundary conditions , and the corrections are expanded in a complete basis of n chebyshev polynomials on the grid of ( n+1 ) extrema of , the chebyshev polynomial of the first kind of degree n. the spectral character of the convergence of the chebyshev - expansion approach is the key element in keeping the number of grid points low . from a computational point of view , each iteration involves two stages , namely initializing the relevant matrices and solving the linear system of equations . both stages can be rendered parallel in a suitable manner , and the efficiency of the code increases when applied to complicated multi - step , multi - dimensional problems . the algorithm discussed in this paper represents the backbone of current investigations of the equilibrium and nonequilibrium properties of various phenomenological lagrangians . in particular , we are interested in studying the properties of the chiral phase transition at finite density for a 2 + 1 dimensional four - fermion interaction as well as the dynamics of 2-dimensional qcd , with the ultimate goal of indirectly obtaining insights regarding the time evolution of a quark - gluon plasma produced following a relativistic heavy - ion collision . the work of b.m . was supported in part by the u.s . department of energy , nuclear physics division , under contract no . w-31 - 109-eng-38 . the work of r.s . was supported in part by the natural sciences and engineering research council of canada under grant no . parallel calculations are made possible by grants of time on the parallel computers of the mathematics and computer science division , argonne national laboratory . b.m . would like to acknowledge useful discussions with john dawson and fred cooper . vecchio a 1993 highly stable parallel volterra runge - kutta methods , rapp . tecnico n. 102 , istituto per applicazioni della matematica , consiglio nazionale delle ricerche , via p. castellino 111 , 80131 napoli , italy .
we discuss a numerical algorithm for solving nonlinear integro - differential equations , and illustrate our findings for the particular case of volterra - type equations . the algorithm combines a perturbation approach , which yields a linearized version of the problem , with a spectral method in which the unknown functions are expanded in terms of chebyshev polynomials ( el - gendi 's method ) . this approach is shown to be suitable for the calculation of the two - point green functions required in next - to - leading - order studies of time - dependent quantum field theory .
with the intensive exploration of contemporary theories on unification grammars and feature structures in the last decade , the old image of machine translation ( mt ) as a brutal form of natural language processing has given way to that of a process based on a uniform and reversible architecture .the developers of mt systems based on the constraint - based formalism found a serious problem in `` language mismatching , '' namely , the difference between semantic representations in the source and target languages .attempts to design a pure interlingual mt system were therefore abandoned , and the notion of `` semantic transfer'' came into focus as a practical solution to the problem of handling the language mismatching .the constraint - based formalism seemed promising as a formal definition of transfer , but pure constraints are too rigid to be precisely imposed on target - language sentences .some researchers(e.g . , russell ) introduced the concept of _ defeasible reasoning _ in order to formalize what is missing from a pure constraint - based approach , and control mechanisms for such reasoning have also been proposed . with this additional mechanism, we can formulate the `` transfer '' process as a mapping from a set of constraints into another set of mandatory and defeasible constraints .this idea leads us further to the concept of `` information - based '' mt , which means that , with an appropriate representation scheme , a source sentence can be represented by a set of constraints that it implies and that , given a target sentence , the set of constraints can be divided into three disjoint subsets : * the subset of constraints that is also implied by the target sentence * the subset of constraints that is not implied by , but is consistent with , the translated sentence * the subset of constraints that is violated by the target sentence the target sentence may also imply another set of constraints , none of which is in .that is , the set of constraints implied by the target sentences is a union of and , while .when , we have a _ fully interlingual _translation of the source sentence .if , and , the target sentence is said to be _ under - generated _ , while it is said to be _ over - generated _ when , and . in either case , must be empty if a consistent translation is required .thus , the goal of machine translation is to find an optimal pair of source and target sentences that minimizes , and .intuitively , corresponds to essential information , and and can be viewed as language - dependent supportive information . might be the inconsistency between the assumptions of the source- and target - language speakers . in this paper, we introduce _ tricolor dags _ to represent the above constraints , and discuss how tricolor dags are used for practical mt systems . 
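before turning to the formal machinery , the set - theoretic view of translation sketched above can be illustrated with plain set operations ; the predicate deciding whether a constraint conflicts with the target is hypothetical , and constraints are represented here as labelled triples purely for illustration .

....
def partition_constraints(source, target, conflicts):
    # split the constraints implied by the source sentence with respect to a
    # candidate target sentence; `conflicts` is a hypothetical predicate telling
    # whether a source constraint is inconsistent with the target constraints
    shared = {c for c in source if c in target}            # implied by both sentences
    violated = {c for c in source - shared if conflicts(c, target)}
    consistent = source - shared - violated                # not implied, but compatible
    extra = target - source                                # target-only supportive information
    return shared, consistent, violated, extra

# toy usage with constraints written as labelled triples; a translation is fully
# interlingual when everything except `shared` comes back empty
src = {("wish", "tense", "past"), ("walk", "agent", "john"), ("wish", "agent", "john")}
tgt = {("wish", "tense", "past"), ("wish", "theme", "walk")}
no_conflict = lambda c, t: False        # assume nothing clashes in this toy example
print(partition_constraints(src, tgt, no_conflict))
....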
in particular , we give a generation algorithm that incorporates the notion of semantic transfer by gradually approaching the optimal target sentence through the use of tricolor dags , when a fully interlingual translation fails .tricolor dags give a _graph - algorithmic _ interpretation of the constraints , and the distinctions between the types of constraint mentioned above allow us to adjust the margin between the current and optimal solution effectively .a _ tricolor dag _ ( tdag , for short ) is a rooted , directed , acyclic graph with a set of three colors ( red , yellow , and green ) for nodes and directed arcs .it is used to represent a feature structure of a source or target sentence .each node represents either an atomic value or a root of a dag , and each arc is labeled with a feature name . the only difference between the familiar usage of dags in unification grammars and that of tdags is that the color of a node or arc represents its degree of importance : 1 .red shows that a node ( arc ) is essential .yellow shows that a node ( arc ) may be ignored , but must not be violated .green shows that a node ( arc ) may be violated . for practical reasons ,the above distinctions are interpreted as follows : 1 .red shows that a node ( arc ) is derived from lexicons and grammatical constraints .yellow shows that a node ( arc ) may be inferred from a source or a target sentence by using domain knowledge , common sense , and so on .green shows that a node ( arc ) is defeasibly inferred , specified as a default , or heuristically specified . when all the nodes and arcs of tdags are red , tdags are basically the same as the feature structures of grammar - based translation .a tdag is _ well - formed _iff the following conditions are satisfied : 1 .the root is a red node .2 . each red arc connects two red nodes .3 . each red node is reachable from the root through the red arcs and red nodes .4 . each yellow node is reachable from the root through the arcs and nodes that are red and/or yellow . 5 .each yellow arc connects red and/or yellow nodes .no two arcs start from the same node , and have the same feature name .conditions 1 to 3 require that all the red nodes and red arcs between them make a single , connected dag .condition 4 and 5 state that a defeasible constraint must not be used to derive an imposed constraint . in the rest of this paper , we will consider only well - formed tdags .furthermore , since only the semantic portions of tdags are used for machine translation , we will not discuss syntactic features .the _ subsumption _ relationship among the tdags is defined as the usual subsumption over dags , with the following extensions . * a red node ( arc )subsumes only a red node ( arc ) . * a yellow node ( arc ) subsumes a red node ( arc ) and a yellow node ( arc ) . * a green node ( arc ) subsumes a node ( arc ) with any color .the _ unification _ of tdags is similarly defined .the colors of unified nodes and arcs are specified as follows : * unification of a red node ( arc ) with another node ( arc ) makes a red node ( arc ) . *unification of a yellow node ( arc ) with a yellow or green node ( arc ) makes a yellow node ( arc ) . *unification of two green nodes ( arcs ) makes a green node ( arc ) . 
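the color subsumption and unification rules listed above can be written down compactly ; the following sketch encodes them as a total order on colors , and the handling of conflicting atomic values ( a defeasible green value giving way to a red or yellow one ) reflects our reading of the rules rather than code from the paper .

....
# red = essential, yellow = may be ignored but not violated, green = defeasible
RANK = {"green": 0, "yellow": 1, "red": 2}

def subsumes(a, b):
    # red subsumes only red, yellow subsumes yellow or red, green subsumes anything
    return RANK[a] <= RANK[b]

def unify_colors(a, b):
    # unifying two nodes or arcs keeps the stronger (more essential) color
    return a if RANK[a] >= RANK[b] else b

def unify_atoms(value_a, color_a, value_b, color_b):
    # atomic unification: a clash between two green atoms yields an indefinite
    # (non-atomic) green node, a green atom gives way to a red/yellow one, and a
    # clash between two non-defeasible atoms is a genuine failure
    if value_a == value_b:
        return value_a, unify_colors(color_a, color_b)
    if color_a == "green" and color_b == "green":
        return None, "green"
    if color_a == "green":
        return value_b, color_b
    if color_b == "green":
        return value_a, color_a
    raise ValueError("unification failure: conflicting red/yellow atoms")

print(subsumes("green", "red"), subsumes("red", "yellow"))   # True False
print(unify_atoms("*john", "green", "*person", "red"))       # ('*person', 'red')
....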
since the green nodes and arcs represent defeasible constraints , unification of a green node ( either a root of a tdag or an atomic node ) with a red or yellow node always succeeds , and results in a red or yellow node .when two conflicting green nodes are to be unified , the result is _ indefinite _ , or a single non - atomic green node . now , the problem is that a red node / arc in a _ source tdag _ ( the tdag for a source sentence ) may not always be a red node / arc in the _ target tdag _ ( the tdag for a target sentence ) .for example , the _ functional control _ of the verb `` wish '' in the english sentence .... john wished to walk .... may produce the in figure [ tdag ] , but the red arc corresponding to the _ agent _ of the * walk predicate may not be preserved in a target .this means that the target sentence alone can not convey the information that it is john who wished to walk , even if this information can be understood from the context .hence the red arc is relaxed into a yellow one , and any target tdag must have an agent of * walk that is consistent with * john . this relaxation will help the sentence generator in two ways .first , it can prevent generation failure ( or non - termination in the worst case ) .second , it retains important information for a choosing correct translation of the verb `` walk . ''another example is the problem of identifying _ number _ and _ determiner _ in japanese - to - english translation .this type of information is rarely available from a syntactic representation of a japanese noun phrase , and a set of heuristic rules is the only known basis for making a reasonable guess .even if such contextual processing could be integrated into a logical inference system , the obtained information should be defeasible , and hence should be represented by green nodes and arcs in the tdags .pronoun resolution can be similarly represented by using green nodes and arcs .it is worth looking at the source and target tdags in the opposite direction . from the japanese sentence ,.... john ha aruku koto wo nozonda john + subj walk + nom + obj wished .... we get the source in figure [ tdag ] , where functional control and number information are missing . with the help of contextual processing , we get the target , which can be used to generate the english sentence `` john wished to walk . ''as illustrated in the previous section , it is often the case that we have to solve mismatches between source and target tdags in order to obtain successful translations .syntactic / semantic transfer has been formulated by several researchers as a means of handling situations in which fully interlingual translation does not work .it is not enough , however , to capture only the equivalent relationship between source and target semantic representations : this is merely a mapping among red nodes and arcs in tdags .what is missing in the existing formulation is the provision of some margin between _ what is said _ and _ what is translated . _the semantic transfer in our framework is defined as a set of successive operations on tdags for creating a sequence of tdags , , , such that is a source tdag and is a target tdag that is a successful input to the sentence generator .a powerful contextual processing and a domain knowledge base can be used to infer additional facts and constraints , which correspond to the addition of yellow nodes and arcs .default inheritance , proposed by russell et al. 
, provides an efficient way of obtaining further information necessary for translation , which corresponds to the addition of green nodes and arcs .a set of well - known heuristic rules , which we will describe later in the `` implementation '' section , can also be used to add green nodes and arcs . to complete the model of semantic transfer , we have to introduce a `` painter . ''a _ painter _ maps a red node to either a yellow or a green node , a yellow node to a green node , and so on .it is used to loosen the constraints imposed by the tdags .every application of the painter monotonically loses some information in a tdag , and only a finite number of applications of the painter are possible before the tdag consists entirely of green nodes and arcs except for a red root node .note that the painter never removes a node or an arc from a tdag , it simply weakens the constraints imposed by the nodes and arcs .formally , semantic transfer is defined as a sequence of the following operations on tdags : * addition of a yellow node ( and a yellow arc ) to a given tdag .the node must be connected to a node in the tdag by a yellow arc .* addition of a yellow arc to a given tdag .the arc must connect two red or yellow nodes in the tdag .* addition of a green node ( and a green arc ) to a given tdag .the node must be connected to a node in the tdag by the green arc .* addition of a green arc to a given tdag .the arc can connect two nodes of any color in the tdag .* replacement of a red node ( arc ) with a yellow one , as long as the well - formedness is preserved . *replacement of a yellow node ( arc ) with a green one , as long as the well - formedness is preserved .the first two operations define the logical implications ( possibly with common sense or domain knowledge ) of a given tdag .the next two operations define the defeasible ( or heuristic ) inference from a given tdag .the last two operations define the _painter_. the definition of the painter specifies that it can only gradually relax the constraints .that is , when a red or yellow node ( or arc ) x has other red or yellow nodes that are only connected through x , x can not be `` painted '' until each of the connected red and yellow nodes is painted yellow or green to maintain the reachability through x. in the sentence analysis phase , the first four operations can be applied for obtaining a source tdag as a reasonable semantic interpretation of a sentence .the application of these operations can be controlled by `` weighted abduction'' , default inheritance , and so on .these operations can also be applied at semantic transfer for augmenting the tdag with a common sense knowledge of the target language . 
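before turning to how these operations interact with the generation phase , a minimal sketch of the painter may help : a single node or arc is demoted one color step , and the demotion is undone if the well - formedness conditions listed earlier would break . arcs are keyed by their endpoint pair only ( feature names and condition 6 are not modelled ) , and the graph encoding is an illustrative assumption .

....
# colors and the single-step demotions the painter may attempt
DEMOTE = {"red": "yellow", "yellow": "green"}

def reachable(arcs, allowed, root):
    # nodes reachable from the root using only arcs whose color is in `allowed`
    seen, stack = {root}, [root]
    while stack:
        n = stack.pop()
        for (src, dst), col in arcs.items():
            if src == n and col in allowed and dst not in seen:
                seen.add(dst)
                stack.append(dst)
    return seen

def well_formed(nodes, arcs, root):
    # conditions 1-5 of the text (feature-name uniqueness is not modelled here)
    red = {n for n, c in nodes.items() if c == "red"}
    yellow = {n for n, c in nodes.items() if c == "yellow"}
    if nodes[root] != "red":
        return False
    for (s, d), c in arcs.items():
        if c == "red" and not (s in red and d in red):
            return False
        if c == "yellow" and not (s in red | yellow and d in red | yellow):
            return False
    return red <= reachable(arcs, {"red"}, root) and \
           red | yellow <= reachable(arcs, {"red", "yellow"}, root)

def paint(nodes, arcs, kind, key, root):
    # demote one node or arc by a single color step, undoing the change if it
    # would break well-formedness (the "gradual relaxation" described in the text)
    table = nodes if kind == "node" else arcs
    if table[key] not in DEMOTE:
        return False
    old, table[key] = table[key], DEMOTE[table[key]]
    if well_formed(nodes, arcs, root):
        return True
    table[key] = old
    return False

# toy tdag in the spirit of the wish/walk example: john is red-reachable twice
nodes = {"wish": "red", "walk": "red", "john": "red"}
arcs = {("wish", "walk"): "red", ("wish", "john"): "red", ("walk", "john"): "red"}
print(paint(nodes, arcs, "arc", ("walk", "john"), "wish"))   # True: john stays red-reachable
print(paint(nodes, arcs, "arc", ("wish", "john"), "wish"))   # False: would strand the red node john
....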
on the other hand ,these operations are not applied to a tdag in the generation phase , as we will explain in the next section .this is because the lexicon and grammatical constraints are only applied to determine whether red nodes and arcs are exactly derived .if they are not exactly derived , we will end up with either over- or under - generation beyond the permissible margin .semantic transfer is applied to a source tdag as many times as necessary until a successful generation is made .recall the sample sentence in figure [ tdag ] , where two _ painter _ calls were made to change two red arcs in into yellow ones in .these are examples of the first substitution operation shown above .an addition of a green node and a green arc , followed by an addition of a green arc , was applied to to obtain .these additions are examples of the third and fourth addition operations .before describing the generation algorithm , let us look at the representation of lexicons and grammars for machine translation .a _ lexical rule _ is represented by a set of equations , which introduce red nodes and arcs into a source tdag .a _ phrasal rule _ is similarly defined by a set of equations , which also introduce red nodes and arcs for describing a syntactic head and its complements .for example , if we use shieber s patr - ii notation , the lexical rule for `` wished '' can be represented as follows : v = = wished + = v + = past + = np + = v + = infinitival + = * wish + = + = + = + the last four equations are semantic equations .its tdag representation is shown in figure [ lex ] .it would be more practical to further assume that such a lexical rule is obtained from a type inference system , which makes use of a syntactic class hierarchy so that each lexical class can inherit general properties of its superclasses .similarly , semantic concepts such as * wish and * walk should be separately defined in an ontological hierarchy together with necessary domain knowledge ( e.g. , selectional constraints on case fillers and _ part - of _ relationships .see kbmt-89 . ) a unification grammar is used for both analysis and generation .let us assume that we have two unification grammars for english and japanese . analyzing a sentence yields a source tdag with red nodes and arcs .semantic interpretation resolves possible ambiguity and the resulting tdag may include all kinds of nodes and arcs .for example , the sentence _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the boston office called _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ would give the source tdag in figure [ hobbs ] . by utilizing the domain knowledge , the node labeled * personis introduced into the tdag as a real caller of the action * call , and two arcs representing _ person work - for * office _ and _ office in * boston _ are abductively inferred .our generation algorithm is based on wedekind s dag traversal algorithm for lfg . the algorithm runs with an input tdag by traversing the nodes and arcs that were derived from the lexicon and grammar rules .the termination conditions are as follows : * every red node and arc in the tdag was derived . * no new red node ( arc )is to be introduced into the tdag if there is no corresponding node ( arc ) of any color in the tdag .that is , the generator can change the color of a node ( arc ) to red , but can not add a new node ( arc ) .* for each set of red paths ( i.e. 
, the sequence of red arcs ) that connects the same pair of nodes , the _ reentrancy _ was also derived .these conditions are identical to those of wedekind except that yellow ( or green ) nodes and arcs may or may not be derived .for example , the sentence `` the boston office called '' in figure [ hobbs ] can be translated into japanese by the following sequence of semantic transfer and sentence generation . 1 .apply the painter to change the yellow of the _ definite _ node and the _ def _ arc to green .2 . apply the painter to change the yellow of the _ singular _ node and the _ num _ arc to green .the resulting tdag is shown in figure [ hobbs2 ] .3 . run the sentence generator with an input feature structure , which has a root and an arc _ pred _ connecting to the given tdag .( see the node marked `` 1 '' in figure [ hobbs2 ] . )the generator applies a phrasal rule , say s np vp , which derives the _ subj _ arc connecting to the subject np ( marked `` 2 '' ) , and the _ agent _ arc .the generator applies a phrasal rule , say np mod np , which derives the _ npmod _ arc to the modifier of the np ( marked `` 3 '' ) and the _ mod _ arc .lexical rules are applied and all the semantic nodes , * call , * office , and * boston are derived ..... ; ; run the generator with input f - structure 0 > * j - gg - start called with ( ( pred " yobu " ) ( cat v ) ( vtype v-5dan - b ) ( subcat trans ) ( asp - type shunkan ) ( : mood ( ( pred " " ) ) ) ( aux ( ( pred " " ) ( : time ( ( pred " " ) ) ) ( : passive ( ( pred " " ) ) ) ) ) ( subj ( ( cat n ) ( pred " jimusho " ) ( xadjunct ( ( xcop " deno " ) ( cat n ) ( pred " boston " ) ) ) ) ) ) ... 3 > * j - gg - s called ; ; < start > -> ... - > < s > 4 > * j - gg - xp called with ; ; subj - filler ( ( case ( * or * " ha " " ga " ) ) ( cat n ) ( neg * undefined * ) ( pred " jimusho " ) ( xadjunct ( ( cop - ) ( cat n ) ( pred " boston " ) ) ) ) 5 > * j - gg - np called ; ; head np of subj ... 10 < * gg - n - root returns ; ; np mod " boston " ; ; " boston " 9 > * j - gg - n called ; ; head np 10 < * gg - n - root returns " jimusho " ... 7 < * 9 ( < ss >< np > ) returns ; ; mod+np " boston deno jimusho " ... 5 < * 1 ( < np > < p > ) returns ; ; np+case - marker " boston deno jimusho ha " 4 < * j - gg - xp returns " boston deno jimusho ha " 4 > * j - gg - s called with ; ; vp part 5 > * j - gg - vp called ; ; stem + 6 > * j - gg - v called ; ; function word chains ( ( subj * undefined * ) ( advadjunct * undefined * ) ( ppadjunct * undefined * ) ( : mood * undefined * ) ( aux ( ( : time ( ( pred " " ) ) ) ( : passive ( ( pred ( * or * * undefined * " " ) ) ) ) ( pred " " ) ) ) ( cat v ) ( type final ) ( asp - type shunkan ) ( vtype v-5dan - b ) ( subcat trans ) ( pred " yobu " ) ) 7 > * j - gg - rentai - past called ; ; past - form ... 14 < * gg - v - root returns " yo " ; ; stem ... 6 < * j - gg - v returns " yobi mashita " 5 < * j - gg - vp returns " yobi mashita " 4 < * j - gg - s returns " yobi mashita " 3 < * j - gg - s returns " boston deno jimusho ha yobi mashita " ... 0 < * j - gg - start returns " boston deno jimusho ha yobi mashita " .... 
the annotated sample run of the sentence generator is shown in figure [ gen ] . the input tdag in the sample run is embedded in the input feature structure as a set of pred values , but the semantic arcs are not shown in the figure . the input feature structure has syntactic features that were specified in the lexical rules . the feature value * undefined * is used to show that the node has been traversed by the generator . the basic property of the generation algorithm is as follows : let be a given tdag , be the connected subgraph including all the red nodes and arcs in , and be the connected subgraph of obtained by changing all the colors of the nodes and arcs to red . then , any successful generation with the derived tdag satisfies the condition that subsumes , and subsumes . the proof is immediately obtained from the definition of successful generation and the fact that the generator never introduces a new node or a new arc into an input tdag . the tdags can also be employed by the semantic head - driven generation algorithm while retaining the above property . semantic monotonicity always holds for a tdag , since red nodes must be connected . it has been shown by takeda that semantically non - monotonic representations can also be handled by introducing a _ functional _ semantic class . we have been developing a prototype english - to - japanese mt system , called shalt2 , with a lexicon for a computer - manual domain including about 24,000 lexemes each for english and japanese , and a general lexicon including about 50,000 english words and their translations . a sample set
of 736 sentences was collected from the `` ibm as/400 getting started '' manual , and was tested with the above semantic transfer and generation algorithm .the result of the syntactic analysis by the english parser is mapped to a tdag using a set of semantic equations obtained from the lexicons .we have a very shallow knowledge base for the computer domain , and no logical inference system was used to derive further constraints from the given source sentences .the japanese grammar is similar to the one used in kbmt-89 , which is written in pseudo - unification equations , but we have added several new types of equation for handling coordinated structures .the japanese grammar can generate sentences from all the successful tdags for the sample english sentences .it turned out that there were a few collections of semantic transfer sequences which contributed very strongly to the successful generation .these sequences include * painting the functional control arcs in yellow . * painting the gaps of relative clauses in yellow . * painting the number and definiteness features in yellow . * painting the passivization feature in green .other kinds of semantic transfer are rather idiosyncratic , and are usually triggered by a particular lexical rule .some of the sample sentences used for the translations are as follows : .... make sure you are using the proper edition for the level of the product .yuuzaa ha seihinno reberu ni user + subj product + pos level + for tekisetsuna han wo siyoushite iru proper edition + obj use + prog koto wo tashikamete kudasai + nom + obj confirm +imp publications are not stocked at the address given below .siryou ha ika de teikyousuru publication + subj following + loc provide adoresu ni sutokku sare masen address + loc stock + passive + neg this publication could contain technical inaccuracies or typographical errors .kono siryou ha gijyutsutekina this publication + subj technical huseikakusa aruiha insatsujyouno eraa wo inaccuracy or typographical error + obj fuku me mashita contain + ability + past .... the overall accuracy of the translated sentences was about 63% .the main reason for translation errors was the occurrence of errors in lexical and structural disambiguation by the syntactic / semantic analyzer .we found that the accuracy of semantic transfer and sentence generation was practically acceptable .though there were few serious errors , some occurred when a source tdag had to be completely `` paraphrased '' into a different tdag .for example , the sentence .... let 's get started ..... was very hard to translate into a natural japanese sentence . therefore , a tdag had to be paraphrased into a totally different tdag , which is another important role of semantic transfer .other serious errors were related to the ordering of constituents in the tdag .it might be generally acceptable to assume that the ordering of nodes in a dag is immaterial . however , the different ordering of adjuncts sometimes resulted in a misleading translation , as did the ordering of members in a coordinated structure .these subtle issues have to be taken into account in the framework of semantic transfer and sentence generation .in this paper , we have introduced tricolor dags to represent various degrees of constraint , and defined the notions of semantic transfer and sentence generation as operations on tdags .this approach proved to be so practical that nearly all of the source sentences that were correctly parsed were translated into readily acceptable sentences . 
without semantic transfer , the translated sentences would include greater numbers of incorrectly selected words , or in some cases the generator would simply fail . extension of tdags for disjunctive information and sets of feature structures must be fully incorporated into the framework . currently , only a limited range of cases is implemented . optimal control of semantic transfer is still unknown . integration of the constraint - based formalism , defeasible reasoning , and practical heuristic rules is also important for achieving high - quality translation . the ability to process and represent various levels of knowledge in tdags by using a uniform architecture is desirable , but there appears to be some efficient procedural knowledge that is very hard to represent declaratively . for example , the negative determiner `` no '' modifying a noun phrase in english has to be procedurally transferred into the negation of the verb governing the noun phrase in japanese . translation of `` any '' , `` yet '' , `` only '' , and so on involves similar problems . while tdags reflect three discrete types of constraints , it is possible to generalize the types into continuous numeric values such as _ potential energy _ . this approach would provide a considerably more flexible margin that defines the set of permissible translations , but it is not clear whether we can successfully define a numeric value for each lexical rule in order to obtain acceptable translations . the idea of the tricolor dags grew from discussions with shiho ogino on the design and implementation of the sentence generator . i would also like to thank the members of the nl group : naohiko uramoto , tetsuya nasukawa , hiroshi maruyama , hiroshi nomiyama , hideo watanabe , masayuki morohashi , and taijiro tsutsumi for stimulating comments and discussions that directly and indirectly contributed to shaping the paper . michael mcdonald , who has always been the person i turn to for proofreading , helped me write the final version .
machine translation ( mt ) has recently been formulated in terms of constraint - based knowledge representation and unification theories , but it is becoming more and more evident that it is not possible to design a practical mt system without an adequate method of handling mismatches between semantic representations in the source and target languages . in this paper , we introduce the idea of `` information - based '' mt , which is considerably more flexible than interlingual mt or the conventional transfer - based mt .
our objective is to accurately and precisely measure the quality factor , and resonant frequency of a microwave resonator , using complex transmission coefficient data as a function of frequency .accurate and measurements are needed for high precision cavity perturbation measurements of surface impedance , dielectric constant , magnetic permeability , etc . under realistic experimental conditions ,corruption of the data occurs because of cross - talk between the transmission lines and between coupling structures , the separation between the coupling ports and measurement device , and noise .although there are many methods discussed in the literature for measuring and resonant frequency , we are aware of no treatment of these different methods which quantitatively compares their accuracy or precision under real measurement conditions . in practice , the can vary from 10 to 10 in superconducting cavity perturbation experiments , so that a determination must be robust over many orders of magnitude of . also , it must be possible to accurately determine and in the presence of modest amounts of noise .in this paper we will determine the best methods of evaluating complex transmission coefficient data , i.e. the most precise , accurate , robust in , and robust in the presence of noise .many different methods have been introduced to measure the quality factor and resonant frequency of microwave cavities over the past fifty years .smith chart methods have been used to determine half power points which can be used in conjunction with the value of the resonant frequency to deduce the quality factor of the cavity. in the decay method for determining the quality factor , the fields in the cavity are allowed to build up to equilibrium , the input power is turned off , and the exponential decrease in the power leaving the cavity is measured and fit to determine the quality factor of the cavity. cavity stabilization methods put the cavity in a feedback loop to stabilize an oscillator at the resonant frequency of the cavity. for one port cavities , reflection measurements provide a determination of the half power points and also determine the coupling constant , allowing one to calculate the unloaded . in more recent years , complex transmission coefficient data vs. frequency is found from vector measurements of transmitted signals through the cavity. methods which use this type of data to determine and are the subject of this paper .we have selected seven different methods for determining and from complex transmission coefficient data .we have collected sets of typical data from realistic measurement situations to test all of the and determination methods .we have also created data and added noise to it to measure the accuracy of the methods . in this paperwe consider only random errors and not systematic errors , such as vibrations of the cavity which artificially broaden the resonance. after comparing all of the different methods , we find that the nonlinear least squares fit to the phase vs. frequency and the nonlinear least squares fit of the magnitude of the transmission coefficient to the lorentzian curve are the best methods for determining the resonant frequency and quality factor .the phase vs. frequency fit is the most precise and accurate over many decades of values if the signal - to - noise ratio ( snr ) is high ( snr 65 ) , however the lorentzian fit is more robust for noisier data. 
some of the methods discussed here rely on a circle fit to the complex transmission coefficient data as a step to finding and .we find that by adjusting this fitting we can improve the determination of the quality factor and resonant frequency , particularly for noisy data . in section ii of this paper ,the simple lumped element model for a microwave resonator is reviewed and developed . a description of our particular experimental setupis then given , although the results of this paper apply to any transmission resonator .we then discuss the data collected and generated for use in the method comparison in section iii .section iv outlines all of the methods that are studied in this paper . it should be noted that each method is tested using exactly the same data .the results of the comparison are presented and discussed in section v. possible improvements for some of the methods follow in section vi , and the concluding remarks of the paper are made in the final section .to set the stage for our discussion of the different methods of determining and resonant frequency , we briefly review the simple lumped - element model of an electromagnetic resonator . as a model for an ideal resonator, we use the series rlc circuit ( see inset of fig .[ lorentzian ] ) , defining as the resonant frequency . the quality factor is defined as 2 times the ratio of the total energy stored in the resonator to the energy dissipated per cycle. for the lumped element model in fig .[ lorentzian ] , the quality factor is .the resonator is coupled to transmission lines of impedance by the mutual inductances and . the complex transmission coefficient , ( ratio of the voltage transmitted to the incident voltage ) , as a function of driving frequency , is given in the limit of weak coupling by: the additional assumption that near resonance simplifies the frequency dependence in the denominator resulting in : where is the maximum of the transmission coefficient which occurs at the peak of the resonance : here is the resistance in the circuit model and this expression again is valid in the weak coupling limit . on the far right side of eq .( [ 2 ] ) , and are the coupling coefficients on ports 1 and 2 , respectively, where , with . the magnitude of the complex transmission coefficient is : the plot of vs. frequency forms a lorentzian curve with the resonant frequency located at the position of the maximum magnitude ( fig . [ lorentzian ] ) .a numerical investigation of with and without the simplified denominator assumption leading to eq .( [ 1 ] ) , shows that even for a relatively low ( ) , the difference between the magnitudes is less than half a percent of the magnitude using eq .( [ 0 ] ) . for larger qthe difference is much smaller , so we take this assumption as valid .all of the analysis methods treated in this paper make use of the simplified denominator assumption , as well as all the data we create to test the methods .the plot of the imaginary part of ( eq . ( [ 1]))versus the real part ( with frequency as a parameter ) , forms a circle in canonical position with its center on the real axis ( fig . 
[ circles ] ) .the circle intersects the real axis at two points , at the origin and at the location of the resonant frequency .important alterations to the data occur when we take into account several aspects of the real measurement situation .the first modification arises when considering the cross talk between the cables and/or the coupling structures .this introduces a complex translation , of the center of the circle away from its place on the real axis. secondly , a phase shift is introduced because the coupling ports of the resonator do not necessarily coincide with the plane of the measurement .this effect rotates the circle around the origin ( fig .[ circles]). the corrected complex transmission coefficient , , is then given by : it should be noted that the order in which the translation and rotation are performed is unique. any method of determining and from complex transmission data must effectively deal with the corruption of the data represented by eq . ( [ 4 ] ) .in addition , the method used to determine and must give accurate and precise results even in the presence of noise .this is necessary since , in typical measurements , ranges over several orders of magnitude causing the signal - to - noise ratio ( snr , defined in section iii .c. ) during a single data run to vary significantly .further corruption of the data can occur if there are nearby resonances present , particularly those with lower .this introduces a background variation onto the circles shown in fig .[ circles ] and may interfere with the determination of and . in this paperwe consider only single isolated resonances and refer the reader to an existing treatment of multiple resonances. this section we discuss the data we use for making quantitative comparisons of each method .the data is selected to be representative of that encountered in real measurement situations .each trace consists of 801 frequency points , each of which have an associated real and imaginary part of .two types of data have been used for comparing the methods ; measured data and generated data .the measured data is collected with the network analyzer and cavity described below .the generated data is constructed to look like the measured data , but the underlying and resonant frequency are known exactly .all of the methods discussed in the next section are tested using exactly the same data .complex transmission coefficient vs. frequency data is collected using a superconducting cylindrical niobium cavity submerged in liquid helium at 4.2 k. microwave coupling to the cavity is achieved using magnetic loops located at the end of 0.086 coaxial cables .the loops are introduced into the cavity with controllable position and orientation .the coaxial cables come out of the cryogenic dewar and are then connected to an hp8510c vector network analyzer. the cavity design has recently been modified to allow top - loading of the samples into the cavity .a sample is introduced into the center of the cavity on the end of a sapphire rod .the temperature of the sample can be varied by heating the rod , with a minimal perturbation to the superconducting nb walls .the quality factor of the cavity resonator in the te mode can range from about 2 10 to 1 10 , with a resonant frequency of approximately 9.6 ghz . 
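the lumped - element transmission model and the corruption of eq . ( [ 4 ] ) can be sketched numerically as follows . the functional form used here is the standard weak - coupling , simplified - denominator expression ( a circle through the origin whose diameter equals the peak transmission ) , the order in which the translation and rotation are applied follows our reading of the data - generation procedure , and all numerical values are illustrative .

....
import numpy as np

def s21_ideal(f, f0, q, s21_max):
    # weak-coupling transmission near resonance (simplified denominator):
    # traces a circle of diameter s21_max through the origin of the complex plane
    return s21_max / (1.0 + 2j * q * (f - f0) / f0)

def s21_measured(f, f0, q, s21_max, offset, phase):
    # add the corruptions discussed in the text: a complex translation (cross-talk)
    # and a rotation (coupling ports not coinciding with the measurement plane)
    return (s21_ideal(f, f0, q, s21_max) + offset) * np.exp(1j * phase)

# illustrative numbers, roughly the regime quoted for the superconducting cavity
f0, q = 9.6e9, 5.0e7
f = np.linspace(f0 - 2 * f0 / q, f0 + 2 * f0 / q, 801)   # about four 3 db bandwidths
s = s21_measured(f, f0, q, 0.2, 0.01 + 0.015j, np.pi / 19)
print(abs(s21_ideal(f, f0, q, 0.2)).max())               # ~0.2, the peak transmission
....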
in a typical run with a superconducting crystal , where the temperature varies from 4.2 k to 200 k , decreases by about 10 mhz and changes from about 1 10 to 4 10 .for accurate measurement of the electrodynamic properties of samples , it is important to be able to resolve frequency shifts of the cavity as small as 1 hz at low temperatures .one hundred vs. frequency traces were taken using the network analyzer held at a fixed power and with constant coupling to the cavity .one such data set was made with the source power at dbm ( snr 368 , 9.600242 ghz , 6.39 10 ) , another set was taken with the source power at dbm ( snr 108 , 9.599754 ghz , 6.46 10 ) , a third data set was taken with the source power at dbm ( snr 49 , 9.599754 ghz , 6.50 10 ) .( the approximate values for and are obtained from the phase vs. frequency averages discussed below ) to collect data with a systematic variation of signal - to - noise ratio , we took single traces at a series of different input powers .a power - ramped data set was taken in a cavity where controllable parameters , such as temperature and coupling , were fixed , the only thing that changed was the microwave power input to the cavity .an vs. frequency trace was taken for powers ranging from dbm to dbm , in steps of 0.5 dbm .this corresponds to a change in the signal - to - noise ratio from about 5 to 168 ( 9.603938 ghz , 8.71 10 ) . to check the accuracy of all the methods , we generated data with known characteristics , and added a controlled amount of noise to simulate the measured data .the data was created using the real and imaginary parts of an ideal as a function of frequency eq .( [ 1 ] ) ; where is the diameter of the circle being generated ( see fig . [ circles ] ) , is the quality factor , and is the resonant frequency , which are all fixed .the frequency , is incremented around the resonant frequency to create the circle .there are 400 equally spaced frequency points before and after the resonant frequency , totaling 801 data points .the total span of the generated data is about four 3db bandwidths for all q values . to simulate measured data ,noise was added to the data using gaussian distributed random numbers that were scaled to be a fixed fraction of the radius , of the circle described by the data in the complex plane .the noisy data was then translated and rotated to mimic the effect of cross talk in the cables and coupling structures , and delay ( eq .( [ 4 ] ) ) .a power ramp was simulated by varying the amplitude of the noise added to the circles .a total of 78 vs. frequency traces were created with a variation of the signal - to - noise ratio from about 1 to 2000 ( 9.600 ghz , 1.00 10 , 0.1972,.0877 , 0.2 , /17 ) data with different fixed values were created using the above real and imaginary expressions for .groups of data were created with 100 traces each using : = 10 , 10 , 10 , 10 ( = 9.600 ghz and snr 65 for all sets ) .they include fixed noise amplitude , and were each rotated and translated equal amounts to simulate measured data .( 0.01 , 0.015 , 0.2 , /19 ) the signal - to - noise ratio was found for all data sets by first determining the radius , and center of the circle when plotting the imaginary part of the complex transmission coefficient vs. the real part ( fig . 
[ circles ] ) .next , the distance to each data point ( 1 to 801 ) from the center is calculated from : the signal - to - noise ratio is defined as : in the case of generated data , where the center and radius of the circle are known , the snr is very well defined .however , the snr values are approximate for the measured data because of uncertainties in the determination of the center and radius of the circles .in this section we summarize the basic principles of the leading methods for determining the and resonant frequency from complex transmission coefficient vs. frequency data .further details on implementing these particular methods can be found in the cited references . because we believe that this is the first published description of the inverse mapping technique, we shall discuss it in more detail than the other methods .the resonance curve area and snortland techniques are not widely known , hence a brief review of these methods is also included .the first three methods take the data as it appears and determine the from the estimated bandwidth of the resonance .the last four methods make an attempt to first correct the data for rotation and translation ( eq .( [ 4 ] ) ) , then determine and of the data in canonical position .the 3 db method uses the vs. frequency data ( fig .[ lorentzian ] ) , where .the frequency at maximum magnitude is used as the resonant frequency , .the half power points are determined on either side of the resonant frequency and the difference of those frequency positions is the bandwidth .the quality factor is then given by : because this method relies solely on the discrete data , not a fit , it tends to give poor results as the signal - to - noise ratio decreases . for this method , the vs. frequency data is fit to a lorentzian curve ( eq .( [ 3 ] ) and fig. [ lorentzian ] ) using a nonlinear least squares fit. the resonant frequency , bandwidth , constant background , slope on the background , skew , and maximum magnitude are used as fitting parameters for the lorentzian : the least squares fit is iterated until the change in chi squared is less than 1 part in 10 .the is then calculated using the values of and from the final fit parameters : .this method is substantially more robust in the presence of noise than the 3 db method . for purposes of comparison with other methods, we shall use the simple expressions for and given above , rather than the values modified by the skew parameter . in an attempt to use all of the data , but to minimize the effects of noise in the determination of , the resonance curve area ( rca ) method was developed. in this approach the area under the curve is integrated to arrive at a determination of . 
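a minimal sketch of the 3 db method described above follows; it operates on the discrete |s21| data only, with no fitting, which is why it degrades quickly as the signal-to-noise ratio drops. the half-power search shown is one straightforward implementation, not necessarily the exact one used in the comparison.

```python
import numpy as np

def three_db_method(f, s21):
    # Estimate f0 and Q from |S21| alone, using only the discrete data points.
    mag = np.abs(s21)
    i0 = np.argmax(mag)                 # resonant frequency taken at the peak
    f0 = f[i0]
    half = mag[i0] / np.sqrt(2.0)       # half-power level
    lo, hi = i0, i0
    # walk outward from the peak to the first points below the half-power level
    while lo > 0 and mag[lo] > half:
        lo -= 1
    while hi < len(f) - 1 and mag[hi] > half:
        hi += 1
    bandwidth = f[hi] - f[lo]
    return f0, f0 / bandwidth
```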
in detail, the rca method uses the magnitude data squared , , versus frequency and fits it to a lorentzian peak ( same form as fig .[ lorentzian ] ) : using the resonant frequency , , and the maximum magnitude squared , , as fitting parameters .the bandwidth is a parameter in the lorentzian fit , but is not allowed to vary .this method iterates the lorentzian fit until chi squared changes by less than 1 part in 10 .next , using the fit values from the lorentzian , the squared magnitude is found at two points on the tails of the lorentzian far from the resonant frequency .the area under the data , , from to ( symmetric positions on either side of the resonant frequency ) is found using the trapezoidal rule: here indicates the magnitude squared data point at the frequency , and is the frequency step between consecutive data points .the quality factor is subsequently computed from the area as follows: this is compared to the previously determined one . if changes by more than 1 part in 10 , the lorentzian fit is repeated using as initial guesses for and , the values of and from the previous lorentzian fit , but the fixed value of the bandwidth becomes . with the new returned parameters from the fit , is again computed by eqs .( [ integral ] ) and ( [ 12 ] ) and compared to the previous one , and the cycle continues until convergence on is achieved .this method is claimed to be more robust against noise because it uses all of the data in the integral given in eq .( [ integral]). all of the above methods assume a simple lorentzian - like appearance of the vs. frequency data .however , the translation and rotation of the data described by eq .( [ 4 ] ) can significantly alter the appearance of vs. frequency .in addition , other nearby resonant modes can dramatically alter the appearance of . for these reasons , it is necessary , in general , to correct the measured data to remove the effects of cross - talk , delay , and nearby resonant modes .the remaining methods in the section all address these issues before attempting to calculate the and resonant frequency . the inverse mapping technique , as well as all subsequent methods in this section , make use of the complex data and fit a circle to the plot of vs. ( fig . [ circles ] ) .the details of fits of complex data to a circle have been discussed before by several authors. the data is fit to a circle using a linearized least - squares algorithm . in the circle fit ,the data is weighted by first locating the point midway between the first and last data point ; this is the reference point ( see fig .[ circles ] ) .next , the distance from the reference point to each data point is calculated .a weight is then assigned to each data point ( 1 to 801 ) as : ^2 \label{13}\ ] ] this gives the points closer to the resonant frequency a heavier weight than those further away .the circle fit determines the center and radius of a circle which is a best fit to the data .we now know the center and radius of the circle which has suffered translation and rotation , as described by eq .( [ 4 ] ) . rather than un - rotating and translating the circle back into canonical position, this method uses the angular progression of the measured points around the circle ( as seen from the center ) as a function of frequency to extract the and resonant frequency. 
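since all of the vector methods below start from a fit of the complex data to a circle, a sketch of the weighted linearized least-squares circle fit may be useful. the algebraic (kasa-type) linearization shown is one standard choice, and the reference-point weighting is only a guess at the normalization of eq. (13), which is not fully legible here.

```python
import numpy as np

def weighted_circle_fit(s21, weights):
    # Algebraic least-squares circle fit of complex data: minimize
    # sum_i w_i * (x_i^2 + y_i^2 + a x_i + b y_i + c)^2, then recover center and radius.
    x, y = s21.real, s21.imag
    a_mat = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    w = np.sqrt(weights)
    coeff, *_ = np.linalg.lstsq(a_mat * w[:, None], rhs * w, rcond=None)
    a, b, c = coeff
    center = complex(-a / 2.0, -b / 2.0)
    radius = np.sqrt(a * a / 4.0 + b * b / 4.0 - c)
    return center, radius

def reference_point_weights(s21):
    # Weighting in the spirit of eq. (13): squared distance from the point midway
    # between the first and last data point (normalization guessed).
    ref = 0.5 * (s21[0] + s21[-1])
    return np.abs(s21 - ref) ** 2
```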
three data points are selected from the circle , one randomly chosen near the resonant frequency ( ) , and two others ( and ) randomly selected but approximately 1 bandwidth above and below the resonant frequency ( see fig . [ taberg ] ( b ) ) .figure [ taberg ] ( a ) shows the complex frequency plane with the measurement frequency axis ( ) and the pole of interest at a position - .the conformal mapping defined by : maps the imaginary frequency axis into a circle in canonical position in the plane ( this mapping is obtained from eq .( [ 1 ] ) by rotating the frequency plane by ) . under this transformation , a line passing through the pole in the complex frequency plane ( such as the line connecting the pole and in fig . [ circles ] ( a ) ) will map into a line of equal but opposite slope through the origin in the plane. in addition , because the magnitudes of the slopes are preserved , the angles between points and ( ) , and points and ( ) , in the plane ( fig .[ taberg ] ( b ) ) are exactly the same as those subtended from the pole in the complex frequency plane ( fig .[ taberg ] ( a)). the angles subtended by these three points , as seen from the center of the circle in the plane , define circles in the complex frequency plane which represent the possible locations of the resonance pole ( dashed circles in fig .[ taberg ] ( a)). the intersection of these two circles off of the imaginary frequency axis uniquely locates the resonance pole .the resonant frequency and are directly calculated from the pole position in the complex frequency plane as and .this procedure is repeated many times by again choosing three data points as described above , and the results for and resonant frequency are averaged .we find that the fit of the complex data to a circle is critically important for the quality of all subsequent determinations of and .hence we experimented with different ways of weighting the data to accomplish the circle fit .the modified inverse mapping technique is identical to the previous inverse mapping , except for a difference in the weighting schemes for the fit of the data to a circle ( fig .[ circles ] ) . herethe weighting on each data point , known as the standard weighting , is : \label{15}\ ] ] and is the square root of the weighting in eq .( [ 13 ] ) . other kinds of weighting will be discussed in section vi . in the phase vs. frequencyfit, the complex transmission data is first fit to a circle as discussed above for the inverse mapping technique .in addition , an estimate is made of the rotation angle of the circle .the circle is then rotated and translated so that its center lies at the origin of the plane ( rather than canonical position ) , and an estimation of the resonant frequency is found from the intersection of the circle with the positive real axis ( see fig .[ phase ] inset ) .the phase angle of every data point with respect to the positive real axis is then calculated .next the phase as a function of frequency ( fig . [ phase ] ) , obtained from the ratio of the two parts of eq .( [ 5 ] ) , is fit to this form using a nonlinear least - squares fit:\ ] ] in this equation , the angle at which the resonant frequency occurs , , and are determined from the fit. 
a weighting is used in the fit to emphasize data near the resonant frequency and discount the noisier data far from the resonance which shows little phase variation .again we find that the quality of this fit is sensitive to the method of fitting the original data to a circle .as will be shown below , the main weakness of the inverse mapping and phase versus frequency methods is in the initial circle fit of the complex data . to analyze the frequency dependence of the data , or to bring the circle back into canonical position for further analysis , the center and rotation angle ( eq . [ 4 ] )must be known to very high precision .the snortland method makes use of internal self - consistency checks on the data to make fine adjustments to the center and rotation angle parameters , thus improving the accuracy of any subsequent determination of the resonant frequency and .the snortland method starts with a standard circle fit and phase vs. frequency fit ( fig .[ phase ] ) as discussed above .a self - consistency check is made on the data vs. frequency by making use of the variation of the stored energy in the resonator as the frequency is scanned through resonance . as the resonant frequency is approached from below , the current densities in the resonator increase . beyond the resonant frequency they decrease again .hence a sweep through the resonance is equivalent to an increase and decrease of stored energy in the cavity and power dissipated in the sample . in general, there is a slight nonlinear dependence of the sample resistance and inductance on resonator current .this leads to a resonant frequency and quality factor which are current - level dependent .the generalized expression for a resonator with current - dependent resonant frequency and is where and are the resonant frequency and at the point of maximum current in the resonator , .the and resonant frequency are therefore determined at every frequency point on the resonance curve as }\ ] ] /2q_{\max } ] } \ ] ] if it is assumed that the response of the resonator is non - hysteretic as a function of power , then the up and down power ramps must give consistent values for the and resonant frequency at each current level . if the data is corrupted by a rotation in the plane , the slight nonlinear response of and with respect to field strength causes the plots of and vs. the current level to trace out hysteresis curves. by adjusting the rotation phase angle and parameters , one can make the two legs of the and curves coincide , thereby determining the resonant frequency and more precisely. in practice , the resonant frequency is determined from a fit to the non - linear inductance as a function of resonator current through so that . is determined by making the two legs of the curve overlap .the resulting determination of resonant frequency and quality factor are and , respectively .the values of and obtained by each method for a group of data ( e.g. fixed power or fixed ) are averaged and their standard deviations are determined .these results are used to compare the methods .the accuracy of each method is determined using the generated data since , in those cases , the true values for and are known .the most accurate method is simply the one that yields an average ( , )closest to the actual value ( , ) .the standard deviations ( , ) for the measured data are used as a measure of precision for the methods .the smaller the standard deviation returned , the more precise the method . 
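a sketch of the phase vs. frequency fit follows. the fitted form theta(f) = theta0 + 2 arctan[2q(1 - f/f0)] is the standard expression for a resonance circle centered at the origin, with theta0 absorbing the rotation angle; the sign convention, initial guesses, and the optional weighting are illustrative assumptions rather than the exact choices made in the cited implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_model(f, theta0, q, f0):
    # Phase about the circle center for a resonance circle translated to the origin.
    return theta0 + 2.0 * np.arctan(2.0 * q * (1.0 - f / f0))

def phase_vs_frequency_fit(f, s21, center, weights=None):
    # Fit the unwrapped phase of the re-centered data to extract f0 and Q.
    theta = np.unwrap(np.angle(s21 - center))
    f0_guess = f[np.argmax(np.abs(s21))]            # peak of |S21| as a starting point
    q_guess = 4.0 * f0_guess / (f[-1] - f[0])       # assumes a span of a few bandwidths
    sigma = None if weights is None else 1.0 / np.sqrt(weights)
    popt, _ = curve_fit(phase_model, f, theta,
                        p0=[theta[len(f) // 2], q_guess, f0_guess], sigma=sigma)
    theta0, q, f0 = popt
    return f0, q
```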
to determine the most robust method over a wide dynamic range of and noise , both accuracy and precision are considered .hence the algorithm that is both accurate and precise over varying or noise is deemed the most robust .figures [ fop10 ] and [ qp10 ] show the values of and respectively , resulting from the lorentzian fit ( b ) , the modified inverse mapping technique ( e ) , and the phase vs. frequency fit ( f ) , for the dbm ( snr 108 ) fixed power run . for ,all three methods return values that are very close to each other .this is verified by the ratios of for those methods shown in table [ precision table ] , which shows the normalized ratio ( normalized to the lowest number ) of the standard deviation of and to their average ( , ) returned by each method on identical data . the difference in from trace to trace , seen in fig .[ fop10 ] is due entirely to the particular noise distribution on that trace . on the other hand ,the determinations of are very different for the three methods . from fig .[ qp10 ] , we see that the phase vs. frequency fit is more precise in finding than both the lorentzian fit and the modified inverse mapping technique ( see also table [ precision table ] ) . thus the fixed power data identifies the phase vs. frequency fit as the best .figures [ foprmp ] and [ qprmp ] show the results for and respectively , from the same methods , for the measured power - ramped data sets .the data are plotted vs. the signal - to - noise ratio ( snr ) discussed in section iii .as the snr decreases , the determination of becomes less precise , but as in the case of fixed power , all of the methods return similar ratios for as confirmed by table [ precision table ] .the determination of also becomes less precise as the snr decreases tending to overestimate its value for noisier data .but , from fig .[ qprmp ] , we see that while the modified inverse mapping technique and phase vs. frequency fit give systematically increasing values of as the snr decreases , the lorentzian fit simply jumps around the average value .this implies that for a low snr , the lorentzian fit is a more precise method .table [ precision table ] confirms this statement by showing that the lorentzian fit has the smallest ratio of .we thus conclude that over a wide dynamic range of snr the lorentzian fit is superior , although the phase vs. frequency fit is not significantly worse . from figures [ foprmp ] and [ qprmp ], we see that the determination does not degrade nearly as much as the determination as snr decreases . here , changes by a factor of 2 , while changes by a factor of 300 as snr decreases from 100 to 3 , so the precision in the determination is much greater than that of . the trend of decreasing as the snr increases beyond a value of about 50 in fig .[ qprmp ] is most likely due to the non - linear resistance of the superconducting walls in the cavity .an analysis of generated data power - ramps does not show a decreasing at high snr .the most precise methods over different fixed powers are the nonlinear least squares fit to the phase vs. frequency ( f ) and the lorentzian nonlinear least squares fit ( b ) ( table [ precision table ] ) .they consistently give the smallest ratios of their standard deviation to their average for both and compared to all other methods . at high power ( snr 350 )the phase vs. 
frequency fit is precise to about 3 parts in 10 for the resonant frequency and to 3 parts in 10 for the quality factor , when averaged over about 75 traces .when looking at the generated data with snr 65 , the most accurate method for the determination of the resonant frequency is the phase vs. frequency fit , because it returns an average closest to the true value , or as in table [ accuracy table ] , it has the smallest ratio of the difference between the average and the known value divided by the known value ( , ) .the value returned for the resonant frequency is accurate to about 8 parts in 10 for = 10 , and 1 part in 10 for = 10 when averaged over 100 traces . for the quality factor ,the phase vs. frequency fit ( f ) is most accurate ( table [ accuracy table ] ) , with accuracy to about 1 part in 10 for = 10 , and 1 part in 10 for = 10 when averaged over 100 traces .the method most robust in noise is the lorentzian fit ( see the power ramp columns of both tables ) .it provided values for and that were the most precise and accurate as the signal - to - noise ratio decreased ( particularly for snr 10 ) . over several decades of , the most robust method for the determination of the phase vs. frequency fit , which is precise to about 1 part in 10 when = 10 , and to about 1 part in 10 when = 10 , averaged over 100 traces with snr 65 . for the determination of ,the phase vs. frequency ( f ) is also the most robust , providing precision to 2 parts in 10 when = 10 to 10 averaged over 100 traces .the first three methods discussed above ( 3db , lorentzian fit , and rca method ) can be improved by correcting the data for rotation and translation in the complex plane . all of the remaining methods can be improved by carefully examining the validity of the circle fit .we have observed that by modifying the weighting we can improve the fit to the circle for noisy data , and thereby improve the determination of and .for instance , fig .[ weighting ] shows that the standard weighting ( the weighting from the modified inverse mapping technique ) systematically overestimates the radius of the circle for noisy data .below we discuss several ways to improve these fits . by introducing a radial weighting, we can improve the circle fit substantially ( an example is shown in fig .[ weighting ] ) . for the radial weighting , we first do the standard weighting to extract an estimate for the center of the circle ,which is not strongly corrupted by noise . the radial weighting on each point ( 1 to 801 )is then defined as : which reduces the influence of noisy data points well outside the circle .figure [ radwsnr ] shows a plot of the calculated radius versus the signal - to - noise ratio for the generated power - ramped data set .the figure shows plots of the calculated radius using four different weightings : ( eq .( [ 15 ] ) ) , ( eq . ([ 17 ] ) ) , , and . 
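the radial weighting idea can be prototyped as shown below, reusing the weighted_circle_fit and reference_point_weights helpers from the earlier sketch. the exact functional form of eq. (17) is not legible here, so the inverse-square form is only a plausible stand-in that captures the idea of down-weighting points that fall well outside the fitted circle.

```python
import numpy as np

def radial_weights(s21, center, radius):
    # One plausible radial weighting: down-weight points far outside the fitted
    # circle (the exact exponent of eq. (17) is assumed, not reproduced).
    d = np.abs(s21 - center)
    return (radius / d) ** 2

# two-pass fit: standard weighting for a first estimate, then a radial refit
w0 = reference_point_weights(trace)
c0, r0 = weighted_circle_fit(trace, w0)
c1, r1 = weighted_circle_fit(trace, radial_weights(trace, c0, r0))
```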
from this plot , it is clear that above a snr of about 30 all of the weightings give very similar radius values .however , below that value we see that the radius from the weighting agrees best with the true radius of 0.2 .therefore , by improving the circle fit with a similar weighting scheme , we hope to extract even higher precision and better accuracy from these methods at lower signal - to - noise ratio .in addition to errors in the fit radius of the circle at low snr , there can also be errors in the fit center of the circle .figure [ centerror ] shows the normalized error , : in the calculation of the center of the circle from weightings : , , , and , vs. the snr in log scaling . here is the true center of the circle and is the calculated center from the circle fit . from fig .[ centerror ] , we see that the calculation of the center of the circle is accurate to within 1% for snr 20 and above using any weighting .however , below snr = 10 , all of the weightings give degraded fits .the inset ( b ) of fig .[ centerror ] shows the angle vs. snr , where is the angle between the vector connecting the true and calculated centers , and the vector connecting the true center to the position of the resonant frequency . from this figurewe see that the angle between these vectors approaches as snr decreases , which means that the fit center migrates in the direction away from the resonant frequency as the data becomes noisy .this indicates that the points on the side of the circle opposite from the resonant frequency have a combined weight larger than those points around the resonant frequency , and thus the center is calculated closer to those points . for data with snr greater that about 10 , all weightings give similar results for the circle fits . for data with snrless than 10 , the best circle fit would make an estimate of the radius of the circle by using the square root radial weighting , and an estimate of the center by weighting data near the resonant frequency more heavily .a further refinement of the inverse mapping method would be to fit the data with an arbitrary number of poles and zeroes to take account of multiple resonances in the frequency spectrum. the snortland method was originally developed to analyze non - linear resonances. our use of it for linear low - power resonances was preliminary , and the results probably do not reflect its ultimate performance .further development of this method on linear resonances has the potential to produce results superior to those obtained with the phase vs. frequency method at high snr .we find that the phase versus frequency fit and the lorentzian nonlinear least squares fit are the most reliable procedures for estimating and from complex transmission data .the lorentzian fit of vs. frequency is surprisingly precise , but suffers from poor accuracy relative to vector methods , except for very noisy data .however , a major advantage of vector data is that it allows one to perform corrections to remove cross talk , delay , and nearby resonances , thus significantly improving the quality of subsequent fits . for the fixed - power measured data sets , the phase vs. frequency fit has the highest precision and accuracy in the determination of and making it the best method overall .all of these methods are good for snr greater than about 10 . below this value ,all methods of determining and resonant frequency from complex transmission coefficient data degrade dramatically . concerning robustness , the phase vs. 
frequency fit does well for a dynamic range of , while the lorentzian fit does well in the power - ramp ( snr = 1 to 2000 ) .we also find that significant improvements can be made to the determination of resonant frequency and in noisy situations when careful attention is paid to the circle fitting procedure of the complex data .further development of the inverse mapping and snortland methods can greatly improve the accuracy and precision of resonant frequency and q determination in realistic measurement situations .we thank a. schwartz and b. j. feenstra for their critical reading of the manuscript , and h. j. snortland and r. c. taber for many enlightening discussions .this work is supported by the national science foundation through grant number dmr-9624021 , and the maryland center for superconductivity research ..measurements of relative precision of the seven methods used to determine and from complex transmission data . tabulatedare ratios of the standard deviation to the average values for both resonant frequency ( ) and quality factor ( ) normalized to the best value ( given in parentheses ) , for snr 49 , 368 , and ramped from 5 to 168 .all entries are based on measured data .[ precision table ] [ cols= " < , < , < , < , < , < , < " , ] c. g. montgomery , technique of microwave measurements , mit rad .series , vol .11 ( mcgraw - hill , inc . 1947 ) .l. malter , and g. r. brewer , j. appl .* 20 * , 918 ( 1949 ) .e. l. ginzton , _ microwave measurements _ , ( mcgraw - hill inc . 1957 ). m. sucher , and j. fox , _ handbook of microwave measurements _ , vol .ii , 3rd ed .( john wiley & sons , inc . 1963 ) , chapd. kajfez , and e. j. hwan , ieee trans .microwave theory tech ., * mtt-32 * , 666 ( 1984 ) .e. sun , and s. chao , ieee trans .microwave theory tech .* 43 * , 1983 ( 1995 ) .m. b. barmatz , nasa tech brief , * 19 * , no .12 , item # 16 , ( dec . 1995 ) .h. padamsee , j. knobloch , and t. hays , _ rf superconductivity for accelerators _ , ( john wiley & sons , inc .a. f. harvey , _ microwave engineering _ , ( academic press 1963 ) .s. r. stein , and j. p. turneaure , electronics letters , * 8 * , 321 ( 1972 ) .o. klein , s. donovan , m. dressel , and g. grner , international journal of infrared and millimeter waves , * 14 * , 2423 ( 1993);s .donovan , o. klein , m. dressel , k. holczer , and g. grner , international journal of infrared and millimeter waves , * 14 * , 2459 ( 1993);m .dressel , o. klein , s. donovan , and g. grner , international journal of infrared and millimeter waves , * 14 * , 2489 ( 1993 ) a. n. luiten , a. g. mann , and d. g. blair , meas .technol . ,* 7 * , 949 ( 1996 ) . j. e. aitken , proc .iee , * 123 * , 855 ( 1976 ) .j. r. ashley , and f. m. palka , the microwave journal , 35 ( june 1971 ) .k. watanabe , and i. takao , rev .instrum . ,* 44 * , 1625 ( 1973 ) .v. subramanian , and j. sobhanadri , rev .instrum . ,* 65 * , 453 ( 1994 ) .m. c. sanchez , e. martin , j. m. zamarro , iee proceedings , * 136 * , 147 ( 1989 ) .e. k. moser , and k. naishadham , ieee trans .superconductivity , * 7 * , 2018 ( 1997 ) .z. ma , ph .d. thesis , ( ginzton labs report no .5298 ) , stanford university , ( 1995 ) .k. leong , j. mazierska , and j. krupka , 1997 ieee mtt - s proceedings .j. snortland , ph .d. thesis , ( ginzton labs report no. 5552 ) , stanford university , ( 1997 ) .see also http://loki.stanford.edu/vger.html .we used vger version 2.37 for the analysis performed in this paper .j. mao , ph .d. thesis , university of maryland , ( 1995 ) .s. sridhar , and w. 
l. kennedy , rev .instrum . ,* 59 * , 531 ( 1988 ) . w. h. press , b. p. flannery , s. a. teukolsky , w. t. vetterling , _ numerical recipes _ , ( cambridge university press 1989 ) , pp .105 , 202 - 203 , 523 - 528 .p. r. bevington , _ data reduction and error analysis for the physical sciences _ , ( mcgraw - hill , inc .1969 ) , pp .237 - 240 .t. miura , t. takahashi , and m. kobayashi , ieice trans . electron ., * e77-c * , 900 ( 1994 ) .f. gao , m. v. klein , j. kruse , and m. feng , ieee trans .microwave theory tech ., * 44 * , 944 ( 1996 ) .r. c. taber , hewlett - packard laboratories , private communication .r. v. churchill , and j. w. brown , _ complex variables and applications _ , 5th ed .( mcgraw - hill , inc . , new york )the mapping defined by eq .( [ 14 ] ) maps lines of slope through the pole in the frequency plane to lines of slope through the origin in the plane .note the mapping defined by eq ( [ 14 ] ) is not conformal at the pole because it is not analytic there , however it is an isogonal mapping at that point .see reference 29 .m. h. richardson and d. l. formenti , proc .international modal analysis conference , 167 ( 1982 ) .the factor of 2 in front of the comes from the difference in angle subtended as seen from the origin vs. the center of the circle for a circle in canonical position in the plane ( fig .
precise microwave measurements of sample conductivity , dielectric , and magnetic properties are routinely performed with cavity perturbation measurements . these methods require the accurate determination of quality factor and resonant frequency of microwave resonators . seven different methods to determine the resonant frequency and quality factor from complex transmission coefficient data are discussed and compared to find which is most accurate and precise when tested using identical data . we find that the nonlinear least - squares fit to the phase vs. frequency is the most accurate and precise when the signal - to - noise ratio is greater than 65 . for noisier data , the nonlinear least squares fit to a lorentzian curve is more accurate and precise . the results are general and can be applied to the analysis of many kinds of resonant phenomena .
quantum communication channels are at the heart of quantum information theory . among quantum channels ,the bosonic gaussian channels describe very common physical links , such as the transmission via free space or optical fibers .the fundamental feature of a quantum channel is its capacity , which is the maximal information transmission rate for a given available energy .the capacity can be classical or quantum , depending on whether one sends classical or quantum information ( here , we focus on the former ) . in previous works , it was shown that for certain quantum _ memory _channels , in particular channels with correlated noise , the optimal input symbol state is entangled across successive uses of the channel ; see refs . and references therein . in general , such multimode entangled states may be quite hard to prepare , which motivates the present work . in this paper ,we address the problem of implementing the ( optimal ) input symbol state for gaussian bosonic channels with particular memory models . for this purpose , we study the usefulness of the so - called gaussian matrix - product state ( gmps ) as an input symbol state for the gaussian bosonic channel with additive noise and the lossy gaussian bosonic channel . this translationary - invariant state is heavily entangled and can be generated sequentially , which happens to be crucial for its use as a multimode input symbol state in the transmission via a gaussian bosonic channel .the gmps are known to be a useful resource for quantum teleportation protocols , but , to our knowledge , they have never been considered in the context of quantum channels . in sec .ii , we give an overview of the method used to derive the gaussian capacity of gaussian bosonic memory channels , following our previous work .our original results are presented in sec .iii , where we address the use of gmps in this context . in sec .iii a , we show that the gmps , though not being the optimal input state , is close - to - capacity achieving for gaussian bosonic channels with a markovian and non - markovian noise in a large region of noise correlation strengths . in sec .iii b , we provide a class of noisy channels for which the gmps is the _ exact _ optimal input state .since the gmps is as well the ground state of particular quadratic hamiltonians , this suggests a direct link between the maximization of information transmission in quantum channels and the energy minimization of quantum many - body systems . in sec .iii c , we also observe that the squeezing strengths that are needed to realize the gmps in an optical setup are experimentally feasible . finally , our conclusions are provided in sec .let us now consider an -mode optical channel , which can either be a bosonic additive noise channel or a lossy bosonic channel . in the following , single - mode channel uses will be equivalent to one use of an -mode parallel channel .each mode is associated with the annihilation and creation operators , or equivalently with the pair of quadrature operators and , which obey the canonical commutation relation = i\delta_{ij} ] of any state , along with its covariance matrix ( cm ) } - \bm{j}/2,\\ \textrm{with } \quad \bm{j } & = i \begin{pmatrix}0 & \bm{i}\\-\bm{i } & 0\end{pmatrix } , \end{split}\ ] ] where is the identity matrix . in phase space, a gaussian state is defined as a state having a wigner distribution that is gaussian ; hence , it is fully characterized by its mean and cm . 
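as a small numerical aside, the entropic quantities entering the capacity formulas can be computed directly from the cm. the sketch below builds the cm of a two-mode squeezed vacuum state and evaluates its symplectic eigenvalues and von neumann entropy; it assumes the (x,...,x,p,...,p) quadrature ordering and the vacuum-variance-1/2 convention implied by the definitions above, and the squeezing value is arbitrary.

```python
import numpy as np

def symplectic_eigenvalues(sigma):
    # Symplectic eigenvalues: absolute values of the eigenvalues of i*Omega*sigma
    # (each appears twice; keep one copy of each pair).
    n = sigma.shape[0] // 2
    omega = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
    ev = np.linalg.eigvals(1j * omega @ sigma)
    return np.sort(np.abs(ev))[::2]

def von_neumann_entropy(sigma):
    # Entropy in bits for a Gaussian state with vacuum variance 1/2.
    def g(nu):
        if nu <= 0.5 + 1e-12:
            return 0.0
        return (nu + 0.5) * np.log2(nu + 0.5) - (nu - 0.5) * np.log2(nu - 0.5)
    return sum(g(nu) for nu in symplectic_eigenvalues(sigma))

# two-mode squeezed vacuum in (x1, x2, p1, p2) ordering, squeezing r = 0.8
r = 0.8
c, s = np.cosh(2 * r), np.sinh(2 * r)
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_tms = 0.5 * np.block([[c * np.eye(2) + s * swap, np.zeros((2, 2))],
                            [np.zeros((2, 2)), c * np.eye(2) - s * swap]])
print(symplectic_eigenvalues(sigma_tms))                        # both ~0.5: pure state
print(von_neumann_entropy(sigma_tms[np.ix_([0, 2], [0, 2])]))   # entanglement of mode 1
```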
for the channel encoding ,we consider a continuous alphabet , that is , we encode a complex number instead of a discrete index into each symbol state .we encode a message of length into a -dimensional real vector .physically , this encoding corresponds in phase space to a displacement by of the -partite gaussian input state defined by its mean and cm .the modulation of the multipartite input state is taken as a ( classical ) gaussian multipartite probability density with mean and cm .the means of the input state and classical modulation can be set to zero without loss of generality because displacements leave the entropy invariant ; hence , they do not play any role in the capacity formulas defined in sec .ii b. the action of the channel is thus fully characterized in terms of covariance matrices , that is , where and are the cm of the individual output and modulated output states , respectively . for , eq .defines the bosonic gaussian channel with additive noise , where is the cm of a ( classical ) gaussian multipartite probability density describing noise - induced displacements in phase space ( see ref . for details ) . for and , with a beamsplitter transmittance ] , , , , and the additional condition ensuring that the spectrum corresponds to a quantum state .( solid line ) , gmps - rate ( crosses ) and coherent - state rate ( dashed line ) vs. correlation , where from top to bottom .we took .,scaledwidth=50.0% ] ( solid line ) , gmps - rate ( crosses ) and coherent - state rate ( dashed line ) vs. correlation , where from top to bottom .we took .,scaledwidth=50.0% ] ( solid line ) , gmps - rate ( crosses ) and coherent - state rate ( dashed line ) vs. correlation , where from top to bottom .we took and .,scaledwidth=50.0% ] ( solid line , left axis ) and corresponding squeezing ( dashed line , right axis ) vs. correlation ( or ) for ( a ) the channel with additive markov noise , where the crosses depict ; ( b ) the channel with non - markovian noise ( lossy and additive ) , where the crosses depict .we took for both plots and .,scaledwidth=100.0% ] ( solid line , left axis ) and corresponding squeezing ( dashed line , right axis ) vs. correlation ( or ) for ( a ) the channel with additive markov noise , where the crosses depict ; ( b ) the channel with non - markovian noise ( lossy and additive ) , where the crosses depict .we took for both plots and .,scaledwidth=100.0% ] by comparing the spectrum of eq . with the optimal input spectra [ eq . ] for the noise models of eqs . and, one can directly verify that the optimal input state is not a gmps .however , one may use the gmps as an approximation of the optimal input state for both these noise models . by calculating the transmission rates for noise models [ eqs . and ] with the gmps as input state [ using eq . andreplacing by , we find numerically that the highest transmission rate is achieved for a gmps with _nearest neighbor correlations _ .we find that among all gmps given by eq . , which can be generated with the setup defined in fig .[ fig : gmps ] , only the gmps with nearest neighbor correlations has a symmetric spectrum , that is since the noise spectra defined in sec .[ sec : noises ] satisfy the same symmetry , it is intuitively clear that this type of gmps is the most suitable state for these noise models .the optical setup for the three - mode building block that generates this nearest - neighbor gmps is depicted in fig .[ fig : gmps](b ) .more details on it are provided in sec .iii c. from eq . 
andthe fact that the gmps used as an input is a pure state , i.e. , we find that and .thus , the nearest neighbor correlated gmps has quadrature spectra with the upper ( lower ) sign for the ( ) quadrature .therefore , when looking for the optimal transmission rate , one has to optimize only over the parameter . in order to satisfy the global water filling solution for the gmps, we replace by in eq . , which leads to a modified input energy threshold depending on , that is , .\ ] ] as we require that the input energy , eq .imposes an upper bound on . in figs .[ fig : markrate]-[fig : nonmarklossyrate ] , we plot the rates obtained for the gmps with the spectrum given by eq . calculated via a maximization over , which we denote as . in fig .[ fig : markrate ] , we observe that for the channel with additive markov noise , is close - to - capacity achieving ; in the plotted region , . for the additive channel with non - markovian noise , we conclude from fig .[ fig : nonmarkrate ] that the gmps serves as a very good resource as well ; in the plotted region , .we confirm the same behavior for the lossy channel with non - markovian noise , as shown in fig . [fig : nonmarklossyrate ] for different beamsplitter transmittances .the optimal input correlations for both noise models are approximately given by and , respectively , as can be seen in fig .[ fig : optin](a ) and fig .[ fig : optin](b ) . this can be verified as follows .since the quantum water filling solution holds for the gmps with nearest neighbor correlations , its rate is given by eq .replacing by . in order to find the optimal it is sufficient to minimize only the second term in eq . as only this term depends on .this term is a definite integral of a function whose primitive is not expressed in terms of elementary functions and .however , if the integrand as a function of parameter can be properly minimized for all values of the variable of integration the integral will also be minimized . in order to verify this possibilitywe take the first derivative of the integrand and set it to zero .this leads to the following relation : as it happens in the general case , there is no unique parameter which satisfies eq . for all .nevertheless , it is possible to obtain an approximating equality by neglecting the quadratic and higher order terms in the noise spectra given by eqs . and and in the right - hand side of eq ., i.e. where for the markovian noise and for the non - markovian noise , respectively .this is a valid approximation taking into account that and can be satisfied by a unique parameter for all .namely , we find the simple relations and , as verified in fig .[ fig : optin](a ) and fig .[ fig : optin](b ) , respectively .although we have seen that the gmps is not the optimal input state for the noise models introduced in sec .[ sec : noises ] , it is possible to do better .indeed , for all noises given by where is an matrix that commutes with given in eq . , the gmps is the _ exact _ optimal input state , that is where now trivially .this is a direct result that can be deduced from the shape of the cm and the fact that the cm of the optimal input state ( given by eqs . and )is diagonalized in the same basis as the cm of the noise . 
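the commuting-noise argument can be checked numerically: any symmetric circulant noise matrix commutes with the translationary invariant coupling matrix and is therefore diagonalized in the same (fourier) basis, which is exactly the structure exploited above. the matrices in the sketch below are illustrative stand-ins, not the actual spectra of the noise models considered in this paper.

```python
import numpy as np
from scipy.linalg import circulant

n = 8
# translationary invariant (symmetric circulant) nearest-neighbor coupling matrix
m = circulant([2.0, -1.0] + [0.0] * (n - 3) + [-1.0])
# a symmetric circulant noise covariance built from exponentially decaying correlations
phi = 0.4
noise = circulant([phi ** min(k, n - k) for k in range(n)])

print(np.allclose(m @ noise, noise @ m))        # True: circulant matrices commute

# both matrices are diagonalized by the same orthogonal (Fourier-type) basis
_, u = np.linalg.eigh(m)
rotated = u.T @ noise @ u
print(np.allclose(rotated, np.diag(np.diag(rotated)), atol=1e-10))
```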
furthermore ,as already mentioned , gmps are known to be ground states of particular quadratic hamiltonians .more precisely , is the cm of the ground state of the translationary invariant hamiltonian , given in natural units by where and are the position and momentum operators of an harmonic oscillator at site and the potential matrix is simply given by , where is defined in eq . .a realistic example for a noise of the shape of eq .is given by the cm of the ( gaussian ) state of the system defined in eq ., i.e. , a chain of coupled harmonic oscillators at finite temperature .we assume the system to be described by a canonical ensemble , thus the density matrix of the oscillators is given by the gibbs - state },\ ] ] where .the cm of the gaussian state is given by eq .with ^{-1} ] . therefore ,if we assume the noise of the channel to result from a chain of coupled harmonic oscillators at finite temperature , that is , , then the gmps with cm is both the ground state of the system given by eq . and the _ exact _ optimal input state for .let us finally discuss the required optical squeezing strength to realize the optimal input correlation for both noise models .we first present the mathematical description of the three - mode building block that generates the gmps with nearest neighbor correlations .the cm of this building block is given by with , and , where .the optical scheme for the three mode building block is depicted in fig .[ fig : gmps](b ) , where is a one - mode squeezer with parameter such that .the resulting cm of the -mode pure gmps is given by with , where is the identity matrix , on corresponds to a partial transpose , which however has no effect here as does not contain any correlations . ] where where and , respectively .we observe that the nearest neighbor correlated gmps requires only one squeezing parameter to generate the three - mode building block of fig .[ fig : gmps](b ) .furthermore , we can use finitely entangled tms vacuum states with squeezing . for simplicity ,we set . ] and plot in fig .[ fig : optin ] the squeezing strength needed to generate the optimal input correlation for different noise correlations . for the markov noise , in the plotted regionthe required correlation does not exceed , which can be realized by ( about db squeezing ) . for the non - markovian noise, the required correlation does not exceed , which corresponds to ( about db squeezing ) .this shows that the required squeezing values for the presented setup could be realized with accessible non - linear media for a realistic assumption of noise correlations ( these maximal squeezing values have recently been realized experimentally , see , e.g. , ref .we have demonstrated that a one - dimensional gaussian matrix - product state , a multimode entangled state which can be prepared sequentially , can serve as a very good approximation to the optimal input state for encoding information into gaussian bosonic memory channels .the fact that the gmps can be prepared sequentially is crucial because it makes the channel encoding feasible , progressively in time along with the subsequent uses of the channel . 
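a minimal numerical sketch of the ground-state and gibbs-state covariances invoked in sec. iii b is given below. it uses the standard results, in natural units (hbar = k_b = m = 1), that for h = p.p/2 + x.v.x/2 the position and momentum covariances are v^{-1/2} coth(v^{1/2}/2t)/2 and v^{1/2} coth(v^{1/2}/2t)/2, reducing to the ground-state (gmps) covariances as t -> 0; the chosen coupling strength and temperature are arbitrary assumptions for illustration.

```python
import numpy as np
from scipy.linalg import circulant

def matrix_func(v, func):
    # Apply a scalar function to the eigenvalues of a symmetric matrix.
    w, u = np.linalg.eigh(v)
    return (u * func(w)) @ u.T

def chain_covariances(v, temperature=0.0):
    # Position/momentum covariances of H = p.p/2 + x.V.x/2 (hbar = kB = m = 1).
    # temperature = 0 gives the ground-state CM; finite temperature gives the
    # Gibbs-state CM used here as the correlated-noise model.
    if temperature == 0.0:
        occ = lambda w: np.ones_like(w)
    else:
        occ = lambda w: 1.0 / np.tanh(np.sqrt(w) / (2.0 * temperature))
    sigma_x = matrix_func(v, lambda w: 0.5 * occ(w) / np.sqrt(w))
    sigma_p = matrix_func(v, lambda w: 0.5 * occ(w) * np.sqrt(w))
    return sigma_x, sigma_p

# translationary invariant chain with nearest-neighbor coupling (illustrative values)
n, coupling = 10, 0.3
v = circulant([1.0, -coupling] + [0.0] * (n - 3) + [-coupling])
ground_x, ground_p = chain_covariances(v)                    # GMPS covariances
noise_x, noise_p = chain_covariances(v, temperature=0.5)     # Gibbs-state noise
```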
for the analyzed channels and noise models , the gmps achieves more than 99.9% of the gaussian capacity and may be experimentally realizable as the required squeezing strengths are achievable within present technology .furthermore , we have introduced a class of channel noises , originating from a chain of coupled harmonic oscillators at finite temperature , for which the gmps is the _ exact _ optimal multimode input state .given that gmps are ground states of particular quadratic hamiltonians , our findings could serve as a starting point to find useful connections between quantum information theory and quantum statistical physics .j.s . is grateful to antonio acín , jens eisert , and alessandro ferraro for helpful discussions , and acknowledges financial support from the belgian fria foundation .the authors acknowledge financial support from the belgian federal government via the iap research network photonics , from the brussels capital region under project cryptasc , and from the f.r.s .- fnrs under project hipercom .n. schuch , j. i. cirac , and m. m. wolf , _ proc . of quantum information and many body quantum systems _ , edited by m. ericsson and s. montangero , vol . 5 , ( edizioni della normale , pisa , 2008 ) pp . 129 - 142 ; e - print arxiv : quant - ph/0509166v2
the communication capacity of gaussian bosonic channels with memory has recently attracted much interest . here , we investigate a method to prepare the multimode entangled input symbol states for encoding classical information into these channels . in particular , we study the usefulness of a gaussian matrix - product state ( gmps ) as an input symbol state , which can be sequentially generated although it remains heavily entangled for an arbitrary number of modes . we show that the gmps can achieve more than 99.9% of the gaussian capacity for gaussian bosonic memory channels with a markovian or non - markovian correlated noise model in a large range of noise correlation strengths . furthermore , we present a noise class for which the gmps is the _ exact _ optimal input symbol state of the corresponding channel . since gmps are ground states of particular quadratic hamiltonians , our results suggest a possible link between the theory of quantum communication channels and quantum many - body physics .
many problems in computational science and engineering require the solution of large , dense linear systems .standard direct methods based on gaussian elimination , of course , require work , where is the system size .this quickly becomes infeasible as increases . as a result ,such systems are typically solved iteratively , combining gmres , bi - cgstab or some other iterative scheme with fast algorithms to apply the system matrix , when available . for the integral equations of classical physics ,this combination has led to some of the fastest solvers known today , with dramatically lower complexity estimates of the order or ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* and references therein ) .despite their tremendous success , however , iterative methods still have several significant disadvantages when compared with their direct counterparts : _ the number of iterations required by an iterative solver is highly sensitive to the conditioning of the system matrix . _ill - conditioning arises , for example , in the solution of problems near resonance ( particularly in the high frequency regime ) , in geometries with `` close - to - touching '' interactions , in multi - component physics models with large contrasts in material properties , etc . under these circumstances, the solution time can be far greater than expected .direct methods , by contrast , are robust in the sense that their solution time does not degrade with conditioning .thus , they are often preferred in production environments , where reliability of the solver and predictability of the solution time are important ._ one often wishes to solve a linear system governed by a fixed matrix with multiple right - hand sides ._ this occurs , for example , in scattering problems , in optimization , and in the modeling of time - dependent processes in fixed geometry .most iterative methods are unable to effectively exploit the fact that the system matrix is the same , and simply treat each right - hand side as a new problem .direct methods , on the other hand , are extremely efficient in this regard : once the system matrix has been factored , the matrix inverse can be applied to each right - hand side at a much lower cost ._ one often wishes to solve problems when the system matrix is altered by a low - rank modification . _standard iterative methods do a poor job of exploiting this fact .direct methods , on the other hand , can update the factorization of the original matrix using the sherman - morrison - woodbury formula or use the existing factorization as a preconditioner . in this paper , we present an algorithm for the solution of structured linear systems that overcomes these deficiencies , while remaining competitive with modern fast iterative solvers in many practical situations .the algorithm directly constructs a compressed ( `` data - sparse '' ) representation of the system matrix inverse , assuming only that the matrix has a block low - rank structure similar to that utilized by fast matrix - vector product techniques like the fast multipole method ( fmm ) . 
such matrices typically arise from the discretization of integral equations , where the low - rank structure can be understood in terms of far - field interactions between clusters of points , but the procedure is general and makes no _ a priori _ assumptions about rank .our scheme is a multilevel extension of the work described in , which itself is based on the fast direct multilevel method developed for 2d boundary integral equations by martinsson and rokhlin .while we do not seek to review the literature on fast direct solvers here , it is worth noting that similar efforts have been ( and continue to be ) pursued by various groups , most notably in the context of hierarchically semiseparable ( hss ) matrices and -matrices . a short historical discussion can be found in as well as in the recent article by gillman _. _ .the latter paper makes several improvements on the algorithm of , and presents a simple framework for understanding , implementing , and analyzing schemes for inverting integral equations on curves ( that is , domains parametrized by a single variable ) .planar domains with corners were treated recently in .applications to electromagnetic wave problems were considered in . finally , it should be noted that gillman s dissertation includes 3d experiments that also extend the martinsson - rokhlin formalism to the case of integral equations on surfaces .the present paper provides a mix of analysis , algorithmic work , and applications .the novelty of our contribution lies : in the use of compression and auxilliary variables to embed an approximation of the original dense matrix into a sparse matrix framework that can make use of standard and well - developed sparse matrix technology ; in providing detailed numerical experiments in both 2d and 3d ; and in demonstrating the utility of fast direct solvers in several applications .we believe that the scheme is substantially simpler to implement than prior schemes and that it leads to a more stable solution process . as in previous schemes ( see , e.g. , ) , the core algorithm in our work computes a compressed matrix representation using the interpolative decomposition ( i d ) via a multilevel procedure that we refer to as _recursive skeletonization_. once obtained , the compressed representation serves as a platform for fast matrix algebra including matrix - vector multiplication and matrix inversion . in its former capacity, the algorithm may be viewed as a generalized or kernel - independent fmm ; we explore this application in [ sec : numerical - examples ] . for matrix inversion ,we show how to embed the compressed representation in an equivalent ( but larger ) sparse system , much in the style of .we then use a state - of - the - art sparse matrix solver to do the rest .we are grateful to david bindel for initially suggesting an investigation of the sparse matrix formalism and rely in this paper on the sparse direct solver software umfpack . as in dense lu factorization ,the direct solver is a two - phase process .first , following the generation of the compressed matrix embedding , a factored representation of the inverse is constructed .second , in the solution phase , the matrix inverse is applied in a rapid manner to a specified right - hand side . as expected, the solution phase is very inexpensive , often beating a single fmm call by several orders of magnitude . for boundary integral equations without highly oscillatory kernels , e.g. 
, the green s function for the laplace or low - frequency helmholtz equation , both phases typically have complexity in 2d . in 3d ,the complexities in our current implementation are and for precomputation ( compression and factorization ) and solution , respectively .the remainder of this paper is organized as follows . in [ sec : preliminaries ] , we define the matrix structure of interest and review certain aspects of the i d . in [ sec : algorithm ] , we review the recursive skeletonization algorithm for matrix compression and describe the new formalism for embedding the compressed matrix in a sparse format . in [ sec : complexity - analysis ] , we study the complexity of the algorithm for non - oscillatory problems , while in [ sec : error - analysis ] , we give error estimates for applying a compressed matrix and its inverse . in [ sec : numerical - examples ] , we demonstrate the efficiency and generality of our scheme by reporting numerical results from its use as a generalized fmm , as a direct solver , and as an accelerator for molecular electrostatics and scattering problems . finally , in [ sec : generalizations - conclusions ] , we summarize our findings and discuss future work .in this section , we discuss the precise matrix structure that makes our fast solver possible . for this ,let be a matrix whose index vector is grouped into contiguous blocks of elements each , where : then the linear system can be written in the form where and .solution of the full linear system by classical gaussian elimination is well - known to require work .the matrix a is said to be _ block separable _ if each off - diagonal submatrix can be decomposed as the product of three low - rank matrices : where , , and , with .note that in ( [ eq : block separable ] ) , the left matrix depends only on the index and the right matrix depends only on the index .we will see how such a factorization arises below .the term _ block separable _ was introduced in , and is closely related to that of semiseparable matrices and -matrices . in ,the term _ structured _ was used , but block separable is somewhat more informative . off - diagonal block row _ of is the submatrix $ ] consisting of the block row of with the diagonal block deleted ; the _ off - diagonal block columns _ of are defined analogously . 
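the low-rank property of off-diagonal block rows is easy to verify numerically for a smooth kernel. the sketch below uses a one-dimensional logarithmic kernel purely as an illustration and reports the numerical rank, at a fixed tolerance, of one off-diagonal block row; the sizes and tolerance are arbitrary choices.

```python
import numpy as np

def numerical_rank(block, tol=1e-9):
    # Rank of a block to relative precision tol, from its singular values.
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# 1D logarithmic kernel; points are ordered so each block is a contiguous interval
n, p = 1024, 8
x = np.linspace(0.0, 1.0, n)
a = np.log(np.abs(x[:, None] - x[None, :]) + np.eye(n))   # identity only regularizes the diagonal

block_size = n // p
row = a[:block_size, block_size:]                          # off-diagonal block row of block 1
print(row.shape, numerical_rank(row))                      # rank far below min(128, 896)
```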
clearly , the block separability condition ( [ eq : block separable ] ) is equivalent to requiring that the off - diagonal block row and column have rank and , respectively , for ( see [ sec : algorithm ] for details ) .when is block separable , it can be written as where \in \mathbb{c}^{n \times n}\ ] ] is block diagonal , consisting of the diagonal blocks of , \in \mathbb{c}^{n \times k_{{\mathrm{r } } } } , \qquad r = \left [ \begin{array}{ccc } r_{1}\\ & \ddots\\ & & r_{p } \end{array } \right ] \in \mathbb{c}^{k_{{\mathrm{c } } } \times n}\ ] ] are block diagonal , where and , and \in \mathbb{c}^{k_{{\mathrm{r } } } \times k_{{\mathrm{c}}}}\ ] ] is dense with zero diagonal blocks .it is convenient to let and .we can then write the original system in the form \left [ \begin{array}{c } \mathbf{x}\\ \mathbf{y}\\ \mathbf{z } \end{array } \right ] = \left [ \begin{array}{c } \mathbf{b}\\ \mathbf{0}\\ \mathbf{0 } \end{array } \right ] .\label{eq : sparse - embedding}\ ] ] this system is highly structured and sparse , and can be efficiently factored using standard techniques .if we assume that each block corresponds to unknowns and that the ranks of the off - diagonal blocks are all the same , it is straightforward to see that a scheme based on ( [ eq : compressed - representation ] ) or ( [ eq : sparse - embedding ] ) requires an amount of work of the order . in many contexts ( including integral equations ) , the notion of block separability is applicable on a hierarchy of subdivisions of the index vector .that is to say , a decomposition of the form ( [ eq : compressed - representation ] ) can be constructed at each level of the hierarchy .when a matrix has this structure , much more powerful solvers can be developed , but they will require some additional ideas ( and notation ) .our treatment in this section follows that of .let be the index vector of a matrix .we assume that a tree structure is imposed on which is levels deep . at level ,we assume that there are nodes , with each such node corresponding to a contiguous subsequence of such that we denote the _ finest level _ as level and the coarsest level as level ( which consists of a single block ) .each node at level has a finite number of children at level whose concatenation yields the indices in ( fig .[ fig : tree - struct ] ) . .at each level of the hierarchy , a contiguous block of indices is divided into a set of children , each of which corresponds to a contiguous subset of indices . ]the matrix is _ hierarchically block separable _ if it is block separable at each level of the hierarchy defined by .in other words , it is structured in the sense of the present paper if , on each level of , the off - diagonal block rows and columns are low - rank ( fig .[ fig : mat - struct ] ) .such matrices arise , for example , when discretizing integral equations with non - oscillatory kernels ( up to a specified precision ) . _example 1_. consider the integral operator where is the green s function for the 2d laplace equation , and the domain of integration is a square in the plane .this is a 2d _ volume integral operator_. suppose now that we discretize ( [ eq : integral - operator ] ) on a grid : ( this is not a high - order quadrature but that is really a separate issue . 
) let us superimpose on a quadtree of depth , where is the root node ( level ) .level is obtained from level by subdividing the box into four equal squares and reordering the points so that each child holds a contiguous set of indices .this procedure is carried out until level is reached , reordering the nodes at each level so that the points contained in every node at every level correspond to a contiguous set of indices .it is clear that , with this ordering , the matrix corresponding to ( [ eq : integral - operator - discret ] ) is hierarchically block separable , since the interactions between nonadjacent boxes at every level are low - rank to any specified precision ( from standard multipole estimates ) .adjacent boxes are low - rank for a more subtle reason ( see [ sec : complexity - analysis ] and fig . [fig : recur - subdiv ] ) . _example 2_. suppose now that we wish to solve an interior dirichlet problem for the laplace equation in a simply connected 3d domain with boundary : potential theory suggests that we seek a solution in the form of a double - layer potential where is the green s function for the 3d laplace equation , is the unit outer normal at , and is an unknown surface density .letting approach the boundary , this gives rise to the second - kind fredholm equation using a standard collocation scheme based on piecewise constant densities over a triangulated surface , we enclose in a box and bin sort the triangle centroids using an octree where , as in the previous example , we reorder the nodes so that each box in the hierarchy contains contiguous indices .it can be shown that the resulting matrix is also hierarchically block separable ( see [ sec : complexity - analysis ] and ) .we turn now to a discussion of the i d , the compression algorithm that we will use to compute low - rank approximations of off - diagonal blocks .a useful feature of the i d is that it is able to compute the rank of a matrix on the fly , since the exact ranks of the blocks are difficult to ascertain _ a priori_that is to say , the i d is _ rank - revealing_. many decompositions exist for low - rank matrix approximation , including the singular value decomposition , which is well - known to be optimal . here, we consider instead the i d , which produces a near - optimal representation that is more useful for our purposes as it permits an efficient scheme for multilevel compression when used in a hierarchical setting .let be a matrix , and the matrix -norm .a rank- approximation of in the form of an _ interpolative decomposition ( i d ) _ is a representation , where , whose columns constitute a subset of the columns of , and , a subset of whose columns makes up the identity matrix , such that is small and , where is the ( )st greatest singular value of .we call and the _ skeleton _ and _ projection matrices _ , respectively .clearly , the i d compresses the column space of ; to compress the row space , simply apply the i d to , which produces an analogous representation , where and .the row indices that corrrespond to the retained rows in the i d are called the _ row _ or _ incoming skeletons_. the column indices that corrrespond to the retained columns in the i d are called the _ column _ or _ outgoing skeletons_. reasonably efficient schemes for constructing an i d exist . by combining such schemes with methods for estimating the approximation error, we can compute an i d to any relative precision by adaptively determining the required rank .this is the sense in which we will use the i d . 
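as a concrete point of reference, an id can be built deterministically from a column-pivoted qr factorization; the sketch below (our own helper, applied to an assumed toy kernel block) produces exactly the skeleton/projection pair described above, while the actual implementation relies on the faster randomized machinery discussed next.

```python
import numpy as np
from scipy.linalg import qr

def interp_decomp(A, eps):
    """return skeleton column indices idx and projection P with A ~ A[:, idx] @ P.

    a simple deterministic id via column-pivoted qr; the rank is chosen
    adaptively from the decay of the diagonal of the triangular factor.
    """
    Q, R, piv = qr(A, mode="economic", pivoting=True)
    diag = np.abs(np.diag(R))
    k = max(1, int(np.sum(diag > eps * diag[0])))      # adaptively chosen rank
    T = np.linalg.solve(R[:k, :k], R[:k, k:])          # interpolation coefficients
    P = np.zeros((k, A.shape[1]))
    P[:, piv[:k]] = np.eye(k)                          # identity on the skeleton columns
    P[:, piv[k:]] = T
    return piv[:k], P

# assumed toy example: a numerically low-rank kernel block between two
# well-separated point clusters
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(300, 2))
y = rng.uniform(2.0, 3.0, size=(200, 2))
A = -np.log(np.linalg.norm(x[:, None] - y[None, :], axis=2)) / (2 * np.pi)

idx, P = interp_decomp(A, eps=1e-10)
err = np.linalg.norm(A - A[:, idx] @ P) / np.linalg.norm(A)
print(f"kept {len(idx)} of {A.shape[1]} columns, relative error {err:.2e}")
```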
while previous related work used the deterministic algorithm of , we employ here the latest compression technology based on random sampling , which typically requires only operations .in this section , we first describe the `` standard '' id - based fast multilevel matrix compression algorithm ( as in ) .the hss and -matrix formalisms use the same underlying philosophy .we then describe our new inversion scheme .let be a matrix with blocks , structured in the sense of [ sec : preliminaries : matrix - structure ] , and a target relative precision .we first outline a one - level matrix compression scheme : for , use the i d to compress the row space of the off - diagonal block row to precision .let denote the corresponding row projection matrix .similarly , for , use the i d to compress the column space of the off - diagonal block column to precision .let denote the corresponding column projection matrix .approximate the off - diagonal blocks of by for , where is the submatrix of defined by the row and column skeletons associated with and , respectively .this yields precisely the matrix structure discussed in [ sec : preliminaries ] , following ( [ eq : block separable ] ) .the one - level scheme is illustrated graphically in fig .[ fig : mat - comp ] .the multilevel algorithm is now just a simple extension based on the observation that by ascending one level in the index tree and regrouping blocks accordingly , we can compress the skeleton matrix in ( [ eq : compressed - representation ] ) in exactly the same form , leading to a procedure that we naturally call _ recursive skeletonization _ ( fig .[ fig : multi - comp ] ) . the full algorithm may be specified as follows : starting at the leaves of the tree , extract the diagonal blocks and perform one level of compression of the off - diagonal blocks .move up one level in the tree and regroup the matrix blocks according to the tree structure .terminate if the new level is the root ; otherwise , extract the diagonal blocks , recompress the off - diagonal blocks , and repeat . the result is a telescoping representation r^{\left ( 1 \right ) } , \label{eq : multilevel - compression}\ ] ] where the superscript indexes the compression level ._ example 3 ._ as a demonstration of the multilevel compression technique , consider the matrix defined by points uniformly distributed in the unit square , interacting via the 2d laplace green s function ( [ eq:2d - laplace ] ) and sorted according to a quadtree ordering . the sequence of skeletons remaining after each level of compression to is shown in fig .[ fig : sparsify ] , from which we see that compression creates a _ sparsification _ of the sources which , in a geometric setting , leaves skeletons along the boundaries of each block. points in the unit square are compressed to relative precision using a five - level quadtree - based scheme . at each level ,the surviving skeletons are shown , colored by block index , with the total number of skeletons remaining given by for compression level , where denotes the original uncompressed system . ]the computational cost of the algorithm described in the previous section is dominated by the fact that each step is global : that is , compressing the row or column space for each block requires accessing all other blocks in that row or column .if no further knowledge of the matrix is available , this is indeed necessary . 
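to keep the discussion concrete, here is a minimal one-level version of the compression just described, applied to an assumed toy kernel; the pivoted-qr helper stands in for the id (and is repeated so the snippet runs on its own), and the block count and tolerance are illustrative.

```python
import numpy as np
from scipy.linalg import qr, block_diag

def simple_id(M, eps):
    """column id via pivoted qr: skeleton indices idx and P with M ~ M[:, idx] @ P."""
    _, R, piv = qr(M, mode="economic", pivoting=True)
    d = np.abs(np.diag(R))
    k = max(1, int(np.sum(d > eps * d[0])))
    T = np.linalg.solve(R[:k, :k], R[:k, k:])
    P = np.zeros((k, M.shape[1]))
    P[:, piv[:k]] = np.eye(k)
    P[:, piv[k:]] = T
    return piv[:k], P

# assumed toy problem: 2d laplace kernel on points along the unit circle
n, p, eps = 1024, 8, 1e-9
t = 2 * np.pi * np.arange(n) / n
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
np.fill_diagonal(dist, 1.0)
A = -np.log(dist) / (2 * np.pi)

blocks = np.array_split(np.arange(n), p)
Lb, Rb, rskel, cskel = [], [], [], []
for idx in blocks:
    far = np.setdiff1d(np.arange(n), idx)
    ri, Pr = simple_id(A[np.ix_(idx, far)].T, eps)   # compress the row space of the block row
    ci, Pc = simple_id(A[np.ix_(far, idx)], eps)     # compress the column space of the block column
    Lb.append(Pr.T)                                  # n_i x k_i^r
    Rb.append(Pc)                                    # k_i^c x n_i
    rskel.append(idx[ri])                            # row (incoming) skeletons
    cskel.append(idx[ci])                            # column (outgoing) skeletons

D = block_diag(*[A[np.ix_(b, b)] for b in blocks])
L = block_diag(*Lb)
R = block_diag(*Rb)
S = np.block([[np.zeros((len(rskel[i]), len(cskel[j]))) if i == j
               else A[np.ix_(rskel[i], cskel[j])] for j in range(p)]
              for i in range(p)])
err = np.linalg.norm(A - (D + L @ S @ R)) / np.linalg.norm(A)
print("relative compression error:", err)
```

each block is compressed against every other block here, which is precisely the global work whose cost dominates the scheme.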
however , as noted by , this global work can often be replaced by a local one , resulting in considerable savings .a sufficient condition for this acceleration is that the matrix correspond to evaluating a potential field for which some form of green s identities hold .it is easiest to present the main ideas in the context of laplace s equation . for this , consider fig .[ fig : proxy - comp ] , which depicts a set of sources in the plane .we assume that block index corresponds to the sources in the central square .the off - diagonal block row then corresponds to the interactions of all points outside with all points inside .we can separate this into contributions from the near neighbors of , which are local , and the distant sources , which lie outside the near - neighbor domain , whose boundary is denoted by . but any field induced by the distant sources induces a harmonic function inside and can therefore be replicated by a charge density on itself .thus , rather than using the detailed structure of the distant points , the row ( incoming ) skeletons for can be extracted by considering just the combination of the near - neighbor sources and an artifical set of charges placed on , which we refer to as a _proxy surface_. likewise , the column ( outgoing ) skeletons for can be determined by considering only the near neighbors and the proxy surface .if the potential field is correct on the proxy surface , it will be correct at all more distant points ( again via some variant of green s theorem ) .the interaction rank between and is constant ( depending on the desired precision ) from standard multipole estimates . in summary ,the number of points required to discretize is constant , and the dimension of the matrix to compress against for the block corresponding to is essentially just the number of points in the physically neighboring blocks . due to a distribution of exterior sources ( left ) can be decomposed into neighboring and well - separated contributions . by representing the latter via a proxy surface ( center ) , the matrix dimension to compress against for the block corresponding to ( right )can be reduced to the number of neighboring points plus a constant set of points on , regardless of how many points lie beyond . ]similar arguments hold for other kernels of potential theory including the heat , helmholtz , yukawa , stokes , and elasticity kernels , though care must be taken for oscillatory problems which could require a combination of single and double layer potentials to avoid spurious resonances in the representation for the exterior .the compressed representation ( [ eq : multilevel - compression ] ) admits an obvious fast algorithm for computing the matrix - vector product .as shown in , one simply applies the matrices in ( [ eq : multilevel - compression ] ) from right to left . like the fmm, this procedure can be thought of as occurring in two passes : an upward pass , corresponding to the sequential application of the column projection matrices , which hierarchically compress the input data to the column ( outgoing ) skeleton subspace .a downward pass , corresponding to the sequential application of the row projection matrices , which hierarchically project onto the row ( incoming ) skeleton subspace and , from there , back onto the output elements .the representation ( [ eq : multilevel - compression ] ) also permits a fast algorithm for the direct inversion of nonsingular matrices .the one - level scheme was discussed in [ sec : preliminaries ] . 
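before moving on to the multilevel inversion, the proxy argument sketched above is easy to test numerically; in the snippet below the geometry (a central block, its neighbor annulus, and a small circle of proxy points), the kernel, and the tolerance are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.linalg import qr

def simple_id(M, eps):
    """column id via pivoted qr: skeleton indices idx and P with M ~ M[:, idx] @ P."""
    _, R, piv = qr(M, mode="economic", pivoting=True)
    d = np.abs(np.diag(R))
    k = max(1, int(np.sum(d > eps * d[0])))
    T = np.linalg.solve(R[:k, :k], R[:k, k:])
    P = np.zeros((k, M.shape[1]))
    P[:, piv[:k]] = np.eye(k)
    P[:, piv[k:]] = T
    return piv[:k], P

def laplace(x, y):
    d = np.linalg.norm(x[:, None] - y[None, :], axis=2)
    return -np.log(d) / (2 * np.pi)

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(4000, 2))
c, h = np.array([0.5, 0.5]), 0.05
r = np.linalg.norm(pts - c, axis=1)
blk = np.where(np.abs(pts - c).max(axis=1) < h)[0]       # sources in the central block
near = np.setdiff1d(np.where(r < 2.5 * h)[0], blk)       # physical neighbors
far = np.where(r >= 2.5 * h)[0]                          # well-separated sources
ang = 2 * np.pi * np.arange(64) / 64
proxy = c + 2.0 * h * np.stack([np.cos(ang), np.sin(ang)], axis=1)   # constant-size proxy circle

eps = 1e-9
# global compression of the outgoing (column) space: against every distant point
idx_g, _ = simple_id(laplace(pts[np.concatenate([near, far])], pts[blk]), eps)
# local compression: against the neighbors plus the proxy surface only
reduced = np.vstack([laplace(pts[near], pts[blk]), laplace(proxy, pts[blk])])
idx_l, P = simple_id(reduced, eps)
print("skeleton sizes, global vs proxy:", len(idx_g), len(idx_l))

# the locally chosen skeleton should still reproduce the true far-field block
A_far = laplace(pts[far], pts[blk])
err = np.linalg.norm(A_far - A_far[:, idx_l] @ P) / np.linalg.norm(A_far)
print("relative error of the proxy-based skeleton on the far field:", err)
```

the skeleton chosen from the neighbors-plus-proxy matrix is essentially as small as the globally chosen one, and it still reproduces the interaction with all distant points to roughly the requested precision.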
in the multilevel scheme ,the system in ( [ eq : sparse - embedding ] ) is itself expanded in the same form , leading to the sparse embedding \left [ \begin{array}{c } \mathbf{x}\\ \mathbf{y}^{\left ( 1 \right)}\\ \mathbf{z}^{\left ( 1 \right)}\\ \vdots\\ \vdots\\ \mathbf{y}^{\left (\lambda \right)}\\ \mathbf{z}^{\left ( \lambda \right ) } \end{array } \right ] = \left [ \begin{array}{c } \mathbf{b}\\ \mathbf{0}\\ \mathbf{0}\\ \vdots\\ \vdots\\ \mathbf{0}\\ \mathbf{0 } \end{array } \right ] .\label{eq : multilevel - embedding}\ ] ] to understand the consequences of this sparse representation , it is instructive to consider the special case in which the row and column skeleton dimensions are identical for each block , say , so that the total row and column skeleton dimensions are . then , studying ( [ eq : sparse - embedding ] ) first and assuming that is invertible , block elimination of and yields where is block diagonal . backsubstitution then yields \mathbf{b}.\ ] ] in other words , the matrix inverse is where and are all block diagonal , and is dense . note that is equal to the skeleton matrix with its diagonal blocks filled in .thus , ( [ eq : compressed - inverse ] ) is a compressed representation of with minimal fill - in over the original compressed representation ( [ eq : compressed - representation ] ) of . in the multilevel setting , one carries out the above factorization recursively , since can now be inverted in the same manner : \mathcal{r}^{\left ( 1 \right)}. \label{eq : multilevel - inverse}\ ] ] this point of view is elaborated in . in the general case, the preceding procedure may fail if happens to be singular and ( more generally ) may be numerically unstable if care is not taken to stabilize the block elimination scheme using some sort of pivoting .thus , rather than using the `` hand - rolled '' gaussian elimination scheme of to compute the telescoping inverse ( [ eq : multilevel - inverse ] ) , we rely instead on the existence of high - quality sparse direct solver software .more precisely , we simply supply umfpack with the sparse representation ( [ eq : multilevel - embedding ] ) and let it compute the corresponding factorization .numerical results show that the performance is similar to that expected from ( [ eq : multilevel - inverse ] ) .for the sake of completeness , we briefly analyze the complexity of the algorithm presented in [ sec : algorithm ] for a typical example : discretization of the integral operator ( [ eq : integral - operator ] ) , where the integral kernel has smoothness properties similar to that of the green s function for the laplace equation .we follow the analysis of and estimate costs for the `` hand - rolled '' gaussian elimination scheme .we ignore quadrature issues and assume that we are given a matrix acting on points distributed randomly in a -dimensional domain , sorted by an orthtree that uniformly subdivides until all block sizes are .( in 1d , an orthtree is a binary tree ; in 2d , it is a quadtree ; and in 3d , it is an octree . ) for each compression level , with being the finest , let be the number of matrix blocks , and and the uncompressed and compressed block dimensions , respectively , assumed equal for all blocks and identical across rows and columns , for simplicity .we first make the following observations : the total matrix dimension is , where , so .each subdivision increases the number of blocks by a factor of roughly , so .in particular , , so . 
the total number of points at level is equal to the total number of skeletons at level , i.e. , , so .furthermore , we note that is on the order of the interaction rank between two adjacent blocks at level , which can be analyzed by recursive subdivision of the source block to expose well - separated structures with respect to the target ( fig .[ fig : recur - subdiv ] ) . assuming only that the interaction between a source subregion separated from a target by a distance of at least its own sizeis of constant rank ( to a fixed precision ) , we have where , clearly , , so from [ sec : preliminaries : interpolative - decomposition ] , the cost of computing a rank- i d of an matrix is .we will only consider the case of proxy compression , for which for a block at level , so the total cost is the cost of applying is , while that of applying or is .combined with the cost of applying , the total cost is we turn now to the analysis of the cost of factorization using ( [ eq : multilevel - inverse ] ) . at each level ,the cost of constructing and is , after which forming , , and all require operations ; at the final level , the cost of constructing and inverting is .thus , the total cost is which has complexity ( [ eq : complexity - cm ] ) .finally , we note that the dimensions of , , , and are the same as those of , , , and , respectively .thus , the total cost of applying the inverse , denoted by , has the same complexity as , namely ( [ eq : complexity - mv ] ) . in our umfpack - based approach ,the estimation of cost is a rather complicated task , and we do not attempt to carry out a detailed analysis of its performance . suffice it to say , there is a one - to - one correspondence between the `` hand - rolled '' gaussian elimination approach and one possible elimination scheme in umfpack .since that solver is highly optimized , the asymptotic cost should be the same ( or better ) .for some matrices , it is possible that straight gaussian elimination may be unstable without pivoting , while umfpack will carry out a backward - stable scheme .this is a distinct advantage of the sparse matrix approach although the complexity and fill - in analysis then becomes more involved .an important issue in direct solvers , of course , is that of storage requirements . in the presentsetting the relevant matrices are the compressed sparse representation ( [ eq : multilevel - embedding ] ) and the factorization computed within umfpack . this will be ( [ eq : complexity - mv ] ) for the forward operator and , in the absence of pivoting , for the sparse factorization as well .if pivoting is required , the analysis is more complex as it involves some matrix fill - in and is postponed to future work .we now state some simple error estimates for applying and inverting a compressed matrix .let be the original matrix and its compressed representation , constructed using the algorithm of [ sec : algorithm ] such that for some .note that this need not be the same as the specified local precision in the i d since errors may accumulate across levels .however , as in , we have found that such error propagation is mild .let and be vectors such that .then it is straightforward to verify that for , where is the condition number of .furthermore , if , then in particular , if is well - conditioned , e.g. , is the discretization of a second - kind integral equation , then , so this section , we investigate the efficiency and flexibility of our algorithm by considering some representative examples . 
we begin with timing benchmarks for the laplace and helmholtz kernels in 2d and 3d , using the algorithm both as an fmm and as a direct solver , followed by applications in molecular electrostatics and multiple scattering .all matrices were blocked using quadtrees in 2d and octrees in 3d , uniformly subdivided until all block sizes were , but adaptively truncating empty boxes during the refinement process .only proxy compression was considered , with proxy surfaces constructed on the boundary of the supercell enclosing the neighbors of each block .we discretized all proxy surfaces using a constant number of points , independent of the matrix size : for the laplace equation , this constant depended only on the compression precision , while for the helmholtz equation , it depended also on the wave frequency , chosen to be consistent with the nyquist - shannon sampling theorem .computations were performed over instead of , where possible .the algorithm was implemented in fortran , and all experiments were performed on a 2.66 ghz processor in double precision . in many instances ,we compare our results against those obtained using lapack / atlas and the fmm .all fmm timings were computed using the open - source fmmlib package , which is a fairly efficient implementation but does not include the plane - wave optimizations of or the diagonal translation operators of .we first consider the use of recursive skeletonization as a generalized fmm for the rapid computation of matrix - vector products .we considered two point distributions in the plane : points on the unit circle and in the unit square , hereafter referred to as the 2d surface and volume cases , respectively .we assumed that the governing matrix corresponds to the interaction of charges via the green s function ( [ eq:2d - laplace ] ) .the surface case is typical of layer - potential evaluation when using boundary integral equations . since a domain boundary in 2dcan be described by a single parameter ( such as arclength ) , it is a 1d domain , so the expected complexities from [ sec : complexity - analysis ] correspond to : work for both matrix compression and application .( see for a detailed discussion of the case . ) in the volume case , the dimension is , so the expected complexities are and for compression and application , respectively . for the 3d laplace kernel ( [ eq:3d - laplace ] ) , we considered surface and volume point geometries on the unit sphere and within the unit cube , respectively .the corresponding dimensions are and .thus , the expected complexities for the 3d surface case are and for compression and application , respectively , while those for the 3d volume case are and , respectively .we present timing results for each case and compare with lapack / atlas and the fmm for a range of at .detailed data are provided in tables [ tab : apply - lap-2ds][tab : apply - lap-3dv ] and plotted in fig .[ fig : apply - lap ] . .for lp and rs , the computation is split into two parts : precomputation ( pc ) , for lp consisting of matrix formation and for rs of matrix compression , and matrix - vector multiplication ( mv ) .the precision of the fmm and rs was set at .dotted lines indicate extrapolated data . 
]it is evident that our algorithm scales as predicted ..numerical results for applying the laplace kernel in the 2d surface case at precision : , uncompressed matrix dimension ; , row skeleton dimension ; , column skeleton dimension ; , matrix compression time ( s ) ; , matrix - vector multiplication time ( s ) ; , relative error ; , required storage for compressed matrix ( mb ) . [ cols=">,>,>,^,^,^,^",options="header " , ] as expected , more iterations were necessary for smaller , though the difference was not too dramatic .the ratio of the total solution time required for all solves was for the unpreconditioned versus the preconditioned method .we have presented a multilevel matrix compression algorithm and demonstrated its efficiency at accelerating matrix - vector multiplication and matrix inversion in a variety of contexts .the matrix structure required is fairly general and relies only on the assumption that the matrix have low - rank off - diagonal blocks . as a fast direct solver for the boundary integral equations of potential theory, we found our algorithm to be competitive with fast iterative methods based on fmm / gmres in both 2d and 3d , provided that the integral equation kernel is not too oscillatory , and that the system size is not too large in 3d .in such cases , the total solution times for both methods were very comparable .our solver has clear advantages , however , for problems with ill - conditioned matrices ( in which case the number of iterations required by fmm / gmres can increase dramatically ) , or those involving multiple right - hand sides ( in which case the cost of matrix compression and factorization can be amortized ) .the latter category includes the use of our solver as a preconditioner for iterative methods , which we expect to be quite promising , particularly for large - scale 3d problems with complex geometries .a principal limitation of the approach described here is the growth in the cost of factorization in 3d or higher , which prohibits the scheme from achieving optimal or nearly optimal complexity .it is , however , straightforward to implement and quite effective .all of the hierarchical compression - based approaches ( hss matrices , -matrices and skeletonization ) are capable of overcoming this obstacle .the development of simple and effective schemes that curtail this growth is an active area of research , and we expect that direct solvers with small pre - factors in higher dimensions will be constructed shortly , at least for non - oscillatory problems .it is clear that all of these techniques provide improved solution times for high - frequency _ volume _ integral equations , due to the compression afforded by green s theorem in moving data from the volume to the boundary .more precisely , the cost of solving high - frequency volume wave scattering problems in 2d are and for precomputation and solution , respectively . for related work , see . finally ,although all numerical results have presently been reported for a single processor , the algorithm is naturally parallelizable : many computations are organized in a block sweep structure , where each block can be processed independently .this is clearly true of the recursive skeletonization phase using proxy surfaces ( with a possible loss of in performance since there are levels in the hierarchy ) . 
as for the solver phase, arguments can be made in support of both the original `` hand - rolled '' gaussian elimination approach and our framework that relies on sparse embedding .we expect that , by making use of umfpack and other state - of - the - art parallel sparse solvers ( e.g. , superlu , mumps , pardiso , wsmp ) , our overall strategy will help simplify the implementation of skeletonization - based schemes on high - performance computing platforms as well .we would like to thank zydrunas gimbutas and mark tygert for many helpful discussions . 00 , _ a fully asynchronous multifrontal solver using distributed dynamic scheduling _ , siam j. matrix anal .appl . , 23 ( 2001 ) , pp .1541 . , _ lapack users guide _ , society for industrial and applied mathematics , philadelphia , pa , usa , 3rd ed . , 1999 ., _ protonation of interacting residues in a protein by a monte carlo method : application to lysozyme and the photosynthetic reaction center of rhodobacter sphaeroides _ , proc .usa , 88 ( 1991 ) , pp ., _ a fast direct solver for the integral equations of scattering theory on planar curves with corners _ ,j. comput .phys . , 231 ( 2012 ) , pp .18791899 . , _ the amber biomolecular simulation programs _ , j. comput, 26 ( 2005 ) , pp .16681688 . , _ a fast solver for hss representations via sparse matrices _ ,siam j. matrix anal .appl . , 29 ( 2006 ) , pp .6781 . , _ a fast decomposition solver for hierarchically semiseparable representations _ , siam j. matrix anal .appl . , 28 ( 2006 ) , pp .603622 . , _ a fast , direct algorithm for the lippmann - schwinger integral equation in two dimensions _ , adv .comput . math ., 16 ( 2002 ) , pp .175190 . , _ a wideband fast multipole method for the helmholtz equation in three dimensions _, j. comput ., 216 ( 2006 ) , pp ., _ on the compression of low rank matrices _ , siam j. sci . comput. , 26 ( 2005 ) , pp .13891404 . ,_ fast and efficient algorithms in computational electromagnetics _ , artech house , boston , ma , usa , 2001 ., _ algorithm 832 : umfpack v4.3an unsymmetric - pattern multifrontal method _ ,acm trans . math .softw . , 30 ( 2004 ) , pp .196199 . ,_ an unsymmetric - pattern multifrontal method for sparse lu factorization _ , siam j. matrix anal .appl . , 18 ( 1997 ) , pp . 140158 . , _pdb2pqr : an automated pipeline for the setup of poisson - boltzmann electrostatics calculations _ , nucleic acids res . , 32 ( 2004 ) , pp .w665w667 . ,_ structure of a b - dna dodecamer : conformation and dynamics _ , proc .usa , 78 ( 1981 ) , pp ., _ fast direct solvers for elliptic partial differential equations _ , ph.d . thesis , department of applied mathematics , university of colorado at boulder , 2011 . , _ a direct solver with complexity for integral equations on one - dimensional domains _ , front . math .china , 7 ( 2012 ) , pp .217247 . , _fmmlib : fast multipole methods for electrostatics , elastostatics , and low frequency acoustic modeling _ , in preparation . , _ a generalized fast multipole method for nonoscillatory kernels _ , siam j. sci. comput . , 24 ( 2003 ) ,, _ matrix computations _ , the johns hopkins university press , baltimore , md , usa , 3rd ed . , 1996 ., _ fast direct solvers for integral equations in complex three - dimensional domains _ , acta numer . , 18 ( 2009 ) , pp . 243275 . , _ a fast algorithm for particle simulations _ , j. comput . phys . , 73 ( 1987 ) , pp .325348 . ,_ a new version of the fast multipole method for the laplace equation in three dimensions _ , acta numer . , 6 ( 1997 ) ,229269 . 
, _ partial differential equations of mathematical physics and integral equations _, prentice - hall , englewood cliffs , nj , usa , 1988 . , _ wsmp : watson sparse matrix package .part ii direct solution of general sparse systems _ , technical report rc 21888 , ibm t. j. watson research center , 2000 ., _ a sparse matrix arithmetic based on -matrices .part i : introduction to -matrices _ , computing , 62 ( 1999 ) , pp .89108 . , _ data - sparse approximation by adaptive -matrices _ , computing , 69 ( 2002 ) , pp . 135 . , _ a sparse -matrix arithmetic .part ii : application to multi - dimensional problems _ , computing , 64 ( 2000 ) , pp ., _ updating the inverse of a matrix _ , siam rev . , 31 ( 1989 ) , pp ., _ high - order corrected trapezoidal quadrature rules for singular functions _ , siam j. numer ., 34 ( 1997 ) , pp .13311356 . , _an overview of superlu : algorithms , implementation , and user interface _ , acm trans . math .softw . , 31 ( 2005 ) ,302325 . , _randomized algorithms for the low - rank approximation of matrices _ , proc .usa , 104 ( 2007 ) , pp ., _ fast multipole boundary element method : theory and applications in engineering _ , cambridge university press , new york , ny , usa , 2009 . , _ fast evaluation of electro - static interactions in multi - phase dielectric media _ ,j. comput .phys . , 211 ( 2006 ) ,, _ a fast direct solver for boundary integral equations in two dimensions _ ,j. comput .phys . , 205 ( 2005 ) , pp .123 . , _ an accelerated kernel - independent fast multipole method in one dimension _, siam j. sci .comput . , 29 ( 2007 ) ,11601178 . , _ a fast direct solver for scattering problems involving elongated structures _ , j. comput .phys . , 221 ( 2007 ) , pp ., _ fast multipole accelerated boundary integral equation methods _ , appl ., 55 ( 2002 ) , pp .299324 . , _multipole for scattering computations : spectral discretization , stabilization , fast solvers _ , ph.d .thesis , department of electrical and computer engineering , university of california , santa barbara , 2004 . , _ rapid solution of integral equations of scattering theory in two dimensions _ , j. comput .phys . , 86 ( 1990 ) , pp .414439 . ,_ gmres : a generalized minimal residual algorithm for solving nonsymmetric linear systems _ , siam j. sci ., 7 ( 1986 ) , pp ., _ reduced surface : an efficient way to compute molecular surfaces _ ,biopolymers , 38 ( 1996 ) , pp .305320 . , _ solving unsymmetric sparse systems of linear equations with pardiso _ , future gener .20 ( 2004 ) , pp .475487 . , _ bi - cgstab : a fast and smoothly converging variant of bi - cg for the solution of nonsymmetric linear systems _ , siam j. sci ., 13 ( 1992 ) , pp .631644 . , _ a fast direct matrix solver for surface integral equation methods for electromagnetic wave problems in _ , in proceedings of the 27th international review of progress in applied computational electromagnetics , williamsburg , va , usa , 2011 , pp121126 . , _ automated empirical optimization of software and the atlas project _ , parallel comput. , 27 ( 2001 ) , pp .335 . , _ a multilevel fast direct solver for em scattering from quasi - planar objects _ , in proceedings of the international conference on electromagnetics in advanced applications , torino , italy , 2009 , pp640643 . ,_ a fast randomized algorithm for the approximation of matrices _ , appl ., 25 ( 2008 ) , pp .335366 . ,_ superfast multifrontal method for large structured linear systems of equations _ , siam j. matrix anal .appl . , 31 ( 2009 ) , pp .13821411 . 
, _ a kernel - independent adaptive fast multipole algorithm in two and three dimensions, j. comput .phys . , 196 ( 2004 ) ,
we present a fast direct solver for structured linear systems based on multilevel matrix compression . using the recently developed interpolative decomposition of a low - rank matrix in a recursive manner , we embed an approximation of the original matrix into a larger , but highly structured sparse one that allows fast factorization and application of the inverse . the algorithm extends the martinsson / rokhlin method developed for 2d boundary integral equations and proceeds in two phases : a precomputation phase , consisting of matrix compression and factorization , followed by a solution phase to apply the matrix inverse . for boundary integral equations which are not too oscillatory , e.g. , based on the green s functions for the laplace or low - frequency helmholtz equations , both phases typically have complexity in two dimensions , where is the number of discretization points . in our current implementation , the corresponding costs in three dimensions are and for precomputation and solution , respectively . extensive numerical experiments show a speedup of for the solution phase over modern fast multipole methods ; however , the cost of precomputation remains high . thus , the solver is particularly suited to problems where large numbers of iterations would be required . such is the case with ill - conditioned linear systems or when the same system is to be solved with multiple right - hand sides . our algorithm is implemented in fortran and freely available . fast algorithms , multilevel matrix compression , interpolative decomposition , sparse direct solver , integral equations , fast multipole method 65f05 , 65f50 , 65r20 , 65y15 , 65z05
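as a compact end-to-end illustration of the scheme summarized above, the sketch below compresses one level of a toy second-kind system, embeds it in the sparse form, and factors that with a standard sparse direct solver; scipy's superlu front end is used here purely as a stand-in for umfpack, and the kernel, block count, and tolerance are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu
from scipy.linalg import qr, block_diag

def simple_id(M, eps):
    """column id via pivoted qr: skeleton indices idx and P with M ~ M[:, idx] @ P."""
    _, R, piv = qr(M, mode="economic", pivoting=True)
    d = np.abs(np.diag(R))
    k = max(1, int(np.sum(d > eps * d[0])))
    T = np.linalg.solve(R[:k, :k], R[:k, k:])
    P = np.zeros((k, M.shape[1]))
    P[:, piv[:k]] = np.eye(k)
    P[:, piv[k:]] = T
    return piv[:k], P

# assumed toy problem: identity plus a smooth log kernel on the unit circle
n, p, eps = 1200, 6, 1e-10
t = 2 * np.pi * np.arange(n) / n
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
np.fill_diagonal(dist, 1.0)
A = np.eye(n) - np.log(dist) / n

blocks = np.array_split(np.arange(n), p)
Lb, Rb, rs, cs = [], [], [], []
for idx in blocks:
    far = np.setdiff1d(np.arange(n), idx)
    ri, Pr = simple_id(A[np.ix_(idx, far)].T, eps)
    ci, Pc = simple_id(A[np.ix_(far, idx)], eps)
    Lb.append(Pr.T); Rb.append(Pc); rs.append(idx[ri]); cs.append(idx[ci])

D = block_diag(*[A[np.ix_(b, b)] for b in blocks])
L = block_diag(*Lb)
R = block_diag(*Rb)
S = np.block([[np.zeros((len(rs[i]), len(cs[j]))) if i == j else A[np.ix_(rs[i], cs[j])]
               for j in range(p)] for i in range(p)])
kr, kc = L.shape[1], R.shape[0]

# sparse embedding with unknowns (x, y, z), where z = R x and y = S z
E = sp.bmat([[sp.csc_matrix(D), sp.csc_matrix(L), None],
             [None, -sp.identity(kr), sp.csc_matrix(S)],
             [sp.csc_matrix(R), None, -sp.identity(kc)]], format="csc")

b = np.sin(3 * t)
rhs = np.concatenate([b, np.zeros(kr + kc)])
x = splu(E).solve(rhs)[:n]                    # superlu here as a stand-in for umfpack
x_dense = np.linalg.solve(A, b)
print("relative error vs dense solve:", np.linalg.norm(x - x_dense) / np.linalg.norm(x_dense))
```

the same sparse factorization can of course be reused for additional right-hand sides at the cost of a pair of triangular solves each.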
some tropical diseases are amplified by one or several reservoirs .this is the case in diseases such as chagas disease and leishmaniasis .indeed , chagas disease has a domiciliary cycle , where domestic animals act as reservoirs , and a sylvatic cycle , where mammals like rodents are reservoirs .regarding leishmaniasis , the main reservoirs of the disease in countries of south america are dogs , but other mammals could also act as reservoirs . in this paper , we are interested in diseases that have a network of reservoirs .we are also interested in representing those diseases in a simple mathematical model where we can measure the amplification effects of the reservoirs through the basic reproductive number . in mathematical models of infectious diseases based on ordinary differential equations ,the basic reproductive number of the disease is frequently obtained using the method of the next generation matrix ( ngm ) presented in .different interpretations of the ngm can lead to different basic reproductive numbers . in section [ sappendix ] we present the construction of the ngm that it is used in this work . as an example , we consider the system represented in figure [ feje1 ] and equations ( [ eje1 ] ) .this system represents the transmission of a disease between vectors and humans with transmission rates , and mortality rates and .the disease can also be transmitted among humans with transmission rate .this model assumes that both populations are constant , so the model is determined by the equations of the infectious populations in ( [ eje1 ] ). represents the infectious vector and the node represents the infectious humans .the expressions next to the arrows represent the number of infections in the ending node caused by and individual in the initial node during its generation . ] the basic reproductive number that is obtained using the ngm depends on the interpretation of which infections are considered as new . in the system presented abovewe could defined human and vector infections as new infections , or only human or vector infections as new . from these three interpretationwe get three basic reproductive numbers ( see subsection [ ssngm ] ) . from the theorem [ umbral ] that is proven in , these three numbers are greater than one ( in this case the disease free equilibrium is locally asymptotically stable ) , or the three numbers are less than one ( in this case the disease free equilibrium is unstable ) . in consequence , to check the stability of a possible endemic state of a system we could take an appropriate interpretation of ngm guided by the simplicity of the calculations . in section [ smodel ] we propose an epidemiological model of a vector - borne disease that has a network of reservoirs that infect one another . in section [ sresults ]we show the basic reproductive number of the simplified system ( omitting infections between different reservoirs ) in terms of the basic reproductive number of the human cycle and the reservoirs cycles .we also present an application to chagas disease based on data in colombia taken from .it is shown that the disease is getting extinct as long as synergistic control is made in the number of vectors and reservoirs . in section [ sdiscusion ]we present the discussion and conclusions of the results presented in the section [ sresults ] . 
in section [ sappendix ]we present the method of the ngm and the mathematical justification of results in section [ sresults ] .we propose a mathematical model of a vector - borne disease that has a network of reservoirs .the state variables of the system are the human population ( ) , the vector ( ) and the reservoirs ( ) .we suppose that all the populations are constant ( humans , vectors and reservoirs of the species , ) .we assume that in each reservoir species there could be self infection .besides , the reservoirs can infect one another but there is no infection between reservoirs and human as the lines in figure [ complete ] shows .the parameters of the model are presented in table [ param1 ] and the system of differential equations for the infectious populations of humans , reservoirs and vectors ( respectively ) that describes the model is given in ( [ ecompleto ] ) .llll parameter & meaning & units + & number of human infections caused by & /([time]*[v]) ] + & one infectious human per unir of time & & + & number of infections of reservoir caused by & /([time]*[v]) ] + & one infectious reservoir per unir of time & + & number of infections of reservoir caused by & /([r_i]*[time]) ] + & mortality rate of vectors & ] + the graph in figure [ complete ] we define the weight of each edge as the expression next to it . for example , the weight of the edge from to is .the meaning of the weight of the edge from node to node is the number of infections in node that one individual of node can cause during its generation . for a cycle of the graph, we say that its weight is the geometric mean of the weights of its edges .for example , the two nodes cycle formed by and has a weight .we denote the weight of the cycle with nodes and by . the following result due to friedland ( * ? ? ?* theorem 2 ) gives us upper and lower bounds of the basic reproductive number in terms of the weights of the cycles of the graph .let be a matrix with nonnegative entries and for , we define .if and , then })^{1/r } = max_{\sigma \in \omega_n } \mu_\sigma(a)\ ] ] and [ teoumbral ] if is the ngm of an epidemiological model , then determines the pairs of species where there is infection .moreover , if is the heaviest cycle , we get that : in particular , this shows that a cycle with node and has basic reproductive number greater than one , i.e. , , then the basic reproductive number of the whole system is also greater than one . constructing the ngm of the model presented in section [ smodel ] as it is explained in subsection [ ssngm ] of appendix , we obtain that if we consider the infection of all species as new , the matrices and that define the ngm are : in consequence , the ngm of the system is : let us consider the system presented in the previous section with for all . in this case , the spectral radius of the matrix in ( [ gsimple2 ] ) is the greatest root of the equation in ( [ pcsimple2 ] ) for . in general , equation ( [ pcsimple2 ] ) is not easy to solve .if we omit self infection in all reservoirs , i.e. , for , we get that the greatest solution of ( [ pcsimple2 ] ) is given by ( [ r0msimple ] ) . in this scenario , the set of cycles would only have two nodes cycles .moreover , if , the inequalities in ( [ cotag ] ) would give us the obvious bounds in ( [ cotasimple1 ] ) . 
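the ngm assembled above is easy to evaluate numerically; in the sketch below the compartment ordering and all rate values are invented for illustration (they are not the entries of table [ param1 ]), and the reservoir-to-reservoir block is left at zero so that the closed-form expression ( [ r0msimple ] ) applies.

```python
import numpy as np

# schematic evaluation of the ngm G = F V^{-1} for the vector / human / reservoir network
m = 3                                    # number of reservoir species
b_vh, b_hv = 0.30, 0.20                  # vector infections per human, human infections per vector
b_vr = np.array([0.15, 0.10, 0.05])      # vector infections per reservoir i
b_rv = np.array([0.12, 0.08, 0.04])      # reservoir-i infections per vector
b_rr = np.zeros((m, m))                  # fill in for self / cross reservoir infection
mu_v, mu_h = 1.0, 0.05                   # removal rates of infectious vectors, humans
mu_r = np.array([0.5, 0.4, 0.3])         # removal rates of infectious reservoirs

# infectious compartments ordered as (vector, human, reservoir 1..m)
F = np.zeros((2 + m, 2 + m))
F[0, 1], F[0, 2:] = b_vh, b_vr           # new infections of vectors
F[1, 0] = b_hv                           # new infections of humans
F[2:, 0], F[2:, 2:] = b_rv, b_rr         # new infections of reservoirs
V = np.diag(np.concatenate([[mu_v, mu_h], mu_r]))

G = F @ np.linalg.inv(V)
R0 = max(abs(np.linalg.eigvals(G)))

# two-node cycle weights (geometric means of the edge weights of G)
w_h = np.sqrt(G[0, 1] * G[1, 0])
w_r = np.sqrt(G[0, 2:] * G[2:, 0])
print("cycle weights (human, reservoirs):", np.round(np.r_[w_h, w_r], 3))
print("spectral radius of G:             ", round(R0, 3))
print("sqrt of sum of squared weights:   ", round(float(np.sqrt(w_h**2 + np.sum(w_r**2))), 3))
```

with the reservoir-to-reservoir block set to zero the spectral radius coincides with the square root of the sum of the squared two-node cycle weights, as the last two printed lines confirm.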
if we take into account the self infection in all reservoirs , the inequalities in ( [ cotag ] ) would turn into the inequalities in [ umbralsimple2 ] if if , the disease free equilibrium would be unstable ( see theorem [ umbral ] in appendix ) .if , the disease free equilibrium would be locally asymptotically stable .nonetheless , the inequalities in ( [ umbralsimple2 ] ) does not let us determine whether or when and . to solve this problem we interpret the ngm to obtain matrices and that ease the computation of the spectral radius of . for simplicity of the explanation ,let us consider the model presented in section [ smodel ] with only one reservoir , as the graph in figure [ hvr1 ] represents .considering only one reservoir . ]we define the threshold values in table [ tvalues1 ] from their respective interpretation of the next generation matrix .llll value & system form by & new infections + & & + & & + & & + as it is shown in the subsection [ ssapr0 ] , we obtain the equation ( [ r0simple ] ) . in consequence , if and only if . using this equivalence we could determine whether or based on the values .in figure [ bif1 ] we fix and for different values of we plot the stable points of the three infectious populations . in this figurewe find a bifurcation in . in this examplewe observe how the weights of the cycles could be small but using we can determine whether or . .we consider values of the parameters for the reservoir taken from table [ parameters ] . ] in the general scenario of the model presented in section [ smodel ] , we can obtain the same result in equation ( [ r0simple ] ) using the values defined in table [ tvalues2 ] .llll value & system form by & new infections + & , & + & , & + & & + & & + as it is shown in the subsection [ ssapr0 ] , we obtain the equation ( [ r0general ] ) . if we also have that , , we obtain the equation ( [ r0simple2 ] ) . in consequence , if and only if . furthermore , if , we get that if and only if .this equivalence lets us determine whether or for small cycle weights , improving the informations obtained from the inequality in [ umbralsimple2 ] . from , we can take some parameters for chagas disease in table [ parameters ] . that paper considers a model with two kind of non - human host ;the domiciliary hosts and the sylvatic hosts .we consider the model presented in section [ smodel ] with two reservoirs where there is no self infection ( ) and there is no transmission between reservoirs ( ) .figure [ chagascycles ] is the graph of this model. represents domiciliary reservoirs and the species represent the sylvatic reservoirs of the disease .] lllll parameter & units & estimate + & fraction of vectors infected by & ) ] & /100 + & one infectious vector per year & & + & fraction of vectors infected by & ) ] & /10 + & one infectious vector per year & & + & fraction of vectors infected by & ) ] & /5 + & one infectious vector per year & & + & number of individuals & ] & 0.001 + & number of humans & ] & + & one infectious vector per unir of time & & + & number of vector infections caused by & /(years*[h]) ] & + & one infectious vector per unir of time & & + & number of vector infections caused by & /(years*[r_i])$ ] & + & one infectious reservoir per unir of time & & + if we define as in table [ tvalues2 ] , from equations ( [ r0general ] ) and ( [ r0simple2 ] ) we have that if and only if .let us define , and .we have that . using the parameters in table [ parameters ] we obtain , and . 
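the threshold just stated can be explored with a few lines of arithmetic; the numbers below, and the assumption that each squared reservoir cycle weight grows linearly with the abundance of that reservoir, are illustrative stand-ins rather than the values of table [ parameters ].

```python
import numpy as np

R0H2 = 0.55                      # assumed squared weight of the human-vector cycle
c1, c2 = 2.0e-3, 5.0e-4          # assumed squared cycle weight per individual of reservoirs 1, 2

def R0_squared(N1, N2):
    # with no self-infection and no reservoir-to-reservoir transmission,
    # R0^2 is the sum of the squared two-node cycle weights (cf. [ r0simple2 ])
    return R0H2 + c1 * N1 + c2 * N2

for N1, N2 in [(50, 100), (200, 400), (400, 800)]:
    r2 = R0_squared(N1, N2)
    print(f"N1={N1:4d}, N2={N2:4d}: R0={np.sqrt(r2):.3f}",
          "endemic" if r2 > 1 else "disease dies out")

# the threshold is a straight line in the (N1, N2) plane: c1*N1 + c2*N2 = 1 - R0H2;
# if 1 - R0H2 is negative, no reduction of the reservoirs alone can bring R0 below
# one, so the vector must be controlled first.
N1 = np.linspace(0, 500, 6)
print("threshold line N2(N1):", np.round((1.0 - R0H2 - c1 * N1) / c2, 1))
```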
in consequence , if and only if we must remark that is always greater than if is negative .this is telling that if we would want to attack the disease , we first must control the vector . in the case that , the number of reservoirs determine whether according to the inequality in ( [ restreshold ] ) , as figure [ line ] shows .based on the model of section [ smodel ] , the endemicity of the disease in one reservoir could entail the endemicity of the disease in human population .we also conclude that human endemicity of a disease in our model could not be only explained considering the dynamics of the infection within an specific system of hosts . in an specific system, we could get a small basic reproductive number that does not explain the endemicity of the disease . in our model , we observe how the basic reproductive numbers of the cycles between each reservoir and the vector could be less than one separately .however , the sum of the effects of the reservoirs can lead to endemicity of the disease in all species . in consequence , we conclude that a large enough system of hosts that contribute to spread the infection must be identified to get rid of the endemicity of the disease . as an example of control of a disease , using the data of chagas diseasewe conclude that only dropping the abundancy of the reservoirs can not extinguish the disease .the abundancy of the vectors must be dropped under certain threshold for the intervention of the reservoirs to work .the basic reproductive number of an infectious disease can be defined as the expected number of secondary cases produced in a susceptible population that are caused by an infectious individual .the ngm method lets us compute in an epidemiological model where the individuals are classified in different compartments and the dynamics of the size of those compartments is described by a system of ordinary differential equations ( this method is explained in ) .the number that we get is a threshold for the local asymptotic stability of the disease free equilibrium .let us assume that we have types of individuals and that represents the number of individuals in infectious compartments and represents the number of individuals in non - infectious compartments . in the model presented in section [ smodel ]we assume that all species populations are constant , so the system is only determined by the equations of the infectious compartments . for simplicity ,we omit the equations for non - infectious compartments in this explanation . for a general exposition of the ngm , see .let us assume that the number of infectious individuals follows the system of equations in ( [ xi ] ) . in this system represents the new infections rate for the compartment and represents the rate of change of the size of this compartment due to other reasons , such as recovery , death or movement from other compartment due to causes different from new infection , like a disease stage . the functions and on the interpretation of which infectious individuals are regarded as new infections .different interpretations will lead us to different versions of ngm .let us assume that is the number of infections in the compartment that are caused by an individual in the compartment per unit of time in a susceptible population . in terms of the system in ( [ xi ] ) , we get .let be the matrix .let us also assume that is the time that an individual from the compartment will be in the compartment .it turns out that , where and . 
if is the number of infections in the compartment caused by an individual in the compartment in during its generation in a susceptible population , we should have . each term accounts for the infections caused by the individual that started in the compartment and spent a time in the compartment be the matrix .we call the next generation matrix of the system in ( [ xi ] ) with its respective interpretation contained in the functions .finally , we define the basic reproductive number ( which depends on ) as the spectral radius of , i.e. , .llllll interpretation & & & & + ( new infections ) & & & & + & & & & + & & & & + & & & & , where + & & & & .+ using theorem [ umbral ] we have that is unstable . in consequence , in order to verify the possible endemicity of the disease we could consider interpretations that simplify calculations . for the general model defined in section [ smodel ] we consider the numbers , , and defined from the systems and interpretation in table [ tvalues2 ] .we can get the equations ( [ er0happ ] ) and ( [ er0riapp ] ) in a similar way to the numbers presented in table [ texamplengm ] in the previous subsection .let us prove that .the adjugate matrix of enables us to obtain .in particular , we are interested in the entry in ( [ t21 ] ) , where denotes the matrix that is obtained omitting the row and the column of and is the block matrix of formed by the entries where and .pauline van den driessche and james wat- mough .reproduction numbers and sub - threshold endemic equilibria for compartmental models of disease transmission .mathematical biosciences , 180(1):29?48 , 2002 .
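the practical payoff of theorem [ umbral ] is that any convenient interpretation can be used to test the threshold; the sketch below evaluates the vector - human example of figure [ feje1 ] under three interpretations of which infections count as new (all rates are invented for illustration) and checks that the three basic reproductive numbers fall on the same side of one.

```python
import numpy as np

def R0(F, V):
    """spectral radius of the next generation matrix G = F V^{-1}."""
    return max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

# compartments ordered as (vector, human); rates below are assumed, not fitted
def all_interpretations(b_vh, b_hv, b_hh, mu_v, mu_h):
    # 1) every infection counted as new
    r1 = R0(np.array([[0.0, b_vh], [b_hv, b_hh]]), np.diag([mu_v, mu_h]))
    # 2) only human infections counted as new
    r2 = R0(np.array([[0.0, 0.0], [b_hv, b_hh]]),
            np.array([[mu_v, -b_vh], [0.0, mu_h]]))
    # 3) only vector infections counted as new (requires mu_h > b_hh)
    r3 = R0(np.array([[0.0, b_vh], [0.0, 0.0]]),
            np.array([[mu_v, 0.0], [-b_hv, mu_h - b_hh]]))
    return r1, r2, r3

for b_vh in (0.05, 0.30):
    r = all_interpretations(b_vh=b_vh, b_hv=0.4, b_hh=0.1, mu_v=1.0, mu_h=0.2)
    print(f"b_vh={b_vh}: R0 under the three interpretations =",
          [round(x, 3) for x in r],
          "| all on the same side of 1:", len({x > 1 for x in r}) == 1)
```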
vector - borne diseases with reservoir cycles are difficult to understand because new infections arise from contacts of the vector with humans and with several different reservoirs . in this scenario , the basic reproductive number of the subsystem that excludes the reservoirs can turn out to be less than one , and yet an endemic equilibrium is observed . indeed , when the reservoirs are taken back into account , the basic reproductive number of only vectors and reservoirs explains the endemic state . furthermore , reservoir cycles with small basic reproductive numbers of their own can still contribute to an endemic state in the human cycle . therefore , when trying to control the spread of a disease , it may not be enough to focus on specific reservoir cycles or only on the vector . in this work , we build a simple epidemiological model with a network of reservoirs in which the basic reproductive number of the full system acts as a bifurcation parameter , explaining disease endemicity in the absence of a strong reservoir cycle . this simple model may help to explain the transmission dynamics of diseases such as chagas disease , leishmaniasis and dengue .
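a schematic dynamical realization of this claim, with an si-type force of infection and invented rates (the functional form is ours, not the model equations ( [ ecompleto ] )): each two-node cycle is subcritical on its own, yet the coupled system settles on an endemic equilibrium because the squared cycle weights add.

```python
import numpy as np
from scipy.integrate import solve_ivp

b_vh, b_hv = 0.25, 0.20          # human <-> vector
b_v1, b_1v = 0.40, 0.20          # reservoir 1 <-> vector
b_v2, b_2v = 0.35, 0.20          # reservoir 2 <-> vector
mu_v, mu_h, mu_1, mu_2 = 1.0, 0.1, 0.2, 0.2

def rhs(t, y):
    iv, ih, i1, i2 = y               # infectious fractions of vector, human, reservoirs
    return [(b_vh * ih + b_v1 * i1 + b_v2 * i2) * (1 - iv) - mu_v * iv,
            b_hv * iv * (1 - ih) - mu_h * ih,
            b_1v * iv * (1 - i1) - mu_1 * i1,
            b_2v * iv * (1 - i2) - mu_2 * i2]

# cycle weights and full R0 from the ngm at the disease-free state
W_h = np.sqrt(b_vh * b_hv / (mu_v * mu_h))
W_1 = np.sqrt(b_v1 * b_1v / (mu_v * mu_1))
W_2 = np.sqrt(b_v2 * b_2v / (mu_v * mu_2))
R0 = np.sqrt(W_h**2 + W_1**2 + W_2**2)
print("cycle weights:", round(W_h, 3), round(W_1, 3), round(W_2, 3), "| full R0:", round(R0, 3))

sol = solve_ivp(rhs, (0.0, 2000.0), [1e-4, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print("long-time infectious fractions (v, h, r1, r2):", np.round(sol.y[:, -1], 4))
```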
since liebes ( 1964 ) and refsdal ( 1964 ) have reported meaningful aspects of gravitational lensing phenomenon , many researchers rushed into the field of gravitational - lensing study , and presented many interesting results .this situation is not altered in these days .one of the most interesting gravitational - lens phenomena is quasar lensing .this is caused by a lensing galaxy ( or galaxies ) intervening observer and quasar . in the context of cosmology, it will be possible to estimate hubble s constant from a time delay of the quasar variations between gravitationally - lensed , split images .the most successful study is by kundi et al .( 1997 , hereafter k97 ) .they monitored q0957 + 561 for a long time and performed robust determination of the time delay . from their own result, they evaluate hubble s ( ) constant as based on the lens model constructed by grogin and narayan ( 1996 , hereafter gn ) . on the other hands , concerning the structure of quasar , we will discriminate the structure of central engine according to the effect of a finite source size .recently , yonehara et al .( 1998 , 1999 ) performed realistic simulations of quasar microlensing , and showed that multi - wavelength observations will reveal the structure of accretion disk believed to be situated in the center of quasars . furthermore , using precise astrometric technique , lewis and ibata ( 1998 ) indicated that it is also possible to probe the structure of quasar from image - centroid shift caused by microlensing .observationally , in the case of q2237 + 0305 , mediavilla et al .( 1998 ) detected a difference between an extent of the continuum source and that of the emission - line source by two - dimensional spectroscopy , and limit the size of these regions .thus , quasar - lensing phenomena are a useful tool to probe not only for cosmology but also for the structure of quasar . following these interesting researches , we propose a method to estimate , in this , the effect of a finite source size on time delays of the observed quasar variations between each gravitationally - lensed , split image , and to judge whether it is negligibly small or not and to limit the whole size of the source of quasar variability .this is important because no such limitation has been done yet although the size of each variation , `` one shot '' , had already been obtained order of days assuming causality in the individual source of variations . in section 2 ,i describe the basic concept of this work , and simply estimate the time delay difference .next , i present some results of calculation for the case of q0957 + 561 in section 3 .finally , section 4 is devoted to discussion .the basic idea that we wish to present in this is schematically illustrated in figure [ tdsfig ] .suppose the situation that a quasar is macrolensed by lensing objects so that its image is split into two ( or more ) images .the angular separation between these images is large enough to observe individually , say apparent angular separation is . if we observe such quasar images , we will realize the intrinsic variabilities of quasar in each image as in the case of an ordinary , not gravitationally lensed quasar ( e.g. 
, recent optical monitoring results are shown in sirola et al .because of the macrolensing effect , generally , the variabilities in such a quasar are not observed in both images at the same time .there is a time delay between these quasar images caused by a light path difference from the light path without lensing objects which originates from gravitational lens effect ( e.g. , see schneider , ehlers , & falco 1992 , hereafter sef ) .these facts are nicely demonstrated by k97 .however , previous studies related to the time delay caused by gravitational lensing were not so much concerned with the source of variabilities , and the source of variabilities was treated as a point source . this treatment is reasonable , if the whole source size is negligibly small compared with the typical scale length over which a time delay changes . in contrast , actually , we only know that the source of quasar variabilities is smaller than the limit of the observational resolution , say ( e.g. , in the case of _ hst _ observations , ) , and we do not know whether the whole source size is small or large compared with the scale length over which a time delay changes .therefore , first , we should try to consider the effect of a finite source size on the expected observed light curve in quasar images . then, if we include such an effect , what do we expect to see ?the answer is easily understood by figure [ tdsfig ] . for simplicity ,i consider only two images ( image a and b ) of the lensed quasar , and the source exhibit only two bursts ( `` burst 1 '' and `` burst 2 '' , they occur in this order on the source plane ) with some time interval ( ) .the origin and separation of such bursts are not specified , we assume that these two bursts are not physically correlated , in other words they appear randomly . additionally , we set a time delay difference between the position of the `` burst 1 '' and the `` burst 2 '' on image a as and that on image b as . in the case of ,light curves of two images show apparently very similar feature , instead of its time delay at the very center .although the shape of light curves is altered from intrinsic one by the effect of finite source size as is depicted in lower left part of figure [ tdsfig ] , we can easily identify these two light curves are intrinsically the same one .thus , we are able to obtain a robust time delay between two images . on the other hands , in the case of , a previous fact does not hold any more . in this case ,time interval between two bursts is significantly modified by the effect of its apparently large time delay difference ( ) .in such a situation , we can no longer conclude that light curves from two images have the same origin , even if we include an effect of time delay for the case of point source .we may seek for the reason for this to microlensing or something exotic .in other words , there will be no good correlation between light curves of two images .this is a serious problem not only to determine time delay or but also to construct a quasar structure , to determine the origin of variabilities , or some other problems . here, i will make a simple estimate of time delay difference between different parts of the source , i.e. , the effect of a finite source . in this estimate ,i define , , as angular positions of the source center and those of the centers of two images .therefore , and are the solutions of well - known lens equation ( e.g. , sef ) , where , is a bending angle caused by intervening lens object(s ) , i.e. 
, gravitational lens effect .furthermore , time delay from un - lensed light path ( ) in the case of the image position is and the source position is is written as here , is redshift from observer to lens , is effective lens distance that by using angular diameter distance from observer to lens ( ) , from observer to source ( ) , and from lens to source ( ) , written as , and is so - called `` effective lens potential '' ( e.g. , sef ) .insert each image position into equation ( [ eq : delay ] ) and subtract one equation from the other , we obtain the well - known time delay expression between images a and b ( ) , additionally , if we assume the position that is offset by from the center of the source and write and as image positions from the center of the image , these variables should fulfill the lens equation ( [ eq : lens ] ) again , i.e. , or , subtracting this from equation ( [ eq : lens ] ) and adopt taylor expansion to , we obtain another expression of equation ( [ eq : lensenh ] ) , . subtracting from , c.f ., equation ( [ eq : delayab ] ) , and using equation ( [ eq : lens ] ) and equation ( [ eq : lensenh ] ) , we are able to obtain the time delay difference between the center of source and the other position offset by from the center of the source ( ) on the source plane .moreover , by definition of effective lens potential , bending angle is related to the through the derivative of effective lens potential as .since we are considering the origin of quasar variabilities , the source size is at most and the distance from observer is typical cosmological scale .thus , its apparent angular size is .this seems to be small compared with image separation and the scale of bending angles which is typically a few arcsec . for ,accordingly , we can adopt a taylor expansion to and , neglect the higher terms than first order assuming and . after using some algebra and putting as actual off - centered distance on the source plane , i.e. , , we are able to evaluate time delay difference as follows , \nonumber \\ & \sim & \frac{(1 + z_{\rm ol})}{c}{\cal d } \left| \left ( { \bf \theta_{\rm b } } - { \bf \theta_{\rm a } } \right ) \cdot d{\bf \beta } \right| \label{eq : direction } \\ & \sim & 12 \left ( \frac{1 + z_{\rm ol}}{2 } \right ) \left ( \frac{d_{\rm ol}}{d_{\rm ls } } \right ) \left ( \frac{|{\bf \theta_{\rm b } } - { \bf \theta_{\rm a}}| } { 1 { \arcsec } } \right ) \left ( \frac{r}{1~{\rm kpc } } \right ) { \rm day } \label{eq : delayest}\end{aligned}\ ] ] this one - dimensional evaluation is somewhat overestimated , however , for the calculations above , we did not use any restriction about lens model , and equation ( [ eq : delayest ] ) seems to be appropriate for any lens models and lensed systems except in some special situations , e.g. , in the vicinity of caustics ( or critical curves ) . consequently , considering the fact that quasar optical intrinsic variabilities have timescale , equation ( [ eq : delayest ] ) indicates that correlation between light curves of two images shown , in worst cases , will disappear , if the origin of quasar variabilities is extended over , i.e. , maximum off - centered burst occurs at from the center of quasar .finally , we will show some impressive result for the case of q0957 + 561 which is the first detected lensed quasar by walsh , carswell , & weymann , ( 1979 ) . 
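the scaling in equation ( [ eq : delayest ] ) is simple enough to evaluate directly; in the sketch below the lens redshift, image separation, and distance ratio are assumed, round, literature-style numbers rather than the exact values adopted later for q0957 + 561.

```python
# order-of-magnitude evaluation of the time delay difference across an extended source
def delay_difference_days(z_lens, d_ol_over_d_ls, separation_arcsec, source_size_kpc):
    return (12.0 * ((1.0 + z_lens) / 2.0) * d_ol_over_d_ls
            * (separation_arcsec / 1.0) * (source_size_kpc / 1.0))

for size_kpc in (0.01, 0.1, 1.0):
    dt = delay_difference_days(z_lens=0.36, d_ol_over_d_ls=0.8,
                               separation_arcsec=6.1, source_size_kpc=size_kpc)
    print(f"source extent {size_kpc:5.2f} kpc  ->  delay difference ~ {dt:6.1f} days")
```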
to demonstrate how the extended source effect works on the time delay determination in an actual lensed quasar , here , i will present simulation results of q0957 + 561 as one example . using equation ( [ eq : delayest ] ) , we are able to estimate a time delay difference between same source positions at different lensed images . in this case ,as is well known , if we use , , ( e.g. , gn ) and assumed that , we will obtain for the source with a size of !furthermore , to obtain more realistic results , we used isothermal spls galaxy with compact core as an example of lens model for q0957 + 561 ( details are shown in gn ) , adopted parameters listed in table 7 in gn as `` isothermal '' spls and calculated time delay difference between images center and off - centered part of images ( ) . for this calculation , we set for simplicity and took on convergence and to reproduce the observed time delay following k97 .the resultant time delay contours compared with the image centers on the source plane are depicted in figure [ tddiffig ] . on image a ( left panel ) , a gradient of the contour is almost in the negative y - direction , although that of image b ( right panel , this time delay advanced ) is almost in the positive y - direction .additionally , time difference between the same position on the source reaches order of for the case of the source with a size , therefore , we expect disappearance of correlations between the light curves of image a and that of image b. here , from equation ( [ eq : direction ] ) , we can easily understand why do the contour lines show almost straight and perpendicular to y - direction .product of and in equation ( [ eq : direction ] ) means that time delay difference determined mostly by the element which parallel to of displacement .therefore , the time delay difference significantly alters along the direction and almost constant along the perpendicular direction . moreover , we simulated expected light curves of variabilities in both quasar images using superposition of simple bursts with triangled shape and duration of days which are randomly distributed in time , in space , and in amplitude . for the whole source size , i consider three cases , , and .using the same procedure to produce figure [ tddiffig ] , we calculated time delay from the center of image a over both images , randomly produce bursts , sum up all bursts and finally obtain expected light curves as presented in figure [ tdlcfig ] .`` residual light curves '' produced by subtracting properly - shifted light curve of image b from that of image a , are also shown in the figure . in the case of the smallest source , , and still in the case of middle source size , , we can easily recognize the coherent pattern in the light curves of images a and b but with time delay of advanced light curve of image b. time delay between two image centers is able to be determined fairly well .however , in the case of largest source , , it seems no correlation between two light curves even if we already know the time delay between two image centers , and we may misunderstand that the variabilities did not originate in source itself ! 
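a minimal numerical sketch of the light-curve experiment just described is given below. it superimposes randomly placed triangular bursts and delays each burst in proportion to its offset on the source plane, with opposite delay gradients on the two images, mirroring the contour gradients discussed above. the burst duration, the delay gradients and the source radii are illustrative assumptions chosen only to reproduce the qualitative trend (good correlation for a compact source, loss of correlation for an extended one); they are not the parameters of the q0957+561 model.

```python
import numpy as np

rng = np.random.default_rng(0)

def triangle(t, t0, width, amp):
    """symmetric triangular burst centered at t0."""
    return amp * np.clip(1.0 - np.abs(t - t0) / width, 0.0, None)

def light_curve(t, bursts, delay_per_kpc):
    """superpose bursts; each one is delayed in proportion to its offset y (kpc)."""
    lc = np.zeros_like(t)
    for t0, y, amp in bursts:
        lc += triangle(t, t0 + delay_per_kpc * y, width=5.0, amp=amp)
    return lc

t = np.linspace(0.0, 1000.0, 4000)        # days
grad_a, grad_b = +12.0, -12.0             # assumed delay gradients on images a and b (days/kpc)

for source_radius in (0.05, 0.5, 5.0):    # hypothetical source radii in kpc
    bursts = [(rng.uniform(50, 950),                       # random burst time
               rng.uniform(-source_radius, source_radius), # random offset on the source plane
               rng.uniform(0.5, 1.5))                      # random amplitude
              for _ in range(60)]
    # the constant delay between the image centers is assumed already removed,
    # so any decorrelation left over comes from the position-dependent delays alone
    lc_a = light_curve(t, bursts, grad_a)
    lc_b = light_curve(t, bursts, grad_b)
    corr = np.corrcoef(lc_a - lc_a.mean(), lc_b - lc_b.mean())[0, 1]
    print(f"source radius {source_radius:4.2f} kpc: residual correlation = {corr:5.2f}")
```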
this feature is far from the observed properties that the time delay between two images is determined easily even if we fit them by eyes .therefore , i conclude that the size of source that is origin of quasar variabilities should be smaller than , namely , maximum acceptable size is order of from this simple simulation .as we examined , if we include the finite source - size effect to the time delay determination from quasar variabilities , correlation between expected light curves of each lensed image will disappear in the case of the size is sufficiently large , say .using this fact , we can limit the size of the region where quasar variabilities are produced , from the correlation between light curves of multiple lensed quasar images .furthermore , since the size of the origin of intrinsic quasar variability reflects a physical origin of the variabilities , we can also determine the origin of the variabilities , e.g. , whether it is disk instability ( kawaguchi et al .1998 ) or star burst ( aretxaga , cid fernandes , & terlevich , 1996 ) .particularly , in the case of q0957 + 561 , the origin of variabilities has a size smaller than .this value is consistent with disk instability model , because of its small size ( for schwarzschild radius accretion disk surrounding supermassive black hole ) .starburst model can be rejected , since starburst region is .hence , for the origin of intrinsic quasar variabilities , the disk instability model is more preferable , as was indicated by kawaguchi et al .( 1998 ) already . to draw this conclusion more critically, we should do this study more precisely in future .additionally , the fact that a larger source size tends to reduce a good correlation between the light curves of each image provides an answer to the question why time delay determination from radio flux gave a wrong answer except recent works , e.g. , haarsma et al ( 1999 ) .generally , radio emitting region is believed to have a larger size than that of optical photon because of the existence of large radio lobe and/or jet component , and the effect we have shown in this may be significant .thus , robust determination of the time delay seems to be difficult . if such a effect is significant in the well - known lensed quasar q2237 + 0305 , microlens interpretation of individual variabilities ( e.g. , see irwin et al .1989 ) will be rejected .fortunately , however , this may not be the case because for this source , caused by its quite nice symmetry of lensed image , the effect seems not to be so significant and intrinsic variabilities will be expected to appear in every images with good correlations .the author would like to express his thanks to toshihiro kawaguchi , jun fukue for their number of comments , and to referee for his / her meaningful suggestions .the author also grateful to shin mineshige for his valuable discussions and kind reading of the previous draft .this work was supported in part by the japan society for the promotion of science ( 9852 ) .
in the case of gravitationally-lensed quasars, it is well known that there is a time delay between the occurrences of the intrinsic variabilities in the split images. generally, the source of the variabilities has a finite size, and there are time delays even within a single image. if the origin of the variabilities is widely distributed, say over as a whole, the variabilities of the split images will not show a good correlation even though their origin is identical. using this fact, we are able to limit the whole source size of the variabilities in a quasar below the limit of direct resolution by today's observational instruments.
particulate flows are of importance in many industrial and environmental applications , and subsequently are subject of research nowadays . though an apparently simple problem , the settling at low reynolds number of mono - disperse macroscopic solid particles in a newtonian fluid is not completely understood . due to the long range nature of hydrodynamics interactions ,the velocity disturbance caused by the motion of a particle decays as slowly as ( with the distance from the particle center ) . in the case of the simultaneous settling of several particles ,the resulting many - body interactions lead to complex trajectories . indeed , in absence of inertia and in an unconfined newtonian fluid , one isolated particle settles at the stokes velocity , where , , , and are respectively the particles density , the fluid density , the fluid viscosity , the acceleration due to gravity and the radius of the particles .for a suspension of spheres of volume fraction , randomly and independently dispersed in a newtonian fluid , batchelor calculated a correction to the first order in , with average settling velocity . however , for confined suspensions of volume fraction larger than a few percents , there is no theoretical model available , and is often described using the empirical correlation , where ( ] were prepared by adding spherical polystyrene particles , of density and average radius , to distilled water , of density at .a small amount of sds surfactant has been added to the mixture to reduce surface tension in the solid - liquid interface .characterizations of the diameter and sphericity of the particles were performed using a morphology g3 equipment ( from malvern instrument ) , and are displayed on fig .[ exp_setup].b and c. as one can see , the standard deviation is approximately for an average diameter , and less than of the particles have a circularity ( ratio of the two axes of the ellipse which best fits the perimeter of the particles ) bellow .suspensions were stirred and then transferred to the injection syringe that was held always vertical to minimize deposition of particles .then , the suspension was injected into the cell .finally , once the suspension saturated the cell , valves were closed and the suspension settles freely .this procedure took approximatively 5 which corresponds to 10 stokes time , largely bellow the duration of the transient regime ( ) .the motion of the particles was captured using a 8-bit ccd camera , located cm from the cell , with its optical axis perpendicular to the cell plates .the imaging window , situated in the middle of the hele - shaw cell , was 2.7 width by 3.6 high .the depth - of - field allowed one to visualize particles all across the cell thickness ( _ e.g. _ fig.[exp_setup2].a ) .the positions of the particle centers were detected using a hough transformation with an uncertainty of less than one pixel ( or 1/8 particle diameters ) . however , if two particles are separated by a distance shorter than , the hough transform technique used is not capable of detecting both of them .stacks of 300 images , captured with a time interval between images , were used to obtain particles trajectories , with a minimal total square displacement rule , and their velocities , using a second order scheme . for each experiment ,five stacks were recorded with a time interval between them , to improve statistics . 
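as a rough numerical companion to the setup above, the sketch below evaluates the stokes velocity of an isolated sphere and a hindered-settling correction of the empirical richardson-zaki type. all material values in it (particle radius, densities, viscosity) and the exponent n are assumptions inserted for illustration, typical of polystyrene spheres in water, and are not the measured properties of the suspensions used here.

```python
# order-of-magnitude sketch of the settling velocities discussed above.
# every numerical value below is an assumption for illustration only.

def stokes_velocity(radius, rho_p, rho_f, mu, g=9.81):
    """terminal velocity of an isolated sphere at low reynolds number."""
    return 2.0 * (rho_p - rho_f) * g * radius**2 / (9.0 * mu)

def hindered_velocity(v_stokes, phi, n=5.0):
    """empirical richardson-zaki-type correction <v> = v_st * (1 - phi)**n;
    n in the range 4.65-5 is often quoted at low reynolds number (n = 5 assumed here)."""
    return v_stokes * (1.0 - phi)**n

radius = 50e-6                   # m, assumed particle radius
rho_p, rho_f = 1050.0, 1000.0    # kg/m^3, assumed polystyrene and water densities
mu = 1.0e-3                      # pa s, water at room temperature

v_st = stokes_velocity(radius, rho_p, rho_f, mu)
print(f"stokes velocity: {v_st * 1e6:.1f} micron/s")
for phi in (0.005, 0.01, 0.05):
    print(f"phi = {phi:5.3f}: hindered velocity ~ {hindered_velocity(v_st, phi) * 1e6:.1f} micron/s")
```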
finally , to avoid any transient effects , the first stack is acquired ( ) after the beginning of the sedimentation .the number of the detected particles in the imaging window was approximately constant in time . within a stack , the standard deviation of divided by its average was , while when considering the five acquired stacks .these fluctuations of the number of particles might be considered negligible , and subsequently , the particle volume fraction was approximately constant during the sedimentation . despite the steadiness of the average volume fraction at the scale of the imaging window in time , fig.[exp_setup2].a evidences that , as already reported, the spatial distribution of the particles might not be homogeneous .as a first step to characterize this distribution , the pair correlation function was computed for the particle positions as detected by the camera .this function represents the probability of finding the center of a particle at a dimensionless distance away from a given reference particle .figure [ exp_setup2].b displays in continuous lines the experimental for different and combinations . for the same and combinations , in dashed linesit is shown the calculated numerically for a random configuration of particles , situated independently , and following a uniform spatial distribution ( except for the hard sphere excluded volume ) . in all cases , the for a uniform distribution shows no evident structure .in contrast , the for the experiments presents a well defined peak near , and some second order structure , independently of .this reveals an existing microstructure in the settling suspension .the non - null value of for , which might suggest the overlapping of particles , is in fact due to the projection of the actual 3d particle configuration in the plane , as detected by the camera .the peak near implies that a significant fraction of the particles settle side by side : during the sedimentation , particles were not isolated but assemble into clusters , with their centers likely to be away from each other . within these clusters , the fluid should have roughly the same velocity as the particles .the presence of a peak at has already been reported , using mri techniques , for a macroscopic suspension settling in a large cell. .besides , clusters are also clearly visible on fig.4 of the study of bergougnoux and guazzelli, which shows the location of particle centers during the sedimentation of a suspension of glass spheres with and .clusters were detected neighbour by neighbour " , _i.e. _ , all the particles with centers closer than from a reference particle were searched recursively .once all the particles in a given cluster were identified , it was verified that no one was counted more than once .this procedure allowed one to sort all the particles in sets of clusters of particles . for completeness, isolated particles were considered as a cluster with =1 . 
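the neighbour-by-neighbour search described above amounts to a breadth-first exploration of the detected particle centers. the sketch below is a generic implementation of that idea; the 2.2-radius cutoff follows the criterion stated above, while the positions fed into it are random stand-ins rather than tracked experimental data.

```python
import numpy as np
from collections import deque

def find_clusters(positions, radius, cutoff_factor=2.2):
    """group particles whose centers are closer than cutoff_factor * radius,
    searching recursively neighbour by neighbour (breadth-first)."""
    n = len(positions)
    cutoff2 = (cutoff_factor * radius) ** 2
    unvisited = set(range(n))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            neighbours = [j for j in unvisited
                          if np.sum((positions[i] - positions[j]) ** 2) < cutoff2]
            for j in neighbours:
                unvisited.remove(j)
                queue.append(j)
                members.append(j)
        clusters.append(members)          # isolated particles count as clusters of size 1
    return clusters

# random stand-in positions (mm) in a 2d projection of the imaging window
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 30.0, size=(400, 2))
clusters = find_clusters(pos, radius=0.1)
sizes = np.array([len(c) for c in clusters])
print(f"{len(clusters)} clusters, mean size {sizes.mean():.2f}, largest {sizes.max()}")
```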
in the following, we analyse the population distribution of the clusters as a function of and of . it should be noted that the clusters were identified on the acquired images, in which the real 3d particle configuration was projected onto the plane by the camera. this causes the measured distances between particles to be smaller than the real ones. the projection error in such a measurement increases as increases (it would be non-existent for = 2, because all particles would lie in the plane and the real and projected configurations would match perfectly). this projection error may lead to an overestimation of the number of particles in a cluster. while this error cannot be calculated directly, because it depends on the actual 3d spatial configuration of the particles, which is unknown, an upper bound for it was estimated as 0.15 (a 15% relative error) for (the largest ratio in the experiments). the error decreases with decreasing . details of this estimation are provided in section [ sec : appendix ]. for a given combination of and , the number of clusters made of particles decreases with . this behavior is illustrated in fig. [clus].a, which displays, for and = 15, the number of clusters of particles as a function of . once normalized, this distribution corresponds to the probability density function of the cluster population. is displayed in the inset of fig. [clus].a as a function of on a semilog scale. as one can see, is rather well fitted by an exponential law (solid line), where is the average of . one should note that if the probability that a particle belongs to a given cluster were independent of the population of the latter, would follow a poisson law. the dashed line in the inset of fig. [clus].a, which represents this law, shows that this hypothesis is not verified in our experiments. this is confirmed by the evolution of the standard deviation as a function of , displayed in fig. [clus].b for and and for . the data collapse onto a single curve and, for large enough ( ), , in agreement with following an exponential law. however, for (roughly isolated particles), we observe a small departure from this linear relation.
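the distinction drawn above between an exponential and a poisson population law can be illustrated numerically: for an exponential (geometric) law the standard deviation of the cluster population grows essentially linearly with its mean, whereas for a poisson law it grows only as the square root of the mean. the sketch below samples both laws at the same mean; the mean values used are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_exponential_sizes(mean_size, n_clusters):
    """cluster sizes n >= 1 with p(n) ~ exp(-n / n0), i.e. a geometric law."""
    p = 1.0 / mean_size                       # success probability giving mean = 1/p
    return rng.geometric(p, size=n_clusters)

def sample_poisson_sizes(mean_size, n_clusters):
    """cluster sizes n >= 1 if membership were independent of the cluster population."""
    return 1 + rng.poisson(mean_size - 1.0, size=n_clusters)

for mean_size in (1.5, 3.0, 6.0):             # placeholder mean populations
    expo = sample_exponential_sizes(mean_size, 20000)
    pois = sample_poisson_sizes(mean_size, 20000)
    print(f"<n> = {mean_size:3.1f}:  std(exponential) = {expo.std():.2f}   "
          f"std(poisson) = {pois.std():.2f}")
```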
the velocity fluctuations were characterized by the standard deviation of the particle velocities normalized by their average . these quantities were calculated over all the particles in the last 299 images of each stack. the velocity fluctuations obtained in this way for the five different stacks captured in each experiment agreed within a variation and, in the following, corresponds to an average over the five stacks. the relatively small variation of over the different stacks confirms that, in the steady-state regime, the velocity fluctuations show no significant evolution during the sedimentation. figure [deltav].a displays, on a logarithmic scale, as a function of for the four values of studied. for all , , in agreement with previous studies and with a theoretical prediction that accounts for the presence of confining walls. indeed, for the volume fraction was large enough ( and for and respectively) that particles interact with each other more than with the walls. it has to be noted that this trend exists even for while . moreover, best fits of the evolution of with lead to and for and respectively, in agreement with the value reported in a numerical study. [ figure [deltav]: (a) the normalized velocity fluctuations as a function of the volume fraction, and (b) as a function of the standard deviation of the cluster population term; the dashed curves correspond to best fits with the expression given in the text, and the solid line in (b) is the prediction for spherical clusters. ] to study the connection between velocity fluctuations and the cluster population distribution, we followed caflisch, hinch, and rouyer _et al._, who related the velocity fluctuations to the statistical fluctuations of the spatial distribution of the particles by considering a ``blob'', _i.e._ a given region of space with an excess of particles. balancing the apparent weight of the blob with its stokes drag, they calculated its excess velocity. using the same approach, we consider here a cluster of particles. the apparent weight of the cluster is: where and are the densities of the particles and the fluid respectively, while is the volume of a particle. assuming spherical clusters of radius , the stokes drag may be written as: where is the velocity of the cluster, the fluid viscosity and the hindering function that accounts for the backflow due to the confinement. then, balancing the drag with the apparent weight yields . finally, writing , where is the effective volume fraction of the cluster, one obtains: and subsequently, since , the standard deviation of the cluster velocities reads: where is the standard deviation of . figure [deltav].b displays the standard deviation of the vertical velocities, normalized by the average settling velocity, as a function of . the data collapse fairly well onto a single master curve with a linear trend, for all and . the continuous line in fig. [deltav].b has a slope of 0.86, which would correspond to spherical clusters at the random close packing of spheres (for ). as one can see, the experimental data lie slightly above the prediction for spherical clusters. the fact that velocity fluctuations can be strongly related to inhomogeneities in the particle spatial distribution has been shown theoretically, and some authors have characterized this inhomogeneity in a fixed inspection window. this result extends the validity of previous findings by determining the relation between the velocity fluctuations and the population of particle clusters, rather than the particle distribution in a fixed inspection window. the spatial distribution of particles in a settling suspension has been studied. the pair correlation function of the particle positions has revealed a peak at a distance of 2.2 particle radii between particle centers, which suggested a cutoff length for defining clusters of settling particles. the distribution of the number of particles in the clusters has been found to follow an exponential law. the average and the standard deviation of this distribution increase with the particle volume fraction , while the ratio appears to have only a weak influence in the range studied. the measured velocity fluctuations were rather well predicted by assuming that particles assemble into spherical clusters.
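the weight-drag balance above can be condensed into a short prediction. the sketch below reconstructs it under the stated assumptions (spherical clusters of particles with a fixed internal volume fraction, settling at the hindered stokes velocity): the cluster velocity then scales as the single-particle hindered stokes velocity times the one-third power of the internal volume fraction and the two-thirds power of the population, and an internal volume fraction equal to random close packing, 0.64, reproduces the 0.86 slope quoted above. the exact normalization and the sample cluster populations used below are illustrative assumptions, not the measured distributions.

```python
import numpy as np

def cluster_velocity_ratio(n_particles, phi_c=0.64):
    """velocity of a spherical cluster of n particles relative to the hindered
    single-particle stokes velocity, from the weight/drag balance:
    v_cluster / (v_stokes * f) = phi_c**(1/3) * n**(2/3)."""
    return phi_c ** (1.0 / 3.0) * np.asarray(n_particles, dtype=float) ** (2.0 / 3.0)

def predicted_fluctuation(cluster_populations, phi_c=0.64):
    """normalized velocity fluctuation predicted from the cluster populations,
    assuming the mean settling velocity is the hindered stokes velocity."""
    return np.std(cluster_velocity_ratio(cluster_populations, phi_c))

# placeholder cluster populations drawn from an exponential-type law
rng = np.random.default_rng(3)
populations = rng.geometric(1.0 / 2.5, size=5000)
print(f"slope phi_c^(1/3)      : {0.64 ** (1 / 3):.2f}")
print(f"std(n^(2/3))           : {np.std(populations ** (2 / 3)):.2f}")
print(f"predicted deltav / <v> : {predicted_fluctuation(populations):.2f}")
```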
the discrepancy between the experimental results and the predicted value of 0.86 (figure [deltav].b) could be related to the fact that the particle diameter distribution (fig. [exp_setup].b) presented a small degree of polydispersity, which might increase the value of compared with monodisperse spheres. however, results by other authors indicate that for such a narrow distribution this increase is likely to be negligible. another possible explanation is that the clusters are not perfectly spherical but rather prolate spheroids: a prolate spheroid with its longest axis aligned with the gravity direction would settle faster than one with its longest axis perpendicular to it. the resulting fluctuations in the settling velocity, if both axis alignments coexist, would then be larger than for spherical clusters alone. to be conclusive on this aspect of the velocity fluctuations, one should calculate the probability density function of the cluster aspect ratios and of their orientation with respect to gravity, which would require a large number of detected clusters to achieve good statistical sampling. while such a description is beyond the scope of the present study, it constitutes an interesting motivation for future work. the authors would like to thank n. torres cabrera for contributing to the development of the tracking code and for preliminary experiments, and l. oger, d. salin, f. rouyer and g. drazer for fruitful discussions. this research has been supported by pip 0246 conicet, anpcyt pict-2013-2584, and the lia-fmf in physics and mechanics of fluids. the measurement error due to projection could not be calculated directly, because it depends on the actual 3d spatial configuration of the particles, which is unknown. if particles are closer to each other than in the case of a uniform random distribution, forming clusters, as suggested by the peak in the experimental and as is the thesis of the present work, then the error calculated for such a distribution may provide an upper bound for the error in the experimental configurations. as stated in the manuscript, two particles participate in a cluster if they are less than away from each other. the projection error can then be quantified by comparing the probability of two particles being less than away from each other in the 2d projection with the same probability in the actual 3d particle configuration. the error grows with the amount by which the first probability exceeds the second one. from the curves shown in figure [exp_setup2].b, it can be noted that, in the 2d projection, if two particles are separated by a distance shorter than , the hough transform technique used is not capable of detecting both of them. taking this into account, the first probability reads: where is the calculated from the 2d projection of the particle positions as viewed by the camera, and is the of the actual particle distances in the 3d spatial configuration. the relative error can be written as: . for and (the largest ratio in the experiments), this estimation yields , or a 15% relative error. the error decreases as decreases.
a study on the spatial organization and velocity fluctuations of non-brownian spherical particles settling at low reynolds number in a vertical hele-shaw cell is reported. the particle volume fraction ranged from 0.005 to 0.05, while the distance between the cell plates ranged from 5 to 15 times the particle radius. particle tracking revealed that the particles were not uniformly distributed in space but assembled into transient settling clusters. the population distribution of these clusters followed an exponential law. the measured velocity fluctuations are in agreement with those predicted theoretically for spherical clusters, from the balance between the apparent weight and the drag force. this result suggests that particle clustering, rather than a spatial distribution of particles derived from random and independent events, is at the origin of the velocity fluctuations.
this is the first part of a manuscript series about the properties of a novel family of time - frequency representations .the paper at hand is concerned with the introduction of warped time - frequency systems and the derivation of necessary and sufficient conditions for a warped time - frequency system to form a frame .such systems are of particular importance , since they allow for stable recovery of signals from their inner products with the elements of the function system . to demonstrate the flexibility and importance of the proposed time - frequency systems , we provide illustrative examples recreating ( or imitating ) classical time - frequency representations such as gabor , wavelet or -transforms .while this paper considers the setting of ( discrete ) hilbert space frames , the properties of continuous warped time - frequency systems are investigated in the second part of this series .time - frequency ( or time - scale ) representations are an indispensable tool for signal analysis and processing .the most widely used and most thoroughly explored such representations are certainly gabor and wavelet transforms and their variations , e.g. windowed modified cosine or wavelet packet transforms .the aforementioned transforms unite two very important properties : there are various , well - known necessary and/or sufficient conditions for stable inversion from the transform coefficients , i.e. for the generating function system to form a frame .in addition to the perfect reconstruction property , the frame property ensures stability of the synthesis operation after coefficient modification , enabling controlled time - frequency processing .furthermore , efficient algorithms for the computation of the transform coefficients and the synthesis operation exist for each of the mentioned transforms . while providing a sound and well - understood mathematical foundation , gabor and wavelet transformsare designed to follow two specified frequency scales , i.e. linear and logarithmic , respectively .a wealth of approaches exists to soften this restriction , e.g. decompositions using filter banks , in particular based on perceptive frequency scales .adaptation over time is considered in approaches such as modulated lapped transforms , adapted local trigonometric transforms or ( time - varying ) wavelet packets .techniques that jointly offer flexible time - frequency resolution and redundancy , the perfect reconstruction property and efficient computation are scarce however .the setting of so - called nonstationary gabor transforms , a recent generalization of classical gabor transforms , provides the latter properties while allowing for freely chosen time progression and varying resolution . in this construction , the frequency scale is still linear , but the sampling density may be changed over time .the properties of nonstationary gabor systems are a matter of ongoing investigation , but a number of results already exist . when desiring increased flexibility along frequency , generalized shift - invariant systems , or equivalently ( nonuniform ) _ filter banks _ , provide the analogous concept .they offer full flexibility in frequency , with a linear time progression , but flexible sampling density across the filters .analogous , continuously indexed systems are considered in indeed , nonstationary gabor systems are equivalent to filter banks via an application of the ( inverse ) fourier transform to the generating functions . 
note that all the widely used transforms mentioned in the previous paragraph can be interpreted as filter banks . in this contribution , we investigate a particular family of filter bank frames over , where the filter frequency responses are obtained from a system of integer translates of a single prototype function by a nonlinear evaluation process , determined by a _ warping function _ that specifies the desired frequency scale / progression .we will show that appropriate sampling steps for the individual filters can be obtained from the derivative of the warping function .a core asset of our construction is the ease with which tight filter bank frames with bandlimited filters can be created .the idea of a logarithmic warping of the frequency axis to obtain wavelet systems from a system of translates is not entirely new and was first used in the proof of the so called painless conditions for wavelets systems .however , the idea has never been relaxed to other frequency scales so far .while the parallel work by christensen and goh focuses on exposing the duality between gabor and wavelet systems via the mentioned logarithmic warping , we will allow for more general warping functions to generate time - frequency transformations beyond wavelet and gabor systems .the warping procedure we propose has already proven useful in the area of graph signal processing . a number of other methods for obtaining _ warped filter banks _ have been proposed , e.g. by applying a unitary basis transformation to gabor or wavelet atoms . although unitary transformation bequeaths basis ( or frame ) properties to the warped atoms , the resulting system is not anymore a filter bank .instead , the warped system provides an undesirable , irregular time - frequency tiling , see .closer to our own approach , braccini and oppenheim , as well as twaroch and hlawatsch , propose a warping of the filter frequency responses only , by defining a unitary warping operator .however , in ensuring unitarity , the authors give up the property that warping is shape preserving when observed on the warped frequency scale . in this contribution, we trade the unitary operator for a shape preserving warping in order to construct tight and well - conditioned frames more easily .the manuscript begins with a short recollection of results from the theory of nonuniform filter bank frames ( section [ sec : pre ] ) , before describing the construction of warped time - frequency systems and the induced conditions on the warping function and prototype function ( section [ sec : warp ] ) . in section [ sec : warpedframes ] , we derive conditions on a warped time - frequency system to form a frame . herewe also see how the warping approach results in a straightforward procedure for the construction of tight frames from a compactly supported prototype function .we use the following normalization of the fourier transform and its unitary extension to .further , we require the _ modulation operator _ and the _ translation operator _ defined by and respectively for all .the composition of two functions and is denoted by .when discussing the properties of the constructed function systems in the following sections , we will repeatedly use the notions of weight functions and weighted -spaces .weighted -spaces are defined as with a continuous , nonnegative function called _weight function_. the associated norm is defined as expected .two special classes of weight functions are of particular interest . 
* a weight function called _ submultiplicative _ if * a weight function is called _ -moderate _ if for some submultiplicative weight function and constant .submultiplicative and moderate weight functions are an important concept in the theory of function spaces , as they are closely related to the translation invariance of the corresponding weighted spaces , see also for an in depth analysis of weight functions and their role in harmonic analysis . in digital signal processing , function systems formed as union of a finite number translation - invariant systems over with possibly varying translation stepare called nonuniform ( analysis ) filter banks . in this contributionwe consider a countable union of systems of translates over or , where is an unbounded interval contained in , but use filter bank terminology by analogy .filter bank frames have received considerable attention , e.g. in , as ( generalized ) shift - invariant systems in and as ( frequency - side ) nonstationary gabor systems in .[ def : filterbank ] let and .the we call the system a _ ( nonuniform ) filter bank _ for . the windows are called _ frequency responses _ and are the corresponding _ downsampling factors_.such filter banks can be used to analyze signals whose fourier transform is in . for a given signal call the filter bank _ analysis coefficients_. for many applications it is of great importance to be able to stably reconstruct from these coefficients , which is equivalent to the existence of constants , s.t . a system that satisfies this condition is called _ frame _ , and the frame operator is invertible . in this case , the _ canonical dual frame _ allows for reconstruction from the analysis coefficients , i.e. the frame property and frame operator are commonly defined in the signal domain .since our results are based on their frequency side equivalent , we use the fourier domain variants instead .the signal domain frame operator is obtained as .we are formerly interested in filter banks that satisfy the frame property eq ., i.e. filter bank frames .the two results presented in this section will be required in section [ sec : warpedframes ] to show that the warped filter bank construction we propose provides nicely structured time - frequency systems with intuitive and easily satisfied frame and tight frame conditions .we begin with the following proposition that combines ( * ? ? ?* corollary 1 ) with ( * ? ? ?* proposition 2 ) and ( * ? ? ? * theorem 2.9 , proposition 2.8 ) .[ pro : necandsuf ] let be a filter bank for . * if is a bessel sequence with bound , then if is a frame with lower frame bound and satisfying a local integrability condition ( cp . ) , then * assume that , there are some constants , such that ] for all .however , the real value of theorem [ thm : suficient ] can be seen when combined with structural assumptions on , similar to the results on nonstationary gabor systems in .its implications in our setting are explored in section [ sec : warpedframes ] .the goal of this contribution is the construction of filter banks with uniform frequency resolution on general nonlinear frequency scales , as well as the investigation of the frame properties of these systems .our method is based on the simple premise that the filter frequency responses should be translates of each other when viewed on the desired frequency scale .the frequency scale itself is determined by the so - called _ warping function_. 
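before giving the formal definition, the premise above can be illustrated with a few lines of code: a single prototype is translated on the warped frequency scale, so that the k-th frequency response is the prototype evaluated at the warped frequency minus k. the square-root hann prototype, the logarithmic warping and the frequency grid below are illustrative choices, and the exact normalization of the natural downsampling factors used in the paper is not reproduced; the sketch only shows how the filters and the admissible per-channel time steps follow from the warping function and the derivative of its inverse.

```python
import numpy as np

def prototype(x):
    """square-root hann bump supported on [-1, 1] (illustrative choice)."""
    return np.where(np.abs(x) <= 1.0, np.cos(0.5 * np.pi * x), 0.0)

def warped_responses(xi, warp, ks):
    """frequency responses h_k(xi) = prototype(warp(xi) - k): integer translates
    of the prototype when viewed on the warped frequency axis."""
    w = warp(xi)
    return np.array([prototype(w - k) for k in ks])

# logarithmic warping -> wavelet-like (constant-q) covering of positive frequencies
warp = np.log2
inv_warp_derivative = lambda k: np.log(2.0) * 2.0 ** k   # derivative of the inverse warping

xi = np.linspace(2.0 ** -2, 2.0 ** 6, 1 << 14)           # analysis band (arbitrary units)
ks = np.arange(-1, 6)
filters = warped_responses(xi, warp, ks)

# the bandwidth of the k-th filter scales with the derivative of the inverse warping
# at k, so the admissible time step for that channel scales with its reciprocal
for k in ks:
    bw = inv_warp_derivative(float(k)) * 2.0             # prototype support is 2 in warped units
    print(f"k = {k:2d}: center ~ {2.0 ** k:7.2f}, bandwidth ~ {bw:7.2f}, "
          f"max time step ~ {1.0 / bw:7.4f}")

# the squared prototype translates sum to one, the kind of summation condition
# invoked later for constructing tight frames (checked numerically here)
overlap = np.sum(filters ** 2, axis=0)
interior = (xi >= 2.0 ** 0) & (xi <= 2.0 ** 4)
print("overlap deviation from 1 on the interior:", float(np.ptp(overlap[interior])))
```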
generally , any bijective , continuous and increasing function , where is an interval , specifies a frequency scale on . for simplicity, we only consider the two most important cases and in this contribution .the warped system of frequency responses corresponding to the prototype and warping function is simply given by this method allows for a large amount of flexibility when selecting the desired frequency scale , but we also recover classical time - frequency and time - scale systems : clearly , we obtain a regular system of translates for any linear function , leading to gabor systems . on the other hand , observing shows that logarithmic provides a system of dilates , resulting in wavelet systems . in order to obtain_ nice _ systems , the derivative of the inverse warping function must be a -moderate weight function .[ def : warpfun ] let .a bijective function is called _ warping function _ , if with , and the associated weight function is -moderate for some submultiplicative weight . if , we additionally require to be odd . [rem : transinv ] moderateness of ensures translation invariance of the associated weighted spaces . in particular , holds for all .the definition above only allows warping functions with nonincreasing derivative and , if , we also require point - symmetry . both constraints can be easily weakened by extending the warping function definition to any function , such that there exists another function satisfying and , with being a warping function as per definition [ def : warpfun ] .the results presented in this paper hold for such warping functions . since all the examples that we wish to discuss have nonincreasing derivative in the first place , we will omit the details of working with two distinct warping functions , .a warped filter bank is now defined simply by selecting a prototype , a warping function and a family of time steps .while the latter can be chosen freely , the warping function induces some canonical choice of downsampling factors , which relates to the essential support of the frequency responses and is particularly suited for the creation of warped filter bank frames , see section [ sec : warpedframes ] .[ def : warpedfilter banks ] let be a warping function and . furthermore , let be a set of downsampling factors .then the _ warped filter bank _ associated to and is given by with and as defined in eq . .we further call , with and as in eq . for some _ natural downsampling factors _ , as they lead to frequency responses with bounded -norms .to conclude this section we give some examples of warping functions that are of particular interest , as they encompass important frequency scales . in proposition [ prop : warpingfunctions ] at the end of this section , we show that the presented examples indeed define warping functions in the sense of definition [ def : warpfun ] .[ ex : wavelet1 ] choosing , with leads to a system of the form this warping function therefore leads to being a dilated version of up to normalization .the natural downsampling factors are given by .this shows that is indeed a wavelet system , with the minor modification that our scales are reciprocal to the usual definition of wavelets .[ ex : rplusalpha1 ] the family of warping functions , for some and ] .if is a symmetric bump function centered at frequency , with a bandwidthdb bandwidth , is not important for this example .] 
of , then is a symmetric bump function centered at frequency , with a bandwidth of .up to a phase factor , .varying , one can _ interpolate _ between the gabor transform ( , constant time - frequency resolution ) and a wavelet - like ( or more precisely erb - like ) transform with the dilation depending linearly on the center frequency ( ) . through our construction , we can obtain a transform with similar properties by using the warping functions , for ,1] ] , and for we obtain .finally , the time - frequency atoms are again obtained by translation of .all in all , it can be expected that the obtained warped filter banks provide a time - frequency representation very similar to sampled -transforms with the corresponding choice of . , for ( light gray ) , ( dark gray ) and ( medium gray ) .note that the horizontal axis is logarithmic .( right ) warping functions from examples 3 and 4 : this plot shows the erblet warping function and , for ( light gray ) , ( dark gray ) and ( medium gray ) .the horizontal axis is linear.,title="fig : " ] , for ( light gray ) , ( dark gray ) and ( medium gray ) .note that the horizontal axis is logarithmic .( right ) warping functions from examples 3 and 4 : this plot shows the erblet warping function and , for ( light gray ) , ( dark gray ) and ( medium gray ) .the horizontal axis is linear.,title="fig : " ] [ prop : warpingfunctions ] the following functions are warping functions : 1 . for some .2 . for some and ] .( a ) : in the first case we find that for this weight function we find . on the other hand , it is easy to see that the weight function is submultiplicative. therefore is indeed -moderate .( b ) : in the second case , we assume for simplicity . is in and the inverse of is given by .assume that is -moderate with constant , then therefore , is -moderate with a submultiplicative weight function .it remains to show moderateness of .to that end , define and observe and similarly . by elementary manipulation and taking squarestwice , where the last inequality holds , since at is the global minimum of .this shows that is submultiplicative and is -moderate , concluding the proof of ( b ) . for general , the proof follows the same steps .( c ) : the third function is easily identified as being once continuously differentiable and point symmetric around .the corresponding weight function is given by similar to example [ ex : wavelet1 ] , we find that , which shows that is indeed moderate with respect to the submultiplicative weight function .( d ) : for the last warping function we also start by noting that it is in .further , we find moreover , we define the weight function , which is submultiplicative since obviously , , concluding the proof .we can now apply the results in section [ ssec : necandsuf ] to warped filter banks in order to investigate their frame properties in more detail .the application of proposition [ pro : necandsuf ] is straightforward and leads to the following corollary .[ cor : plwarp ] let be a filter bank for .* if is a bessel sequence with bound , then if is a frame with lower bound and , then * assume that , there are constants , such that ] , allowing us to divide the sum into terms with and .let further and apply lemma [ lem : cyestimate ] with and to continue estimating eq .by since is nonincreasing away from , we can use the following estimate for negative to find consequently the same estimate can be obtained for using the symmetry of . 
to conclude the proof of the upper frame bound , we recall that for some and therefore }\\ & \leq 4cd \tilde a^{1+\epsilon}\zeta(1+\epsilon,\tilde a)\zeta(1+\epsilon,1 ) < \infty .\end{aligned}\ ] ] we now turn our attention to the lower frame bound and show that the sufficient condition is satisfied for sufficiently small .our goal is to show that as , uniformly over . to this end , we proceed by defining and estimating each separately . if , then obvious modifications to the steps leading to the estimates for yield which converges to for .it is worth mentioning that we again used the setting , which implies that ] .further we will estimate the term , while assuming , with the constant as used in the previous steps .we continue by showing that implies that is large for sufficiently small .more explicitly , we prove for some function with , as tends to . to show the contraposition of this statement , note that implies , as the interval containing is a subset of the interval defining the set .recall , with as before , and assume to find by the ftc and submultiplicativity of the function we find that recall to conclude that for , analogous derivations show by defining the function via its monotonous inverse we indeed showed the desired implication eq . for both cases .putting together all the pieces , we obtain since the right hand side tends to zero for we have indeed shown that the sufficient condition for the lower frame bound is satisfied for sufficiently small , which finishes the proof for . for , the only adjustment necessary to provethe upper bound is that we assume ,\tilde a^{-1 } ] - \frac{f^{-1}(m)}{cv(m)}}\\ 0\quad & \text { else . } \end{cases}\ ] ] it satisfies proposition [ th : christophgeneral ] for any . with , we obtain that and form a tight warped filter bank with for the warping function introduced in example [ ex : alpha1 ] , we want to determine that satisfy eq . . restricting to for simplicity , we obtain where is the set of positive , odd numbers less or equal to and is the set of nonnegative , even numbers less or equal to .now let , and choose the hann window again . with , we obtain that and form a tight warped filter bank with finally , let us revisit the warping function introduced in example [ ex : rplusalpha1 ] .although eq . is easily evaluated , more instructive estimates can be calculated by a weaker condition , obtained from eq . 
via the ftc : choosing the hann window with and , we obtain , which converges to for and is bounded above by .therefore , and and form a tight warped filter bank with for , we obtain which converges to for and is bounded above by for .therefore , and form a tight warped filter bank with note that the estimates for are very coarse and any sequence of converging towards _ slowly enough _ for will do .in this contribution , we have introduced a novel , flexible family of structured time - frequency representations .we have shown that these representations are able to recreate or imitate important classical time - frequency representations , while providing additional design freedom .the representations proposed herein allow for intuitive handling and the application of important classical results from the theory of structured frames .in particular , the construction of tight frames of bandlimited filters reduces to the selection of a compactly supported prototype function whose integer translates satisfy a simple summation condition and sufficiently small downsampling factors .moreover , the warping construction induces a natural choice of downsampling factors that further simplify frame design . with several examples ,we have illustrated not only the flexibility of our method when selecting a warping function , or equivalently a frequency scale , but also the ease with which tight frames can be constructed .while the second part of this manuscript series discusses the integral transform analogue of the proposed construction , the associated ( generalized ) coorbit spaces and its sampling properties in the context of atomic decompositions and banach frames , future work will continue to explore practical applications of warped time - frequency representations and their finite dimensional equivalents on .the proper extension of the warping method and the presented results to function spaces in higher dimensions poses another interesting problem .this work was supported by the austrian science fund ( fwf ) start - project flame ( `` frames and linear operators for acoustical modeling and parameter estimation '' ; y 551-n13 ) and the vienna science and technology fund ( wwtf ) young investigators project charmed ( `` computational harmonic analysis of high - dimensional biomedical data '' ; vrg12 - 009 ) . the authors wish to thank jakob lemvig for constructive comments on a preprint of the manuscript .s. akkarakaran and p. p. vaidyanathan , _ nonuniform filter banks : new results and open problems _ , wavelets and their applications ( g.v .welland , ed . ) , studies in computational mathematics , vol .10 , elsevier , 2003 , pp .259301 . .g. baraniuk and d.l .jones , _ warped wavelet bases : unitary equivalence and signal processing _ , acoustics , speech , and signal processing , 1993 .icassp-93 . ,1993 ieee international conference on , * 3 * ( 1993 ) , 320323 .r. r. coifman , y. meyer , s. quake , and m. v. wickerhauser , _ signal processing and compression with wavelet packets _ , wavelets and their applications ( j.s .byrnes , jenniferl .byrnes , kathryna .hargreaves , and karl berry , eds . ) , nato asi series , vol . 442 , springer netherlands , 1994 , pp . 363379( english ) .to3em , _ weight functions in time - frequency analysis _ , pseudodifferential operators : partial differential equations and time - frequency analysis ( luigi rodino and et al . , eds . ) , fields inst .52 , amer .soc . , providence , ri , 2007 , pp . 343366 .holighaus , c. wiesmeyr , and p. 
balazs , _ construction of warped time - frequency representations on nonuniform frequency scales , part ii : integral transforms , function spaces , atomic decompositions and banach frames ._ , submitted , preprint available : http://arxiv.org/abs/1503.05439 ( 2015 ) .m. s. jakobsen and j. lemvig , _ reproducing formulas for generalized translation invariant systems on locally compact abelian groups _ , transactions of the ams ( 2015 , _ preprint available : http://http://arxiv.org / abs/1405.4948 _ ) . .nazaret and m. holschneider , _ an interpolation family between gabor and wavelet transformations : application to differential calculus and construction of anisotropic banach spaces ._ , nonlinear hyperbolic equations , spectral theory , and wavelet transformations a volume of advances in partial differential equations ( et al . and sergio albeverio , eds . ) , operator theory , advances and applications , vol . 145 , birkhuser , basel , 2003 , pp . 363394 . .necciari , p. balazs , n. holighaus , and p. sondergaard , _ the erblet transform : an auditory - based time - frequency representation with perfect reconstruction _ ,proceedings of the 38th international conference on acoustics , speech , and signal processing ( icassp 2013 ) , 2013 , pp . 498502 .j. princen and a bradley , _ analysis / synthesis filter bank design based on time domain aliasing cancellation _ , acoustics , speech and signal processing , ieee transactions on * 34 * ( 1986 ) , no . 5 , 11531161 .j. princen , a johnson , and a bradley , _ subband / transform coding using filter bank designs based on time domain aliasing cancellation _ , acoustics ,speech , and signal processing , ieee international conference on icassp 87 .12 , apr 1987 , pp .21612164 . .strohmer , _ numerical algorithms for discrete gabor expansions _ , gabor analysis and algorithms : theory and applications ( h. g. feichtinger and t. strohmer , eds . ) , appl ., birkhuser boston , boston , 1998 , pp . 267294 . .twaroch and f. hlawatsch , _ modulation and warping operators in joint signal analysis _ , proceedings of the ieee - sp international symposium on time - frequency and time - scale analysis , 1998 .( pittsburgh , pa , usa ) , oct 1998 , pp . 912 .
a flexible method for constructing nonuniform filter banks is presented. starting from a uniform system of translates, a nonuniform covering of the frequency axis is obtained by nonlinear evaluation. the frequency progression is determined by a warping function that can be chosen freely apart from some minor restrictions. the resulting functions are interpreted as filter frequency responses and, combined with appropriately chosen downsampling factors / translation parameters, give rise to a nonuniform analysis filter bank. classical gabor and wavelet filter banks are obtained as special cases. beyond the state of the art, we construct a filter bank adapted to the auditory erb frequency scale and families of filter banks that can be interpreted as an interpolation between linear (gabor) and logarithmic (wavelet, erblet) frequency scales, closely related to the -transform. we investigate conditions on the prototype filter and the warping function such that the resulting warped filter bank forms a frame. in particular, we obtain a simple and constructive method for obtaining tight frames with bandlimited filters by invoking previous results on generalized shift-invariant systems.
a closed curve may form a well - defined mathematical knot whose main characteristic is the number of intersections when projected onto a plane .unknotting it would require slicing through it . a circular dna forms a closed curve that typically contains knots .there have been many studies of knots in such dna .dna , however , may exist both in closed and open forms , but in either case all topological transformations must occur through the action of cutting and reattaching enzymes such as topoisomerases and resolvases . the cutting has been observed to be facilitated by supercoiling that tightens dna knots .topology changes of open dna also require cutting because of the large size of the molecule . knotted proteins , on the other hand , are small compared to dna andtheir topological states may evolve in time through large conformational changes such as folding from an unknotted extended state and unfolding .all of the native protein knots can be obtained by repeatedly twisting a closed loop and then threading one of the ends through the loop .therefore , they are called twist knots .theoretical studies have established that the folding behavior depends on whether the native state of the protein is knotted in a deep or shallow fashion : it is much harder to tie the former than the latter but the process , in both cases , is predicted to be helped by the nascent conditions provided by the ribosomes .a protein is considered to be deeply knotted in its native state if both ends of the knot , as determined , _ e.g. _ by the kmt algorithm , are far away from the termini ( in practice , by more than about 10 residues ) .otherwise it is considered to be knotted shallowly .notice that the sequential heterogeneity of a protein positions the knot in a specific sequential region and tightening of the knot , upon protein stretching from its termini , goes through jumps to specific locations . in this paper, we consider two different types of conformational changes : thermal unfolding and protein deformation induced by a nearby air - water interface .we find that a sufficiently high temperature can untie any type of knots if one waits long enough , but the topological pathways of unfolding are generally not the reverse of those found for folding .the air - water interface may induce unknotting of shallow knots but we have also found an example of a situation in which a protein acquires a knot . 
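the deep/shallow distinction quoted above reduces to a simple bookkeeping rule once the knotted core has been located, for instance with the kmt algorithm: compare the distance of each knot end from the nearest terminus with a threshold of about 10 residues. the helper below encodes that rule; the threshold follows the criterion stated above, and the example entries use the chain lengths and knot spans quoted below for the knotted proteins studied here.

```python
def knot_depth(chain_length, knot_start, knot_end, threshold=10):
    """classify a knot as 'deep' or 'shallow' from the sequential positions of
    its ends (e.g. as returned by the kmt simplification): the knot is deep if
    both ends lie more than `threshold` residues away from the termini."""
    n_tail = knot_start - 1                    # residues before the knotted core
    c_tail = chain_length - knot_end           # residues after the knotted core
    depth = min(n_tail, c_tail)
    return ("deep" if depth > threshold else "shallow"), n_tail, c_tail

# chain lengths and knot spans quoted in the text for the three knotted proteins
proteins = {"1j85": (156, 78, 119), "2efv": (82, 11, 73), "4lrv": (107, 8, 99)}
for name, (length, start, end) in proteins.items():
    label, n_tail, c_tail = knot_depth(length, start, end)
    print(f"{name}: n-tail {n_tail:3d}, c-tail {c_tail:3d} residues -> {label} knot")
```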
we perform our simulations within a structured - based coarse - grained model and the interface is introduced empirically through coupling of a directional field to the hydropathy index of an amino acid residue in the protein .such a field favors the hydrophilic residues to stay in bulk water and the hydrophobic residues to seek the air , leading to surface - induced deformation and sometimes even to denaturation , defined by the loss of the biological functionality .the simplified character of the model leads to results that are necessarily qualitative in nature they just illustrate what kinds of effects the presence of the interface may bring in , especially in the context of the topological transformations .it should be noted that the behavior of proteins and protein layers at the air - water interface is of interest in physiology and food science .for instance , the high affinity of lung surfactant proteins to stay at the surface of pulmonary fluid generates defence mechanisms against inhaled pathogens .the layers of the interface - adsorbed proteins typically show viscoelastic properties and the enhanced surface viscosity of the pulmonary fluid is thought to provide stabilization of alveoli against collapse . protein films in saliva increase its retention and facilitate its functioning on surfaces of oral mucosa .various proteins derived from malted barley have been found to play a role in the formation and stability of foam in beer .adsorption at liquid interfaces has been demonstrated to lead to bending of and ring formation in amyloid fibers .there are many theoretical questions that pertain to the behavior of proteins at the air - water interfaces .the one that we explore here is whether the interfaces can affect topology .we find that indeed it can : the shallowly knotted proteins may untie and some unknotted proteins may acquire a shallow knot . deeply knotted proteins get distorted but their knottedness remains unchanged .we consider four proteins : 1 ) the deeply knotted yibk from _ haemophilus influenzae _ with the pdb structure code 1j85 , 2 ) the shallowly knotted mj0366 from methanogenic archea _methanocaldococcus jannaschi _( pdb:2efv ) this is the smallest knotted protein known , 3 ) the shallowly knotted dnde from _ escherichia coli _ ( pdb:4lrv ) , and 4 ) chain a of the pentameric ligand - gated ion channel from _ gleobacter violaceus _ ( pdb:3eam ) which is an unknotted protein . from now on, we shall refer to these proteins by their pdb codes . in order to elucidate the effects of hydrophobicitywe shall also consider certain `` mutated '' sequences in which certain residues are replaced by other residues without affecting the native structure .the proteins 1j85 , 2efv , and 4lrv have the sequential lengths of 156 , 82 and 107 respectively and the corresponding sequential locations of their knots are 78 - 119 , 11 - 73 , and 8 - 99 .thus the knot in 2efv is shallow at the c - terminus whereas the one in 4lrv at both termini .our basic structure - based coarse - grained model of a protein has been described in detail in refs .we use our own code .other implementations of structure - based ( or go - like ) models can be found in refs .the primary ingredient of the model is the contact map which specifies which residues may form non - bonding interactions described by a potential well .there are many types of contact maps , as summarized in ref . , and we take the one denoted by ov here . 
this ov map is derived by considering overlaps between effective spheres assigned to heavy atoms in the native state .the radii are equal to the van der waals radii multiplied by 1.24 .the potentials assigned to the contacts between residues and are given by \;\;\;.\ ] ] the length parameters are derived pair - by - pair from the native distances between the residues the minimum of the potential must coincide with the -c-c distance .consecutive -c atoms are tethered by the harmonic potential , where =100 / and is the native distance between and .the local backbone stiffness favors the native sense of the local chirality , but using the self - organized polymer model without any backbone stiffness yields similar results . the value of the parameter has been calibrated to be of order 110 pn which was obtained by making comparisons to the experimental data on stretching .we use the overdamped langevin thermostat and the characteristic time scale in the simulations , , is of order 1 ns .the equations of motion were solved by the 5th order predictor - corrector method . due to overdamping , our code is equivalent to the brownian dynamics approach .a contact is considered to be established if its length is within 1.5 .the trajectories typically last for up to 1 000 000 . despite its simplicity, the structure - based model used here has been shown to work well in various physical situations .in particular , it is consistent ( within 25% error bars ) with the experimental results on stretching for 38 proteins .it also has good predictive powers .for instance , our simulations have yielded large mechanostability of two cellulosome - related cohesin proteins c7a ( pdb:1aoh ) and c1c ( pdb:1g1k ) that got confirmed experimentally . in the case of c7a ,the calculated value of the characteristic unravelling force is 470 pn and measured 480 pn .the model also reproduces the intricate multi - peak force profile corresponding to pullling bacteriorhodopsin out of a membrane .the equilibrium positional rmsf patterns have been found to be agree with all - atom simulations , for instance , for topoisomerase i and man5b complexed with a hexaose .this model has also been used to study nanoindentation of 35 virus capsids and to demonstrate that characteristic collapse forces and the initial elastic constants are consistent with the experimental data .the air - water interface is centered at =0 and extends in the plane so that the bulk water corresponds to negative and air to positive .however , it should be noted that the interface is diffuse its width is denoted by .the interface - related force acting on the -c atom is given by where is the hydropathy index , is set equal to 10 , and =5 .we use the values of as determined by kyte and doolittle .they range between 4.5 for the polar and charged arg and 4.5 for the hydrophobic ile .other possible scales are listed in ref .for each protein , we can identify a degree of hydrophobicity in terms the values of of its amino acids , , where the sum is over the amino acids in the protein .properties of protein conformations are assessed by the fraction , , of the native contacts that are present in the conformation .the phenomenologically motivated addition of the air - water term to the basic structure - based model leads to the experimentally observed formation of a protein layer at the interface and gives rise to the in - layer diffusive behavior which is characteristic of soft colloidal glass with the intermediate values of the fragility indices . 
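Since the displayed formulas are garbled in this copy, a short sketch collecting the energetics just described may help. The 12-6 Lennard-Jones form of the contact potential (with its minimum pinned to the native Cα–Cα distance) is an assumption consistent with the description above, and whether the harmonic tether carries a factor 1/2 is not specified here.

    EPS = 1.0           # contact amplitude epsilon (calibrated to ~110 pN against stretching data)
    K_TETHER = 100.0    # backbone spring constant, 100 eps/A^2, as quoted above

    def contact_potential(r, r_native, eps=EPS):
        # assumed 12-6 Lennard-Jones form; sigma is fixed pair-by-pair so that the
        # minimum coincides with the native Ca-Ca distance
        sigma = r_native / 2.0 ** (1.0 / 6.0)
        x6 = (sigma / r) ** 6
        return 4.0 * eps * (x6 * x6 - x6)

    def tether_potential(r, r_native, k=K_TETHER):
        # harmonic tether between consecutive Ca atoms
        return k * (r - r_native) ** 2

    def contact_established(r, r_native, factor=1.5):
        # a contact counts as present if its length is within 1.5 sigma, as stated above
        sigma = r_native / 2.0 ** (1.0 / 6.0)
        return r <= factor * sigma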
specifically , as a function of the number density of the proteins at the interface , the surface diffusion coefficient obeys a vogel - fulcher - tamann law .this is consistent with the microrheology experiments on the viscoelastic behavior of protein layers . in the initial state , proteins ( is between 2 and 50 are placed in a large square box so that their center of mass are around =3.2 0.4 nm with the and coordinates selected randomly .their initial conformations are native .the box is bounded by a repulsive bottom at -7 nm and by repulsive sides .the force of the wall - related repulsion decays as the normal distance from the wall to the tenth power .the walls may be brought to a desired smaller separation in an adiabatic way , however , here we focus on the dilute limit in which the proteins are far apart .the purpose of considering many proteins simultaneously is to generate statistics of behavior and a spread in the arrival times to the interface .if the proteins happen to come close to one another , their mutual interactions correspond to repulsion that forbids overlap .the thermodynamic properties of the system in the bulk are assessed by determining the temperature ( ) dependence of the probability of all contacts being established simultaneously . is determined in several long equilibrium runs . for typical unknotted proteins ,the optimum in the folding time is in the vicinity of ( is the boltzmann constant ) where is nonzero .a more detailed discussion of this point is presented in ref . then plays the role of the effective room temperature .this value of is also consistent with the calibration of the parameter .thermal unfolding is studied by considering a number of trajectories at that start in the native state and last for up to 1 000 000 .unfolding is achieved if all native contacts that are sequentially separated by more than a threshold value of residues are ruptured for the first time .an ideal unfolding would involve breaking of all contacts , but such simulations would take unrealistically long to run .we thus introduce the threshold that separates contacts that are sequentially local from the non - local ones .contact in -helices do not exceed the distance of 4 .usually , we take =10 .the median value of this rupture time defines the characteristic and -dependent unfolding time .an alternative criterion could involve crossing a threshold value of .the dynamics of staying in the knotted state in the bulk or on approaching the air - water interface is assessed by monitoring the time dependence of the probability that , at time , the protein stays in its native topology .even though the deeply knotted 1j85 protein is difficult to fold from a fully extended conformation at any , we find that it is easy to unfold it at elevated , if the waiting time is sufficiently long . within our cutoff - time, we could observe it happen for .for we have not recorded any refolding events after full unfolding . at , 21% of the 28 trajectories resulted in retying the trefoil knot .note that the starting conformation for the refolding process is not at all fully extended and is thus biased towards knotting a situation most likely encountered in ref .the loss of all contacts may result in conformations that look like expanded globules .taking of 10 , the values of are 565 045 , 116 512 , and 25 479 for equal to 0.9 , 1.0 , and 1.2 respectively . 
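The unfolding criterion used here (every native contact between residues separated by more than 10 positions along the sequence broken at the same instant) is easy to express in code. A sketch, assuming the instantaneous contact state of a trajectory frame is available as a boolean list aligned with the native contact map:

    def is_unfolded(native_contacts, contact_on, l_threshold=10):
        # native_contacts: list of residue pairs (i, j); contact_on: parallel booleans,
        # True if that contact is currently within 1.5 sigma.  Unfolded means that no
        # sequentially non-local contact (|i - j| > l_threshold) survives.
        return not any(on for (i, j), on in zip(native_contacts, contact_on)
                       if abs(i - j) > l_threshold)

    def first_unfolding_time(times, frames, native_contacts, l_threshold=10):
        # first frame of a trajectory at which the criterion holds; None if it never does
        for t, contact_on in zip(times, frames):
            if is_unfolded(native_contacts, contact_on, l_threshold):
                return t
        return None
    # the median of first_unfolding_time over many trajectories gives the characteristic
    # unfolding time quoted above for each temperature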
breaking contacts is not directly related to untying .we find that the median untying times are 198 850 , 85 050 , and 34 710 for the same temperatures respectively .this indicates that at the lower two of the three temperatures untying precedes unfolding and decreasing the enhances the gap between the two events ( see fig .s1 in supplementary information , si ) . only in one trajectory out of the total of 25 at , unfolding takes place 200 earlier than untying . for , unfolding takes place before untying in most of the trajectories if one takes =10 , but for =4 , the reverse holds .it is only at that unfolding always takes place before untying even if =4 .the unfolding pathway of knotted proteins has been studied in ref . for a structurally homologous yibk - like methyltransferase ( pdb:106d ) .the theoretical part of the study also involved a structure - based model , but with a very different contact map .the finding was that untying takes place after unfolding and this was taken as a signature of a certain hysteresis in the process .however , the value of was not specified presumably the simulations were done at a high .we just demonstrate that the actual sequence of the unfolding events depends on .since , in our model , of corresponds to about 850 k , it is the still lower that are relevant experimentally and thus observing unfolding before unknotting on heating seems unlikely .however , the experimental studies involve chemical denaturation by gnd - hcl , which allows for a broader range of conditions that are meaningful experimentally .the mechanisms of unknotting in 1j85 are dominated by direct threading ( dt ) events , illustrated in fig .[ tunf1j85dt ] , followed in statistics by slipknotting ( sk ) events , as illustrated in fig .[ tunf1j85sk ] .we observe no other unfolding mechanisms .they have been discussed in refs . in the context of folding except that now they operate in reverse .for instance , the dt mechanism involves pulling of a terminus of the protein out of a loop and the sk mechanism involves pulling a slipknot out of the loop .the determination of the precise nature of the process is based on a visual monitoring of the subsequent snapshots of the evolution .the exact proportions between the mechanisms depend on the .the red color is used for the n - terminal segment , blue for the c - terminal one and green for the middle part of the backbone . however, the number of instances of unknotting through sk decreases with a growing ( 32% , 8% , and 0% at 0.9 , 1.0 and 1.2 ) .unknotting in the trajectory shown in fig .[ tunf1j85dt ] takes place at time 221 400 so the last panel corresponds to a situation in which the protein is unknotted but not yet fully unfolded .topological pathways of folding in the shallowly knotted 2efv have been demonstrated to be of two basic kinds : through single loops or through two smaller loops .the latter is the dominant pattern and is a two - stage process .the two - stage pathways have not been observed in the deeply knotted 1j85 . 
in each of these cases ,the specific mechanisms of making the knot involve , in various proportions , dt , sk , and mouse - trapping ( mt ) .mt is similar to dt but the knot - loop moves onto the terminal segment of the protein instead of the other way around .there is also a possibility of an embracement ( em ) in which a loop forms around a terminal segment .the dt , sk , mt , and em mechanisms may operate either at the level of a single larger loop in a process , which is topologically one - stage , or at the level of two smaller loops and hence in two stages . again , the identification of the nature of the pathway is obtained visually .when unfolding 2efv at , all events are two - stage , exclusively sk - based , and are soon followed by refolding . at 0.7 , only 28% of 50 trajectories are two - stage ( dt and sk are involved in each stage ) and the remaining ones are one - stage .most of them refold back soon afterwards . at is no refolding and all trajectories unfold through the single loop mechanism .the process is dominated by the dt events , followed by sk , and then some mt ones .an example of a dt - based pathway is shown in fig .[ tunf2efv ] . the n - terminal segment ( 1 - 16 )is marked in orange , sites 17 - 53 in red , sites 54 - 78 in blue , and the c - terminal segment ( 79 - 82 ) in gray .in all trajectories , untying of 2efv occurs before thermal unfolding ( for at see fig .s1 in si ) .the physics of folding and unfolding in 4lrv is found to be similar to that of 2efv , but the dt unfolding events are more likely to proceed from the n - terminus instead of the c - terminus .another difference is that folding at is seen to take place exclusively through the two - loop mechanism .12 out of 50 trajectories led to folding .7 of them proceeded through the em - sk pathway , 4 through sk - sk , and 1 through dt - sk ( see fig .s2 in si ) .we conclude that the thermal unfolding processes of the knotted proteins are generally distinct from a simple reversal of folding .for instance , the dominant two - loop folding trajectories do not form a reverse topological template for the dominant single - loop unfolding trajectories .a similar observation has been already made for unknotted proteins although it involves no changes in the topology .we now consider the interface - related effects at .the proteins that come to the interface get deformed and lose some of their native contacts .we find that these phenomena do not affect the topological state of the deeply knotted 1j85 as demonstrated in fig .[ intpro ] .the data shown are for one example trajectory which corresponds to a specific starting protein orientation with respect to the interface .various orientations and different initial locations yield various adsorption times .when one averages over 50 proteins , one gets the results shown in fig .[ intpro50 ] .the loss of contacts is related to the approach of the center of mass of the protein(s ) to the center of the interface .the knot - ends may shift from one trajectory to another , but there is no knot untying . 
furthermore combining the effects of the interface with those of an elevated temperatureis found not to promote any untying .the situation changes for the shallowly knotted proteins .now the knots do untie .an untying process is illustrated in fig .[ intpro ] ( 2efv and two of its mutants ) , fig .[ intpro50 ] ( 2efv and 4lrv ) and fig .[ figwa2efv ] ( 2efv ) .adsorption of 2efv is driven by the hydrophobic n - terminus ( its first two residues are hydrophobic while the hydropathy indices of the first 10 residues add up to ) but the untying process takes place primarily through dt ( 7% by mt ) at the hydrophilic c - terminus . due to the distortion of the whole protein , it is difficult to decide whether the unfolding process involves one or two loops so we do not provide the partitioning numbers .the last nine residues in 2efv are and their hydropathy indices add up to + 0.41 .however , the protein can tie back again either through dt or sk and hence in fig .[ intpro ] decays to a finite value instead of to zero .overall , the changes in the topology , as described by , depend both on the approach to the interface and on the related loss of the contacts .we now consider two mutations at the c - terminus in 2efv .the first mutation replaces the last 9-residue sequential segment by which makes it more hydrophobic the hydropathy indices add up to + 2.81 and the second mutation , to , makes it hydrophilic the hydropathy indices add up to -2.49 .[ intpro ] shows that both mutations enhance the probability of staying knotted but mutation 2 is much more effective in doing so .the hydrophobic c - terminus of the first mutation favors an accelerated adsorption with less time to untie .the hydrophilic c - terminus , on the other hand , gets stuck in the water phase which preserves the knotted topology of the protein . in conclusion ,the distribution of the hydrophobicity of a knotted protein is a factor contributing to the untying probabilities at the interface .a similar behavior is observed for 4lrv ( fig .[ intpro50 ] ) except that this protein is more likely to stay knotted than 2efv .the two proteins are quite comparable in their linear size in the native state : the radius of gyration for 4lrv is 13.08 , and for 2efv 12.89 .however , they differ in the contact - mediated connectivity significantly : 4lrv has 36% more contacts than 2efv .this feature makes 4lrv harder to untie than 2efv .two examples of the interface - induced unknotting of 4lrv are shown in fig .s3 in si , which demonstrates two available untying mechanisms of 4lrv , _i. e. _ dt and mt with dt occuring more frequently .sk is not observed in the unknotting of 4lrv , which may be due to the fact that the terminal outer segments of 4lrv are too short to form a slipknot .if at least one of the terminal segments of an unknotted protein is hydrophobic , there is a possibility that dragging it towards the interface may lead to formation of a knot .this is , in fact , what we found to happen in protein 3eam with .this protein comprises 311 structurally resolved residues .its native state is unknotted and thermal fluctuations in the absence of the interface do not lead to any knot - tying in the -range between 0.3 and 0.7 .the net hydropathy score for its n - terminal segment of 8 residues , which should cross an entangled region of this protein to form a knot , is . 
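The various hydropathy sums quoted in this discussion are plain Kyte-Doolittle table look-ups. A sketch; the example segment is hypothetical (the actual terminal sequences and mutations are not reproduced in this copy), and whether the per-protein degree of hydrophobicity is a plain sum or a per-residue average is not spelled out here, so the average is assumed.

    # Kyte-Doolittle hydropathy indices (standard published scale)
    KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
          "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
          "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

    def net_hydropathy(segment):
        # net Kyte-Doolittle score of a one-letter amino-acid string, e.g. a terminal segment
        return sum(KD[a] for a in segment.upper())

    def mean_hydrophobicity(sequence):
        # assumed per-residue average used as the protein's degree of hydrophobicity
        return net_hydropathy(sequence) / len(sequence)

    print(net_hydropathy("GSTAKLVNE"))   # hypothetical 9-residue tail, for illustration only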
due to the low hydrophobicity of this segment ,the knotting process in most trajectories is accomplished when the n - terminus is still in the water phase , as shown in fig [ 3eamknot ] .this terminus gets lifted to the interface together with other segments after the c - terminus ( its net hydrophathy of 8 residues is ) of the protein is already adsorbed to the interface .we find that in 52% of 50 trajectories , a knot forms through dt .an example of a formed trefoil knot is illustrated in fig .[ 3eamknot ] .if one makes the n - terminal segment more hydrophobic ( to =+2.44 ) through mutations , then the success in tying the knot is : 68% .if one makes it more polar ( to ) then the success rate is 66% .thus both mutations increase the knotting probability of 3eam .the n - terminus of the hydrophobic mutation can be easily adsorbed across the entangled part to the interface , which increases the probability of forming a knot . on the other hand , if the n - terminal segment is made more hydrophilicit may be dragged downward to the water phase after the whole protein gets adsorbed .this phenomenon may create another chance at passing through the entangled part of the protein , increasing the probability of knotting .the knotted conformation need not last the knot , if very shallow , may untie through the subsequent evolution . we have also observed knotting in a transport protein 4nyk from _ gallus gallus_. however , we have not detect it in other plausible candidates such as 1a91 , 1fjk , 1h54 , 1h7d , 1kf6 , 1n00 , 1nnw , 1o4 t , 1rw1 , 1yew , 2ec8 , 2fv7 , 2hi7 , 2i0x , 2ivy , 2oib , 3jud , 3kly and 3kzn .these proteins have been selected so that at least one of their termini is hydrophobic since such terminal segments have an enhanced probability of moving through the protein on approaching the interface .we have demonstrated that the forces associated with the air - water interface may affect the topological state of a protein .it is an interesting question to ask how to devise experimental ways to detect such transformations , if they indeed arise .after all , our model is coarse - grained and phenomenological , especially in its account of the interface .thus , further investigation such as the comparison between atomistic and coarse - grained models would be required .all atom simulations of air - water interfaces , even in the absence of any proteins , are expected be complicated due to the huge number of molecules needed to set up a necessary density profile that would be stationary .our simple model points to possible topological transformations that may take place at the interface .we hope it will provide motivation for studies by other means .it should be noted that topological transformations can also occur in the intrinsically disordered proteins simply as a result of time evolution .this has been demonstrated through all - atom simulations for polyglutamine chains of a sufficient length . for 60-residue chains , about 10% of the statistically independent conformationshave been found to be knotted .these knots can be shallow or deep and are not necessarily trefoil .the knotted character of these conformation may be related to the toxicity of proteins involved in huntington disease .contrary to the results reported in ref . , we find that shallow knots always untie before the unfolding on heating and the untying of deep knots may follow unfolding only at unrealistically high temperatures , though perhaps at acceptable concentration of the denaturant . 
it should be noted that homopolymers without any attractive contact interactions may tie knots purely entropically .* acknowledgments * we appreciate comments of .gmez - sicilia about the manuscript .the project has been supported by the national science centre , grant no .2014/15/b / st3/01905 and by the eu joint programme in neurodegenerative diseases project ( jpnd cd fp-688 - 059 ) through the national science centre ( 2014/15/z / nz1/00037 ) in poland .all authors wrote the paper .marek cieplak designed the research and wrote the code .mateusz chwastyk wrote knot - related pieces of the code and made preliminary runs .yani zhao did the calculations and made the figures .* competing financial interests * dean , f. b. , stasiak , a. , koller , t. & cozzarelli , n. r. duplex dna knots produced by escherichia coli topoisomerase i : structure and requirements for formation ._ j. biol .chem . _ * 260 * , 4975 - 4983 ( 1985 ) .ernst , c. & sumners , d. w. a calculus for rational tangles : application to dna recombination .soc . _ * 108 * , 489 - 515 ( 1990 ) .witz , g. , dietler , g. & stasiak , a. tightening of dna knots by supercoiling facilitates their unknotting by type ii dna topoisomerases .usa _ * 108 * , 3608 - 3611 ( 2011 ) .sukowska , j. i. , rawdon , e. k. , millet , k. c. , onuchic , j. n. & stasiak , a. conservation of complex knotting and slipknotting patterns in proteins .usa _ * 109 * , e1715-e1723 ( 2012 ) .li , w. , terakawa , t. , wang , w. & takada , s. energy landscape and multiroute folding of topologically complex proteins adenylate kinase and 2ouf - knot .usa _ * 109 * , 17789 - 17794 ( 2012 ) .euston , s. r. , hughes , p. , naser , m. a. & westacott , r. molecular dynamics simulation of the cooperative adsorption of barley lipid transfer protein and cis - isocohumulone at the vacuum - water interface ._ biomacromolecules _ * 9 * , 3024 - 3032 ( 2008 ) .clementi , c. , nymeyer , h. & onuchic , j. n. topological and energetic factors : what determines the structural details of the transition state ensemble and `` en - route '' intermediates for protein folding ?an investigation for small globular proteins .biol . _ * 298 * , 937 - 953 ( 2000 ) .szklarczyk , o. , staron , k. & cieplak , m. native state dynamics and mechanical properties of human topoisomerase i within a structure - based coarse - grained model . _proteins : structure , function and bioinformatics _ * 77 * , 420 - 431 ( 2009 ) . .the six panels on the left show successive snapshots of the backbone conformations at times indicated .the six panels on the right provide the corresponding schematic representations of these conformations .the n - terminal segment is shown in shades of orange and red , the c - terminal segment in shades of blue , and the middle segment in shades of green . ,scaledwidth=45.0% ] ) , fraction of preserved native contacts ( ) and probability of being knotted ( ) as a function of time , , for 1j85 ( red ) , 2efv ( blue ) and its two mutants ( 1,2 ; black ) at the air - water interface .the data are based on one trajectory in each case . , scaledwidth=45.0% ] , and averaged over 50 proteins at .the black , red , and blue lines are for 2efv , 4lrv , and 1j85 respectively . 
during the first 10 000, the proteins diffuse around without the interface.
[figure caption] the marks along the axis indicate the subsequent positions of the center of mass of the protein; the 6 bottom panels show the schematic topological representations corresponding to the conformations shown in the top panels; untying is accomplished through the dt mechanism.
[figure caption] the red segment extends from the n-terminus (beginning at site 5, since the first four amino acids are not available in the crystal structure) to site 35, the green segment from 36 to 70, and the blue segment from 71 to the c-terminus.
using a structure - based coarse - grained model of proteins , we study the mechanism of unfolding of knotted proteins through heating . we find that the dominant mechanisms of unfolding depend on the temperature applied and are generally distinct from those identified for folding at its optimal temperature . in particular , for shallowly knotted proteins , folding usually involves formation of two loops whereas unfolding through high - temperature heating is dominated by untying of single loops . untying the knots is found to generally precede unfolding unless the protein is deeply knotted and the heating temperature exceeds a threshold value . we then use a phenomenological model of the air - water interface to show that such an interface can untie shallow knots , but it can also make knots in proteins that are natively unknotted .
high altitude astronomical sites are a scarce commodity with increasing demand .a thin atmosphere can make a substantial difference in the performance of scientific research instruments like millimeter - wave telescopes or water erenkov observatories . in our planetreaching above 4000 metres involves confronting highly adverse meteorological conditions .sierra negra , the site of the large millimeter telescope / el gran telescopio milimtrico ( lmt ) is exceptional in being one of the highests astronomical sites available with endurable weather conditions .the lmt site combines high altitude ( 4580 m ) and low atmospheric water content .the water vapor opacity has been monitored since 1997 with radiometers working at 225 ghz showing that the zenith transmission at the site is better than 0.89 at 1 mm during 7 months of the year and better than 0.80 at 850 microns during 3 months of the year .there is no telescope as massive as the lmt above 4500 metres anywhere else and one can barely expect to operate at that altitude with temperatures above freezing .the development of the lmt site led to the interest and development of other scientific facilities benefiting from the high altitude conditions and sharing the same basic infrastructure . in july 2007the base of sierra negra was selected as the site for the high altitude water erenkov ( hawc ) gamma - ray observatory , an instrument whose performance depends critically on its 4100 m altitude location .sierra negra , also known as tliltepetl , is a 4580 meter volcano inside the parque nacional pico de orizaba , a national park named after the highest mountain of mexico . with an altitude of 5610 m picode orizaba , also known as citlaltepetl , is one of the seven most prominent peaks in the world , where prominent is related with the dominance of the mountain over the region .the parque national pico de orizaba has an area of 192 km enclosing the two volcanic peaks , separated by only 7 km from top to top , and their wide bases .tliltepetl is an inactive volcanic cone formed 460,000 years ago , much earlier than citlaltepetl whose present crater was created just 4100 years ago and has a record of activity within the last 450 years , including the flow of 0.1 km of lava in 1566 and a last eruptive event in 1846 .these two peaks are located at the edge of the mexican plateau which drops at the east to reach the gulf of mexico at about 100 km distance , as shown on the left side of fig .the weather of the site is influenced by the dry weather of the high altitude central mexican plateau and humid conditions coming from the gulf of mexico . in february 1997sierra negra was selected as the site of the lmt , a 50 m antenna for astronomical observations in the 0.8 - 3 millimeter range .the top of sierra negra , defined now by the position of the telescope , on the right side of fig. [ zoom ] , has universal transverse mercator ( utm ) and geographical coordinates and respectively. the development of the lmt site led to the installation of further scientific facilities benefiting from its strategic location and basic infrastructure like the 5 m radio telescope rt5 , a solar neutron telescope and cosmic ray detectors , among others . 
in july 2007 the base of sierra negra , about 500 m below the summit , was chosen as the site of the high altitude water erenkov ( hawc ) observatory , a water erenkov observatory for mapping and surveying the high energy -ray sky .hawc will be complemented by two atmospheric air erenkov telescopes , the omega ( observatorio mexicano de gammas ) formerly part of the hegra array .the seeing of sierra negra was monitored between 2000 and 2003 to quantify the potential of the site for optical astronomy .the site has a median seeing of 0.7 " , consistent with of a prime astronomical site .the wind velocity at 200 mbar has been analyzed using the noaa ncep / ncar reanalysis database showing that sierra negra is comparable to the best observatory sites as mauna kea in terms of applying adaptive optics techniques such as slow wavefront corrugation correction , based on the premise that global circulation of atmospheric winds at high altitude can be used as a criterion to establish the suitability of a site for the development of adaptive optics technique as the wind velocity at 200 mbar is strongly correlated to the average wavefront velocity allowing to compute the coherence time .different scientific facilities seek particular conditions and their dependence on meteorological conditions vary . among the sierra negra facilities we can note : * the large millimeter telescope requires minimum atmospheric opacity in the millimeter range , which translates in a reduced water vapour column density . according to design specifications , lmt operation at 1mm require wind velocities below 9 m s and the antenna is able to survive winds up to 250 km / h ( 69.4 m s ) .* the rt5 5 m radio telescope will operate at 43 and 115 ghz for observations of the sun .nighttime work will focus on interstellar masers and monitoring of mm - wave bright active galactic nuclei .rt5 requires absence of clouds in the line of sight . *optical and atmospheric erenkov telescopes require clear nights and relatively low humidity ( below 80% ) during nighttime .* water erenkov observatories like hawc seek high altitude environments which allow for a deep penetration of atmospheric particle cascades .they are basically immune to weather , although freezing conditions and large daily temperature cycles are concerns .the same applies to small cosmic ray detectors installed at sierra negra summit ..positions of the weather stations relative to the lmt ; increases to the east and to the north . [ cols= " <, > , > , > " , ] according to the davis weather station , the median temperature for the site is , with quartile values of and .the extreme temperatures recorded by the davis station on site are relatively mild : the minimum temperature in the data is while the maximum 11.8 .the texas station registered the same median temperature , but with somewhat larger variations and more marked extremes : and .as it would be expected that both stations register the same extreme temperature , it is required to perform some experiments to determine if these differences are real or are due to the distinct temperature sensors sensitivity to extreme conditions .we plan to carry out such experiments . in any casethe temperatures at the site do not show large variations .the daily cycle , quantified as the difference between the night and day medians , is 1.77 , going from 0.26 to 2.03 respectively .a similar value is obtained for seasonal variations : the median and third quartile ( ) values for dry / wet differ by only 1.76 and 1.09 respectively . 
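The statistics quoted in this section (site median, quartiles, day/night and dry/wet medians, monthly and hourly cycles) can all be obtained from the raw station logs with a few lines of pandas. A sketch; the file name, the column names, and the 7h-19h daytime window are assumptions, not the stations' actual formats or the authors' exact definitions.

    import pandas as pd

    df = pd.read_csv("davis_station.csv", parse_dates=["timestamp"])   # hypothetical log format
    t = df.set_index("timestamp")["temperature_C"]

    quartiles = t.quantile([0.25, 0.50, 0.75])        # site median and quartiles
    by_hour = t.groupby(t.index.hour).median()        # daily cycle of the median
    by_month = t.groupby(t.index.month).median()      # seasonal cycle of the median

    day = t[(t.index.hour >= 7) & (t.index.hour < 19)].median()
    night = t[(t.index.hour < 7) | (t.index.hour >= 19)].median()
    daily_cycle = day - night                         # ~1.8 C according to the text

    wet_months = [5, 6, 7, 8, 9, 10]                  # May-October, as used later in the paper
    wet = t[t.index.month.isin(wet_months)].median()
    dry = t[~t.index.month.isin(wet_months)].median()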
the lowest quartile does show a larger -but still mild- difference , close to 2 . in fig .[ monthly_hour ] the amplitude of the curve between the lowest median , -0.22 at 5 am , and the highest median , 4.15 at 11 pm , is 4.37 .the coldest month is december , with a median of , 2.5 below the warmest month , june , with a median of .temperature distributions are shown in the cumulative histograms on the left panel of fig .[ temp3dq2 ] .the distributions for nighttime , daytime , dry and wet seasons are shown as indicated .the temperature differences due to the diurnal cycle are larger for values above the median while the seasonal temperature differences are larger for values below the median .the right hand side panel of the same figure shows in a grey scale diagram the medians per hour and month .temperatures are at their lowest during the nights of the dry months , specifically between december and february , and highest around or just after noon between april and june .note that the period between july and september is not warmer than april and may , due to the effect of rain .if we consider the altitude of the site and the temperature gradient of a standard atmosphere model , , the corresponding sea - level temperature is , about above the standard atmosphere base value .this is clearly an effect due to the low latitude , which results in a warmer temperature at a high altitude site .a final remark is that the site presents a good degree of thermal stability , beneficial for scientific instruments : thermal stability will help the performance of the lmt , designed to actively correct its surface to compensate for gravitational and thermal deformations .the barometre of the texas weather station did not provide meaningfull data and these had to be discarded .we discuss only the data from the davis weather station . to verify the calibration of the davis barometre we performed a comparison with a basic water barometre , for about 3 hours during daytime , obtaining a pressure of 594 mbar , in very good agreement with the davis barometre reading of 592.4 mbar .therefore , the davis weather station gives readings accurate to within 2.4 mbar .the site presents a low atmospheric pressure which is characteristic of a high altitude site .the median is 590.11 mbar with a daily cycle of 1.45 mbar , as measured by the difference between the median of the 4 am sample ( 589.36 mbar ) and that of the 11 am data ( 590.81 mbar ) , displayed in fig . 
[ monthly_hour ] .the daily cycle is in fact a double 12 hour cycle , with maxima at 11h and 23h and minima at 5h and 17h .this semidiurnal pressure variation of a few mbar is well known for low latitude zones .it is associated with atmospheric tides excited by heating due to insolation absorption by ozone and water vapor .the yearly cycle is not as well defined , see fig .[ annual ] , with relative minima in february ( 589.37 mbar ) , june and december , and maximum value in july ( 590.87 mbar ) , for a peak to peak amplitude of 1.5 mbar .high and low pressure are usually related to good and bad weather , respectively .the largest pressure recorded on site is 597.4 mbar , 3.8 mbar above the median , just before midnight on the 17/8/2001 and again at noon 18/8/2001 .the weather was dry as relative humidity values were 18% and 22% respectively with temperatures of 1.4 and 4.7 .we note that while the weather is usually poorer in the wet season , these good conditions happened in august , indicating that good observing conditions can happen any time of year ; the largest atmospheric pressure during the dry season occurred on 7/3/2004 at 10:35 am , when the weather record indicated a pressure of 594.3 mbar , temperature of 5.9 and a relative humidity of just 12% . the lowest pressure corresponded to what has presumably be the worst weather on site : the relatively close passage of hurricane dean , on 22/8/2007 . at 4:30am when the pressure dropped to 580.2 mbar , practically 10 mbar below the site median , with a temperature of and relative humidity of 92% .the same day registered the lowest daytime pressure , 580.8 mbar , at 10 am , when the temperature had _ dropped _ to .bad weather occasionally occurs early in the year , like on the 17/1/2004 at 6:20 am when the pressure reached its lowest dry season value , 581.6 mbar , with a temperature of and 85% relative humidity . as shown in fig .[ pres3dq2 ] , there is no significative difference between the cumulative distribution of atmospheric pressure values between day and night , presumably because of the actual 12 hour cycle .the seasonal distributions show that pressure tends to be 0.57 mbar lower during the dry season compared to the wet season .the grey scale diagram in fig .[ pres3dq2 ] shows that the main seasonal effect on pressure seems to be a shift the daily cycle to later hours in june - july ; the minima move from 4 am and 4 pm in december / january to 6 am/6 pm in june / july .the temperature and atmospheric pressure agree with a standard atmosphere model , with the usual temperature gradient of a standard atmosphere , , and the constant , with the atomic mass unit , the acceleration of gravity , the boltzmann constant , and is the mean atomic mass of air .the data departure from the standard model requires a warmer base temperature , , which results in and , close to the measured value .a warm standard atmosphere model appears reasonable for the site although it would be convenient to validate it with measurements of pressure and temperature at different elevations .the median rh is 68.87% with quartile values of 36.76% and 92.59% .the rh values for day and nighttime are 68.19% and 69.35% . 
when folded by months , the data show a clear seasonal modulation with lower values between november and march , and higher humidity between june and october with a median as illustrated in fig .[ monthly_hour ] .a second clear trend is an increase of the rh at around 8h to reach a maximum at 18h , % .once the sun sets the rh starts decreasing to reach its minimum value of 49% .nevertheless , it must be mentioned that for the daily cycle plot the very low rh values measured during the dry season are merged with those obtained during the wet season at the same hour .the cumulative distributions of rh are shown on the left side of fig .[ hum3dq1 ] where the seasonal differences are better appreciated . for the dry season the first , second and third quartile are 20.82% 50.92% and 78.52% .in contrast for the wet season the corresponding values are 64.86% , 84.92% and 96.18% . the right hand side of the same figure shows the median per hour and month in a grey scale . for november , december , january and februrary , the driest times are from about 8:pm up to noon while for february , march , april and maythe rh is lowest from dawn up to midday .wind velocity is an important factor for the large millimeter telescope , specified to perform at for wind velocities below 9 m s .both stations give similar percentage of data below the critical value of 9 m s ( davis : 91.5% ; texas : 87.7% ) .the davis weather station has two wind values in each data record : one ( wind " ) corresponding to a mean value acquired during the sampling interval ( ) and a second one ( whigh " ) corresponding to the maximum value during the same time interval . the median value of whigh is 6.03 m s and whigh for 22% of the time .the wind is fairly constant at the site , with a mild decrease less than during daytime compared to nighttime .differences between months are also small , except for a marked increase in wind velocities during the month of may ( ) , noted by both datasets , and a small decrease ( ) of wind velocities in the last months of the year . the wind distributions are shown in fig .[ viento3dq2 ] in black for the whole data set ; for nighttime , daytime , dry and wet seasons as marked . in the 3-d plot a seasonal pattern can not be as clearly identified as in the case of other parameters but we can still notice that the wind is slightly higher during the nights and the effect is more pronounced during the winter months .a special mention deserves the strong winds in one year in may as can be seen from fig . [ monthly_hour ] .the daily cycle is better appreciated in the right panel of the same figure if we look at the third quartile pattern .the lmt has two other specified wind limits : operations at any wavelength are to stop if wind velocities reach 25 m s and the telescope has to be stowed . in the extreme ,the design survival wind speed is 70 m s ( km/h ) .the two data sets show extremely rare wind velocities above 25 m s , with whigh exceeding that value 0.3% of the time .the largest wind speed registered so far corresponded to the storm recorded on the 22 of february 2002 , with . 
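The compliance figures against the LMT wind specifications are plain exceedance fractions of the "whigh" samples; a two-line check (the file and column names are again hypothetical):

    import pandas as pd

    whigh = pd.read_csv("davis_station.csv")["wind_gust_ms"]   # per-interval maximum wind
    for limit in (9.0, 25.0, 70.0):    # 1 mm operation, stow, and survival limits quoted above
        print(limit, "m/s:", (whigh < limit).mean())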
more recentlythe near passage of hurricane dean in august 2007 gave peak wind speeds of 40.7 m s , the highest endured successfully by the large millimeter telescope prior to the installation of its stow pins .solar radiation data was acquired by the texas weather station between april 25 , 2002 and march 13 , 2008 .the coverage for this time interval was 62% ( table 3 ) , with due consideration of the diurnal cycle .the data are output as time ordered energy fluxes in units of w / m .we obtain daily plots of the radiation flux which show the expected solar cosine modulation .we present here a preliminar analysis regarding a method to retrieve the cloud coverage from the radiation data .the radiation flux at ground level is modulated by the position of the sun according to where is the solar constant , which for working purposes we take as exactly equal to ; is the zenith angle of the sun and is a time dependent factor , nominally below unity , which accounts for the instrumental response , the atmospheric absorption on site and the effects of the cloud coverage on the radiation transfer through the atmosphere .given the site coordinates , we computed the modulation factor as a function of day and local time .local transit cosine values range between 1 around may 18 and july 28 ( for 2008 ) and 0.74 at winter solstice ( december 21 ) . knowing the position of the sun at the site as a function of time, we can study the variable the histogram of values of is shown in fig .[ rad - histfit ] .it has a bimodal distribution , with a first maximum at around and a narrow peak at , with a minimum around 0.55 .we interpret the narrow component as due to direct sunshine , while the broad component is originated when solar radiation is partially absorbed by clouds ; we then use the relative ratio of these as the clear weather fraction " . separating the data in intervals of , we observe that the minimum of the distribution of values of increases with for small airmasses to become constant at lower solar elevations , following the empirical relation : for this first analysis we separated data with as cloudy weather and data with as clear weather .we computed the fraction of clear weather ( clear / clear+cloudy ) , for every hour of data . only hours with at least 30 minutes of data were considered for the analysis , adding to 15223 hours of data .we considered data with airmasses lower than 10 . .the distribution shows a bimodal behaviour which can be reproduced by a two component fit , show in solid lines .the relative area of both components determines the clear / cloud fraction .see the electronic edition of mnras for a color version of this figure .[ rad - histfit ] ] ] the median clear fraction for the site is 48.4% , consistent with values reported by . 
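The clear/cloudy classification described above needs only the measured flux and the solar zenith angle. A sketch; the solar-constant value, the site latitude, the simple declination/hour-angle ephemeris, and the fixed split value near the 0.55 minimum are all stand-ins, and the empirical airmass-dependent correction to the split mentioned in the text is omitted.

    import numpy as np

    S0 = 1366.0                 # assumed working value of the solar constant, W/m^2
    LAT = np.radians(19.0)      # approximate site latitude

    def cos_zenith(day_of_year, solar_hour, lat=LAT):
        # simple declination / hour-angle approximation of the sun's position
        decl = np.radians(-23.44) * np.cos(2.0 * np.pi * (day_of_year + 10) / 365.25)
        hour_angle = np.radians(15.0 * (solar_hour - 12.0))
        return np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(hour_angle)

    def clear_fraction(flux, day_of_year, solar_hour, split=0.55, max_airmass=10.0):
        # a = flux / (S0 cos z); samples with a above the split are counted as clear
        cz = cos_zenith(np.asarray(day_of_year), np.asarray(solar_hour))
        keep = cz > 1.0 / max_airmass            # plane-parallel airmass = 1 / cos z
        a = np.asarray(flux)[keep] / (S0 * cz[keep])
        return float(np.mean(a > split))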
in a comprehensive study for the california extremely large telescope ( celt ) project , the authors surveyed cloud cover and water vapor conditions for different sites using observations from the international satellite cloud climatology project ( isccp ) .the study period is of 58 months between july 1993 to december 1999 using a methodology that had been tested and successfully applied in previous studies .for sierra negra they measured a clear fraction for nighttime of 47% .we note that the set of hourly clear fractions behaves in a rather bimodal fashion , as show in the histogram in fig .[ histograma ] : 20.3% of the hours have , while 25.0% have .the remaining ( 55% ) have intermediate values .the histogram has a strong modulation in terms of wet and dry months .if we consider the semester between may and october the peak contains 32.5% of the data , while the has 11.4% . during the complementary dry months the peak contains 9.0% of the data , while the has 37.4% .intermediate conditions prevail around 55% of the time in both semesters . ]the contrast between dry and wet semesters is well illustrated in fig .[ wetdry ] , showing the median and quartile fractions of clear time for successive wet and dry semesters .semesters are taken continuously , from may to october representing the wet season and november to april of the following year for the dry season .the bars represent the dispersion in the data , measured by the interquartile range .large fluctuations are observed at any time of the year .the contrast between the clearer dry months , with median daily clear fractions typically above 75% , and the cloudier wet months , with median clear fractions below 20% , is evident .the seasonal variation can be seen with more detail in the monthly distribution of the clear weather fraction , combining the data of different years for the same month , shown in fig .[ cloud_meses ] .the skies are clear ( f(clear) ) between december and march , fair in april and november ( f(clear) ) , and poor between may and october ( f(clear) ) .the fluctuations in the data are such that clear fractions above 55% can be found 25% of the time in the worst observing months . ] ] fig .[ cloud_horas2 ] shows the median and quartile clear fractions as function of hour of day for the wet / dry subsets .the interquartile range practically covers the ( 0 - 1 ) interval at most times .we note that good conditions are more common in the mornings of the dry semesters , while the worst conditions prevail in the afternoon of the wet season , dominated by monsoon rain storms .the trend in our results for daytime is consistent with that obtained by erasmus and van staden ( 2002 ) .by analysing the clear fraction during day and nighttime they found that the clear fraction is highest before noon , has a minimum in the afternoon and increases during nighttime . 
][ cloud - msh ] shows a grey level plot of the median percentage of clear time for a given combination of month and hour of day .dark squares show cloudy weather , clearly dominant in the afternoons of the rainy months ( mjjaso ) .these are known to be the times of stormy weather in the near - equator .clear conditions are present in the colder and drier months ( ndjfma ) .this plot is similar to that of humidity .in fact , when relative humidity decreases , the fraction of clear time increases .the relation between rh and will be the subject of a forthcoming paper .we have presented for the first time data and analysis of long - term meteorological data directly obtained from local meteorological stations at sierra negra .a comparison of the measurements from two weather stations was carried out by cross calibrating the data ; to include the accuracy errors of both stations , we obtained a fit for each parameter by minimizing . in the case of the temperaturethe values of both stations are consistent . for the windvelocities the fit is not consistent wit the equality between the two data sets .however , we showed that their statistical behaviour is similar , probably the two stations are sampling the same wind but not simultaneously and the differences might be due to the topography of the site . we will present a more detailed analysis of the wind in a forthcoming paper .the relative humidity sensor of one of the station slides up with time providing data higher than the real ones .we verified our results with a third station and data from the ncep / ncar reanalysis project database . in the case of atmospheric pressure and solar radiationwe only have data from one of the stations .we reported the daily , seasonal and annual behaviour of temperature , atmospheric pressure , relative humidity , wind speed and solar radiation .the site presents a median temperature of 1.07 and an atmospheric pressure median of 590.11 mb .the results for these two parameters agree with a warm standard atmosphere model for which the base temperature would be .as the site is influenced by the tropical storms moving off the gulf of mexico the median relative humidity has a strong seasonal dependence : while the median value for dry season is 50.92% for the wet season is 84.92% .the wind velocity median is 3.77 m s , with a third quartile of 5.88 m s and a maximum of 36.2 m s ; these values are below the three lmt specifications : to perform below 1 mm the wind speed must below 9 m s ; operation at any wavelength are stop if the wind velocity is 25 m s and the design survival wind speed is 70 m s . from the solar radiation data we developed a model for the radiation that allowed us to estimate the fraction of time when the sky is clear of clouds .the results obtained are consistent with erasmus and van staden ( 2002 ) measurements of cloud cover using satellite data .this consistency shows the great potential of our method as cloud cover is a crucial parameter for astronomical characterization of any site . 
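The warm standard atmosphere comparison can be reproduced with a few lines. The 6.5 K/km lapse rate and the exponent alpha = mu*m_u*g/(k_B*Gamma) are the usual standard-atmosphere values; the warm base temperature is not spelled out in this copy, so the value used below (about 304 K) is simply what the quoted site median of 1.07 C at 4580 m implies, and should be read as an illustration rather than the authors' fitted number.

    GAMMA = 6.5e-3                   # lapse rate, K/m
    MU, M_U = 28.96, 1.6605e-27      # mean molecular mass of air (amu) and atomic mass unit (kg)
    G, K_B = 9.81, 1.3807e-23        # gravity (m/s^2) and Boltzmann constant (J/K)
    ALPHA = MU * M_U * G / (K_B * GAMMA)    # ~5.26

    def standard_atmosphere(z, T_base, p_base=1013.25):
        # temperature (K) and pressure (mbar) at altitude z (m) for a linear-lapse atmosphere
        T = T_base - GAMMA * z
        return T, p_base * (T / T_base) ** ALPHA

    for T_base in (288.15, 304.0):   # standard base vs. the warm base implied by the site data
        T, p = standard_atmosphere(4580.0, T_base)
        print(f"T_base={T_base:.1f} K: T={T - 273.15:+.1f} C, p={p:.0f} mbar")
    # the warm base gives roughly +1 C and ~590 mbar, close to the medians quoted above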
to our knowledgethis is the first time that solar radiation data from the ground are used to estimate the temporal fraction of clear sky .the result presented here show that the meteorological conditions at sierra negra are stable daily and seasonally and have been so for the seven years measured .we consider that this period is representative of the climate at the site .therefore sierra negra offers exceptional conditions for such a high altitude , specially during the dry season , and is an ideal site for millimeter and high energy observations .ncep reanalysis data provided by the noaa / oar / esrl psd , boulder , colorado , usa , from their website at http://www.cdc.noaa.gov/. the authors thank g. djordovsky , a. walker , m. schoeck and g. sanders for their kind permission to use the results from the erasmus and van staden ( 2002 ) report for sierra negra .remy avila and esperanza carrasco thanks conacyt support through the grant no .
sierra negra, one of the highest peaks in central mexico, is the site of the large millimeter telescope. we describe the first results of a comprehensive analysis of the weather data measured in situ from october 2000 to february 2008, to be used as a reference for future activity at the site. we compare the data from two different stations at the summit, taking the accuracy of both instruments into account, and we analyse the diurnal, seasonal and annual cycles of all the parameters. the thermal stability is remarkably good, which is crucial for the good performance of the telescopes. from the solar radiation data we developed a new method to estimate the fraction of time when the sky is clear of clouds. we show that our measurements are consistent with a warm standard atmosphere model. the conditions at the site are benign and stable given its altitude, showing that sierra negra is an extremely good site for millimeter and high-energy observations. keywords: site testing -- atmospheric effects
graphs are used to model infrastructure networks , the world wide web , computer traffic , molecular interactions , ecological systems , epidemics , citations , and social interactions , among others . despite the differences in the motivating applications ,some topological structures have emerged to be important across all these domains . in particular , the _ triangle _ is a manifestation of homophily ( people become friends with those similar to themselves ) and transitivity ( friends of friends become friends ) .the triangle structure of a graph is commonly used in the social sciences for positing various theses on behavior .triangles have also been used in graph mining applications such as spam detection and finding common topics on the www .a new generative model , blocked two - level erds - rnyi ( bter ) , can capture triangle behavior in real - world graphs , but necessarily requires the degree - wise clustering coefficients as input .relationships among degrees of triangle vertices can also be used as a descriptor of the underlying graph . in this paper, we study the idea of _ wedge sampling _ , i.e. , choosing random wedges ( from a uniform distribution over all wedges ) to compute various triadic measures on large - scale graphs .we provide precise probabilistic error bounds where the accuracy depends on the number of random samples .the term _ wedge _ refers to a path of length 2 ; in , is a wedge _ centered _ at node 1 .a _ triangle _ is a cycle of length 3 ; in , is a triangle .we say a wedge is _ closed _ if its vertices form a triangle .observe that each triangle consists of three closed wedges .\(1 ) at ( 0,0 ) [ nd ] 1 ; ( 2 ) at ( 3,2 ) [ nd ] 2 ; ( 3 ) at ( 3,-2 ) [ nd ] 3 ; ( 4 ) at ( 6,1 ) [ nd ] 4 ; ( 5 ) at ( 8,-2 ) [ nd ] 5 ; ( 6 ) at ( 9,0.2 ) [ nd ] 6 ; ( 7 ) at ( 9,2.5 ) [ nd ] 7 ; ( 1 ) to ( 2 ) ; ( 1 ) to ( 3 ) ; ( 2 ) to ( 4 ) ; ( 3 ) to ( 4 ) ; ( 3 ) to ( 5 ) ; ( 4 ) to ( 5 ) ; ( 4 ) to ( 6 ) ; ( 4 ) to ( 7 ) ; ( 6 ) to ( 7 ) ; let , , , and denote the number of nodes , edges , wedges , and triangles , respectively , in a graph .a standard algorithmic task is to count ( or approximate ) .there are many other `` triadic '' measures associated with graphs as well .the following are classic quantities of interest , and are defined on _undirected graphs_. ( formal definitions are given in . ) transitivity or global clustering coefficient : this is , and is the fraction of wedges that are closed ( i.e. , participate in a triangle ) . intuitively , it measures how often friends of friends are also friends , and it is the most common triadic measure .local clustering coefficient : the clustering coefficient of vertex ( denoted by ) is the fraction of wedges centered at that are closed . the average of over all vertices is the _ local clustering coefficient _ , denoted by .degree - wise clustering coefficients : we let denote the average clustering coefficient of degree- vertices .in addition to degree distribution , many graphs are characterized by plotting the degree - wise clustering coefficients , i.e. , , versus . in , the transitivity is .the clustering coefficients of the vertices are 0 , 0 , 1/3 , 1/5 , 1 , 1 , and 1 in order , and the local clustering coefficient is 0.5048 . in general , the most direct method to compute these metrics is an exact computation that finds all triangles . for degree - wise clustering coefficients ,no other method was previously known . 
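These definitions are easy to check on the small example graph whose edges are listed in the figure above. A short sketch; it reproduces the per-vertex clustering coefficients 0, 0, 1/3, 1/5, 1, 1, 1 and the local clustering coefficient 0.5048 quoted in the text.

    from itertools import combinations

    edges = [(1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5), (4, 6), (4, 7), (6, 7)]
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    closed, total, local = 0, 0, {}
    for v in sorted(adj):
        wedges = list(combinations(adj[v], 2))                  # wedges centered at v
        closed_v = sum(1 for a, b in wedges if b in adj[a])     # closed wedges at v
        local[v] = closed_v / len(wedges) if wedges else 0.0
        closed += closed_v
        total += len(wedges)

    print(local)                               # {1: 0.0, 2: 0.0, 3: 0.33.., 4: 0.2, 5: 1.0, 6: 1.0, 7: 1.0}
    print(sum(local.values()) / len(local))    # local clustering coefficient, ~0.5048
    print(closed / total, closed // 3)         # transitivity (closed / all wedges) and triangle count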
triangles and transitivity in directed graphs have also been the subject of recent work ( see e.g. and references therein ) . in a directed graph , edges are ordered pairs of vertices of the form , indicating a link from node to node .when edges _ and _ exist , we say there is a _ reciprocal edge _ between and .if just one edge exists , then we say it is a _ one - way edge_. considering all possible combinations of reciprocal and one - way edges leads to 6 different wedges and 7 different triangles . in this paper, we discuss the simple yet powerful technique of _ wedge sampling _ for counting triangles .wedge sampling is really an algorithmic template , since various algorithms can be obtained from the same basic idea .it can be used to estimate all the different triadic measures detailed above .the method also provides precise probabilistic error bounds where the accuracy depends on the number of random samples .the mathematical analysis of this method is a direct consequence of standard chernoff - hoeffding bounds .if we want an estimate for with error with probability 99.9% ( say ) , we need only 380 wedge samples .this estimate is _ independent of the size of the graph _, though the preprocessing required by our method is linear in the number of edges ( to obtain the degree distribution ) .we discovered older independent work by schank and wagner that proposes the same wedge sampling idea for estimating the global and local clustering coefficients .our work extends that in several directions , including several other uses for wedge sampling ( such as directed triangle counting , random triangle sampling , degree - wise clustering coefficients ) and much more extensive numerical results .this manuscript is an extended version of ; here we give more detailed experimental results and also show how wedge sampling applies to computing directed triangle counts .a detailed list of contributions follows .* computing the transitivity ( global clustering coefficient ) and local clustering coefficient : * we show how to use wedge sampling to approximate the global and local clustering coefficients : and .we compare these methods to the state of the art , showing significant improvement for large - scale graphs . *computing degree - wise clustering coefficients : * the idea of wedge sampling can be extended for more detailed measurements of graphs .we show how to calculate the degree - wise clustering coefficients , for .the only other competing method that can compute is an exhaustive enumeration .we compare with the basic fast enumeration algorithm given by ( which has been studied and reinvented by ) .* computing triangles per degree : * wedge sampling can also be employed to sample random triangles , including the application of estimating the number of triangles containing one ( or more ) vertices of degree , denoted for .* counting directed triangles : * few methods have been considered for counting triangles in directed graphs .it is especially complicated because there are 7 types of triangles to consider .once again , the versatility of wedge sampling is used to develop a method for counting all types of directed triangles. there has been significant work on enumeration of all triangles .recent work by cohen and suri and vassilvitskii give mapreduce implementations of these algorithms .arifuzzaman et al . 
give a massively parallel algorithm for computing clustering coefficients .enumeration algorithms however , can be very expensive , since graphs even of moderate size ( millions of vertices ) can have an extremely large number of triangles ( see , e.g. , ) .eigenvalue / trace based methods have been used by tsourakakis and avron to compute estimates of the total and per - degree number of triangles .however , computing eigenvalues ( even just a few of them ) is a compute - intensive task and quickly becomes intractable on large graphs . in our experiment , even computing the largest eigenvalue was multiple orders of magnitude slower than full enumeration .most relevant to our work are sampling mechanisms .tsourakakis et al . started the use of sparsification methods , the most important of which is doulion .this method sparsifies the graph by keeping each edge with probability ; counts the triangles in the sparsified graph ; and multiplies this count by to predict the number of triangles in the original graph . various theoretical analyses of this algorithm ( and its variants )have been proposed .one of the main benefits of doulion is that it reduces large graphs to smaller ones that can be loaded into memory .however , the doulion estimate can suffer from high variance .alternative sampling mechanisms have been proposed for streaming and semi - streaming algorithms . yet, all these fast sampling methods only estimate the number of triangles and give no information about other triadic measures . in subsequent work by the authors of this paper , a hadoop implementation of these techniques is given in , and a streaming version of the wedge sampling method is presented in .we present the general method of wedge sampling for estimating clustering coefficients . in later sections ,we instantiate this for different algorithms .before we begin , we summarize the notation presented in in .lx|lx & number of vertices & & number of vertices of degree + & number of edges & & degree of vertex + & set of degree- vertices & & total number of wedges + & number of wedges centered at vertex & & total number of triangles + & number of triangles incident to vertex & & number of triangles incident to degree- vertices @ & transitivity + & clustering coefficient of vertex + & local clustering coefficient + & degree - wise clustering coefficient + we say a wedge is _ closed _ if it is part of a triangle ; otherwise , the wedge is _open_. thus , in , -- is an open wedge , while -- is a closed wedge .the middle vertex of a wedge is called its _ center _ ,i.e. , wedges -- and -- are centered at .wedge sampling is best understood in terms of the following thought experiment .fix some distribution over wedges and let be a random wedge .let be the indicator random variable that is if is closed and otherwise .denote ] .then for , we have hence , if we set , then < \delta ] as the probability that a uniform random wedge is closed or , alternately , the fraction of closed wedges . 
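to make the sample-size argument explicit, here is a sketch of the standard hoeffding calculation (added for illustration; the symbol $x_i$ below denotes the closure indicator of the $i$-th sampled wedge, and the values $\varepsilon = 0.1$, $\delta = 0.001$ are chosen only to match the 380-sample figure quoted earlier). for $k$ independent wedge samples with closure indicators $x_1,\dots,x_k \in \{0,1\}$,

$$\Pr\left[\,\Bigl|\tfrac{1}{k}\sum_{i=1}^{k} x_i - \mathbf{E}[x]\Bigr| \ge \varepsilon\,\right] \;\le\; 2\exp\!\left(-2 k \varepsilon^{2}\right),$$

so requiring the right-hand side to be at most $\delta$ gives

$$k \;\ge\; \frac{\ln(2/\delta)}{2\varepsilon^{2}}, \qquad \text{e.g. } \varepsilon = 0.1,\ \delta = 0.001 \;\Rightarrow\; k \ge \frac{\ln 2000}{0.02} \approx 380,$$

independent of the number of vertices or edges in the graph; tightening to $\varepsilon = 0.01$ at the same confidence gives roughly 38,000 samples.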
to generate a uniform random wedge , note that the number of wedges centered at vertex is and .we set to get a distribution over the vertices .note that the probability of picking is proportional to the number of wedges centered at .a uniform random wedge centered at can be generated by choosing two random neighbors of ( without replacement ) .[ clm : random ] suppose we choose vertex with probability and take a uniform random pair of neighbors of .this generates a uniform random wedge .consider some wedge centered at vertex .the probability that is selected is .the random pair has probability of .hence , the net probability of is . shows the randomized algorithm -wedge sampler for estimating in a graph .observe that the first step assumes that the degree of each vertex is already computed .sampling with replacements means that we sample from the original distribution repeatedly and repeats are allowed .sampling without replacement means that we can not pick the same item twice .select random vertices ( with replacement ) according to the probability distribution defined by where . foreach selected vertex , choose two neighbors of ( uniformly at random without replacement ) to generate a random wedge .the set of all wedges comprises the sample set .output the fraction of closed wedges in the sample set as estimate of . combining the bound of with, we get the following theorem .note that the number of samples required is _ independent of the graph size _ , but computing does depend on the number of edges , .[ thm : kappa ] set .the algorithm -wedge sampler outputs an estimate for the transitivity such that with probability greater than . to get an estimate on , the number of triangles , we output , which is guaranteed to be within of with probability greater than . & & & & & & & time ( secs ) + amazon0312 & 401k & 2350k & 69 m & 3686k & 0.160 & 0.421 & 0.261 + amazon0505 & 410k & 2439k & 73 m & 3951k & 0.162 & 0.427 & 0.269 + amazon0601 & 403k & 2443k & 72 m & 3987k & 0.166 & 0.430 & 0.268 + as - skitter & 1696k & 11095k & 16022 m & 28770k & 0.005 & 0.296 & 90.897 +cit - patents & 3775k & 16519k & 336 m & 7515k & 0.067 & 0.092 & 3.764 + roadnet - ca & 1965k & 2767k & 6 m & 121k & 0.060 & 0.055 & 0.095 + web - berkstan & 685k & 6649k & 27983 m & 64691k & 0.007 & 0.634 & 54.777 + web - google & 876k & 4322k & 727m & 13392k & 0.055 & 0.624 & 0.894 + web - stanford & 282k & 1993k & 3944 m & 11329k & 0.009 & 0.629 & 6.987 + wiki - talk & 2394k & 4660k & 12594 m & 9204k & 0.002 & 0.201 & 20.572 + youtube & 1158k & 2990k & 1474m & 3057k & 0.006 & 0.128 & 2.740 + flickr & 1861k & 15555k & 14670 m & 548659k & 0.112 & 0.375 & 567.160 + livejournal & 5284k & 48710k & 7519m & 310877k & 0.124 & 0.345 & 102.142 + we implemented our algorithms in c and ran our experiments on a computer equipped with a 2.3ghz intel core i7 processor with 4 cores and 256 kb l2 cache ( per core ) , 8 mb l3 cache , and an 8 gb memory .we performed our experiments on 13 graphs from snap and per private communication with the authors of . in all cases ,directionality is ignored , and repeated and self - edges are omitted .the properties of these matrices are presented in .the last column reports the times for the enumeration algorithm .this is based on the principles of : each edge is assigned to the vertex with a smaller degree ( using the vertex numbering as a tie - breaker ) , and then vertices only check wedges formed by edges assigned to them for closure . 
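for reference, the following is a minimal python sketch of the transitivity wedge sampler described above (added for illustration; the function name, the adjacency-dictionary input format, and the use of python's random module are assumptions, and the degrees are assumed to be available as in the first step of the algorithm).

import random
from bisect import bisect
from itertools import accumulate

def kappa_wedge_sampler(adj, k, rng=random):
    # vertices are chosen with probability proportional to the number of wedges they center
    verts = [v for v in adj if len(adj[v]) >= 2]
    cum = list(accumulate(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in verts))
    closed = 0
    for _ in range(k):
        v = verts[bisect(cum, rng.random() * cum[-1])]   # wedge-weighted vertex choice
        u, w = rng.sample(list(adj[v]), 2)               # two distinct neighbors, without replacement
        closed += (w in adj[u])                          # the wedge u-v-w is closed iff u and w are adjacent
    return closed / k                                    # estimate of the transitivity

# the triangle count is then estimated as (estimated transitivity) * (total number of wedges) / 3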
as seen in, wedge sampling is orders of magnitude faster than the enumeration algorithm .the timing results show tremendous savings ; for instance , wedge sampling only takes 0.026 seconds on ` as - skitter ` while full enumeration takes 90 seconds .shows the accuracy of the wedge sampling algorithm . at 99.9% confidence ( ) , the upper bound on the error we expect for 2k , 8k , and 32k samples is .043 , .022 , and .011 , respectively .most of the errors are much less than the bounds would suggest .for instance , the maximum error for 2k samples is .007 , much less than that 0.43 given by the upper bound .we show the rate of convergence in for the graph amazon0505 as the number of samples increases .the dashed line shows the error bars at 99.9% confidence .note that the convergence is fast and the error bars are conservative in this instance .we now demonstrate how a small change to the underlying distribution on wedges allows us to compute the clustering coefficient , .shows the procedure -wedge sampler .the only difference between -wedge sampler and -wedge sampler is in picking random vertex centers .vertices are picked uniformly instead of from the distribution .pick uniform random vertices ( with replacement ) . foreach selected vertex , choose two neighbors of ( uniformly at random without replacement ) to generate a random wedge .the set of all wedges comprises the sample set .output the fraction of closed wedges in the sample set as an estimate for .[ thm : lcc ] set .the algorithm -wedge sampler outputs an estimate for the clustering coefficient such that with probability greater than .let us consider a single sampled wedge , and let be the indicator random variable for the wedge being closed .let be the uniform distribution on edges .for any vertex , let be the uniform distribution on pairs of neighbors of .observe that } = \pr_{v \sim { { \cal v}}}[\pr_{(u , u ' ) \sim { { \cal n}}_v}[\textrm{wedge is closed}]]\ ] ] we will show that this is exactly . \\ & = & { \hbox{\bf e}}_{v \sim { { \cal v } } } [ \textrm{frac . of closed wedges centered at } ] \\ & = & { \hbox{\bf e}}_{v \sim { { \cal v } } } [ \pr_{(u , u ' ) \sim { { \cal n}}_v}[\textrm{ is closed } ] ] \\ & = & \pr_{v \sim { { \cal v}}}[\pr_{(u , u ' ) \sim { { \cal n}}_v}[\textrm{ is closed } ] ] = { \hbox{\bf e}}[x]\end{aligned}\ ] ] for a single sample , the probability that the wedge is closed is exactly .the bound of completes the proof . presents the results of our experiments for computing the clustering coefficients .experimental setup and the notation are the same as before .the results again show that wedge sampling provides accurate estimations with tremendous improvements in runtime .for instance , for 8k samples , the average speedup is 10k .+ + the real power of wedge sampling is demonstrated by estimating the degree - wise clustering coefficients .shows procedure -wedge sampler .pick uniform random vertices of degree ( with replacement ) . 
for eachselected vertex , choose a uniform random pair of neighbors of to generate a wedge.choose two neighbors of ( uniformly at random without replacement ) to generate a random wedge .the set of all wedges comprises the sample set .output the fraction of closed wedges in the sample set as an estimate for .[ thm : dcc ] set .the algorithm -wedge sampler outputs an estimate for the clustering coefficient such that with probability greater than .the proof is analogous to that of .since is the average clustering coefficient of a degree- vertex , we can apply the same arguments as in . algorithms in the previous section present how to compute the clustering coefficient of vertices of a given degree . in practice, it may be sufficient to compute clustering coefficients over bins of degrees .wedge sampling algorithms can be performed for bins of degrees by a small adjustment of the sampling . within each bin , we weight each vertex according to the number of wedges it produces .this guarantees that each wedge in the bin is equally likely to be selected .for instance , if we bin degree-3 and degree-4 vertices together , we will weight a degree-4 vertex twice as much as a degree 3-vertex , since a degree vertex generates wedges whereas a degree-4 vertex has .see for details of binned computations. shows results of on three graphs for clustering coefficients . in these experiments , we chose to group vertices in logarithmic bins of degrees , i.e. , .in other words , form the -th bin .the same algorithm can be used for an arbitrary binning of the vertices .we show the estimates with increasing number of samples . at 8k samples ,the error is expected to be less than 0.02 , which is apparent in the plots .observe that even 500 samples yields a reasonable estimate in terms of the differences by degree .shows the time to calculate the binned values compared to enumeration ; there is a tremendous savings in runtime as a result of using wedge sampling . 
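a sketch of the binned computation in python (added for illustration; the logarithmic binning function, the per-bin sample count k, and the function name are illustrative choices). within each bin, vertices are weighted by the number of wedges they center, exactly as described above, so that every wedge whose center lies in the bin is equally likely to be chosen.

import math
import random
from bisect import bisect
from itertools import accumulate

def binned_cc_sampler(adj, k, bin_of=lambda d: int(math.log2(d)), rng=random):
    bins = {}
    for v, nbrs in adj.items():
        if len(nbrs) >= 2:
            bins.setdefault(bin_of(len(nbrs)), []).append(v)
    estimates = {}
    for b, verts in bins.items():
        # cumulative wedge weights inside the bin
        cum = list(accumulate(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in verts))
        closed = 0
        for _ in range(k):
            v = verts[bisect(cum, rng.random() * cum[-1])]
            u, w = rng.sample(list(adj[v]), 2)
            closed += (w in adj[u])
        estimates[b] = closed / k          # fraction of closed wedges centered in bin b
    return estimates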
in the timing figure referenced above, runtimes are normalized with respect to the runtime of full enumeration. as the figure shows, wedge sampling takes only a tiny fraction of the time of full enumeration, especially for large graphs.

[figure: runtime of binned wedge sampling with 2k and 8k samples per bin, normalized by the runtime of full enumeration.]

by modifying the template given in , we can also get estimates for (the number of triangles incident to degree- vertices). instead of counting the fraction of closed wedges, we take a weighted sum. describes the procedure -wedge sampler. we let denote the total number of wedges centered at degree- vertices. pick uniform random vertices of degree (with replacement). for each selected vertex, choose two of its neighbors (uniformly at random without replacement) to generate a random wedge. the set of all wedges comprises the sample set. for each wedge in the sample set, let be the associated random variable such that set . output as the estimate for . [thm:dcc2] set . the algorithm -wedge sampler outputs an estimate for with the following guarantee: with probability greater than . for a single sampled wedge, we define . we will argue below that the expected value of this quantity is . to see this, partition the set of all wedges centered on degree- vertices into four sets , and . the set ( ) contains all closed wedges containing exactly degree- vertices. the remaining open wedges go into . for a sampled wedge , if , then . if , then . the wedge is a uniform random wedge from those centered on degree- vertices. hence, the expectation of this random variable is $w^{-1}_d(|s_1| + |s_2|/2 + |s_3|/3)$. results of are shown in . once again, the data is grouped in logarithmic bins of degrees, i.e.
, .in other words , form the -th bin .the number of samples is .here we also show the error bars corresponding to 99% confidence ( ) .counting triangles in directed graphs is more difficult , because there are six types of wedges and seven types of directed triangles ( up to isomorphism ) ; see . as mentioned earlier ,a reciprocal edge is a pair of one - way edges that are treated as a _single _ reciprocal edge . for each vertex , we have three associated degrees : the indegree , outdegree , and reciprocal degree , denoted by , , and , respectively. we can generalize wedge sampling to count the different triangles types in .this is done by randomly sampling the different wedges , and looking for various directed closures .we need some notation to formalize various directed triangle concepts , which is adapted from . + naturally , the first step is to construct methods for sampling the different wedge types ( see ) . for wedgetype , let denote the number of wedges of this type . for vertex , let be the number of -wedges centered at .it is straightforward to compute given the degrees of ; see the formulas in . given the number of -wedges per vertex ,we can use our standard template to sample a uniform random -wedge .this is given in the first two steps of .the procedure to select a uniform random -wedge centered at depends on the wedge type .for example , to sample a type-(i ) wedge , we pick a uniform random pair of out - neighbors of ( without replacement ) . to sample a type-(ii ) wedge, we pick a uniform random out - neighbor , and a uniform random in - neighbor .all other wedge types are sampled analogously . to get the triangle counts , we need a directed transitivity that measures the fraction ofclosed directed wedges .of course , a directed wedge can close into many different types of triangles .for example , an type-(i)wedge can close into a type-(a ) or type-(c)triangle . to measure this meaningfully , a set of directed closure measuresare defined in .we restate these definitions and show how to count triangles using these measures .let denote the number of type- wedges in a type- triangle .for example , a type-(a)triangle contains exactly one type-(i)wedge , but a type-(b)triangle contains three type-(ii)wedges . the list of these values is given in .we say that a -wedge is _ -closed _ if the wedge participates in a -triangle .the number of such wedges is exactly ( where is the number of -triangles ) . _the -closure , , is the fraction of -wedges that are -closed ._ there are 15 non - trivial -closures , corresponding to the non - zero entries in . by the wedge sampling framework , we can estimate any value .this value can be used to estimate . select random vertices ( with replacement ) according to the probability distribution defined by where . for each selected vertex , pick a uniform random -wedge centered at .note that if both edges are of the same type ( i.e. , two out - edges ) , then sampling is done without replacement .otherwise , since the two edge - types are distinct and the sampling is independent .determine , the number of -closed wedges among the sampled wedges .compute estimate for .output estimate for .[ thm : main - dir ] fix a triangle type and wedge type such that .set .the algorithm -sampler outputs an estimate for such that with probability greater than . 
using the hoeffding bound, we can argue ( as before ) that with probability at least .we multiply this inequality by and observe that .the experimental setup is the same as in the previous sections .we performed our experiments on 8 directed graphs , described in .the results of applying are shown in .we use 32k wedges samples for each of four specific wedge types ( detailed below ) . as can be seen ,we get accurate results for all triangle types , except possibly type-(b ) triangles . these are so few that they do not even appear in the plot .+ we take a closer look at the results , using _ relative error _ plots . for brevity ,we provide these only for ` amazon0505 ` and ` web - stanford ` graphs , and give an average over all the graphs .a clear pattern appears , where the relative error for type-(b ) triangles is worse than the others .this is where a weakness of wedge sampling becomes apparent : when a triangle type is extremely infrequent , wedge sampling is limited in the quality of the estimate .more details are shown in , where we show the relative error for these triangles , as well the fraction of these triangles ( with respect to the total count ) . across the board , we see that type-(b ) triangles are extremely rare ; hence , wedge sampling is unable to get sharp estimates. it would be very interesting to design a fast method that accuractely counts such rare triangles ..a closer look at type-(b ) triangles [ cols=">,^,>,^",options="header " , ] there are some subtleties about the implementation that are worth mentioning .we have a choice of which wedge types to use to count -triangles .we can get some reuse if we use the same wedge type to count multiple triangles .for instance ,type-(a ) triangles can be counted using sampling over type ( i ) , ( ii ) , or ( iii)wedges . by ,sampling over -wedges yields error is .less frequent wedge types give better approximations _ for the same triangle type .symmetrically , the same wedge type can be used to count multiple triangle types .for instance , ( ii)-wedges can be used to count type ( a ) , ( b ) , and ( d)triangles .we use the following wedges for our triangle counts : type-(ii ) wedges are used to count triangles of type ( a ) and ( b ) , type-(v ) wedges are used to count triangles of type ( c ) and ( f ) , type-(iv ) wedges are used to count triangles of type ( d ) and ( e ) , and type-(vi ) wedges are used to count triangles of type ( g ). the combinations that we use are boldface in .the choice of wedge can impact the accuracy of the directed triangle counts .for example , type-(a)-triangles can be counted using wedges of type ( i ) , ( ii ) , or ( iii ) . in, we show the relative error in the estimate for type-(a ) triangles using 32k samples via the different wedge types . 
because of the high frequency of type-(iii) wedges, the error in triangle counts using these wedges is quite large. other than one case, (ii)-wedges are the best choice.

[figure: relative error of the type-(a) triangle estimate with 32k samples, using wedges of type (i), (ii), and (iii), shown per graph.]

we proposed a series of wedge-based algorithms for computing various triadic measures on graphs. our algorithms come with theoretical guarantees in the form of specific error and confidence bounds. the number of samples required to achieve a specified error and confidence bound is independent of the graph size. for instance, 38,000 samples suffice for an additive error in the transitivity of less than 0.01 with 99.9% confidence _for any graph_. the limiting factors have to do with determining the sampling proportions; for instance, we need to know the degree of each vertex and the overall degree distribution to compute the transitivity. the flexibility of wedge sampling along with the hard error bounds essentially redefines what is feasible in terms of graph analysis. the very expensive computation of clustering coefficients is now much faster and we can consider much larger graphs than before. in an extension of this work, we described a mapreduce implementation of this method that scales to billions of edges, leading to some of the first published results at these sizes. with triadic analysis no longer being a computational burden, we can extend our horizons into new territories and look at attributed triangles (e.g., we might compare the clustering coefficient for "teenage" versus "adult" nodes in a social network), evolution of triadic connections, higher-order structures such as 4-cycles and 4-cliques, and so on.

_patric: a parallel algorithm for counting triangles and computing clustering coefficients in massive networks_, ndssl technical report 12-042, network dynamics and simulation science laboratory, virginia polytechnic institute and state university, july 2012.
, http://ngc.sandia.gov/assets/documents/berry-soda11_sand2010-4474c.pdf [ _ listing triangles in expected linear time on power law graphs with exponent at least _ ] , tech .report sand2010 - 4474c , sandia national laboratories , 2011 . , http://dx.doi.org/10.1145/2396761.2398503[_degree relations of triangles in real - world networks and graph models _ ] , in proceedings of the 21st acm international conference on information and knowledge management , cikm 12 , new york , ny , usa , 2012 , acm , pp . 17121716 . ,http://dx.doi.org/10.1007/11533719_72[_new streaming algorithms for counting triangles in graphs _] , in cocoon05 : computing and combinatorics , l. wang , ed . ,3595 of lecture notes in computer science , springer berlin heidelberg , 2005 , pp . 710716 .height 2pt depth -1.6pt width 23pt , http://dx.doi.org/10.1007/11427186_54[_finding , counting and listing all triangles in large graphs , an experimental study _ ] , in experimental and efficient algorithms , springer berlin / heidelberg , 2005 , pp .606609 . , http://knowledgecenter.siam.org/338sdm/338sdm/1[_triadic measures on graphs : the power of wedge sampling _ ] , in sdm13 : proceedings of the 2013 siam international conference on data mining , may 2013 , pp. 1018 . , http://dx.doi.org/10.1007/978-3-642-24082-9_83[_improved sampling for triangle counting with mapreduce _ ] , in convergence and hybrid information technology , g. lee , d. howard , and d. slezak , eds . ,6935 of lecture notes in computer science , springer berlin / heidelberg , 2011 , pp .
graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. some of the most useful graph metrics are based on _triangles_, such as those measuring social cohesion. algorithms to compute them can be extremely expensive, even for moderately-sized graphs with only millions of edges. previous work has considered node and edge sampling; in contrast, we consider _wedge sampling_, which provides faster and more accurate approximations than competing techniques. additionally, wedge sampling enables the estimation of local clustering coefficients, degree-wise clustering coefficients, uniform triangle sampling, and directed triangle counts. our methods come with provable and practical probabilistic error estimates for all computations. we provide extensive results that show our methods are both more accurate and faster than state-of-the-art alternatives.
many new cmb missions such as _ cobras / samba _ have adopted scan patterns with a relatively small number of scan circles in order to simplify the data analysis .the very successful _ cobe _ pattern of cycloidal scans that cross through each pixel in many different directions is not used , and this leads to an increased sensitivity to systematic errors .these new missions have adopted a total power or one - horned approach to maximize sensitivity relative to the differential approach used by _cobe_. but this often leads to excess low frequency or noise .a simple scan pattern with great circle scans perpendicular to the sun line can lead to _ striping _ even in the absence of noise .the simplest reference pattern is to reference each pixel in the scan to the nep .thus the equation to be solved in the map - making for pixel which is the pixel in the scan is t_j - t _= s(i , k ) - s(,k ) .we assume that the samples are uncorrelated ( white noise ) except for a constant baseline that has to be determined for each scan .but the baseline cancels out in this equation .the noise associated with this equation is times the noise per pixel because two values are subtracted .these observations can be inverted directly giving t _ & = & 0 + t_j & = & s(i , k ) - s(,k ) + t _ & = & _ k + where the first line is an assumption for .this appears to be the approach described in the text of janssen ( 1996 ) .the covariance matrix can also be computed directly : t _ t _ & = & + t_i t _ & = & + t_i t_i & = & 2 _ 1 ^ 2 + t_i t_j & = & + the non - zero covariance of pixels on the same scan circle shows the presence of striping .the factor of 2 in the variance of shows that this method has lost the putative advantage of one - horned systems .how can the be recovered ?a better baseline estimator is needed .for this white noise case , the average of all the noises in a scan circle is the optimum baseline estimator .but we do nt have the noises we only measure samples which are the sum of noise plus sky .therefore we need to do any iterative solution to find the baseline : subtract the signal from the scan using an estimate of the map , compute the optimum baseline estimate , and use the signal minus baseline to construct a new estimate of the map .this procedure is almost identical to the time - ordered iterative approach used in wright , hinshaw & bennett ( 1996 ) .the only difference is that for differential data the contribution of the map to the `` baseline '' is just minus the sky temperature in the reference beam .if applied to a model with scans of pixels crossing only at the nep and sep , the optimum approach reduces the sensitivity loss to a factor of compared to an ideal total power system in the large limit . for large , the difference between the nep and sep is determined to great precision , so the only significant terms in the covariance matrix are t_i t_i & = & _ 1 ^ 2 + t_i t_j & = & + this still has stripes , but both the noise and striping are reduced . 
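a minimal numerical sketch of this iteration (added for illustration; it assumes white noise, a data array with one row per scan circle, a matching integer array of observed pixel indices, and an illustrative function name):

import numpy as np

def iterative_baseline_mapmaker(data, pix, n_pix, n_iter=50):
    # data[k, i]: sample i of scan k (sky + unknown scan baseline + white noise)
    # pix[k, i]:  pixel observed at that sample
    sky = np.zeros(n_pix)
    for _ in range(n_iter):
        # optimal baseline for white noise: the mean of (data - current sky estimate) over the scan
        baseline = (data - sky[pix]).mean(axis=1, keepdims=True)
        cleaned = (data - baseline).ravel()
        hits = np.maximum(np.bincount(pix.ravel(), minlength=n_pix), 1)
        sky = np.bincount(pix.ravel(), weights=cleaned, minlength=n_pix) / hits
        sky -= sky.mean()                  # the absolute level of the map is undetermined
    return sky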
in a real sky map ,the number of multiply observed pixels in the polar caps is quite large , so the both the excess noise and the striping are reduced even more .this approach of subtracting an optimal baseline after correcting the baseline iteratively for the effect of the map on the baseline is equivalent to the method of adjusting the baselines in each scan circle using the set of overlapping pixels that was used by( meyer , cheng & page 1991 ) and is planned by _cobras / samba _ ( bersanelli 1996 ) .to illustrate the operations in the map - making process , i will use a toy model of the sky .this toy model has scans each with pixels crossing only at the nep ( north ecliptic pole ) and sep .figure [ fig : toy_sky.eps ] shows the toy model with , so there are data points and pixels .the pixel observed during each data sample must be known .let be the pixel observed during the data sample .i can write this as a matrix by defining to be an matrix , with 1 row for each observation and 1 column for each pixel .is all zero except for a single `` 1 '' in each row at the column observed for that row s datum .note that is the operation of `` flying the mission through a skymap '' and that maps a skymap into a data stream .the matrix for the toy model with is given by = ( rrrrrrrrrrrrrr 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 ) the nep is pixel 1 , the sep is pixel 14 , and scan circle 1 is pixels 1 , 2 , 3 , 14 , 4 , 5 in order , while scan circle 2 is pixels 1 , 6 , 7 , 14 , 8 , 9 .define the matrix to be an matrix that corrects the data for the optimal baseline . 
for the white noise case of the toy model, this matrix is given by = ( rrrrrrrrrrrrrrrrrr 5 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & 5 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & -1 & 5 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & -1 & -1 & 5 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & -1 & -1 & -1 & 5 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & -1 & -1 & -1 & -1 & 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 5 & -1 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & 5 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 5 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & 5 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & 5 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & -1 & 5 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & -1 & -1 & -1 & -1 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 5 & -1 & -1 & -1 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 5 & -1 & -1 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & 5 & -1 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & 5 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & -1 & 5 + ) the matrix is the matrix that applies the optimal baseline correction .then for an observed data stream , the desired map is the that best satisfies the equations : = the application of to makes the elements of the right hand side uncorrelated .actually there is still an correlation because the true mean of the scan is completely undetermined , but i will ignore this effect which is negligible for real cmb experiments .let be the standard deviation of the noise in these values , which i will take to be in the following examples .note that this implies that the standard deviation of the original data is somewhat less than one due to the variance introduced by baseline subtraction .proceeding in the normal least squares fashion , make normal equations : ^t^t= ^t^tthe matrix is the operation of `` summing a data stream into pixels '' .thus the right hand side is and the correlation matrix is and the equation to solve is = for large problems this should be solved iteratively , and only products of the form times a vector are needed .however , for the toy model can be computed directly .of course , is singular so an extra equation stating that the sum of the map is zero must be added to the equations from the observations .this adds the matrix which is matrix with all elements equal to unity to .do not confuse with the identity matrix .the generalized inverse of is then given by ^-1 = ( + ) ^-1 - n_p^-2 .this particular form for the generalized inverse is a special case that only fixes the zero eigenvalue associated with the mean of the map . for experiments with partial sky coverage or disconnected regions the more general form of the generalized inverseshould be used : ^-1 _ 0^+ ( + ) ^-1 . when solving be sure that is orthogonal to all the eigenvectors of with zero eigenvalues . 
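the toy construction can be checked numerically with the short sketch below (added for illustration; the explicit pixel layout, the variable names, and the use of the per-scan mean-removal projector for the baseline-correcting matrix are assumptions consistent with the description above, the projector differing from the matrix printed earlier only by an overall factor that cancels in the normal equations). since the right-hand side of the normal equations has zero sum, simply adding the all-ones matrix before solving already returns the zero-sum solution, and the extra rank-one term of the generalized inverse is not needed for this check.

import numpy as np

n_scans, n_samp = 3, 6
nep, sep = 0, 1
pix = np.zeros((n_scans, n_samp), dtype=int)
for k in range(n_scans):
    u = 2 + 4 * k                                  # four pixels unique to scan k
    pix[k] = [nep, u, u + 1, sep, u + 2, u + 3]
n_pix, n_data = 2 + 4 * n_scans, n_scans * n_samp

# A "flies the mission through a skymap": one row per sample, a single 1 in the observed pixel
A = np.zeros((n_data, n_pix))
A[np.arange(n_data), pix.ravel()] = 1.0

# F removes the unknown baseline of each scan (block-diagonal mean-removal projector, so F^T F = F)
F = np.kron(np.eye(n_scans), np.eye(n_samp) - np.ones((n_samp, n_samp)) / n_samp)

def solve_map(data):
    C = A.T @ F @ A                                # normal-equation matrix
    b = A.T @ F @ data.ravel()
    return np.linalg.solve(C + np.ones((n_pix, n_pix)), b)   # solution has zero sum

# check: a zero-mean sky plus arbitrary per-scan baselines is recovered exactly (no noise added)
rng = np.random.default_rng(1)
sky = rng.standard_normal(n_pix); sky -= sky.mean()
data = sky[pix] + rng.standard_normal((n_scans, 1))
print(np.allclose(solve_map(data), sky))           # True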
for the toy model = ( rrrrrrrrrrrrrr 90 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -18 + -6 & 30 & -6 & -6 & -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 + -6 & -6 & 30 & -6 & -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 + -6 & -6 & -6 & 30 & -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 + -6 & -6 & -6 & -6 & 30 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 + -6 & 0 & 0 & 0 & 0 & 30 & -6 & -6 & -6 & 0 & 0 & 0 & 0 & -6 + -6 & 0 & 0 & 0 & 0 & -6 & 30 & -6 & -6 & 0 & 0 & 0 & 0 & -6 + -6 & 0 & 0 & 0 & 0 & -6 & -6 & 30 & -6 & 0 & 0 & 0 & 0 & -6 + -6 & 0 & 0 & 0 & 0 & -6 & -6 & -6 & 30 & 0 & 0 & 0 & 0 & -6 + -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 30 & -6 & -6 & -6 & -6 + -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 & 30 & -6 & -6 & -6 + -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 & -6 & 30 & -6 & -6 + -6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -6 & -6 & -6 & 30 & -6 + -18 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & -6 & 90 ) the covariance matrix is given by note that this particular form of the generalized inverse gives a zero sum for each row of and this introduces the anti - correlation for points on different scan circles .most instruments do not produce white noise , but have slow drifts and other anomalies that produce an excess noise at low frequencies in the output .the unknown baseline for each scan circle in the toy model used earlier is an example of low frequency noise : it corresponds to a spike in the noise power spectrum at zero frequency .real instruments have this zero frequency spike , but they also usually have a more gradual rise in the noise power spectrum as .typically an excess noise varying like is seen , giving a noise power spectrum p(f ) = 2 t _1 ^ 2 ( 1 + ) where is the sampling interval , is the noise in one sample ignoring the term , and is the `` knee '' frequency . in a spin - scanned system , noise is only important if the spin rate is less than . for _ cobras / samba_ the spin rate should be faster than the output drifts in either the bolometers or the differencing hemts .but in balloon - borne experiments , drifts in the atmosphere will produce excess low frequency noise that must be correctly treated during data analysis . in the presence of noise , successive samples of the signalare correlated . in order to simplify the analysis , a `` pre - whitening '' filter with transmission w(f ) = should be applied . after applying this filter, the noise in different samples will be uncorrelated .the impulse response function of in the time domain will integrate to zero because the response of vanishes for .the impulse response function will have a spike at zero time because as thus the action of is to replace the signal stream with signal minus baseline , and the baseline used is the optimal baseline estimator .figure [ fig : w_0p1.eps ] show the pre - whitening filter for . while the noise in the output of is uncorrelated, this filter makes each observation depend on many sky values . for a discretely sampled data , the in the time domainis given by a vector of weights , where and .for _ post facto _ data analysis the time symmetric filter with can be used instead of a causal filter . in terms of the matrices introduced earlier , only needs to be changed for noise . 
in the toy model ,the scan circles are so short that only a very short pre - whitening filter can be used , so i will use .this gives = ( rrrrrrrrrrrrrrrrrr 2 & -1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + -1 & 0 & 0 & 0 & -1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 2 & -1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & -1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -1 & 0 & 0 & 0 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & -1 & 2 ) the correlation matrix is = ( rrrrrrrrrrrrrr 18 & -4 & 1 & 1 & -4 & -4 & 1 & 1 & -4 & -4 & 1 & 1 & -4 & 0 + -4 & 6 & -4 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + 1 & -4 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 + 1 & 0 & 1 & 6 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 + -4 & 1 & 0 & -4 & 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + -4 & 0 & 0 & 0 & 0 & 6 & -4 & 0 & 1 & 0 & 0 & 0 & 0 & 1 + 1 & 0 & 0 & 0 & 0 & -4 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & -4 + 1 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & -4 & 0 & 0 & 0 & 0 & -4 + -4 & 0 & 0 & 0 & 0 & 1 & 0 & -4 & 6 & 0 & 0 & 0 & 0 & 1 + -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 6 & -4 & 0 & 1 & 1 + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 & 6 & 1 & 0 & -4 + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & -4 & -4 + -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -4 & 6 & 1 + 0 & 1 & -4 & -4 & 1 & 1 & -4 & -4 & 1 & 1 & -4 & -4 & 1 & 18 ) the covariance matrix is given by examples so far are based on scanning in discrete scan circles .but continuous scanning is also possible . 
with a continuous scan changes to= ( rrrrrrrrrrrrrrrrrr 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 + -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 + 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 & -1 + -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 2 ) and the correlation matrix changes to = ( rrrrrrrrrrrrrr 18 & -4 & 1 & 1 & -4 & -4 & 1 & 1 & -4 & -4 & 1 & 1 & -4 & 0 + -4 & 6 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 + 1 & -4 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 + 1 & 0 & 1 & 6 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 + -4 & 0 & 0 & -4 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + -4 & 0 & 0 & 0 & 1 & 6 & -4 & 0 & 0 & 0 & 0 & 0 & 0 & 1 + 1 & 0 & 0 & 0 & 0 & -4 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & -4 + 1 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & -4 & 0 & 0 & 0 & 0 & -4 + -4 & 0 & 0 & 0 & 0 & 0 & 0 & -4 & 6 & 1 & 0 & 0 & 0 & 1 + -4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & -4 & 0 & 0 & 1 + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 & 6 & 1 & 0 & -4 + 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & -4 & -4 + -4 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -4 & 6 & 1 + 0 & 1 & -4 & -4 & 1 & 1 & -4 & -4 & 1 & 1 & -4 & -4 & 1 & 18 ) the covariance matrix becomes which is slightly better than the scan circle case because the continuous scan introduces some new comparisons . for large datasets and large maps ,the actual construction of and is impractical .however , the evaluation of for any vector can be done in a reasonable amount of time by following these steps : apply : fly the mission through the map , which takes operations .apply : this is a convolution . for the scan circle case , the convolution is done in sample blocks , and the total work is operations when using the fft . for the continuously scanned case ,the filter to be applied to the data is points long and the total work is when using ffts .apply : sum into pixels , which takes operations .thus the evaluation of takes operations and standard methods for the iterative solution of sparse systems that only need matrix - vector products can be used .this process should converge well for a complex scan pattern that crosses each pixel several times in different directions , such as the cycloidal scan produced by the scan plate in lubin s balloon experiment , shown in figure [ fig : ucsbscan.eps ] . 
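a matrix-free sketch of these three steps, in a form that can be handed directly to a conjugate-gradient solver, is given below (added for illustration). the noise weighting is applied as a single circular convolution over the whole time stream, as in the continuously scanned case; the transfer function 1/(1 + f_knee/f) is an assumed inverse noise-power shape consistent with the spectrum quoted earlier, and the function and parameter names are illustrative. the undetermined map mean is removed at the end as before.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def mapmaker_cg(data, pix, n_pix, f_knee, dt):
    # data: 1-d time stream; pix: pixel index observed at each sample (defines the pointing matrix A)
    n_data = data.size
    f = np.fft.rfftfreq(n_data, d=dt)
    w2 = np.zeros_like(f)
    w2[1:] = 1.0 / (1.0 + f_knee / f[1:])      # assumed inverse noise-power shape; vanishes at f = 0

    def weight(x):                              # circulant noise weighting applied by fft
        return np.fft.irfft(np.fft.rfft(x) * w2, n=n_data)

    def matvec(t):
        tod = weight(t[pix])                    # fly the mission through the map, then convolve
        return np.bincount(pix, weights=tod, minlength=n_pix)   # sum back into pixels

    op = LinearOperator((n_pix, n_pix), matvec=matvec)
    rhs = np.bincount(pix, weights=weight(data), minlength=n_pix)
    t, _ = cg(op, rhs)                          # only matrix-vector products are ever formed
    return t - t.mean()                         # remove the undetermined overall level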
a simple scan pattern such as rotating at constant elevation while the earth turns , shown in figure [ fig : firsscan.eps ] , will lead to much slower convergence of the iterations . since pixels are always scanned in the same direction ( unless the flight is long enough to see the sky both rising and setting ), the simple scan pattern will give a final map with stripes .these equations have been implemented for a bigger example using dmr pixels in galactic coordinates , and 256 scans through the ecliptic poles evenly spaced in ecliptic longitude , each scan having 256 points .thus there are a total of observations for pixels .since the whole matrices or are too big to print out , i have made maps from a single row of these matrices .the pixel on the diagonal in this row is the lockman hole at and .it was observed 10 times , which is close to the average exposure for the map .figure [ fig : spinner0.eps ] shows the white noise case .there is a small amount of excess noise and striping .the covariance matrices have been multiplied by the total number of data points , so a perfect experiment would have on the diagonal and in the off - diagonal pixels .figure [ fig : spinner1.eps ] shows the noise case .there is a larger amount of excess noise and striping , but it is concentrated in the low order multipoles .the angle scanned during one radian at for this case is , so excess noise in low multipoles is expected .half of the weight in the baseline filter in figure [ fig : w_0p1.eps ] is contained within of the central pixel , so is the effective chop angle .figure [ fig : cvslong.eps ] shows the covariance on a circle away from the central pixel , which peaks sharply when crossing the scan circle an indication of striping .an even bigger example ( figure [ fig : cobe1.eps ] ) shows the result of a one - horned experiment using the _ cobe _ dmr scan path with noise .one million data points were processed , and the orbit precession ( yearly ) rate was increased by a factor of 60 so these points covered a full annual cycle .the length of the pre - whitening filter was .slow convolutions instead of fft s were used to evaluate a row of and to iterate to find a row of .the time - ordered method took 30 seconds on a workstation to evaluate a product of the form .the residuals in were reduced by a factor after 20 iterations of the conjugate gradient method .these results are plotted at dmr resolution but the timing is independent of the number of pixels , , and directly proportional to the number of data points , .the angle scanned in one radian at was , so the only advantage this case has over figure [ fig : spinner1.eps ] is that scans were made through each pixel in every direction .thus it is not necessary to walk the baseline determination up to the nep and then down along some other longitude .thus the final covariance depends primarily on the distance between two points , giving a much more symmetric pattern in figure [ fig : cobe1.eps ] .the lockman hole was observed 189 times , and the diagonal element of the covariance matrix is 0.00633 , so there is 20% excess noise at the pixel level .the highest off - diagonal element in the lockman hole row is 8 times smaller than the diagonal element . 
in generala symmetric covariance is much easier to analyze , since it leads to a noise level that depends only on .but in the absence of systematic errors , figures [ fig : spinner0.eps ] and even [ fig : spinner1.eps ] can give useful cmb data especially if the overall noise level is low enough ( janssen 1996 ) .a systematic error term is a non - random signal in the time - ordered data that is not caused by the sky . since true patterns on the sky form an -dimension subspace in the -dimensional data space ,a systematic error term can be orthogonal to the sky subspace - these can be ignored .parallel to the sky subspace - these can not be fixed . at an intermediate angle to the sky subpsace- these must be measured and corrected .figure [ fig : sys_err.eps ] shows these possibilities .an example of a systematic error that is almost orthogonal to the sky subspace is the magnetic term in the _ cobe _ dmr .this is a susceptibility to a magnetic field along the axis perpendicular to the plane containing the plus and minus horns on a dmr .the bottom plot in figure [ fig : bysyserr.eps ] shows the time - ordered data produced by this term , which can then be made into a map .that map is then converted back into time - ordered data , giving the top plot .since the top and bottom plots are very different , it is easy to measure the sensitivity and correct for it .a term is due to a sensitivity to magnetic fields in the plane defined by the horns but perpendicular to the spin axis .this produces a map with a large dipole aligned with the celestial pole .but the term is not completely parallel to the sky subspace because the earth s magnetic dipole is offset and tilted so there is a diurnal modulation of the input shown on the bottom of figure [ fig : bzsyserr.eps ] that is not seen in the playback ( top ) .thus it is possible to measure a term and remove its effect from the map .an example of a systematic error that is parallel to the sky subspace for a simple scan pattern perpendicular to the sun - line through the ecliptic poles is zodiacal light emission . with the more complex scan pattern used by the dmr this effect can be measured and removed from the map .obviously one should try to make all systematic errors orthogonal to the sky subspace .this is made easier if the data space is much larger than the sky subspace .therefore the number of data points transmitted to the ground should be as large as possible , and on - board co - addition into scan circles should be avoided . to avoid stripes, scans should be made through every pixel in all possible directions .this requires a large number of scans and a large number of data points , which are also desired to reduce systematic error problems .i have shown how to analyze large datasets from one - horned cmb experiments without simplified scan patterns that are vulnerable to systematic errors .the cpu time required is is differs from the wright ( 1996 ) differential method timing only in a logarithmic factor .bersanelli , m. , bouchet , f. , efstathiou , g. , griffin , m. , lamarre , j. , mandolesi , n. , norgaard - nielsen , h. , pace , o. , polny , j. , puget , j. , tauber , j. , vittorio , n. & volonte , s. 1996 .report on the cobras / samba phase a study .janssen , m. , scott , d. , white , m. , seiffert , m. , lawrence , c. , gorski , k. , dragovan , m. , gaier , t. , ganga , k. , gulkis , s. , lange , a. , levin , s. , lubin , p. , meinhold , p. , readhead , a. , richards , p. & ruhl , j. 1996 .astro - ph/9602007
cmb anisotropy experiments seeking to make maps with more pixels than the 6144 pixels used by the _ cobe _ dmr need to address the practical issues of the computer time and storage required to make maps . a simple , repetitive scan pattern reduces these requirements but leaves the experiment vulnerable to systematic errors and striping in the maps . in this paper i give a time - ordered method for map - making with one - horned experiments that has reasonable memory and cpu needs but can handle complex _ cobe_-like scans paths and noise .
ever since its introduction by schrdinger in the context of the epr paradox , quantum entanglement has played a central role in quantum theory .while entanglement is responsible for the non - classical correlations leading to the violation of bell s inequalities , it also plays a key role in quantum computing where it is connected with the exponential advantage of quantum algorithms over their classical counterparts .studies of entanglement have led to a well developed mathematical theory of entanglement where positive maps ( p ) which are not completely positive ( cp ) and unextendable product bases ( upb ) play an important role .these mathematical advances have led to the discovery of bound entangled states : states from which one can not distill epr pairs although they are still provably non - separable . quantum states ( pure or mixed ) are represented by positive definite hermitian operators with unit trace .for the special case when the rank of is one , it represents a pure state . for a bipartite composite system where states are defined on ,a state is said to be separable if it can be written as a convex sum : where and are states in and respectively .a state is entangled , if it can not be written in the above form .the fundamental problem of determining whether a given state is separable or entangled remains open for general states of systems which are beyond and . allowed quantum evolutions are those p maps which are cp .p maps which are not cp are not physically allowed quantum evolutions , because entangled states can lose their positivity when such a map is applied to one part of the system .therefore such maps act as entanglement witnesses .the partial transpose operation which is a particular entanglement witness , plays an important role in the identification of entangled states .states which reveal their entanglement by acquiring one or more negative eigenvalues under partial transposition are called npt ( not - positive under partial transpose ) while the rest are called ppt ( positive under partial transpose ) .npt states are entangled while ppt states can be either entangled or separable . in and dimensional systems , it has been shown that a state is separable if and only if it is ppt . therefore , in dimensions and larger , there are entangled states which are ppt .these states in general require other entanglement witnesses to implicate their entanglement and can not be distilled to give epr pairs .such states ( with non - distillable entanglement ) are called ppt or bound entangled states .choi map and its generalizations have been used to detect the entanglement of certain classes of bound entangled states . however , even for the system , the detection of bound entangled states is far from complete . local operations , including measurement and filtering , can not alter the status of the state from ppt to npt and therefore can be used to convert one npt state to another npt state or one ppt state to another ppt state .al . showed that , starting with a mixed entangled state of a system that does not violate bell inequalities , one can set up a local filtration scheme based on measurements such that the filtered states violate bell inequalities . in this caseonly npt states were involved as there are no ppt entangled states for the case .these results have been extended , used and experimentally validated by a number of researchers .our work is a generalization and extension of these results into the domain of ppt states of systems . 
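as a concrete illustration of the partial-transpose test recalled above, the following short python sketch (added here; the function names are illustrative) checks whether a two-qutrit density matrix is ppt and verifies that the maximally entangled two-qutrit state is npt.

import numpy as np

def partial_transpose(rho, dA, dB):
    # transpose the second subsystem of a density matrix on C^dA (x) C^dB
    r = rho.reshape(dA, dB, dA, dB)                     # indices (i, j; k, l)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # swap j <-> l

def is_ppt(rho, dA, dB, tol=1e-12):
    # a state is ppt iff all eigenvalues of its partial transpose are non-negative
    return np.linalg.eigvalsh(partial_transpose(rho, dA, dB)).min() > -tol

psi = np.eye(3).ravel() / np.sqrt(3)        # (|00> + |11> + |22>)/sqrt(3)
rho = np.outer(psi, psi.conj())
print(is_ppt(rho, 3, 3))                    # False: the partial transpose has negative eigenvalues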
in this work ,we introduce local filters which convert ppt entangled states not detectable by the choi map , to states which are detectable by the choi map .in particular we are able to show that the ppt states obtained from the upb can be converted into states detectable by the choi map .furthermore , we provide an explicit scheme for implementation of our filtration protocol via local projective measurements involving local ancillas .although the notation of entanglement is well defined for infinite dimensional spaces , we restrict ourselves to finite dimensional hilbert spaces in this paper .the material in this paper is arranged as follows : in section [ filtration ] we describe our local filtration scheme .two examples are taken up in the section [ filtration - ent ] where such schemes are used to manipulate entangled states .section [ implementation ] describes a measurement - based scheme to realize the local filters while section [ implementation - general ] describes the general scheme .section [ implementation - two ] describes the implementation of filters used by gisin and section [ implementation - three ] describes the implementation of filters on three - level systems .section [ conc ] offers some concluding remarks .local filters are local non - unitary operators represented by where and are invertible operators acting in the state spaces of their respective systems . given a bipartite quantum state , the filter acts on the state giving a new state which is a positive hermitian operator belonging to the same space and its tracecan be brought to one by dividing by an appropriate positive number .for every invertible set of operators and we thus have a filter .let and be two full rank operators . then the map does not change the schmidt number of the state .terhal and horodecki defined the schmidt rank of a general density matrix .a bipartite state has schmidt rank if 1 .for any ensemble decomposition of as where at least one of the vectors has schmidt rank .2 . there exists a decomposition of where all vectors in the decomposition have a schmidt rank at most .therefore , we need only to show that the schmidt number of pure bipartite states is invariant under the operation .the schmidt rank ( sr ) of is the matrix rank of . thus let and be the singular value decompositions of and respectively , where are unitary operators .then where and are mutually orthogonal bases of the first and second systems respectively . since and are of full rank , the diagonal matrices and are also of full rank , and the above assertion holds , i.e. 
is invariant under these operations .the above proposition further shows that , entanglement is not created or destroyed by the above operations .let us choose a standard basis , in an -dimensional state space .any density operator can then be written as .transpose operation is defined through its action on a bipartite state is defined to be ppt if and only if where is the transpose operation defined on as described in equation ( [ transpose ] ) .the ppt or npt character of a state is invariant under an invertible local filtration operation .now consider , to be be the standard basis in .we can write , where .we then have after the application of the transpose operation on the second system we have where is the complex conjugate of .this proves that ppt states remain ppt under a local filter defined in equation ( [ filter ] ) .a similar argument holds for npt states .if the original state is entangled , the nature of its entanglement , ( npt or ppt ) does not change under the filtering operation .therefore for a given ppt entangled state , the filtered state is another ppt entangled state .it may turn out that even if the entanglement of the original state is not detectable by a given entanglement witness , the filtered state reveals its entanglement by the same witness .this thus allows us to generate new ppt entangled states from the old ones .consider a density operator for the system , defined by two parameters and . where is the normalization constant and this is a unit trace density operator for and .the state is entangled for a range of values of and ; however it is not always detected by the choi maps given in equations ( [ eqn:1 ] ) and ( [ eqn:2 ] ) .for instance , set , then for , the state is not detectable by the choi map .consider a local filter the filtered state after the application of this filter is obtained as we now apply the choi map given in equation ( [ eqn:1 ] ) on the first system via to the filtered as well as non - filtered density operator to obtain for the operator for and for , the minimum eigenvalue turns out to be negative , indicating that the state is entangled . 
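the invariance of the ppt / npt character under invertible local filters , used above , can also be checked numerically . the following numpy sketch applies a random invertible filter of product form to a ppt ( in fact separable ) two - qutrit state and verifies that the partial transpose of the filtered state remains positive ; the state and the filter are arbitrary stand - ins chosen for illustration , not the operators used in the text .

```python
import numpy as np

def partial_transpose(rho, d):
    # transpose the second d-dimensional factor of a d x d bipartite state
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

rng = np.random.default_rng(0)
d = 3

# an (unnormalised) separable -- hence PPT -- two-qutrit state as a stand-in
rho = sum(np.kron(np.outer(v, v.conj()), np.outer(w, w.conj()))
          for v, w in [(rng.normal(size=d) + 1j * rng.normal(size=d),
                        rng.normal(size=d) + 1j * rng.normal(size=d))
                       for _ in range(4)])
rho = rho / np.trace(rho)

# random invertible local filter A (x) B
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
F = np.kron(A, B)
rho_f = F @ rho @ F.conj().T
rho_f = rho_f / np.trace(rho_f)

print("min eigenvalue of rho^T_B         :", np.linalg.eigvalsh(partial_transpose(rho, d)).min())
print("min eigenvalue of filtered rho^T_B:", np.linalg.eigvalsh(partial_transpose(rho_f, d)).min())
# both should be >= 0 (up to numerical noise): an invertible local filter
# cannot turn a PPT state into an NPT one, or vice versa.
```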
on the other hand , the minimum eigenvalue of the operator obtained by applying the choi map to the unfiltered state is positive . this shows that the local filter defined in equation ( [ filter1 ] ) has converted a state whose entanglement was not detectable via the choi map into a state whose entanglement is detectable via the choi map . an important example of bound entangled states is provided by the well known upb construction known as ` tiles ' for a system ; the corresponding mixed state provides an example of a ppt entangled state . choi maps , applied directly , can not detect the entanglement of such states . consider the local filter ; applying this filter gives a new filtered state . we now apply the second choi map given in equation ( [ eqn:2 ] ) on the second system to the filtered as well as the non - filtered density operator . the operator obtained from the filtered state has a negative eigenvalue , which reveals its entanglement , while the operator obtained from the non - filtered state does not have a negative eigenvalue . this shows that the entanglement of the state is not directly revealed by the choi map ; however , it can be filtered into a state that is detected by the choi map . this is directly related to the construction given in terms of the automorphisms in and is much simpler than the construction given in . we now turn to the question of the physical interpretation and implementation of the local quantum filtration process introduced in the previous section . a filter is defined through its action given in equation ( [ filter ] ) and comprises invertible operators , one acting locally on the hilbert space of alice and the other acting locally on the hilbert space of bob . we choose the standard bases in the two spaces . each of these operators has a singular value decomposition , in which the outer factors are unitary operators and the middle factor is diagonal with real , positive definite diagonal entries . the unitary operators correspond to hamiltonian evolutions and can hence be physically realized in a straightforward way . we therefore focus here on the implementation of the diagonal factors .
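as a small numerical illustration of the decomposition invoked here , numpy 's svd splits an ( arbitrary , invertible ) filter operator into two unitaries and a real positive diagonal factor ; the unitaries correspond to ordinary hamiltonian evolutions , so only the diagonal part needs a dedicated implementation . the specific matrix below is an illustrative example only .

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # a generic invertible filter

U, s, Vh = np.linalg.svd(A)           # A = U @ diag(s) @ Vh
D = np.diag(s)                        # real, positive diagonal part

assert np.allclose(A, U @ D @ Vh)
assert np.allclose(U @ U.conj().T, np.eye(d))    # U unitary -> Hamiltonian evolution
assert np.allclose(Vh @ Vh.conj().T, np.eye(d))  # V unitary -> Hamiltonian evolution
print("singular values (the diagonal filter still to be implemented):", s)
```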
we introduce local filters as a means to detect the entanglement of bound entangled states which do not yield to detection by witnesses based on positive ( p ) maps which are not completely positive ( cp ) . we demonstrate how such non - detectable bound entangled states can be locally filtered into detectable bound entangled states . specifically , we show that a bound entangled state in the orthogonal complement of the unextendible product basis ( upb ) can be locally filtered into another bound entangled state that is detectable by the choi map . we reinterpret these filters as local measurements on locally extended hilbert spaces . we give explicit constructions of a measurement - based implementation of these filters for and systems . this provides us with a physical mechanism to implement such local filters .
modeling the spreading of infectious diseases has a long history .mathematical models not only deepen the understanding of epidemic dynamics , but also shed light on the control of diseases . in recent years , much attention has been paid to the epidemic control via social relationship adjustment . as a pioneering work , gross _ et ._ first proposed a susceptible - infected - susceptible ( sis ) model on an adaptive network .therein the susceptibles break the link with the infected and rewire to another randomly selected susceptible individual .this rewiring rule brings in highly complex dynamics ( such as bistability and oscillation ) to the classical sis model .the rewiring dynamics then opens the avenue on how individualized partnership adjustment alters the epidemic dynamics . on the one hand , besides the sis model , typical epidemic models have almost been investigated including susceptible - infective - recovered - susceptible ( sirs ) model , susceptible - infective - recovered ( sir ) model and susceptible - infective - vaccinated ( siv ) model . on the other hand , more realistic and complex link rewiring rules are proposed . in particular , generalizations of gross _et al _ s rewiring rule are mainly in two folds : for one thing , after the disconnection of susceptible - infected ( si ) link , the susceptible is assumed to reconnect to a randomly selected member of the population no matter it is susceptible or not .for another , the infected is also allowed to switch its partnership from the susceptible to a new randomly selected contact .besides the rewiring rule which is dependent on the infection process , the rewiring rule that is independent of the infection process was also investigated . in spite of different model assumptions , all these models showed that , the infection propagation can be greatly influenced by the dynamical networks .in particular , the infection can be effectively suppressed by reducing the interaction opportunities between susceptible and infected individuals . besides the above - mentioned link - rewiring models , another type of adaptive networks is the link - activation - deactivation model .it assumes that a link can either be broken or recreated on the basis of the infectious states of the two endpoints of the link . in this model ,only local information is required , which could be more realistic .in particular ,al . _ proposed an asis model , in which any si link can be broken ( deactivated ) .after the disconnection of an si link , the two disconnected nodes can be reconnected again once both of them become susceptible ( activated ) . despite of seemingly differences ,the link - activation - deactivation dynamics is similar to the rewiring dynamics : on the one hand , guo _ et al _ showed that the asis model ( initiated on complete graphs ) can approximate the link - rewiring model in . on the other hand ,the quasi - stationary ( metastable ) fraction of infected individuals can be reduced by increasing the effective breaking rate ( proportional to the ratio of deactivating rate to activating rate ) .this echoes the results based on the link - rewiring models that the disease can be controlled by reducing the contacts between susceptible and infected individuals .therefore , both types of the linking dynamics in epidemic control can be seen as decreasing the interaction rate between susceptible and infected individuals ( called _ si control _ ) . 
furthermore , considering that the effective breaking rate in also depends on the activating rate between susceptibles , their work reminds us of the significance of ss links in epidemic control .intuitively , increasing the interaction time between susceptibles can also be a control strategy ( called _ ss control _ ) .yet it is seldom addressed , compared to the si control that has been intensively studied in previous literatures .it seems that these two control strategies are the two sides of the same coin .actually , this is true in gross s model , since based on their rewiring rule , the decrease of si links directly leads to the increase of ss links .however , this is no longer valid in risau - gusmn and zanette s model , since the disconnection of an si link does not necessarily result in the reconnection of an ss link . therefore the ss and si control strategies are not equivalent in general . in this work ,we provide a comparative study on the si control and the ss control by proposing a novel link - rewiring sis model .unlike the models only allowing the breaking of si links , we allow all the three types of links ( ss , si and ii ) to be broken , equipped with three independent parameters to characterize the breaking rates of ss , si and ii links .actually , this assumption mimics the intrinsic nature of human mobility , namely , people move or change their social relationships due to a variety of reasons , even without the consideration of avoiding infectious diseases . in this way, si links should not be the only type that is allowed to be broken , both ss and ii links can change .for example , in * aids * ( acquired immune deficiency syndrome ) not only the susceptibles are willing to avoid contacts from the infectives , but the susceptible - susceptible and infected - infected relationships may also be broken up due to unsatisfactory sexual experiences , _i.e. _ the rewiring processes can happen in ss and ii links . besides , we allow all the individuals to be capable of adjusting any of their partners .this mirrors the freedom of social life .it also excludes the central control of epidemics , for example , that via organizations . in this way, we could concentrate on how the social partnership adjustment strategies alone alter the fate of epidemics .we demonstrate analytically that our model captures the epidemic dynamics with non - uniform interaction rates under fast linking dynamics . it is shown that sometimes the ss control is more effective and robust than the si control .in particular , strengthening the closeness between susceptibles ( ss control ) effectively eradicates the disease no matter how infectious the disease is .however , the effectiveness of the si control sensitively depends on the infectious intensity and the intrinsic mobility rate of the population . in other words, there are cases such that the si control can not eliminate the disease so efficient as the ss control .simulation results are also shown for validating our theoretical predictions .our findings suggest that , besides the si control , it is still of concern that the ss control may serve as a better candidate for epidemic control .in this section , we propose the model of epidemic spreading coupled with a simple stochastic link - rewiring dynamics . 
then we theoretically analyze the epidemic model with non - uniform interaction rates based on the time scale separation .we consider a structured population of individuals .the population is located on a connected network .we assume that the average degree is much smaller than the population size , _i.e. _ . herenodes refer to individuals and links represent social ties between individuals .we adopt a standard susceptible - infected - susceptible ( sis ) model to study the epidemic spreading .the sis model assumes that susceptible individuals get infected with a probability proportional to the number of their infected neighbors ; infected individuals recover and become susceptible with no immunity to the disease after a period of recovery time .the sis model has three features : _ i _ )the whole population size is constant over time ; _ ii _ ) the transmission of disease only happens via the si links ; _ iii _ ) the recovery of infected individuals is independent of the status of their neighbors .let be the number of infected individuals at time , therefore , the mean - field equation of the sis model on the structured population is given as follows here is the transmission rate and is the recovery rate . all through the paperwe assume that without loss of generality , and is the number of the si links .the social relationships between individuals are not eternal , but are continuously co - evolving . as a typical example , susceptible individuals tend to avoid contacts with infected ones by adjusting their local connections .it has served as the most recognized prototype in the study of epidemic control on dynamical networks .however , individuals may receive miscellaneous information when making rewiring decisions , thus it is possible for all the individuals to adjust all of their current social relationships .such a rewiring process captures mobility - like human behavior .here we propose a simple link - rewiring dynamics by extending the dynamical nature from si links to all types of links in the network .each individual is either susceptible ( s ) or infected ( i ) .thus , there are three types of links : susceptible - susceptible ( ss ) , susceptible - infected ( si ) and infected - infected ( ii ) links .to characterize the fragilities of different types of links , we define ( ) as the probability with which an link breaks off in the process of disconnection . in each rewiring step , a link is selected randomly from the network . with probability ,the link is broken , otherwise the link remains connected .if it is broken , or is picked as the active individual , who is entitled to reform a new link .its new neighbor is randomly selected from the individuals who are not in its current neighborhood .self - connections and double connections are thus not allowed here . 
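as a sketch in code , one update of the link - rewiring rule just described might look as follows ; the data structures and the helper name are illustrative choices rather than anything prescribed by the model .

```python
import random

def rewire_step(adj, infected, w):
    """One update of the link-rewiring dynamics (illustrative implementation).

    adj      : dict mapping node -> set of neighbours (mutated in place)
    infected : set of currently infected nodes
    w        : breaking probabilities, e.g. {'SS': 0.5, 'SI': 0.9, 'II': 0.5}
    """
    # pick a link uniformly at random
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    u, v = random.choice(edges)
    kind = ''.join(sorted('I' if x in infected else 'S' for x in (u, v)))
    kind = {'SS': 'SS', 'II': 'II', 'IS': 'SI'}[kind]

    if random.random() >= w[kind]:
        return                      # the link survives this update

    # break the link; one endpoint, chosen at random, becomes the active individual
    adj[u].discard(v); adj[v].discard(u)
    active = random.choice((u, v))
    # rewire to a uniformly chosen individual outside the current neighbourhood
    # (no self-connections, no double connections)
    candidates = [x for x in adj if x != active and x not in adj[active]]
    new = random.choice(candidates)
    adj[active].add(new); adj[new].add(active)
```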
in this way, the link - rewiring dynamics can be modeled as a markov chain in the state space of .considering the transition probabilities between states , let us take the transition from to as an example .this happens only when is broken off and is selected to reform a new link to another susceptible individual .note that the total population size is much larger than the average degree , the transition probability is approximately given by , where is the density of susceptibles at the moment .similarly , we calculate all the other transition probabilities , yielding the transition probability matrix where is the density of infected individuals .according to the standard theory of markov chain , there exists a unique limiting distribution satisfying provided is irreducible and aperiodic .namely , when , has a unique stationary distribution where is the normalization . it is challenging to capture due to the complexity of real social networks is already true in static networks , and it becomes even more difficult taking into account the dynamical nature of social networks .here we overcome this problem by assuming the adiabatic elimination of fast linking dynamics ( also called annealed adaptive dynamics ) , _ i.e. _ the adjustment of social ties is much more frequent than the update of infection states .this assumption implies _ time scale separation _ of the two coupled dynamics . in other words ,the disease is unlikely to spread until the social configuration tends to the stationary regime . in this way, is approximated as where is the total number of the links in the network and is the fraction of si links in the stationary regime .this approximation greatly reduces the complexity of the coupled dynamics . in light of this, the idea of time scale separation has frequently been used in analyzing complex dynamics on adaptive networks ( epidemics , evolutionary games ) . by taking eq .( [ nsi ] ) into eq .( [ basicmodel ] ) we have note that , , and , eq . can be transformed to in particular , when all the interaction rates are uniform and positive ( ) , eq .( [ model ] ) reduces to eq .( [ basicmodel2 ] ) is nothing but the classical sis model , provided that is redefined as the effective transmission rate .this implies that the population is as if a well - mixed population , if individuals break their partnerships with no social bias .it should be pointed out that , when , the transition probability matrix eq .( [ matrix ] ) violates the irreducible condition that our analysis replies on .in fact , this case resembles the static network , which has been excluded from our analysis .when the interactions are violated from above social unbias , on the one hand , it results in non - uniform interactions in the population .therefore , eq . ( [ model ] ) extends the classical sis model from uniform interaction rates to non - uniform interaction rates .noteworthy , this non - uniform extension is an emergent property from microscopic stochastic linking dynamics , which is not assumed in prior . on the other hand , if we define , our model also extends the classical sis model from density independent transmission rate to density dependent transmission rate . 
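to make the stationary - distribution step concrete , the sketch below writes out one plausible reading of the link - type transition matrix implied by the rewiring rule ( a randomly chosen link of type xy breaks with probability w_xy , and the active endpoint rewires to a susceptible with probability s or to an infected with probability i = 1 - s ) and extracts the stationary distribution from it numerically . the matrix is our own reconstruction for illustration , not a verbatim copy of ( [ matrix ] ) .

```python
import numpy as np

def link_transition_matrix(w_ss, w_si, w_ii, i):
    """A plausible transition matrix over link types (SS, SI, II).

    A randomly picked link of type XY breaks with probability w_XY; the active
    endpoint then rewires to a susceptible with probability s = 1 - i and to an
    infected with probability i.  This is a reconstruction of the verbal rule,
    not the paper's exact matrix.
    """
    s = 1.0 - i
    return np.array([
        # to:   SS              SI              II
        [1 - w_ss * i,     w_ss * i,        0.0          ],  # from SS
        [w_si * s / 2,     1 - w_si / 2,    w_si * i / 2 ],  # from SI
        [0.0,              w_ii * s,        1 - w_ii * s ],  # from II
    ])

def stationary(P):
    """Left eigenvector of P for eigenvalue 1, normalised to a distribution."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

pi_ss, pi_si, pi_ii = stationary(link_transition_matrix(0.5, 0.9, 0.5, i=0.2))
print("stationary fraction of SI links:", pi_si)
# under time-scale separation, N_SI is then approximated by L * pi_si, which
# feeds an i-dependent (density-dependent) transmission term back into the SIS equation.
```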
in other words ,the dynamical nature of social networks essentially acts as a feedback mechanism on the sis model .the feedback mechanism , which is taken as the central idea of control , can significantly alter the epidemic dynamics .noteworthy , all the analysis above are based on the time scale separation .thus it suggests that the link - rewiring event should happen with a sufficiently large probability ( close to 1 ) in each update .furthermore , we give a more precise lower bound for this probability based on pair approximations : it is found that the time scale separation is at work provided the likelihood of the linking dynamics is greater than ( see appendix [ a ] ) for more general cases where the time scale separation is absent , higher order approximation method could be applied to provide theoretical insights ( see appendix [ a ] ) .our main concern in this comparative study is epidemic control via changing the interaction rates in different ways .based on eq .( [ basicmodel ] ) , it is that determines the spread of infection .the more the si links are , the more likely the spread of infection could be . generally , there are two ways to control . for one thing ,it is natural to increase for reducing the interaction rate ( ) between susceptible and infected individuals ( _ si control _ ) .for another thing , decreasing can also reduce the exposure of susceptibles to infection ( _ ss control _ ) .therefore , we will investigate the control of epidemics via these two strategies . more specifically , by taking the uniform interaction rates ( ) as the reference case , we would like to provide a comparative study on both the si control ( ) and the ss control ( ) . in the following ,we assume that the effective transmission rate is always larger than the recovery rate , _i.e. _ , where the epidemic control is necessary .to decrease the interaction rate between susceptibles and infectives , it is equivalent to increase the breaking probability .based on the uniform interaction as the reference case , we are interested in how the epidemic dynamics is changed by increasing . herethe uniform interaction can mimic the basic migration rate in the population . to illustrate our main results , we consider three typical cases with different initial values of the uniform interaction rates ( see appendix [ tech ] for technical details ) : _ small initial case ( fig .[ fig : wss = wii]a)_. in this case , we set initially the breaking probabilities for all types of links to be . the disease can be controlled by increasing from 0.05 to 1 .in particular , for small infectious rate ( _ i.e. _ ) , there is a phase transition with the increase of .that is , the final state of epidemics turns from endemic to extinction . for large ( _i.e. _ ) , there is a small region of bistability where the disease persists or die out due to the initial infected fraction .compared to the single continuous phase transition in the conventional ( uniform ) sis model , the non - uniform sis model can give rise to multiple phase transitions . the emergent bistability in adaptive sis modelhas already been reported in previous studies , but it is quite difficult to approximate the conditions under which bistability is present . for our model , we explicitly provide those analytical conditions under which the bistability emerges based on eq . .in the case of si control ( ) , it arises if and only if ( see appendix [ tech ] ) where ._ intermediate initial case ( fig .[ fig : wss = wii]b)_. 
in this case , increasing is not as effective as that in the above small initial case . for small ,even though there still exists a phase transition from endemic state to extinct state , the marginal value of that needs to cross the transition line is large .more importantly , when is large enough , increasing is unable to eradicate the disease any more .the disease will persist no matter how large the interaction rates between susceptibles and infectives are .moreover , it is shown in fig .[ fig:2 ] that , the endemic level is not sensitive to . in other words , by increasing , the final fraction of infectives declines very slowly .that is to say , the increase of can neither qualitatively change the final state of endemic , nor quantitatively inhibits the final fraction of infectives. _ large initial case ( fig .[ fig : wss = wii]c)_. in this case , the endemic state is always the global stable state provided .that is , the epidemics can not be eradicated by the si control . to summarize , the control efficiency via reducing the interaction rate between susceptibles and infectives strongly depends on the reference breaking probabilities , _i.e. _ , the intrinsic population mobility. the more likely the population is mobile , the worse the si control performs .unlike the si control , increasing the interaction rate between susceptibles is shown as an effective and robust strategy for epidemic control .in fact , no matter what the intrinsic mobility rate of the population is , the ss control successfully eradicates the disease . to this end ,we study the three typical reference population mobility cases in the above subsection ( see appendix [ tech ] for technical details ) . fig .[ fig : wsi = wii ] shows that the phase diagrams for the three cases are quite similar to each other : * for small ( ) , by decreasing , the final state of disease is directly transformed from endemic to extinction . * for large ( ) ,the bistablilty arises in all the three cases .that is , no matter how large the initial uniform interaction rates are , with the decrease of , there is an intermediate region where the disease persists or dies out depending on the initial fraction of disease furthermore , we analytically obtain that the bistable region is given by by comparison , the ss control is more effective than the si control in two ways . on one hand ,the control of is independent of the intrinsic population mobility , _ i.e. _ , robust control . on the other hand , decreasing can always effectively eradicate the disease regardless of infectious intensity ( fig .4 illustrates the position of equilibria as a function of in the bistable case ) .in this section , we present agent - based simulations and further discuss the efficiency of the time scale separation method based on the comparison between the simulation results and theoretical predictions . the _ contact process _ is adopted to model the epidemic spreading on networks .let be the probability of epidemic spreading in each update .the simulation is performed as following : 1 . initially , there are individuals located on a regular graph with degree , where each individual has exactly neighbors. then infectives and susceptibles are randomly distributed .2 . generate a random number .if , we perform the contact process . otherwise ( ) , we perform the linking dynamics . 3 .if the contact process occurs , an infected individual ( called bob ) is selected randomly . 
with probability bobbecomes susceptible , where is the degree of bob .otherwise a neighbor of bob s is selected at random .this neighbor , namely jack , is infected with probability .noteworthy , jack becomes infected if its status is susceptible .however , this new infection event does not change the state of jack if jack has been infected already . then return to step 2 .if the linking dynamics occurs , a link is selected randomly .the type of this link is denoted as ( ) . with probability ,the link is broken , otherwise the link remains connected .if it is broken , or is picked as the active individual , who is entitled to reform a new link .the new neighbor is randomly selected from the individuals who are not in its current neighborhood . then return to step 2 .each data point is averaged over independent samples . in each sample , we run a transient time of generations , and we set the mean value over time window of last generations to be the final fraction of infectives .it should be pointed out that , the simulation results are robust for all initial connected graphs , provided the number of infectives , population size and the average degree are fixed .the regular graph here only serves as a prototype for simulations .in fact , our linking dynamics is a markov chain , which is irreducible and aperiodic .this yields that the limiting behavior is independent of the initial configuration of the network .furthermore , the assumption of time scale separation allows all the links to converge to the stationary distribution .therefore , all the links would converge to the stationary distribution no matter what type of graph it is initially . with the coupled linking dynamics ,the final fate of the infection can be of three folds : die out no matter what the initial fraction of the infective is ( called _ extinction _ ) ; stabilize at a non - zero fraction of infectives no matter what the ( positive ) initial fraction of infectious individuals is ( called _ endemic _ ) ; stabilize at a non - zero fraction of infectives if the initial fraction of infectious individuals exceeds a critical value and die out otherwise ( called _ bistability _ ) . for the extinction cases ,simulation results are found to be in good agreement with the analytical predictions .this is true for all the parameter regions predicting extinction for both si and ss controls ( see fig .[ fig : si:1 ] ) .for the endemic cases , fig .[ fig : si:3 ] shows that the population would end up with a constant fraction of infected individuals , provided there are infective individuals initially .this is exactly in line with the analytical predictions .furthermore , the inconsistency between the analytical and simulation results is less than , which is acceptable . considering this disagreement , the analytical predictions systematically over - estimate the simulation results .in fact , the agent - based contact process is a markov process with an absorbing state , where no infected individual is present . in other words, the disease would go extinct eventually if the system evolves sufficiently long .our analytical results , however , are in the quasi - stationary time scale .the inconsistency between the analytical and simulation results suggests that the running time is beyond the quasi - stationary time scale .thus the system may evolve to the absorbing state with non - negligible chances .for the bistability cases , the simulation results show qualitative agreement with the analytical predictions . 
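the simulation procedure itemised above can be condensed into a short agent - based sketch along the following lines ( the rewiring branch repeats the step sketched earlier ) . all parameter values , the ring - lattice initial graph and the per - update recovery and infection probabilities are illustrative choices ; in particular the exact degree - dependent recovery probability used in the paper is not reproduced here .

```python
import random

def simulate(adj, infected, w, W, beta, gamma, steps):
    """Toy SIS dynamics coupled to link rewiring (illustrative discretisation).

    adj      : dict node -> set of neighbours (mutated in place)
    infected : set of infected nodes (mutated in place)
    w        : breaking probabilities, e.g. {'SS': 0.5, 'SI': 0.9, 'II': 0.5}
    W        : probability that an update is a contact-process (epidemic) event
    beta     : per-contact infection probability
    gamma    : per-update recovery probability
    """
    nodes = list(adj)
    for _ in range(steps):
        if not infected:
            break
        if random.random() < W:                       # contact-process update
            bob = random.choice(tuple(infected))
            if random.random() < gamma:
                infected.discard(bob)                 # recovery
            elif adj[bob]:
                jack = random.choice(tuple(adj[bob]))
                if random.random() < beta:
                    infected.add(jack)                # infection (idempotent if jack is already infected)
        else:                                         # link-rewiring update
            edges = [(u, v) for u in adj for v in adj[u] if u < v]
            u, v = random.choice(edges)
            kind = ''.join(sorted('I' if x in infected else 'S' for x in (u, v)))
            kind = {'SS': 'SS', 'II': 'II', 'IS': 'SI'}[kind]
            if random.random() < w[kind]:
                adj[u].discard(v); adj[v].discard(u)
                active = random.choice((u, v))
                candidates = [x for x in nodes if x != active and x not in adj[active]]
                new = random.choice(candidates)
                adj[active].add(new); adj[new].add(active)
    return len(infected) / len(nodes)

# usage: a ring lattice of N nodes with degree 4 as the initial regular graph;
# W is kept small so that rewiring is much more frequent than epidemic updates.
N = 100
adj = {i: {(i + d) % N for d in (-2, -1, 1, 2)} for i in range(N)}
infected = set(random.sample(range(N), 10))
print(simulate(adj, infected, {'SS': 0.5, 'SI': 0.9, 'II': 0.5},
               W=0.02, beta=0.6, gamma=0.3, steps=50_000))
```

a sketch like this only mirrors the update rules ; the quantitative comparisons discussed here rely on the paper 's exact discretisation , transient times and averaging protocol .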
in particular ,the critical initial fraction of infected individuals , ensuring a dramatic outbreak of epidemics , is consistent with the unstable fixed point predicted by the analytical result ( see the blue dash lines in fig . [fig : si:2 ] ) .disagreements , however , are also present . for example , the theoretical results tend to underestimate the final infection when the infection fraction is rare initially .in fact , this bistable case bears two internal equilibria lying at ( unstable ) and ( stable ) ( ) . for small initial fraction of infectives ,the deterministic part of the system drives the infection to extinction based on the analytical investigation . yet by its intrinsic stochastic nature of the epidemic spreading , the infection would increase in number and be possibly trapped around the stable equilibria from time to time . even though it is a type of _ rare event_ , it takes quite long to escape from this trap .thus on average it results in a relatively higher level of final fraction of infectives given the running time of simulations ( here generations ) .in other words , it is the interplay between the stochastic effect and stable equilibrium at zero that results in such inconsistency .noteworthy , despite of this quantitative inconsistency , the salient feature of the bistable dynamics is still captured by the analytical predictions . in fig[ fig : si:4 ] , we investigate how the population size affects the accuracy of the analytical approximation .theoretically , large population size inhibits the stochasticity arising from the finite population effect , which is closer to the mean - field approximation .similar discussions can be found in . fig .[ fig : si:4 ] shows the case with still captures the bistable dynamics as the case with does .we have proposed a simple link - rewiring rule to model social partnership adjustment . therein all the links are about to break , capturing the mobility nature of the population .this simple model paves the way to compare different rewiring - based epidemic control strategies . instead of focusing on the control strategy via breaking si links ( _ e.g. _ ), our model extends the rewiring rule from si links to all the three types ( si , ss and ii ) of links , which facilitates us to compare different rewiring control strategies . we find that , for mild infectious disease , both si and ss control strategies can eradicate the disease . for strong infectious disease , however , it is more efficient to adopt the ss control than the si control .this result is counterintuitive .intuitively , reducing the contacts between susceptible and infected individuals is believed to suppress the disease propagation .moreover , it seems that decreasing the interaction rate of si links could naturally result in the increase of ss links .how can these two strategies perform so differently ?one of the salient features of our model is the variability of ii links , which is seldom addressed previously .actually , increasing is equivalent to decreasing both and . in other words ,the si control is equivalent to simultaneously strengthening ss links and ii links .similarly , the ss control is equivalent to simultaneously reducing the closeness of si links and ii links .thus , the relation of the si and ss control strategies is not as straightforward as expected . 
to illustrate the impact of ii links on the epidemic dynamics , we consider two examples : ( 1 ) , , ; ( 2 ) , , .the only difference between these two examples is the value of .it is easy to show that in example ( 1 ) disease becomes extinct , whereas bistability arises in example ( 2 ) ( based on eq . ) .another feature of our reconnection rule is _nonselective_. in other words , individuals are allowed to rewire to a randomly selected member no matter it is susceptible or not . compared to the _ selective _ rule in ( rewiring to a randomly selected susceptible ) ,individuals in our model are not necessary to know who gets infected currently , which is more realistic. actually , the nonselective rule increases the exposure of the susceptible to the infected .this is very likely in the beginning of epidemic season , where the information on infection status is unaccessible . in particular , even though the si control increases the breaking possibility of each si link , a new si link may be generated again due to the nonselective rule .by contrast , the ss control makes a straightforward intervention during the process of disconnection .that is , by strengthening the closeness between susceptibles , the ss strategy reduces the possibility of si connection effectively . in this way ,the nonselective rule has a relatively small impact on the ss control .therefore , in the framework of the nonselective rewiring rule , the ss strategy is more efficient than the si strategy .concentrating on the relation between the lifespan of each type of links and epidemic spreading , our model does not account for other features that are also considerable in capturing the epidemic dynamics of real world networks .for example , ( 1 ) our linking dynamics does not take into account the social interactions with memory , such as friendship and working partners , in which individuals preserve the contacts that they used to make ; ( 2 ) the link - rewiring process is a strong simplification of the real adaptive networked human behavior .it is not necessarily realistic for individuals who break up a relationship to have a new partner immediately .however , it probably mimics the dynamics of networks in * aids * to some extent : the susceptible individuals break up their ( mostly sexual ) relationships with their infected partners and switch to other perceived healthy individuals .moreover , the infected individuals may also rewire their links to other infectives . to sum up, our result captures the causation between the link fragility and the disease control .furthermore , this model might serve as a starting point to compare different rewiring control strategies for more general models closer to reality .our model couples the linking dynamics and the epidemic dynamics .while the method in the main text is analytically insightful , it requires the time scales of the two dynamics to be separated . in other words, individuals should adjust their partners much faster than the spread of epidemics to make this method applicable .this is , however , not the case in general .we propose another analytical method to overcome this restriction .the method is based on pair approximation and rate equations . herewe concentrate on how the method helps us estimate the condition under which the time scale separation is valid .let and be the global frequencies of infected and susceptible individuals _i.e. _ and in eq . ; and let be the frequencies of pairs , where . 
thus and hold .the system thus is determined by three independent variables : , and .the crucial assumption for pair approximation is that higher - order of moments can be captured by moments of pairs . in the following, we write down the rate equations of the three variables under the assumption of pair approximation . for the evolution of fraction of the infected ,it is only determined by the epidemic dynamics . in this case , the number of infected individuals increases or decreases by one , or stays the same in one time step . by the kolmogorov forward equation , we have that in particular , the probability that infected individuals increase by one in number happens : 1 ) the epidemic spreading is ongoing ( with probability ) ; 2 ) a susceptible individual is selected ( with probability ) , and it is infected by one of its infected neighbors . the fraction of the infected individuals around a susceptible individual is based on pair approximation .thus there are on average infected neighbors around the selected susceptible individual , where is the average degree of the entire network .therefore , the infection probability of the susceptible within a small time interval is .thus .similarly , we have .let us rescale the time interval . for large population size , dividing on both sides of eq .yields this equation is identical with the mean - field sis model eq . up to a rescaling factor . for the evolution of the links, it can be caused both by the linking dynamics and the epidemic spreading . taking the change of as an example : when the linking dynamics happens ( with probability ) , links would increase by one if an link is selected , then broken , and the infected individual of the link is selected , and it switches to another infected individual ( with probability ) , links would decrease by one if an link is selected , then broken , and the selected infected individual switches to a susceptible individual ( with probability ) ; when the epidemic spreading happens ( with probability ) : for the recovery event , an infected individual is selected ( with probability ) , it recovers with probability .if the selected individual has ( ) infected neighbors ( with probability ) , the change of links is ; for the infection event , a susceptible individual is selected ( with probability ) , if it has ( ) infected neighbors ( with probability ) , the infection happens with probability , and the change of links in this case is . taking into account the formula of the expectation and the variance of the binomial distribution yields with similar arguments we have we obtain the equations of moments with closed forms , _i.e. _ , eqs . , and .this method has been used in both evolutionary game theory and epidemic dynamics before .these equations can be employed to investigate the coupled dynamics of links and epidemics for any time scales .furthermore , the dynamics of and can help us figure out the condition under which the time scale separation is valid .the time scale separation requires that the evolution of links is mainly determined by the link - rewiring process .it implies that and based on eqs . 
and .let us assume that both the infection rate and the recovery rate are of order one .then the two inequalities implies this necessary condition is a more precise criterion compared with to ensure the time scale separation .it suggests that the condition for the time scale separation would be more demanding with the increasing of the average degree .this also supports our assumption in the main text that should be much smaller than .here we give a rigorous dynamical analysis on eq .( [ model ] ) , based on which the main results in sec . [ results ] are obtained .rewriting eq .( [ model ] ) leads to where the cubic polynomial is given by the asymptotic properties of eq .( [ model ] ) are totally determined by , since is positive .note that , is a fixed point .when , .\ ] ] * if , is the only stable fixed point .the infection will finally die out ; * if , is an unstable fixed point , and becomes the only stable fixed point , corresponding to endemic infection .it is shown that there exists a phase transition at , which is quite similar to the conventional sis model in which the critical point is located at . when , it is possible for the model to give rise to bistability .let , we have * if , bistable ; * if , bistable . to show how we get the above results , we take the case as an example . in this case ,}_{g(i)},\ ] ] and its discriminant is denoted as , then the sufficient and necessary condition for bistability is given by by solving the above set of inequalities , we obtain that . similarly , we get the result for the case .we thank the referees for their helpful comments . discussions with prof .ming tang are greatly acknowledged .d.z . is grateful for funding by the national natural science foundation of china ( no . 11401499 ) , the natural science foundation of fujian province of china ( no .2015j05016 ) , and the fundamental research funds for the central universities in china ( nos . 20720140524 , 20720150098 ) . b.w . is grateful for funding by the national natural science foundation of china ( no .61603049 ) , and the fundamental research funds for the central universities .10 william o kermack and anderson g mckendrick .a contribution to the mathematical theory of epidemics . in _ proc .london ser .a _ , volume 115 , pages 700721 .the royal society , 1927 .roy m anderson and robert m may . .oxford university press , oxford , 1991 .herbert w hethcote .the mathematics of infectious diseases . , 42(4):599653 , 2000 .wei wang , ming tang , hui yang , younghae do , ying - cheng lai , and gyuwon lee .asymmetrically interacting spreading dynamics on complex layered networks ., 4:5097 , 2014 .shah m faruque , iftekhar bin naser , m johirul islam , asg faruque , an ghosh , g balakrish nair , david a sack , and john j mekalanos .seasonal epidemics of cholera inversely correlate with the prevalence of environmental cholera phages ., 102(5):17021707 , 2005 .mm telo da gama and a nunes .epidemics in small world networks . , 50(1 - 2):205208 , 2006 .marian bogu , claudio castellano , and romualdo pastor - satorras .nature of the epidemic threshold for the susceptible - infected - susceptible dynamics in networks ., 111(6):068701 , 2013 .thilo gross and bernd blasius .adaptive coevolutionary networks : a review . , 5(20):259271 , 2008 .sebastian funk , marcel salath , and vincent aa jansen . 
modelling the influence of human behaviour on the spread of infectious diseases : a review ., 7(50):12471256 , 2010 .romualdo pastor - satorras , claudio castellano , piet van mieghem , and alessandro vespignani .epidemic processes in complex networks . , 87(3):925 , 2015 .thilo gross , carlos j dommar dlima , and bernd blasius .epidemic dynamics on an adaptive network . , 96(20):208701 , 2006 .leah b shaw and ira b schwartz . fluctuating epidemics on adaptive networks ., 77(6):066101 , 2008 .c lagorio , mark dickison , f vazquez , lidia a braunstein , pablo a macri , mv migueles , shlomo havlin , and h eugene stanley .quarantine - generated phase transition in epidemic spreading ., 83(2):026102 , 2011 .leah b shaw and ira b schwartz .enhanced vaccine control of epidemics in adaptive networks ., 81(4):046120 , 2010 .damin h zanette and sebastin risau - gusmn .infection spreading in a population with evolving contacts ., 34(1 - 2):135148 , 2008 .sebastin risau - gusmn and damin h zanette .contact switching as a control strategy for epidemic outbreaks ., 257(1):5260 , 2009 .nh fefferman and kl ng .how disease models in static networks can fail to approximate disease in dynamic networks ., 76(3):031919 , 2007 .yonathan schwarzkopf , attila rkos , and david mukamel .epidemic spreading in evolving networks ., 82(3):036112 , 2010 .sven van segbroeck , francisco c santos , and jorge m pacheco .adaptive contact networks change effective disease infectiousness and dynamics ., 6(8):e1000895 , 2010 .ld valdez , pablo a macri , and lidia a braunstein .intermittent social distancing strategy for epidemic control ., 85(3):036108 , 2012 .dongchao guo , stojan trajanovski , ruud van de bovenkamp , huijuan wang , and piet van mieghem .epidemic threshold and topological structure of susceptible - infectious - susceptible epidemics in adaptive networks ., 88(4):042802 , 2013 .nathalie lydi , noah j robinson , benoit ferry , evina akam , myriam de loenzien , severin abega , study group on heterogeneity of hiv epidemics in african cities , et al .mobility , sexual behavior , and hiv infection in an urban population in cameroon ., 35(1):6774 , 2004 .lorenzo mari , enrico bertuzzo , lorenzo righetto , renato casagrandi , marino gatto , ignacio rodriguez - iturbe , and andrea rinaldo .modelling cholera epidemics : the role of waterways , human mobility and sanitation ., page rsif20110304 , 2011 .bin wu , da zhou , feng fu , qingjun luo , long wang , and arne traulsen . evolution of cooperation on stochastic dynamical networks . , 5:e11187 , 2010 .bin wu , da zhou , and long wang .evolutionary dynamics on stochastic evolving networks for multiple - strategy games . , 84(4):046111 , 2011 .bin wu , jordi arranz , jinming du , da zhou , and arne traulsen . evolving synergetic interactions . , 13(120 ) , 2016 .crispin w gardiner . , volume 4 .springer - verlag , berlin , 1985 .romualdo pastor - satorras and alessandro vespignani .epidemic spreading in scale - free networks ., 86(14):3200 , 2001 .rka albert and albert - lszl barabsi .statistical mechanics of complex networks . , 74(1):47 , 2002 .stephen eubank , hasan guclu , vs anil kumar , madhav v marathe , aravind srinivasan , zoltan toroczkai , and nan wang .modelling disease outbreaks in realistic urban social networks . , 429(6988):180184 , 2004 .beniamino guerra and jess gmez - gardees .annealed and mean - field formulations of disease dynamics on static and adaptive networks . 
, 82(3):035101 , 2010 . jorge m pacheco , arne traulsen , and martin a nowak . coevolution of strategy and structure in complex networks with dynamical linking . , 97(25):258103 , 2006 . richard durrett . . duxbury press , belmont , ca , usa , 2005 . c. taylor and m. a. nowak . evolutionary game dynamics with non - uniform interaction rates . , 69:243-252 , 2006 . linda q gao and herbert w hethcote . disease transmission models with density - dependent demographics . , 30(7):717-731 , 1992 . f vazquez , ma serrano , and m san miguel . rescue of endemic states in interconnected networks with adaptive coupling . , 2015 . oliver gräser , pm hui , and c xu . separatrices between healthy and endemic states in an adaptive epidemic model . , 390(5):906-913 , 2011 . ilker tunc and leah b shaw . effects of community structure on epidemic spread in an adaptive network . , 90(2):022801 , 2014 . thomas m liggett . , volume 324 . springer - verlag , new york , 1999 . ingemar nåsell . on the quasi - stationary distribution of the stochastic logistic epidemic . , 156(1):21-40 , 1999 . da zhou , bin wu , and hao ge . evolutionary stability and quasi - stationary strategy in stochastic evolutionary game dynamics . , 264(3):874-881 , 2010 . hisashi ohtsuki , christoph hauert , erez lieberman , and martin a nowak . a simple rule for the evolution of cooperation on graphs and social networks . , 441(7092):502-505 , 2006 .

figure caption : the model degenerates to the conventional sis model ; there is only one internal equilibrium and it is stable provided . here we solely adjust such that the duration time of links is shorter than the other two types of links , i.e. . the three panels show the phase diagrams in the -plane . the quality of the si control is sensitively dependent on the reference uniform breaking probabilities . ( a ) when they are small ( ) , decreasing the interaction between susceptible and infected individuals makes the phase diagram change from endemic state ( red ) to bistable state ( yellow ) and then to final extinct state ( blue ) . ( b ) when , there is no bistable state ( yellow ) any more . this implies it becomes hard to eradicate the disease when the population is even more mobile . ( c ) the right panel shows that the si control is incapable of eradicating the disease provided the population is intrinsically highly mobile ( ) .

figure caption : here , and . the disease can not be eradicated by the si control , and the level of infection in the equilibrium state declines very slowly ( from to ) by increasing ( from to ) .

figure caption : here the disease is solely controlled by increasing the duration time of the social ties between susceptibles , i.e. . these phase diagrams are similar for all the reference uniform interaction rates : i ) for small , the ss control makes the disease change from endemic state ( red ) directly to extinction state ( blue ) ; ii ) for large , the ss control can still eradicate the disease , but the phase diagram has to pass from endemic to bistable state ( yellow ) and finally to extinction ( blue ) .

figure caption : here , and . increasing the interaction time between susceptibles ( i.e. decreasing ) effectively eradicates the disease . in particular , for ( bistability ) , the disease dies out provided the initial infection is few in number . even when the initial number of infected is large , the final level of infection is still lower than the case with . for , the disease is eradicated no matter what the initial state is .
epidemic control is of great importance for human society . adjusting interacting partners is an effective individualized control strategy . intuitively , it is done either by shortening the interaction time between susceptible and infected individuals or by increasing the opportunities for contact between susceptible individuals . here , we provide a comparative study on these two control strategies by establishing an epidemic model with non - uniform stochastic interactions . it seems that the two strategies should be similar , since shortening the interaction time between susceptible and infected individuals somehow increases the chances for contact between susceptible individuals . however , analytical results indicate that the effectiveness of the former strategy sensitively depends on the infectious intensity and the combinations of different interaction rates , whereas the latter one is quite robust and efficient . simulations are shown in comparison with our analytical predictions . our work may shed light on the strategic choice of disease control .
wheeler - feynman electrodynamics ( wf ) describes the classical , electromagnetic interaction of a number of charges by action - at - a - distance . in contrast to maxwell - lorentz electrodynamics the theory contains no fields and is free from ultraviolet divergences originating from ill - defined self - fields . electrodynamics without fields was considered as early as 1845 by gauss and continued to be of interest , e.g. . in particular , it led wheeler and feynman to a satisfactory description of radiation damping : accelerated charges are supposed to radiate and to lose energy thereby . how can this be accounted for in a theory without fields ? to answer this question wheeler and feynman introduced a so - called _ absorber condition _ which needs to be satisfied by the world - lines of all charges , and they argue that it is satisfied on thermodynamic scales . under the absorber condition it is straightforwardly seen that the motion of any selected charge is described effectively by the lorentz - dirac equation , an equation dirac derived to explain the phenomenon of radiation damping ; see our short discussion in . the advantage in wheeler and feynman 's derivation of the lorentz - dirac equation is that it bears no divergences in the defining equations which provoke unphysical , so - called _ run - away _ solutions . at the same time wheeler and feynman 's argument is able to explain the irreversible nature of radiation phenomena . these features make wf the most promising candidate for arriving at a mathematically well - defined theory of relativistic , classical electromagnetism . however , mathematically wf is completely open . it is not an initial value problem for differential equations because the wf equations contain time - like advanced and retarded state - dependent arguments for which no theory of existence or uniqueness of solutions is available . apart from two exceptions discussed below , it is not even known whether in general there are solutions at all . in tensor notation , wf is defined by

m \ddot z_i^\mu(\tau) = \sum_{k\neq i} \frac{e_i}{2} \left[ F[z_k]_{+}^{\mu\nu}(z_i(\tau)) + F[z_k]_{-}^{\mu\nu}(z_i(\tau)) \right] \dot z_{i,\nu}(\tau) ,   ( [ eqn : cea wf ] )

where

A[z_k]_{\pm}^{\mu}(x) := e_k \, \frac{\dot z_k^{\mu}(\tau_{k,\pm}(x))}{(x - z_k(\tau_{k,\pm}(x)))_\nu \, \dot z_k^{\nu}(\tau_{k,\pm}(x))} ,   ( [ eqn : cea lienard wiechert fields ] )

with F[z_k]_{\pm}^{\mu\nu} = \partial^{\mu} A[z_k]_{\pm}^{\nu} - \partial^{\nu} A[z_k]_{\pm}^{\mu} , and the world line parameters \tau_{k,\pm}(x) are implicitly defined through

(x - z_k(\tau_{k,\pm}(x)))_\mu \, (x - z_k(\tau_{k,\pm}(x)))^\mu = 0 , \qquad z_k^0(\tau_{k,\pm}(x)) \gtrless x^0 .   ( [ eqn : time ret ] )

here , the world lines \tau \mapsto z_i(\tau) of the charges are parametrized by proper time and take values in minkowski space equipped with the metric tensor . we use einstein 's summation convention for greek indices , and the notation x = ( x^0 , \mathbf{x} ) in order to distinguish the time component from the spatial components . the overset dot denotes differentiation with respect to the world - line parametrization . for simplicity each particle has the same inertial mass ( all presented results however hold for charges having different masses , too ) . the coupling constant e_i denotes the charge of the i - th particle . if one were to insist on using field theoretic language then one may also say that equations ( [ eqn : cea wf ] ) describe the interaction between the charges via their advanced and retarded liénard - wiechert fields F[z_k]_{+} , F[z_k]_{-} . given a space - time point and a time - like world line , i.e.
one fulfilling , the solutions \tau_{k,+}(x) , \tau_{k,-}(x) are unique and given by the intersection of the forward and backward light - cone of the space - time point and the world - line , respectively ; see figure [ fig : advance_delay ] . the acceleration on the left - hand side of the wf equations depends through ( [ eqn : time ret ] ) on time - like advanced as well as retarded data of all the other world lines ; see figure [ fig : wf ladder ] . the delay is unbounded , and by ( [ eqn : cea lienard wiechert fields ] ) the right - hand side of ( [ eqn : cea wf ] ) again depends on the acceleration . it is noteworthy that in the early 1900s the mathematician and philosopher a. n. whitehead developed a philosophical view on nature which rejects `` initial value problems '' as fundamental descriptions of nature . he developed his own gravitational theory and motivated synge 's study of what is now referred to as the synge equations , i.e.

m \ddot z_i^\mu(\tau) = \sum_{k\neq i} e_i \, F[z_k]_{-}^{\mu\nu}(z_i(\tau)) \, \dot z_{i,\nu}(\tau) .   ( [ eqn : synge ] )

the synge equations share many difficulties with the wf equations but , as we shall show , are simpler to handle because they only depend on time - like retarded arguments . we would like to remark that , independent of whitehead 's philosophy , it seems to be the case that often fields are introduced to formulate a physical law , even though it may have a delay character , as an initial value problem . maxwell - lorentz electrodynamics is a prime example . however , these very fields are then often the source of singularities of the theory , quantum or classical . whitehead 's idea might therefore point towards a fruitful new reflection about the character of physical laws . the books provide a beautiful overview on the topic of delay differential equations . however , for the wf equations as well as similar types of delay differential equations with advanced and retarded arguments of unbounded delay there are almost no mathematical results available . the problem one usually deals with in the field of differential equations without delay is the extension of local solutions to a maximal domain and avoiding critical points by introducing a notion of typicality of initial conditions . for wf the situation is dramatic . because of the unbounded delay , the notion of local solutions does not make sense , so that the issue is not local versus global existence and also not explosion or running into singular points of the vector field . the issue is simply : _ do solutions exist ? _ and _ what kind of data of the solutions is necessary and/or sufficient to characterize solutions uniquely ? _ to put our work in perspective we call attention to the following literature : angelov studied existence of synge solutions in the case of two equal point - like charges and three dimensions . under the assumption of an extra condition on the minimal distance between the charges to prevent collisions , he proved existence of synge solutions on the positive time half - line . uniqueness is only known in a special case in one dimension for two equal charges initially having sufficiently large opposite velocities and sufficiently large space - like separation . under these conditions driver has shown that the synge solutions are uniquely characterized by initial positions and momenta .
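to see why ( [ eqn : time ret ] ) is an implicit , state - dependent delay rather than a fixed lag , the following sketch computes the retarded parameter numerically for a given world line by bisection on the light - cone condition ; the trajectory , the units ( c = 1 ) and the function name are illustrative choices , not taken from the text .

```python
import numpy as np

def retarded_time(t, x, traj, t_min=-1e6):
    """Solve t - t_ret = |x - z(t_ret)| for t_ret by bisection (units with c = 1).

    traj : callable returning the particle position z(s) as a 3-vector.
    g(s) = (t - s) - |x - z(s)| is positive far in the past and negative near
    s = t; for a sub-luminal trajectory it is strictly decreasing, so the root
    is unique and bisection applies.
    """
    g = lambda s: (t - s) - np.linalg.norm(x - traj(s))
    lo, hi = t_min, t
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative example: a charge oscillating along the x-axis with sub-luminal speed
traj = lambda s: np.array([0.3 * np.sin(s), 0.0, 0.0])
x_obs = np.array([5.0, 0.0, 0.0])
t_ret = retarded_time(t=0.0, x=x_obs, traj=traj)
print(t_ret, "light-cone check:", -t_ret - np.linalg.norm(x_obs - traj(t_ret)))
```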
with regards to wf two types of special solutionsare known to exist : first , the explicitly known schild solutions composed out of specially positioned charges revolving around each other on stable orbits , and second , the scattering solutions of two equal charges constrained on the straight line .the latter result rests on the fact that the asymptotic behavior of world - lines on the straight line is well controllable ( due to this special geometry the acceleration dependent term on the right - hand side of ( [ eqn : cea wf ] ) vanishes ) .uniqueness of wf solutions was proven in one dimension with zero initial velocity and sufficiently large separation of two equal charges . in a recent work a well - defined analogue of the formal fokker variational principle for two charges restricted to finite intervals was proposed .it is shown that its minima , if they exist , fulfill the wf equations on these finite times intervals .furthermore , there are conjectures about uniqueness of wf solutions e.g. . while driver s result points to the possibility of uniqueness by initial positions and momenta , bauer s work suggests to specify asymptotic positions and momenta .furthermore , a wf toy model for two charges in three dimensions was given in for which a sufficient condition for a unique characterization of all its ( sufficiently regular ) solutions is the prescription of strips of time - like world lines long enough such that at least for one point on each strip the right - hand side of the wf equation is well - defined and the wf equation is fulfilled .our focus is on the bare existence of solutions of wf , i.e. on the question : _ do solutions exist ? _ for that question the issue that in a dynamical evolution of a system of point - like charges catastrophic events may happen is secondary ( compare the famous -body problem of classical gravitation ) .more on target , such considerations would have to invoke a notion of typicality of trajectories , so that catastrophic events can be shown to be atypical .but that would require not only existence of solutions but also a classification of solutions .we are far from that .to avoid such issues at this early state of research we regard as introduced in instead of wf , i.e. we consider extended rigid charges described by the charge distributions , , where singularities do not even occur when charges pass through each other . for our mathematical analysisit is convenient to express in coordinates where it takes the form ({{\mathbf{x}}})+{{\mathbf{v}}}({{\mathbf{q}}}_{i , t})\wedge{{\mathbf{b}}}_t[{{\mathbf{q}}}_k,{{\mathbf{p}}}_k]({{\mathbf{x}}})\right ) \end{split}\end{aligned}\ ] ] for and ({{\mathbf{x}}})\\ { { \mathbf{b}}}_t^{(e_{+},e_{-})}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i]({{\mathbf{x } } } ) \end{pmatrix}=\sum_\pm 4\pi e_\pm \int ds{{\int d^3y\ ; } } k_{t - s}^\pm({{\mathbf{x}}}-{{\mathbf{y}}})\begin{pmatrix } -\nabla\varrho_i({{\mathbf{y}}}-{{\mathbf{q}}}_{i , s})-\partial_s\left[{{\mathbf{v}}}({{\mathbf{p}}}_{i , s})\varrho_i({{\mathbf{y}}}-{{\mathbf{q}}}_{i , s})\right]\\ \nabla\wedge\left[{{\mathbf{v}}}({{\mathbf{p}}}_{i , s})\varrho_i({{\mathbf{y}}}-{{\mathbf{q}}}_{i , s})\right ] \end{pmatrix}\end{aligned}\ ] ] where as in ( [ eqn : wf equation written out ] ) most of the time we drop the superscript .here , are the advanced and retarded green s functions of the dalembert operator .the partial derivative with respect to time is denoted by , the gradient by , the divergence by , and the curl by . 
at time the charge for is situated at position in space , has momentum and carries the classical mass .the geometry of the rigid charges are described by the smooth charge densities of compact support , i.e. , for . using the notation and and replacing by the three dimensional dirac delta distribution times one retrieves from ( [ eqn : wf equation written out ] ) the wf equations ( [ eqn : cea wf ] ) for and the synge equations ( [ eqn : synge ] ) for . as discussed in theorem [ thm : lwfields ], the expression ( [ eqn : wf fields def ] ) for the choices for and is the advanced and retarded linard - wiechert field , respectively .the square brackets ] the trajectories are prescribed by hand . the fixed point argument -if that could be formulated - would then run on the time interval ] for arbitrary large , i.e. we show how one can circumvent the first difficulty albeit gaining conditionally solutions only .the extension to global solutions would require good control on the asymptotic behavior ( as e.g. in in the case of the motion on the straight line ) , which we do not pursue here .we stress , however , that the extension to infinite time intervals is an interesting and worthwhile task , joining the results of this paper with the removal technique for introduced in . the key ideato define a fixed point map on time intervals ] for all times because their difference is a solution to the homogeneous maxwell equations ( i.e. ( [ eqn : maxwell equations ] ) for ) which is zero ; compare ( [ eqn : crucial ] ). given the equality of fields for all times , equations ( [ eqn : lorentz force ] ) turn into the equations ( [ eqn : wf equation written out ] ) , and hence , the charge trajectories of the solution fulfilling ( [ eqn : special intial fields ] ) solve the equations . in other words , the subset of sufficiently regular solutions of that correspond to initial conditions fulfilling ( [ eqn : special intial fields ] ) have charge trajectories .we shall show that any once differentiable charge trajectory with bounded momenta and accelerations produces fields fields ( [ eqn : wf fields def ] ) that are regular enough to serve as initial conditions for .this covers all physically interesting solutions , including the known schild solutions .the advantage gained from this change of viewpoint is that is given in terms of an initial value problem .therefore , instead of working directly with the functional equations it will be more convenient to formulate a fixed point procedure for to find initial fields for which ( [ eqn : special intial fields ] ) holds .we now give an overview of our main results .let be the set of once differentiable charges trajectories having bounded momenta and accelerations and fulfilling equations ( [ eqn : wf equation written out])-([eqn : wf fields def ] ) ; see definition [ def : wf sols ] below .our first results is : [ thm : wf initial conditions ] for , and we define =({{\mathbf{q}}}_{i , t},{{\mathbf{p}}}_{i ,t},{{\mathbf{e}}}^{(e_+,e_-)}_{t}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i],{{\mathbf{b}}}^{(e_+,e_-)}_{t}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i])_{1\leq i\leq n}.\end{aligned}\ ] ] the following statements are true : a. for any we have \in d_w(a^\infty) ] holds .c. for any following map is injective : .\end{aligned}\ ] ] hence , for any choice of the coupling parameters we know that : ( i ) the charge trajectories in produce sufficiently regular initial fields for .( ii ) the expression ( [ eqn : wfsol ] ) coincides with a solution . 
(iii ) each solutions of ( [ eqn : wf equation written out])-([eqn : wf fields def ] ) can be identified by positions , momenta and fields ,{{\mathbf{b}}}^{(e_+,e_-)}_{t}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i])_{1\leq i\leq n} ] at time and use them as initial fields for .the charge trajectories of the time - evolved solutions then obey the synge equations for times .this way we shall prove : [ thm : exist and uni of synge ] let , and .a. ( existence ) there exists an extension such that the concatenation is an element of ) ] for any and suppose further that it solves the equations ( [ eqn : wf equation written out])-([eqn : wf fields def ] ) for all times . then implies for all . given theorem [ thm :wf initial conditions ] this existence and uniqueness result is not hard to prove , and the reason for this is that we only ask for solutions on the half - line .in contrast to , the notion of local solutions again makes sense since the histories simply act as prescribed external potentials .however , if we ask for solutions on whole we again face the problem as in , i.e. that by the unboundedness of the delay the notion of local solutions loses its meaning ( a conceivable way around this without necessarily sacrificing uniqueness is to give initial conditions for ) .we now come to the main part of this work where we discuss the existence of solutions . from now on we shall keep the choice , fixed although all the results hold also for any choices of .we take on the mentioned idea of conditional solutions : for given initial positions and momenta of the charges at we look for solutions on time intervals ] of the dynamics we need to prescribe how the charge trajectories continue for times because due to the delay the dynamics within ] denotes the solution of the maxwell equations for initial fields at time corresponding to a prescribed trajectory with a charge distribution ; see definition [ def : maxwell time evolution ] below .note that the above set of equations is a natural restriction of the dynamics onto the time interval ] .b. compute the advanced and retarded fields ,({{\mathbf{q}}}_i,{{\mathbf{p}}}_i)](t,\pm t ) \end{aligned}\ ] ] corresponding to the charge trajectories computed in ( i ) with prescribed initial fields ] .+ note that the boundary fields ] whose charge trajectories fulfill the conditional equations ( [ eqn : bwf equation written out])-([eqn : wf with boundary fields ] ) ; see definition [ def : wf sol for finite times ] and theorem [ thm : the map st ] below .we prove : [ thm : st has a fixed point ] let be given . for each finite the map has a fixed point .the essential ingredient in the proof of this result is the good nature of the dynamics which implies lemma [ lem : estimates for st ] below .here we rely heavily on the work done in .we close with a discussion of these fixed points . recall that the synge solutions on the time half - line for times sufficiently close to give rise to interaction with the given past trajectories on ] yet and , therefore , one should be more curious as the described scenario in the case of the synge equations could happen in the case of the wf equations in the past as well as the in future of .if the solution on ] only `` see '' the prescribed boundary fields ; see figure [ fig : wf extreme ] .the following result makes sure that given at least for some solutions this is not the case because on an interval ] and not with the given boundary fields ; i.e. 
the case as shown in figure [ fig : wf ] .we prove : [ thm : existence of l ] choose .then : a. the absolute values of the velocities of all charges of any solution with any initial data such that have an upper bound with .b. let be the smallest radius such that the support of lies within a ball around the origin with radius , i.e. , for all , and further .for sufficiently small there exist such that and any fixed point of gives rise to a solution (t,0) ] solve the equations ( [ eqn : wf equation written out])-([eqn : wf fields def ] ) .the form of in ( [ eqn : l ] ) is a direct consequence of the geometry as displayed in figure [ fig : wf ] and the nature of the free maxwell time evolution , see lemma [ lem : shadows of boundary fields ] , which can be seen from a direct computation using harmonic analysis .the proof further employs a very rough grnwall estimate coming from the dynamics to estimate the velocities of the charges during the time interval ] can uniquely be extended at to become a function such that for all and for all .[ rem : waveequation ] in the future we will denote the unique extension of by the same symbol .it is called the _ propagator _ of the homogeneous wave equation . 1 a direct consequence of this lemma is kirchoff s formula : [ cor : kirchoff ]let .the mapping defined by solves the homogeneous wave equation in the strong sense and for initial values and .we construct explicit solutions of the maxwell equations along the following line of thought : in the distribution sense every solution to the maxwell equations ( [ eqn : maxwell_equations_charge_current ] ) is also a solution to having initial values to make formulas more compact we sometimes abbreviate the pair of electric and magnetic fields in the form and let operators act thereon component - wisely . with the help of the green s functions from definition [ def : greens_dalembert ]one may guess the general form of any solution to these equations : where is a solution of the homogeneous wave equation , i.e. . considering the forward as well as backward time evolution we regard two different kinds of initial value problems : a. initial fields are given at some time and propagated to a time .b. initial fields are given at some time and propagated to a time .the kind of initial value problem posed will then determine and the corresponding green s function . for ( i )we use and for ( ii ) we use which are uniquely determined by and .furthermore , in the case of time - like charge trajectories and lemma [ lem : greens_function_dalembert ] implies terms of this kind can simply be included in the homogeneous part of the solution . this way we arrive at two solution formulas .one being suitable for our forward initial value problem , i.e. , and the other suitable for the backward initial value problem , i.e. , as a last step one needs to identify the homogeneous solutions which satisfy the given initial conditions ( [ eqn : wave_equations_initial_values ] ) .corollary [ cor : kirchoff ] provides the explicit formula : therefore , using the propagator and a substitution in the integration variable both initial value problems fulfill for all : [ thm : maxwell_solutions ] let .a. given fulfilling the maxwell constraints and for any , the mapping defined by for all is valued , infinitely often differentiable and a solution to ( [ eqn : maxwell_equations_charge_current ] ) with initial value . b. for all we have and .clearly , one needs less regularity of the initial values in order to get a strong solution . 
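as an aside , kirchhoff s formula can be checked numerically ; the sketch below is purely illustrative ( the test data , the monte carlo sample size and the finite - difference step are ad hoc choices , not taken from the text ) . the solution of the homogeneous three - dimensional wave equation with data ( u_0 , u_1 ) at time zero is u(t , x) = d/dt [ t * mean of u_0 over the sphere |y - x| = t ] + t * mean of u_1 over that sphere ; here the spherical means are approximated by monte carlo sampling with one common set of directions , so that the central difference in t does not amplify the sampling noise .

```python
import numpy as np

rng = np.random.default_rng(0)

def kirchhoff(u0, u1, t, x, n=200000, dt=1e-3):
    """kirchhoff's formula u(t,x) = d/dt[ t * S_t u0(x) ] + t * S_t u1(x),
    where S_r f(x) is the mean of f over the sphere |y - x| = r.
    the spherical means reuse one set of monte carlo directions and the
    time derivative is replaced by a central difference."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)      # uniform directions on the sphere
    m0 = lambda r: r * np.mean(u0(x + r * v))           # r times the spherical mean of u0
    return (m0(t + dt) - m0(t - dt)) / (2 * dt) + t * np.mean(u1(x + t * v))

# plane-wave check: u(t, x) = cos(x_1 - t) solves the wave equation (c = 1)
# with u(0, x) = cos(x_1) and u_t(0, x) = sin(x_1)
u0 = lambda y: np.cos(y[..., 0])
u1 = lambda y: np.sin(y[..., 0])
x = np.array([0.3, -0.1, 0.2])
print(kirchhoff(u0, u1, 0.7, x), np.cos(x[0] - 0.7))    # agree up to monte carlo error
```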
with regard to ,however , we will only need to consider smooth initial values .the explicit formula of the solutions ( after an additional partial integration ) was already found in [(a.24),(a.25 ) ] where it was derived with the help of the fourier transform ( there seems to be a misprint in equation ( a.24 ). however , ( a.20 ) from which it is derived is correct ) . for the rest of this paper the charge - current densities we will consider are the ones generated by a moving rigid charge on time - like trajectories : [ def : charge trajectory ] + a. we call any map a charge trajectory and denote with and the position and momentum of the charge , respectively .its velocity at time is given by .b. we collect all time - like charge trajectories in the set c. and all strictly time - like charge trajectories in the set where we use the abbreviation .furthermore , we use the notation and define the cartesian products and . for and for all which we call the induced charge - current density of .clearly , so that theorem [ thm : maxwell_solutions ] applies : [ def : maxwell time evolution ] given a charge trajectory which induces we denote the solution of the maxwell equations ( [ eqn : maxwell_equations_charge_current ] ) given by theorem [ thm : maxwell_solutions ] and initial values by (t , t_0):=f_t.\ ] ] one finds the following special solutions : [ thm : lwfields ] let such that and as well as for some and all are fulfilled .we distinguish two cases denoted by or and assume that for all , or ) ] are a solution to the maxwell equations ( [ eqn : maxwell_equations_charge_current ] ) including the maxwell constraints for all . we immediately get a simple bound on the linard - wiechert fields : [ cor : lw_estimate ]let .furthermore , assume there exists an such that .then the linard - wiechert fields ( [ eqn : lw_e_integrand ] ) and ( [ eqn : lw_b_integrand ] ) fulfill : for any multi - index there exists a constant such that for all , holds .next , we briefly summarize the results of on the equations ( [ eqn : maxwell equations])-([eqn : lorentz force ] ) : [ def : weighted spaces ] we define the class of weight functions for any and open we define the space of weighted square integrable functions by for regularity arguments we need more conditions on the weight functions .for we define and as computed in , .the space of initial values is then given by : [ def : phasespace ] we define any element consists of the components , i.e. positions , momenta and electric and magnetic fields and for each of the charges . if not noted otherwise , any spatial derivative will be understood in the distribution sense , and the latin indices shall run over the charge labels . for , open set and we define the following sobolev spaces which are equipped with the inner products respectively .we use the multi - index notation , , where denotes the derivative w.r.t . to the -th standard unit vector in . in order to appreciate the structure of the ml equations we will rewrite them using the following operators and : [ def : operator_a ] for a we defined and by the expression on their natural domain furthermore , for any we define [ def : operator_j ]together with we define by for .note that is well - defined because , . with these definitions , the lorentz force law ( [ eqn : lorentz force ] ) , the maxwell equations ( [ eqn : maxwell equations ] ) without the maxwell constraints take the form the two main theorems are : [ thm : globalexistenceanduniqueness ] for , and the following holds : a. 
_ ( global existence ) _ there exists an -times continuously differentiable mapping which solves ( [ eqn : dynamic_maxwell ] ) for initial value .furthermore , it holds for all and , b. _ ( uniqueness and growth ) _ any once continuously differentiable function for some open interval which fulfills for an , and which also solves the equation ( [ eqn : dynamic_maxwell ] ) on , has the property that holds for all .in particular , given , there exists such that for with \subset\lambda ] given in ( [ eqn : wf fields def ] ) fulfill the maxwell equations ( [ eqn : maxwell equations ] ) including the maxwell constraints for all and .hence , using ( i ) , the equality =a\varphi^{(e_+,e_-)}_t[({{\mathbf{q}}}_{i},{{\mathbf{p}}}_{i})_{1\leq i\leq n}]+j(\varphi^{(e_+,e_-)}_t[({{\mathbf{q}}}_{i},{{\mathbf{p}}}_{i})_{1\leq i\leq n}]),\ ] ] holds true ( recall the notation in section [ sec : summary of part i ] before equation ( [ eqn : dynamic_maxwell ] ) ) . due to (i ) also 1 \right] ] .hence , ] for all .however , , and is continuously differentiable so that we only need to check that is continuously differentiable at .now according to the assumption , at time the past history solves equations ( [ eqn : wf equation written out])-([eqn : wf fields def ] ) for and , and furthermore , theorem [ thm : lw solve m eq ] states that ,{{\mathbf{b}}}_{t}[{{\mathbf{q}}}^-_i,{{\mathbf{p}}}^-_i])_{1\leq i\leq n} ] for and any so that ( [ eqn : ret fields ] ) is well - defined . with the help of theorem [ thm : globalexistenceanduniqueness ] for all compute =a(\varphi_{t}-\varphi^+_{t})\end{aligned}\ ] ] because does only depend on the charge trajectories .the only solution to this equation is ; cf .definition [ def : wt ] .hence , ,{{\mathbf{b}}}_{t}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i]) ] because we want to impose certain regularity conditions at the connection times . since these trajectories will be generated within the iteration of by the time evolution this dependence can be expressed simply by the dependence on the initial data .we shall therefore use the notation ] .b. the map ] and (t,0) ] denotes the solution , cf .definition [ def : ml time evolution ] , for initial value .a. the map is well - defined .b. given , setting ] . c. for any such that ] is a once continuously differentiable map . by properties of stated in (* lemma 2.22 ) we know that is locally lipschitz continuous for any . by projecting onto field space , cf .definition [ def : awj ] , we obtain that also is locally lipschitz continuous .hence , by the group properties of we know that for any is continuous . furthermore , is closed .this implies the commutation as this holds for any , .furthermore , by definition [ def : boundary fields ] the term ] by the group properties .hence , the map is well - defined as a map .\(ii ) for let denote the charge trajectories of (t,0) ] are in and obey the maxwell constraints by the definition of .so we can apply lemma [ lem : connection maxwell time and wt j ] which states for ] .by ( ii ) this implies .let and be defined as in the proof of ( ii ) which now is infinitely often differentiable as since .we shall show later that the following integral equality holds )+\int_{\pm t}^tds\;w_{t - s}{\mathtt{f}}j(\varphi_s)\right ] \end{aligned}\ ] ] for all ; note that (t,0) ] , i.e. 
+\int_{\pm t}^tds\;{{\mathsf w}}_{-s}{{\mathsf j}}(\varphi_s)\right ] , \end{aligned}\ ] ] we find \big ) + \frac{1}{2}\sum_\pm w_t\int_{\pm t}^0ds\;w_{-s}\big(0,{{\mathsf j}}(\varphi_s)\big)+\int_0^tds\;w_{t - s}j(\varphi_s ) .\end{aligned}\ ] ] by the same reasoning as in ( i ) we may commute with the integral .this together with and proves the equality ( [ eqn : spxt integral equation ] ) for all which concludes the proof .[ def : coulomb field ] define , ] and (0,-\infty)=\left(\int d^3z\ ; \varrho_{i}(\cdot-{{\mathbf{z}}})\frac{{{\mathbf{z}}}}{\|{{\mathbf{z}}}\|^3},0\right ) .\end{aligned}\ ] ] note that the equality on the right - hand side of ( [ eqn : exp coulomb ] ) holds by theorem [ thm : lwfields ] .we need to show the properties ( i)-(v ) given in definition [ def : boundary fields ] .fix and .let such that and set .furthermore , we define (t,0) ] and that is well - defined .note that the right - hand side depends only on which is bounded by since the maximal velocity is bounded by one , i.e. the speed of light .hence , property ( i ) holds for instead of showing property ( ii ) , we prove the stronger property ( v ) . forthis let such that and set (t,0) ] .it holds \|_{{{\mathcal{f}}}_w^n(b_\tau^c(0))}&\leq \sum_{i=1}^n\sum_{|\alpha|\leq n}\left\|d^\alpha{{\mathbf{e}}}^c(\cdot-{{\mathbf{q}}}_{i , t})\right\|_{l^2_w(b_\tau^c(0))}\\ & \leq \sum_{i=1}^n\sum_{|\alpha|\leq n}\left(1+\namer{cw}\|{{\mathbf{q}}}_{i , t}\|\right)^{\frac{\namer{pw}}{2}}\left\|d^\alpha{{\mathbf{e}}}^c\right\|_{l^2_w(b_\tau^c({{\mathbf{q}}}_{i , t}))}. \end{aligned}\ ] ] we use again that the maximal velocity is smaller than one , i.e. .hence , for define such that we can estimate the norm by the norm and yield \|_{{{\mathcal{f}}}_w^n(b_\tau^c(0))}\leq \sum_{i=1}^n\sum_{|\alpha|\leq n}\left(1+\namer{cw}\|{{\mathbf{q}}}_{i , t}\|\right)^{\frac{\namer{pw}}{2}}\left\|d^\alpha{{\mathbf{e}}}^c\right\|_{l^2_w(b_{r(\tau)}^c(0))}\xrightarrow[\tau\to\infty]{}0\ ] ] this concludes the proof .when looking for global solutions , in view of ( [ eqn : wf with boundary fields ] ) and ( [ eqn : wf boundary fields ] ) , the boundary fields can be seen as a good guess of how the charge trajectories continue outside of the time interval ] ( the result is the lorentz boosted coulomb field ) .such boundary fields are also in since the derivative for ] ; see ( ii ) of theorem [ thm : globalexistenceanduniqueness ] .\(i ) as shown in ( * ? ? ?* lemma 2.19 ) , on generates a -contractive group ; cf .definition [ def : wt ] .this property is inherited from on which generates the group .hence , and commute for any which implies for all that for ( ii ) let . using then the definition of , cf .definition [ def : operator_j ] and [ def : awj ] , we find by applying the triangular inequality one finds a constant such that whereas in the last step we used the fact that the maximal velocity is smaller than one . using the properties of the weight function , cf .definition [ def : weighted spaces ] , we conclude collecting these estimates we yield that claim ( ii ) holds for claim ( iii ) is shown by repetitively applying estimate of ( * ? ? ?* lemma 2.22 ) on the right - hand side of \|_{{{\mathcal{h}}}_w } \end{aligned}\ ] ] which yields a constant where is given in the proof of ( * ? ? ?* lemma 2.22 ) .this concludes the proof .a. there is a such that for all we have \|_{{{\mathcal{f}}}_w^n}\leq \constr{st radius const}^{(n)}(t,\|p\| ) .\end{aligned}\ ] ] b. ] in since here .hence , the claim ( ii ) is true .\(iii ) let now . 
by ( v ) of definition[ def : boundary fields ] the term then behaves according to together with the estimate ( [ eqn : st lip est 2 ] ) this proves claim ( ii ) for since in our case .[ rem : uniqueness ] let , . then claim ( iii ) of lemma [ lem : estimates for st ] has an immediate consequence : for sufficiently small the mapping has a unique fixed point , which follows by banach s fixed point theorem .consider therefore , then ( i ) of lemma [ lem : estimates for st ] states \|_{{{\mathcal{f}}}_w^1}\leq \constr{st radius const}^{(1)}(t,\|p\|)=:r .\end{aligned}\ ] ] hence , the map restricted to the ball with radius around the origin is a nonlinear self - mapping .claim ( iii ) of lemma [ lem : estimates for st ] states for all and that -s_t^{p , x^\pm}[\widetilde f]\|_{{{\mathcal{f}}}_w^1}&\leq t \constr{st lipschitz const res}(t,\|p\|,\|f\|_{{{\mathcal{f}}}_w},\|\widetilde f\|_{{{\mathcal{f}}}_w})\|f-\widetilde f\|_{{{\mathcal{f}}}_w}\\ & \leq t\constr{st lipschitz const res}(t,\|p\|,r , r)\|f-\widetilde f\|_{{{\mathcal{f}}}_w}. \end{aligned}\ ] ] where we have also used that is a continuous and strictly increasing function of its arguments .hence , for sufficiently small we have such that is a contraction on .however , for larger we loose control on the uniqueness of the fixed point .this highlights an interesting aspect of dynamical systems .e.g. for the dynamics it means that solutions are still uniquely characterized not only by newtonian data and fields at time but also by specifying newtonian cauchy data at time and fields at a different time .the maximal will in general be inverse proportional to the lipschitz constant of the vector field .fix .given a finite , and claim ( i ) of lemma [ thm : the map st ] states for all \|_{{{\mathcal{f}}}_w^1}\leq\|s_t^{p , x^\pm}[p , f]\|_{{{\mathcal{f}}}_w^3}\leq \constr{st radius const}^{(3)}(t,\|p\|)=:r .\end{aligned}\ ] ] let be the closed convex hull of \;|\ ; f\in { { \mathcal{f}}}_w^1\}\subset b_r(0)\subset{{\mathcal{f}}}_w^1 ] , ; note that is an element of and therefore also of for any .we define for the electric and magnetic fields .\end{aligned}\ ] ] recall the definition of the norm of , cf .definition [ def : fwn ] , for some and therefore , since on is closed , has an convergent subsequence if and only if all the sequences , for and have a common convergent subsequence in . to show that this is the case we first provide the bounds needed for condition ( i ) of lemma [ lem : precompactness ] .estimate ( [ eqn : a3w radius ] ) implies that for all .furthermore , by ( ii ) of lemma [ thm : the map st ] the fields are the fields of a maxwell solution at time zero , and hence , by theorem [ thm : maxwell_solutions ] fulfill the maxwell constraints for which read also by theorem [ thm : maxwell_solutions ] , is in so that for every estimate ( [ eqn : st first bound ] ) implies for all that where we made use of the properties of the weight .note that the right - hand does not depend on . therefore , all the sequences , , , for and are uniformly bounded in .this ensures condition ( i ) of lemma [ lem : precompactness ] .second , we need to show that all the sequences , for and decay uniformly at infinity to meet condition ( ii ) of lemma [ lem : precompactness ] . 
define 1 ] by , .using lemma [ thm : the map st](ii ) and afterwards theorem [ thm : maxwell_solutions ] we can write the fields as (0,\pm t)\\ & = \frac{1}{2}\sum_\pm\bigg[\begin{pmatrix } \partial_t & \nabla\wedge\\ -\nabla\wedge & \partial_t \end{pmatrix } k_{t\mp t}*\begin{pmatrix } { { \mathbf{e}}}^{(m),\pm}_{i,\pm t}\\ { { \mathbf{b}}}^{(m),\pm}_{i,\pm t } \end{pmatrix } + k_{t\mp t}*\begin{pmatrix } -4\pi { { \mathbf{j}}}^{(m)}_{i,\pm t}\\ 0 \end{pmatrix } \\&\quad+ 4\pi \int_{\pm t}^{t } ds\ ; k_{t - s } * \begin{pmatrix } -\nabla & - \partial_s \\ 0 & \nabla\wedge \end{pmatrix } \begin{pmatrix } \rho^{(m)}_{i , s}\\ { { \mathbf{j}}}^{(m)}_{i , s } \end{pmatrix } \bigg]_{t=0 } = : \terml{uni decay 1}+\terml{uni decay 2}+\terml{uni decay 3 } \end{aligned}\ ] ] where and for all . we shall show that there is a such that for all the terms and are point - wise zero on . recalling the computation rules for from lemma [ lem : greens_function_dalembert ] we calculate for term ({{\mathbf{x}}})\|_{{{\mathbb{r}}}^3}\leq 4\pi t\underset{\partial b_t{({{\mathbf{x}}})}}\fint d\sigma(y)\;|\varrho_{i}({{\mathbf{y}}}-{{\mathbf{q}}}^{(m)}_{\pm t})| .\end{aligned}\ ] ] the right - hand side is zero for all such that . because the charge distributions have compact support there is a such that for all .now for any and we have since the supremum of the velocities of the charge ,m\in{{\mathbb{n}}}}\|{{\mathbf{v}}}({{\mathbf{p}}}_{i , t}^{(m)})\| ] are in , i.e. that they are time - like charge trajectories that solve the conditional equations ( [ eqn : bwf equation written out])-([eqn : wf with boundary fields ] ) for all times .as discussed in the introduction it is interesting to verify that among those solutions we see truly advanced and delayed interactions between the charges .this is the content of theorem [ thm : existence of l ] which we prove next .we introduce : for we define to be the set of time - like charge trajectories which solve the equations ( [ eqn : wf equation written out])-([eqn : wf fields def ] ) for times ] for initial value . in order to show that a conditional solution is also a partial solution we have to regard the difference between the wf fieldsproduce by them : (t,\pm t)-m_{\varrho_{i}}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i](t,\pm\infty)\\ & = \begin{pmatrix } \partial_t & \nabla\wedge\\ -\nabla\wedge & \partial_t \end{pmatrix } k_{t\mp t}*x^\pm_{i,\pm t } + k_{t\mp t}*\begin{pmatrix } -4\pi { { \mathbf{v}}}({{\mathbf{p}}}_{i,\pm t})\varrho_{i}(\cdot-{{\mathbf{q}}}_{i,\pm t})\\ 0 \end{pmatrix}\\ & \quad- 4\pi \int_{\pm\infty}^{\pm t } ds\ ; k_{t - s } * \begin{pmatrix } -\nabla & - \partial_s \\ 0 & \nabla\wedge \end{pmatrix } \begin{pmatrix } \varrho_{i}(\cdot-{{\mathbf{q}}}_{i , s})\\ { { \mathbf{v}}}({{\mathbf{p}}}_{i , s})\varrho_{i}(\cdot-{{\mathbf{q}}}_{i , s } ) \end{pmatrix}. \end{split}\end{aligned}\ ] ] the equality holds by definition [ def : maxwell time evolution ] , theorem [ thm : maxwell_solutions ] and ( [ eqn : lw_fields ] ) in theorem [ thm : lwfields ] .let be the smallest radius such that for all . 
whenever there is an such that this difference is zero at least for times ] for all .recall the estimate ( [ eqn : apriori lipschitz no diff ] ) from the existence and uniqueness theorem [ thm : globalexistenceanduniqueness ] which gives the following dependent upper bounds on these solutions for all : }\|m_l[\varphi](t,0)\|_{{{\mathcal{h}}}_w}\leq \constr{apriori ml rho}\left(t,\|\varrho_{i}\|_{l^2_w},\|w^{-1/2}\varrho_{i}\|_{l^2 } ; 1\leq i\leq n\right)\ ; \|\varphi\|_{{{\mathcal{h}}}_w}. \end{aligned}\ ] ] note further that by lemma [ lem : estimates for st ] since , there is a such that fields fulfill therefore , setting we estimate the maximal momentum of the charges by ,\|p\|\leq a , f\in{{\operatorname{range}\,}}s_t^{p , c},\|\varrho_{i}\|_{l^2_w}+\|w^{-1/2}\varrho_{i}\|_{l^2}\leq b,1\leq i\leq n\bigg\}\\ & \leq\sup\bigg\{\|{{\mathbf{p}}}_{i , t}\|_{{{\mathbb{r}}}^3}\;\bigg|\;t\in[-t , t],\varphi\in d_w(a),\|\varphi\|_{{{\mathcal{h}}}_w}\leq c,\|\varrho_{i}\|_{l^2_w}+\|w^{-1/2}\varrho_{i}\|_{l^2}\leq b,1\leq i\leq n\bigg\}\\ & \leq \constr{apriori ml rho}\left(t , b , b,\right)c=:p_t^{a , b}<\infty. \end{aligned}\ ] ] now , since as well as are in the map as is continuous and strictly increasing .we conclude that claim is fulfilled for the choice let be a fixed point ] . by the fixed point properties of we know that and therefore solve the conditional equations ( [ eqn : bwf equation written out])-([eqn : wf with boundary fields ] ) for newtonian cauchy data and boundary fields . in order to show that the charge trajectories are also in for the given we need to show that the difference ( [ eqn : bwf wf diff ] ) , which equals (t,\pm t)-m_{\varrho_{i}}[{{\mathbf{q}}}_i,{{\mathbf{p}}}_i](t,\pm\infty)\\ & = \begin{pmatrix } \partial_t & \nabla\wedge\\ -\nabla\wedge & \partial_t \end{pmatrix } k_{t\mp t}*x^\pm_{i,\pm t } + k_{t\mp t}*\begin{pmatrix } -4\pi { { \mathbf{v}}}({{\mathbf{p}}}_{i,\pm t})\varrho_{i}(\cdot-{{\mathbf{q}}}_{i,\pm t})\\ 0 \end{pmatrix}\\ & \quad- 4\pi \int_{\pm\infty}^{\pm t } ds\ ; k_{t - s } * \begin{pmatrix } -\nabla & - \partial_s \\ 0 & \nabla\wedge \end{pmatrix } \begin{pmatrix } \varrho_{i}(\cdot-{{\mathbf{q}}}_{i , s})\\ { { \mathbf{v}}}({{\mathbf{p}}}_{i , s})\varrho_{i}(\cdot-{{\mathbf{q}}}_{i , s } ) \end{pmatrix } , \end{split}\end{aligned}\ ] ] is zero for times ] in this particular space - time region .clearly , the position at time zero is in .hence , we estimate the earliest exit time of this space - time region of a charge trajectory , i.e. the time when the charge trajectory leaves the region . by lemma [ lem : uni vel bound ]the charges can in the worst case move apart from each other with velocity during the time interval $ ] . putting the origin at we can compute the exit time by _ g. bauer _ + fh mnster + bismarckstrae 11 , 48565 steinfurt , germany + _ d .- a ._ + department of mathematics , university of california davis + one shields avenue , davis , california 95616 , usa + _institut der lmu mnchen + theresienstrae 39 , 80333 mnchen , germany
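although the construction above lives in an infinite - dimensional field space , its logic ( prescribe the data outside of the interval [-T , T] , evolve , recompute , and iterate to a fixed point ) can be mimicked on a scalar toy equation with one advanced and one retarded argument . the sketch below is purely illustrative : the equation , the prescribed continuations , the newtonian datum and all parameters are ad hoc and not taken from the paper . it solves x'(t) = f( x(t - 1) , x(t + 1) ) on [-T , T] with x(0) fixed , by picard - type iteration , always using the prescribed trajectories whenever an argument falls outside [-T , T] ; for a right - hand side with small lipschitz constant the iteration is a contraction , mirroring the uniqueness remark for small T .

```python
import numpy as np

T, h = 2.0, 0.01                         # solve on [-T, T] with grid spacing h
t = np.linspace(-T, T, int(round(2 * T / h)) + 1)
i0 = np.argmin(np.abs(t))                # grid index of time 0
x0 = 0.5                                 # prescribed datum x(0) = x0

past = lambda s: 0.5 * np.ones_like(s)              # prescribed continuation for s <= -T
future = lambda s: 0.5 * np.cos(s)                  # prescribed continuation for s >=  T
f = lambda a, b: 0.2 * np.sin(a) + 0.2 * np.cos(b)  # rhs with small lipschitz constant

def extend(x, s):
    """current guess on [-T, T], prescribed trajectories outside."""
    inside = np.interp(s, t, x)
    return np.where(s <= -T, past(s), np.where(s >= T, future(s), inside))

x = np.full_like(t, x0)                  # initial guess
for k in range(500):
    rhs = f(extend(x, t - 1.0), extend(x, t + 1.0))
    cum = np.concatenate([[0.0], np.cumsum(0.5 * h * (rhs[1:] + rhs[:-1]))])
    x_new = x0 + cum - cum[i0]           # x_new(t) = x0 + integral from 0 to t of rhs
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new
print(k, x[0], x[-1])                    # iterations used, values at -T and T
```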
we study the equations of wheeler - feynman electrodynamics which is an action - at - a - distance theory about world - lines of charges that interact through their corresponding advanced and retarded linard - wiechert field terms . the equations are non - linear , neutral , and involve time - like advanced as well as retarded arguments of unbounded delay . using a reformulation in terms of maxwell - lorentz electrodynamics without self - interaction , which we have introduced in a preceding work , we are able to establish the existence of conditional solutions . these are solutions that solve the wheeler - feynman equations on any finite time interval with prescribed continuations outside of this interval . as a byproduct we also prove existence and uniqueness of solutions to the synge equations on the time half - line for a given history of charge trajectories . + * keywords : * maxwell - lorentz equations , maxwell solutions , linard - wiechert fields , synge equations , wheeler - feynman equations , absorber electrodynamics , radiation damping , self - interaction . + * acknowledgments : * the authors want to thank martin kolb for his valuable comments . d .- a.d . gratefully acknowledges financial support from the _ bayefg _ of the _ freistaat bayern _ and the _ universit bayern e.v . _ as well as from the post - doc program of the daad .
modelling dna sequences with stochastic models and developing statistical methods to analyse the enormous set of data that results from the multiple projects of dna sequencing are challenging questions for statisticians and biologists .many dna sequence analysis are based on the distribution of the occurrences of patterns having some special biological function .the most popular model in this domain is the markov chain model that gives a description of the local behaviour of the sequence ( see ) .an important problem is to determine the statistical significance of a word frequency in a dna sequence . discuss about this relevance of finding over- or under - represented words .the naive idea is the following : a word may have a significant low frequency in a dna sequence because it disrupts replication or gene expression , whereas a significantly frequent word may have a fundamental activity with regard to genome stability .well - known examples of words with exceptional frequencies in dna sequences are biological palindromes corresponding to restriction sites avoided for instance in _ e. coli _ ( ) , the cross - over hotspot instigator sites in several bacteria , again in _ e. coli _ for example ( ) , and uptake sequences ( ) or polyadenylation signals ( ) . the exact distribution of the number of a word occurrences under the markovian model is known and some softwares are available ( ) but , because of numerical complexity , they are often used to compute expectation and variance of a given count ( and thus use , in fact , gaussian approximations for the distribution ) .in fact these methods are not efficient for long sequences or if the markov model order is larger than or . for such cases ,several approximations are possible : gaussian approximations ( ) , binomial ( or poisson ) approximations ( ) , compound poisson approximations ( ) , or large deviations approach ( ) . in this paperwe only focus on the poisson approximation .we approximate by ^k(k!)^{-1} ] , there is an error bound of the form .thus , if is a test statistic then , for all , which can be used to construct confidence intervals and to find p - values for tests based on this statistic .we focus on markov processes in our biological applications ( see [ bio ] ) but the theorem given in the following subsection is established for more general mixing processes : the so called -mixing processes .let be a sequence of real numbers decreasing to zero .we say that is a -mixing process if for all integers , the following holds where the supremum is taken over the sets and , such that . for a word of , that is to say a measurable subset of , we say that if and only if with .then , the integer is the length of word . for , we define the hitting time , as the random variable defined on the probability space ( ,, ) : is the first time that the process hits a given measurable .we also use the classical probabilistic shorthand notations .we write instead of , instead of and instead of .also we write for two measurable subsets and of , the conditional probability of given as and the probability of the intersection of and by or . for and , we write for the event consisting of the _ last _ symbols of .we also write for the supremum of two real numbers and .we define the periodicity of as follows : is called the principal period of word . 
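before continuing with the notation , the principal period and the occurrence count can be made concrete ; the short sketch below ( the example word and sequence are arbitrary ) mirrors the definitions : the principal period is the smallest shift by which the word overlaps itself , and the count is the number of possibly overlapping occurrences in the sequence .

```python
def principal_period(word):
    """smallest shift p >= 1 such that word[p:] is a prefix of word,
    i.e. the smallest self-overlap (mirrors the principal period of a word)."""
    n = len(word)
    for p in range(1, n):
        if word[:n - p] == word[p:]:
            return p
    return n          # a word with no self-overlap has period equal to its length

def count_overlapping(seq, word):
    """number of possibly overlapping occurrences of word in seq
    (the occurrence count used in the text)."""
    n, m = len(seq), len(word)
    return sum(1 for i in range(n - m + 1) if seq[i:i + m] == word)

print(principal_period("atgatga"))       # 3: the word overlaps itself after a shift of 3
print(count_overlapping("aaaaa", "aa"))  # 4 overlapping occurrences
```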
then , we denote by the set of words with periodicity and we also define as the set of words with periodicity less than ] defines the integer part of a real number : }\mathcal{r}_p.\ ] ] is the set of words which are self - overlapping before half their length ( see example [ mot ] ) .we define the set of return times of which are not a multiple of its periodicity : +1 , \dots , n-1\}| a\cap t^{-k}(a ) \neq { \emptyset}\right\}.\ ] ] let us denote , the cardinality of the set .define also if and otherwise . is called the set of secondary periods of and is the smallest secondary period of . finally , we introduce the following notation . for an integer ,let .the random variable counts the number of occurrences of between and ( we omit the dependence on ) .for the sake of simplicity , we also put .[ mot ] consider the word . since , we have where .see the following figure to note that , and . we present a theorem that gives an error bound for the poisson approximation .compared to the chen - stein method , it has the advantage to present non uniform bounds that strongly control the decay of the tail distribution of .[ bigt2 ] let be a -mixing process .there exists a constant , such that for all and all non negative integers and , the following inequality holds : ,\ ] ] this result is at the core of our study .it shows an upper bound for the difference between the distribution of the number of occurrences of word in a sequence of length and the poisson distribution of parameter .proof is postponed in section [ fin ] .our goal is to compute a bound as small as possible to control the error between the poisson distribution and the distribution of the number of occurrences of a word .thus , we determine the global constant appearing in theorem [ bigt2 ] by means of intermediary bounds appearing in the proof .general bounds are interesting asymptotically in , but for biological applications , is approximately between or , which is too small .then along the proof , we will indicate the intermediary bounds that we compute . before establishing the proof of that theorem [ bigt2 ] , we point out here , for easy references , some results of , and some other useful results . in ,these results are given only in the -mixing context .moreover exact values of the constants are not given , while these are necessary for practical use of these methods .we provide the values of all the constants appearing in the proofs of these results .[ p11 ] let be a -mixing process .there exist two finite constants and , such that for any , any word , and any ] and ] , for which the following inequality holds for all : \mbox { and } f_1(a , t)=(t{\mathbb{p}}(a)\vee1){e}^{-t{\mathbb{p}}(a)}.\ ] ] we prove an upper bound for the distance between the rescaled hitting time and the exponential law of expectation equal to one .the factor in the upper bound shows that the rate of convergence to the exponential law is given by a trade off between the length of this time and the velocity of loosing memory of the process .we have .we fix and given by proposition [ p11 ] .we define there are three steps in the proof of the theorem .first , we consider of the form with a positive integer .secondly , we prove the theorem for any of the form with positive integers and with .we also put .finally , we consider the remaining cases . 
here , for the sake of simplicity , we do not detail the two first steps ( for that , see ) , but only the last one .let be any positive real number .we write , with a positive integer and such that .we can choose a such that and with , as before . obtains the following bound : the first term in the triangular inequality is bounded in the following way : the second term is bounded like in the two first steps of the proof in .we apply inequalities ( [ in6 ] ) and ( [ in7 ] ) to obtain finally , the third term is bounded using the mean value theorem ( see for example ) thus we have and the theorem follows by the change of variables . then .[ pbc ] be a -mixing process .suppose that , with .the following inequality holds : since , obviously . by the -mixing property .we divide the above inequality by and the lemma follows .for all the following propositions and lemmas , we recall that .\ ] ] [ : lambda ] let be a -mixing process .let .then the following holds : * for all , ,\nonumber \end{aligned}\ ] ] and similarly .\nonumber \end{aligned}\ ] ] * for all , with , the above proposition establishes a relation between hitting and return times with an error bound uniform with respect to . in particular , says that these times coincide if and only if , namely , the string is non - self - overlapping . in order to simplify notation , for , } ] ( and then we can not study it : see assumptions in theorem [ bigt2 ] ). we can not assume the good significance of the first chi ( ` gatggtgg ` ) because we count only occurrences in the sequence , whereas occurrences are necessary to consider this word as exceptional .on the other hand , the uptake sequence is very significant ( and then very relevant ) .indeed , we could fix a significance degree equal to and consider it as an over - represented word from occurrences with the -mixing method . as ` aagtgcggt ` is counted times in the sequence , we obtain the well - known fact that this word is biologically relevant .to conclude this paper , we recall the advantages of our new methods .we give an error valid for all the values of the random variable corresponding to the number of occurrences of word in a sequence of length .then , we can find a minimal number of occurrences to consider a word as biologically relevant for a very large number of words and for all degrees of significance .that is the main advantage of our methods on the chen - stein one which is based on the total variation distance and for which small degrees of significance can not be obtained .results of our -mixing method and the chen - stein method remain similar but our method has less limitations .note that our methods provide performing results for general modelling processes such as markov chains as well as every - and -mixing processes . in terms of perspectives ,as we expect more significant results , we hope to improve these methods adapting them directly to markov chains instead of - or -mixing .moreover , it is well - known that a compound poisson approximation is better for self - overlapping words ( see and ) .an error term for the compound poisson approximation for self - overlapping words can be easily derived from our results .the authors would like to thank bernard prum for his support and his useful comments .the authors would like to thank sophie schbath for her program , vincent miele for his very relevant help in the conception of the software and catherine matias for her invaluable advices .
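to make the threshold search used in the application concrete , the following is a minimal sketch : it keeps only the plain poisson tail together with a generic additive error term err ( in the paper this term would be the mixing bound of theorem [ bigt2 ] with its computed constants ) , so the numbers printed below are illustrative and are not the values produced by ` panow ` .

```python
import math

def poisson_sf(k, lam):
    """P(N >= k) for N ~ Poisson(lam)."""
    if k <= 0:
        return 1.0
    term, cdf = math.exp(-lam), math.exp(-lam)        # j = 0 term
    for j in range(1, k):
        term *= lam / j
        cdf += term
    return max(0.0, 1.0 - cdf)

def over_representation_threshold(lam, alpha, err=0.0):
    """smallest count k with P(Poisson(lam) >= k) + err <= alpha;
    err stands in for the approximation error bound and must be < alpha."""
    assert err < alpha
    k = 0
    while poisson_sf(k, lam) + err > alpha:
        k += 1
    return k

# e.g. a word with estimated occurrence probability 1e-5 in a sequence of length 4.6e6
lam = 4.6e6 * 1e-5
print(over_representation_threshold(lam, alpha=1e-4, err=1e-5))
```

the under - representation threshold is obtained in the same way from the lower tail , and a word is declared exceptional only when the error term is small enough for the chosen significance level , which is exactly the limitation discussed above for the chen - stein bound .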
using recent results on the occurrence times of a string of symbols in a stochastic process with mixing properties , we present a new method for the search of rare words in biological sequences generally modelled by a markov chain . we obtain a bound on the error between the distribution of the number of occurrences of a word in a sequence ( under a markov model ) and its poisson approximation . a global bound is already given by a chen - stein method . our approach , the -mixing method , gives local bounds . since we only need the error in the tails of the distribution , the global uniform bound of chen - stein is too large , so it is better to consider local bounds . we search for two thresholds on the number of occurrences from which we can regard the studied word as an over - represented or an under - represented one . a biological role is suggested for these over- or under - represented words . our method gives such thresholds for a panel of words much broader than the chen - stein method . comparing the methods , we observe a better accuracy for the -mixing method for the bound on the tails of the distribution . we also present the software ` panow ` dedicated to the computation of the error term and the thresholds for a studied word . poisson approximation , chen - stein method , mixing processes , markov chains , rare words , dna sequences
_ photoacoustic tomography _ is a very promising medical imaging method that is currently improved by taking into account more complicated tissue properties . in this paperwe focus on a time reversal method in dissipative media and show that under an appropriate condition * the imaging functional in based on the non - causal _ thermo - viscous wave equation _ can be used for time reversal if the non - causal data is replaced by a _ time shifted _ set of causal pressure data and that * a similar imaging functional can be used for time reversal , which can be improved by solving an operator equation with the time reversal result as right hand side .we note that the _ time shift relation _ between causal and non - causal pressure data holds only approximately , but in principle the non - causal data can be calculated from the causal one . however the use of the _ time shift relation _ is less computational expensive and , in addition , it explains the successful use of the non - causal thermo - viscous wave equation .consider a fixed experimental set - up for which many experiments are performed .it seems natural that the experimenter performs - as a matter of routine - one and the same pressure data _ offsets _ to all data .if this data offset corresponds to the mentioned time shift , then causality is approximately `` restored '' and causality violations due to the thermo - viscous wave equation may not be observed . to outline the contents of this paper , we start with a basic description of the inverse problem of pat in dissipative media .the goal of pat is to estimate the function with from pressure data measured at a boundary , say , in which and are related by ( cf . ) where denotes the speed of sound , is a time convolution operator with kernel and is the complex attenuation law of the medium in which the wave propagates .we focus on the following complex attenuation laws and here denotes the relaxation time .only the first one obeys causality , i.e. the respective pressure function has a finite wave front speed ( cf . ) as mentioned above , it will be sown that holds under an appropriate condition .hence all results based on the non - causal thermo - viscous wave equation can be applied to time shifted causal data . for this settingwe introduce an imaging functional for estimating the initial pressure function .let be the solution of ( [ waveeqp ] ) with ( [ modeltv ] ) and , moreover , let denote the solution of the _ regularized time reversed _ thermo - viscous wave equation where is a regularization operator and variance with the necessary side condition . ] that guarantees the existence of the time reversal for . with these notations ,we introduce the functionals : = q^{tv}|_{t = t } \qquad\mbox{and}\qquad f_1 [ \phi_t|_{\omega},\beta ] : = 2\,f[\mathcal{r}_d\,\phi_t(\phi_t|_{\omega},\beta)]\,.\ ] ] in this paper we show under the assumption that the _ imaging functional _ satisfies for = \mathcal{r}_d\varphi = f_1[\phi_t|_{\omega},\beta]|_{\tau_1=0 } \qquad \mbox{on}\qquad \omega\,,\ ] ] in particular \sim \mathcal{r}_d \,\varphi \ , \qquad\quad \mbox{for sufficiently small } \,,\ ] ] and that satisfies the operator equation \qquad \mbox{on }\ , , \end{aligned}\ ] ] where is defined as the solution of ] and ] and ] and for ] we have \ ] ] according to lemma [ lemm : tvb ] . the functions and are real valued and bounded on .on these functions are real valued but not bounded . 
because of we have on .thus has the following representation on : \,.\ ] ] from this and we infer that for ] .the first claim follows from the fact that solves which is equivalent to wave equation ( [ waveequ ] ) with . for the second claim .let , and for .because * is real for , * is imaginary for and and * , it follows that and holds , where is oscillating similarly as for and decreases exponentially like for . from this and , it follows that which proves .the last claim follows from the fact that the convolution is well - defined by ( [ defconvj ] ) and for ] is defined by ( [ defimagf ] ) .in particular , this means that the regularized time reversed thermo - viscous wave equation has a solution in ( cf .remark [ rema : tv2 ] ) , the operator is well - defined and compact and holds according to proposition [ prop : shift2 ] . here denotes the causal solutions of ( [ waveeqp ] ) with defined by ( [ modeltvksb ] ) .that is to say , the following results hold approximately if time shifted causal data are used instead of the non - causal one . for the convenience of the readerwe recall that : = q|_{t = t} ] and . let denote the solutions of for convenience we set and . from the convolution theorem get and consequently {t = t } \,\hat\varphi_d\,.\ ] ] from lemma [ lemm : tva ] and lemma [ lemm : tvb ] , it follows for that with and , the last result simplifies to with we obtain finally = q|_{t = t } = \left(\delta({\mathbf{x } } ) + c_0 ^ 2\,\delta\ , { \mathcal{f}}^{-1}\left\{\frac{\sin^2(\vartheta\,t)}{(2\,\pi)^{3/2}\,\vartheta^2}\right\ } \right ) * _ { \mathbf{x}}\varphi_d\,,\ ] ] which concludes the proof .we now come to the case .[ theo : tv2 ] if ( ad 1 ) holds and or , then = \left(\mbox{id }+ \tau_2 ^ 2\,c_0 ^ 2\,\delta\right)^2\,\mathcal{r}_d\,\varphi + c_0 ^ 2\,\delta\,{\mathcal{j}}_t \,\mathcal{r}_d\,\varphi \ , \end{aligned}\ ] ] according to remark [ rema : tv2 ] , with implies that exists for ] and the operator is defined by here and are defined as in ( [ invgs1 ] ) and ( [ invgs2 ] ) , respectively . because the kernel of the operator defined by ( [ defj ] ) is an element of for and is well - defined only if ( cf .section [ sec - opj ] ) , it is required that . but this means nothing else than .moreover , exists and lies in .hence we get the following proposition .a necessary condition for the solvability of operator equation ( [ opeqa ] ) is that the next two propositions discuss the injectivity and surjectivity of the operator . if ( ad 1 ) holds , then is injective .we show that the null space contains only the zero function .we note that defined as in ( [ gauss ] ) satisfies for with and that .because the fourier transform is an isometry on , it follows that if and only if \,e^{-4\,d\,|{\mathbf{k}}|^2}\,\hat\varphi = 0\ , \end{aligned}\ ] ] with we assume that is not the zero function and prove a contradiction . because has compact support , the paley - wiener theorem ( cf . 
) implies that the set of zeros of can not contain an open ball and thus follows .but this is equivalently to we recall that ( cf .lemma [ lemm : tv2 ] ) oscillates between for and increases exponentially for .hence ( [ identity ] ) has finite many zeros on ] is similar to if the attenuation is weak .[ theo : relf1 ] let .if and assumption ( [ asst ] ) holds , then identity ( [ relf1 ] ) is true .let and satisfy ( [ waveequ ] ) .then solves the standard wave equation } \qquad ( c_1:=2\,c_0)\,.\ ] ] via the forward euler method on ^ 2\times [ 0,t]$ ] with , , we note that the cfl - condition is satisfies for .the first example is visualized in fig .[ fig : setup ] .the circular peaks in are of the form . the right picture in the first row in fig .[ fig : setup ] shows the initial pressure function and the second row shows the time reversal images for ( water ) and ( fictive ) .we see that the smallest features is not well mapped for strong dissipation , it is twice as thick and its maximum intensity is about 30 percent of the correct maximum intensity .numerical simulations show that the time reversal image blows up for , even if the time step size is decreased by the factor . a second numerical example for a piecewise constant function is presented in fig . [ fig : sim ] .the last row shows which is up to a constant the solution of wave equation ( [ waveeqw ] ) and its laplacian .we note that non - smooth initial pressure functions and stronger dissipation causes more artifacts in the solution of . as in the first examplethe time reversal image blows up for .in this paper we showed that a method for solving pat based on the non - causal thermo - viscous wave equation can be used if * strictly speaking the time reversal image exists only if the data are regularized , e.g. by the operator for restriction ( cf .( [ defrd ] ) ) and that * the regularized time reversal image for the case of dissipative media like water is very similar to a smoothed version of the initial pressure function . above all , we would like to emphasize that this paper has analyzed the quality of an idealized estimation ( noise - free case ) , but did not discussed the quality of a reconstruction using real noisy data .p. burgholzer and h. grn and m. haltmeier and r. nuster and g. paltauf : compensation of acoustic attenuation for high - resolution photoacoustic imaging with line detectors . in a.a .oraevsky and l.v .wang , editors , _ photons plus ultrasound : imaging and sensing 2007 : the eighth conference on biomedical thermoacoustics , optoacoustics , and acousto - optics _ ,volume 6437 of _ proceedings of spie _ , page 643724 .spie , 2007 .hristova , y. and kuchment , p. and nguyen , l. : reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media ._ inverse problems _ ,24(5):055006 ( 25pp ) , 2008 .kowar , r. and scherzer , o. : attenuation models in photoacoustics . in_ mathematical modeling in biomedical imaging ii : _ lecture notes in mathematics 2035 , doi 10.1007/978 - 3 - 642 - 22990 - 9_4 , springer - verlag 2012 .
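the explicit time stepping used for the figures can be illustrated with a standard scheme for the undamped two - dimensional wave equation ; everything below ( grid , sound speed , initial bump , and the use of a leapfrog update in place of the forward euler system of the text ) is an illustrative stand - in , not the code behind the figures . the printed ratio is the cfl number , which for this scheme must stay below 1/sqrt(2) .

```python
import numpy as np

c0, L, nx = 1.0, 2.0, 201              # sound speed, domain [0, L]^2, grid points per axis
dx = L / (nx - 1)
dt = 0.4 * dx / c0                      # cfl number c0*dt/dx = 0.4 < 1/sqrt(2)
print("cfl number:", c0 * dt / dx)

x = np.linspace(0.0, L, nx)
X, Y = np.meshgrid(x, x, indexing="ij")
p_old = np.exp(-100.0 * ((X - 1.0) ** 2 + (Y - 1.0) ** 2))   # initial pressure bump
p = p_old.copy()                        # zero initial velocity: p(dt) approx p(0)

def laplacian(u):
    out = np.zeros_like(u)              # homogeneous dirichlet boundary kept at zero
    out[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / dx ** 2
    return out

n_steps = int(round(0.8 / dt))
for _ in range(n_steps):                # leapfrog: p_new = 2p - p_old + (c0*dt)^2 * lap(p)
    p_new = 2.0 * p - p_old + (c0 * dt) ** 2 * laplacian(p)
    p_old, p = p, p_new
print("max |p| at t =", round(n_steps * dt, 3), ":", float(np.abs(p).max()))
```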
this paper is concerned with time reversal in _ photoacoustic tomography _ ( pat ) of dissipative media that are similar to water . under an appropriate condition , it is shown that the time reversal method in based on the non - causal thermo - viscous wave equation can be used if the non - causal data is replaced by a _ time shifted _ set of causal data . we investigate a similar imaging functional for time reversal and an operator equation with the time reversal image as right hand side . if required , an enhanced image can be obtained by solving this operator equation . although time reversal ( for noise - free data ) does not lead to the exact initial pressure function , the theoretical and numerical results of this paper show that regularized time reversal in dissipative media similar to water is a valuable method . we note that the presented time reversal method can be considered as an alternative to the causal approach in and a similar operator equation may hold for their approach .
consider the regression model , where for , and conditionally on the regressors s , are i.i.d . with . suppose that we are interested in knowing whether the true regression curve is nondecreasing on some sub - interval of ] , the function , ] .then , under this least favorable case , the test statistic converges weakly to the maximum difference between a standard brownian motion on ] starting at 0 and its concave majorant on ] , this maximum difference converges weakly to the distribution of as in the regression setting above .as proved in proposition 4 ( iii ) in , one interesting property of the distribution of is that we can replace in ( [ m ] ) by a standard brownian _bridge _ ; i.e , the distribution of is also that of the maximum difference between a standard brownian bridge and its concave majorant . furthermore , the random variable can be given under a more useful form .let denote the maximum of a brownian excursion and an infinite sequence of independent random variables distributed as .if is an infinite sequence of independent uniform random variables on ], it follows that \end{aligned}\ ] ] occurs with at least probability .hence , to ensure an error of order , the sample size should be chosen of order . therefore , very large sample sizes are needed to get accurate results . to give an order of magnitude , table [ j0c ] shows several values of and corresponding to desired precision targets .all the values are computed for , where 0.33 appears to be the numerical limit of what we can compute without violating the basic properties of a distribution function .this point will be brought up again in the next section .note that the main purpose of table [ j0c ] is to give an idea about how and behave as functions of the precision .for instance , a precision of order is useless if the goal is to compute an approximation of the value distribution function of at since it is of order as found with the gs algorithm .we use the above mc approach to estimate the distribution function of for as well as the upper quantiles .the algorithm is implemented in c. this method turns out to be very slow for large sample sizes .moderate sample sizes ( of order ) do not give the desired accuracy for small .the estimates of the distribution function for large ( of order 0.80 and above ) as well as the upper quantiles match with those obtained by gs algorithm ( see next section ) . in the same vein , one can consider a second variant of mc .it is mainly based on the following result due to kennedy 1976 ( see corollary on page 372 ) : } { b^{\mbox{}}}(t ) - \inf_{t \in [ 0,1 ] } { b^{\mbox{}}}(t)\ ] ] where is a brownian bridge on length 1 . 
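Before turning to this second variant, the first Monte Carlo variant described above can be sketched very compactly: simulate a discretized Brownian bridge, build its least concave majorant, and record the maximal gap. The sketch below is only a minimal illustration of that idea, not the authors' implementation (which is written in C and uses the excursion-based representation); grid size, number of replicates, and the hull routine are choices made here purely for readability.

```python
import numpy as np

def max_gap_to_concave_majorant(t, path):
    """Maximal distance between a sampled path and its least concave majorant."""
    hull = [0]                                   # indices of upper-hull vertices
    for i in range(1, len(t)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            # drop i2 if it lies on or below the chord from i1 to i
            cross = (t[i2] - t[i1]) * (path[i] - path[i1]) \
                  - (t[i] - t[i1]) * (path[i2] - path[i1])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    majorant = np.interp(t, t[hull], path[hull])
    return float(np.max(majorant - path))

def simulate_D(n_grid=4096, rng=None):
    """One draw of D: max gap between a Brownian bridge on [0,1] and its majorant."""
    rng = rng if rng is not None else np.random.default_rng()
    t = np.linspace(0.0, 1.0, n_grid + 1)
    w = np.concatenate(([0.0],
                        np.cumsum(rng.normal(scale=np.sqrt(1.0 / n_grid), size=n_grid))))
    bridge = w - t * w[-1]
    return max_gap_to_concave_majorant(t, bridge)

samples = np.array([simulate_D() for _ in range(10000)])   # slow in pure Python
print("estimated P(D <= 0.5):", np.mean(samples <= 0.5))
print("estimated 95% quantile:", np.quantile(samples, 0.95))
```

As the discussion above makes clear, this direct approach needs very fine grids and very many replicates to control both the discretization and the sampling error, which is why it is quickly outperformed by the Laplace-inversion approach of the next section.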
now using the well - known donsker approximation ,the distribution of can be approximated for large by the distribution of the random variable } \sqrt{n } ( \mathbb g_n(t ) - t )- \inf_{t \in [ 0,1 ] } \sqrt{n } ( \mathbb g_n(t ) - t)\ ] ] where is the uniform empirical process based on independent uniform random variables in ] and that is infinitely differentiable on , an extensive computation study carried out by has shown that if the function is bounded by 1 say , then the approximation in ( [ approxinvlt ] ) for well - behaved functions ( in the sense given above ) coincides with the truth up to significant digits .hence , the bigger is , the better is the approximation .however , for large values of , the binomial coefficients in become extremely large and require high numerical precision .such a facility is typically provided by a multiple precision ( mp ) numerical library or is built - in in some programming languages . for a given integer , let denote the gs approximation of . from the formula of in ( [ expinvlt ] ) and ( [ approxinvlt ] ), it is easily seen that where is the same function defined by the infinite product in ( [ g ] ) . for and a given we approximate by the product of the first terms , where is a positive integer depending on and .define the truncated version of .this truncation induces an additional error which we need to control .in fact , in computing the gaver - stehfest approximation of the distribution function , we actually replace in ( [ fk ] ) by the following shows that the error due to replacing by does not exceed a given threshold provided than is large enough . [ approxg ] for , we have if where * proof .* see appendix . from lemma [ approxg ]it follows that the second term in the left side is known to be of order , and hence the approximation is of the same order if is chosen to be , and of order if the latter dominates and is chosen to be larger or equal than given in ( [ lowern ] ) .we implement the multiple precision calculation of in c++ using two open - source libraries for arbitrary precision computation : the gnu multiple precision arithmetic library ( see ) and the multiple precision floating - point reliable library ( mpfr ) ; see .gmp is an optimized library written in c with assembly code for common inner loops .mpfr is built on top of gmp and adds support for common floating - point operations such as . to approximate the bessel functions in ( [ g ] ) , we use bessel routines from the alglib library based on piecewise rational and chebyshev polynomial approximations .we use a precision of 4000 bits to represent multiple precision floating - point numbers .however , the provided aglib bessel approximations only guaranty a maximal error of order . as a proof - of - concept, we have also implemented the same algorithm using a much slower but more accurate numerical library in python . for small values of such as 0.30 , 0.31 , and 0.32 , and unlike with the c library , we obtain results consistent with the monotonicity and positivity of a cumulative distribution function . 
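Since the Laplace transform of the distribution of D (the infinite product of Bessel factors denoted by the function in ([g]) above) is not reproduced in this extract, the following sketch only illustrates the Gaver-Stehfest machinery itself on a toy transform with a known inverse. The weights are the standard Stehfest coefficients, computed exactly with rational arithmetic, and mpmath stands in here for the GMP/MPFR multiple-precision arithmetic used by the authors; all parameter values are placeholders.

```python
from fractions import Fraction
from math import comb, factorial
import mpmath as mp

mp.mp.dps = 60                        # working precision (the paper goes up to ~4000 bits)

def stehfest_weights(n):
    """Exact Gaver-Stehfest weights a_1..a_{2n} as fractions."""
    a = []
    for k in range(1, 2 * n + 1):
        s = Fraction(0)
        for j in range((k + 1) // 2, min(k, n) + 1):
            s += Fraction(j ** (n + 1), factorial(n)) * comb(n, j) * comb(2 * j, j) * comb(j, k - j)
        a.append((-1) ** (n + k) * s)
    return a

def gaver_stehfest(F, t, n=8):
    """Approximate f(t) from its Laplace transform F(s) by the Gaver-Stehfest sum."""
    a = stehfest_weights(n)
    b = mp.log(2) / t
    return b * mp.fsum((mp.mpf(w.numerator) / w.denominator) * F(k * b)
                       for k, w in enumerate(a, start=1))

F = lambda s: 1 / (s * (s + 1))       # toy transform; the exact inverse is 1 - exp(-t)
t = mp.mpf(2)
print(gaver_stehfest(F, t, n=8), 1 - mp.exp(-t))
```

For n = 8 and 60 digits the toy example should agree with the exact inverse to several digits; approximating the distribution function of D accurately requires much larger n and much higher precision, which is why the authors resort to GMP/MPFR and, for small arguments, to a slower but more accurate multiple-precision implementation in Python.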
for , , the python code gives the following approximations for , for and for .computing ] ( or a brownian bridge of length 1 ) and its concave majorant .this random variable determines the asymptotic critical region of a nonparametric test for monotonicity of a density or regression curve .we find the numerical inversion of laplace transform , based here on the gaver - stehfest algorithm , to be much more accurate and faster than the monte carlo method .numerical inversion of laplace transform was then very well adapted to this problem .however , it would not have been possible to use such an efficient method if a laplace transform representation of the distribution of was not available , see .finally , we would like to draw the reader s attention to the earlier computational work of on chernoff s distribution .the latter appears as the limit distribution of the grenander estimator ; that is the maximum likelihood estimator of a decreasing density on . in their work , have also used a mathematical characterization of chernoff s distribution .this allowed for a very efficient and fast approximation procedure which also outperformed monte carlo estimation .[ simu1000 ] .order of the lower bound and sample size . [ cols="^,^,^ " , ]the following facts will be used in the proof of lemma [ lemmaapproxj ] .the first identity can be proved recursively . for , we have .suppose that for all .it is easy to check that by independence of and , we can write and the identity is proved for all . for the second inequality, we will use the fact that for a given consider the function the study of variations of shows that is increasing on ] and hence .it follows that the function is increasing on .since , the inequality follows . define we have \\ & \le & e[\delta_j ] \\ & = & e\left[\delta_j 1_{\delta_j \le \epsilon}\right ] + e\left[\delta_j 1_{\delta_j > \epsilon}\right ] \\ & \le & \epsilon + p(\delta_j > \epsilon).\end{aligned}\ ] ] let be the event and its complement .we can write using lemma a.1 ( ii ) and the chebyshev inequality , we get and hence , to have this approximation error smaller than , it suffices to take if we can take and lemma [ approxj ] is proved . the modified bessel function of the second kind is known to converge to 0 as .moreover we have and see lemma a.2 . for , define so that . also , for let so that .we have where .let us write again for the gaver - stehfest approximation of the inverse of laplace transform of .the corresponding approximation error due to truncating is given by by ( [ fondineq ] ) , we can write where .now , and so for . the coefficients can be loosely bounded using the following upper bounds for binomial coefficients for , we have so that hence , if we impose that , then it is enough to choose such that * proof .* let us recall some well - known facts about modified bessel functions of the second kind . see e.g. abramowitz and stegun 1964 .note first that by ( [ propk1 ] ) , the inequality stated in the lemma is equivalent to from ( [ propk1 ] ) , ( [ propk2 ] ) and ( [ propk3 ] ) , it follows that let us write .suppose now that there exists such that .this would imply that there exists such that and .now , using ( [ propk5 ] ) and ( [ propk6 ] ) it follows that hence , satisfies it follows that since and for all , we must have .but if , then the previous inequality implies which is impossible by ( [ propk7 ] ) .
In this paper, we describe two computational methods for calculating the cumulative distribution function and the upper quantiles of the maximal difference between a Brownian bridge and its concave majorant. The first method has two variants, both based on a Monte Carlo approach, whereas the second uses the Gaver-Stehfest (GS) algorithm for numerical inversion of the Laplace transform. Although the former method is straightforward to implement, it is greatly outperformed by the GS algorithm, which provides a very accurate approximation of the cumulative distribution function as well as of its upper quantiles. Our numerical work has a direct application in statistics: the maximal difference between a Brownian bridge and its concave majorant arises in connection with a nonparametric test for monotonicity of a density or regression curve on a given interval. Our results can be used to construct very accurate rejection regions for this test at a given asymptotic level.
the detection of periodic signals in astronomical data has been usually addressed by classical fourier - based or epoch folding methods .these methods have different problems when dealing with non - sinusoidal periodic signals or with very low signal - to - noise ratios .when the analysed data set contains several periodic signals , the behavior of classical period determination methods highly depends on intrinsic signal characteristics ( see , for example , ; ; ; ; , ; ; ) .the reader is referred to the introduction of ( hereafter paper i ) for a general discussion . to avoid this problem, we presented there , as a preliminary step , a wavelet - based approach that only works with evenly time sampled data . in astronomy, however , this is not a usual situation , since data are mostly acquired on irregular intervals of time .in such a case there are two possibilities : resample the data into a new evenly sampled data set , or use a method able to deal with the original unevenly sampled data set .in the first case we are forced to modify the original data , which necessarily implies a loss of information . moreover ,this is not always possible if the temporal gaps are larger than some of the periods present in the data . in order to avoid these problems , a technique capable to deal with unevenly sampled datais needed .the present paper is a natural extension of paper i that allows to work on unevenly time sampled data .we show how the methodology of multiresolution decomposition ( similar to the wavelet decomposition philosophy ) is very well suited to this problem , since it is completely oriented towards decomposing functions into several frequential characteristics . as in paper i, the main objective is to isolate every signal present in our data and to analyse them separately , avoiding their mutual influences . in section [ multiresolution ]we outline some concepts in multiresolution analysis and their similarities with wavelet theory that are relevant to the stated problem . in section [ period ]we propose an algorithm to detect each of the periodic signals present in a data set by combining multiresolution analysis decomposition with classical period determination methods . in sections[ simulated ] and [ results ] we present some examples of synthetic data we used to test the algorithm and the results we obtained .we summarise our conclusions in section [ conclusions ] .multiresolution decomposition introduces the concept of the presence of details between successive levels of resolution . 
many wavelet decomposition algorithmsare based on multiresolution analysis schemes ( ; ; ; ; ; ; ) , and some astronomical applications using wavelets for timing analysis have been reported ( ; ; ; ) .in fact , all these methods use the same philosophy and the obtained results can be interpreted in the same way .however , wavelet theory presents some constraints on mathematical functions .these constraints are not respected in all multiresolution decomposition schemes .therefore , when we are using a certain multiresolution decomposition algorithm that fulfills the wavelet constraints , we are obtaining a wavelet decomposition .in contrast , when a given multiresolution decomposition algorithm violates these wavelet constraints , we are obtaining a result similar to , but which is not , a wavelet decomposition .as discussed in paper i , we note that there are wavelet approaches that are based on approximations of a continuous wavelet transform and on the subsequent study of wavelet space coefficients ( see , e.g. , ) , which are able to deal with unevenly sampled data sets .however , these algorithms present a _non - direct _ inverse wavelet transform , in the sense that the search for periodicities is based on the fit between the wavelet base function profile and the signal one , and therefore on the values of the wavelet transform coefficients , which highly depend on the wavelet base used . moreover , the period analysis has to be performed on the wavelet coefficients space but , since it is usually decimated , accurate period detection is a difficult task .the multiresolution decomposition scheme we use in this work performs the decomposition on the temporal space , which allows to find accurate values for periodicities . in order to obtain a multiresolution decomposition for signals , an algorithm to decompose the signal into frequency planescan be defined as follows .given a signal we construct the sequence of approximations : performing successive convolutions with gaussian filters .it is important to note the difference between this sequence of convolutions and the one used in paper i. in the latter we were dealing with a discrete convolution mask , and hence forced to work with evenly spaced data .in contrast , the continuous nature of the convolution functions used in the present paper , allows us to work with unevenly sampled data . similarly to the wavelet planes ,the multiresolution frequency planes are computed as the differences between two consecutive approximations and .letting , in which , we can write the reconstruction formula : in this representation , the signals are versions of the original signal at increasing scales ( decreasing resolution levels ) , are the multiresolution frequency planes and is a residual signal ( in fact , but we explicitly substitute by to clearly express the concept of ) . 
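A minimal sketch of this decomposition for an unevenly sampled series is given below. It assumes that each approximation is obtained by a kernel-weighted (Nadaraya-Watson type) smoothing of the previous one with a continuous Gaussian of the appropriate width; since the exact normalization of the filters is not spelled out in this extract, this is one plausible reading rather than the authors' implementation. The telescoping sum of the planes plus the residual reproduces the original signal by construction.

```python
import numpy as np

def gaussian_smooth(t, x, sigma):
    """Kernel-weighted smoothing of an unevenly sampled series at its own sample times."""
    d = t[:, None] - t[None, :]
    w = np.exp(-0.5 * (d / sigma) ** 2)        # in practice one would truncate the kernel
    return (w @ x) / w.sum(axis=1)

def multiresolution_planes(t, x, sigma0, n_planes):
    """Frequency planes w_1..w_n and the residual c_n for a dyadic Gaussian scheme."""
    c_prev = np.asarray(x, dtype=float)
    planes = []
    for j in range(1, n_planes + 1):
        c_j = gaussian_smooth(t, c_prev, sigma0 * 2 ** (j - 1))
        planes.append(c_prev - c_j)            # detail lost between two resolutions
        c_prev = c_j
    return planes, c_prev                      # x == sum(planes) + residual
```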
in our case , we are using a dyadic decomposition scheme .this means that the standard deviation of the gaussian function associated with the filter is .thus , similarly to the wavelet approach , the original signal has double resolution compared to , and so on .all these signals have the same number of data points as the original signal .since depends on , a value for has to be carefully chosen for every data set .it has to be fixed considering a likely minimum value for the time - duration of the features and the characteristic time sampling of the data set , in order to include a significant number of points on which to perform the convolutions .too small values do not accurately describe feature profiles , and suffer from poor or noisy data .in contrast , too large values reduce the noise effect by integrating a lot of data points , but may ignore interesting high frequency features .hence , a first analysis of data has to be performed in order to obtain a useful initial value .we have used the same notation as in the wavelet decomposition described in paper i because , as explained above , the idea of these multiresolution planes is similar to the wavelet ones .we note that this particular decomposition scheme that uses a gaussian kernel can also be interpreted as a scale - space filtering ( , , ) or as a particular case of more general image diffusion approaches ( , ) .smoothed data sets can be interpreted as diffused scale - space images , and the difference between them as the details at different scales .we propose to apply this multiresolution analysis algorithm to solve our initially stated problem : to isolate each of the periodic signals contained in a set of unevenly sampled data and study them separately . in order to do so ,we proceed as in paper i , and the period detection algorithm we propose is as follows : 1 .choose values for and ; decompose the original signal into its multiresolution frequency planes .2 . detect periods in each of the obtained frequency planes .phase dispersion minimization ( pdm ) and clean methods are used to detect periods in the original data and in every multiresolution frequency plane . in paperi we described several undesirable effects of pdm and clean methods on data with superimposed signals , as well as the advantages of using multiresolution - based methods over the classical ones . hereafter , and for notational convenience , the multiresolution - based pdm and clean methods will be called and mrclean , respectively .in order to check the benefit of applying mrpdm versus pdm , or mrclean versus clean , we proceeded similarly as in paper i by generating and analysing several sets of simulated data containing two superimposed periodic signals .each data set is composed of a high - amplitude primary sinusoidal function and a secondary low - amplitude gaussian one . 
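Before describing these simulated data in more detail, the period-search step 2 of the algorithm above can be illustrated by a bare-bones phase dispersion minimization statistic, to be evaluated on each multiresolution plane over a grid of trial periods. This is a generic textbook PDM (pooled within-bin variance divided by the total variance), not the specific PDM or CLEAN codes used by the authors; the number of bins and the trial-period grid are left as user choices.

```python
import numpy as np

def pdm_theta(t, x, period, n_bins=10):
    """PDM statistic: pooled within-bin variance over total variance (minima flag periods)."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    ss, dof = 0.0, 0
    for b in range(n_bins):
        xb = x[bins == b]
        if xb.size > 1:
            ss += np.sum((xb - xb.mean()) ** 2)
            dof += xb.size - 1
    return (ss / dof) / np.var(x, ddof=1)

def pdm_scan(t, plane, trial_periods, n_bins=10):
    """Evaluate the statistic on one multiresolution plane over a grid of trial periods."""
    return np.array([pdm_theta(t, plane, p, n_bins) for p in trial_periods])
```

Candidate periods are the trial values where the statistic drops well below 1, and, as argued later in the text, a detection becomes more convincing when the same minimum appears in more than one multiresolution plane.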
finally , we added a white - gaussian noise to this combination of signals .we increased the value of the noise standard deviation , , up to the value where detection of periodic signals became statistically insignificant in both the classical and the multiresolution - based methods .the primary signal is intended to simulate variable sources with a pure sinusoidal intensity profile ( like precession of accretion discs ) , and the secondary , burst - like events ( like pulses or eclipses ) superimposed to it .we note that in paper i we also studied the superposition of two sinusoidal functions , the so called sine+sine case .however , here we have directly focused on the more realistic situation of the so called sine+gaussian case , where the difference between the classical and the multiresolution - based methods is more critical ( see paper i ) .the characteristics of the signals are : 1 .each signal is generated as an unevenly spaced data set .its time sampling has been taken from observations of the x - ray binary lmc x-1 by the all sky monitor onboard the rxte satellite ( ) .we have used an observation period which has 8270 measurements during 679 days . 2 .the high - amplitude sinusoidal function has an amplitude equal to 1 and a period of 108.5 days .the amplitudes of the low - amplitude periodic gaussian function are 0.1 and 0.5 .the periods used for the secondary function are 13.13 and 23.11 days . 5 .we have used two values for the full width at half - maximum ( fwhm ) of the gaussian signal , corresponding to 2 and 6 days , respectively . in the first four columns of table [ gauss_table ] we present the parameters used to generate each simulated data set .we recall that the simultaneous use of two independent methods , such as clean and pdm , is usually applied to discriminate false period detections from the true ones .a similar procedure can be used with each one of the multiresolution planes in the mrpdm and mrclean methods .therefore , when comparing the behavior of these different methods , we have to compare the usual pdm - clean method combination for period estimation prior to the new mrpdm - mrclean combination .taking into account the characteristics of the simulated data we have chosen a day . as in paperi , the primary period ( 108.5 days ) is always detected by all methods , and does not appear in table [ gauss_table ] . in this table , the four last columns show the detected low - amplitude periods for each data set using pdm , clean , mrpdm and mrclean methods , respectively .a dash is shown when a period is not detected , and a question mark when the detection is difficult or doubtful . when a period is found in the multiresolution - based methods ,we also show in parentheses the mutiresolution planes where it is detected . [ cols="^,^,^,<,<,<,<,<",options="header " , ] we must note that the use of two different fwhm for the gaussian , combined with two different periods ( 13.13 and 23.11 days ) , gives 4 different profiles .hence , the phase duration of the burst - like event ranges from very low to relatively high values in the following order : fwhm=2 and period=23.11 , fwhm=2 and period=13.13 , fwhm=6 and period=23.11 , and finally fwhm=6 and period=13.13 . in view of the resultsdisplayed in table [ gauss_table ] we can make the following comments : 1 . in all methods , with high noise - to - signal ratios the detected periods are slightly different from the simulated ones .2 . 
there is a better performance of clean over pdm .we must note that when the fwhm is only 2 days , pdm never detects the secondary period . only with fwhm=6 days and a relatively high amplitude ( 0.5 ) , can pdm detect the low - amplitude periodic signals .3 . as the noise increases ,the detection starts to fail in the lower multiresolution planes ( higher frequencies ) , and only the higher ones ( lower frequencies ) are noise - free enough to allow period detection .4 . [ iv ] mrpdm and mrclean perform better or similar than pdm and clean methods ( see exception below ) .when clean marginally detects the secondary period , mrpdm and most of times mrclean have no problems to detect it , and they work properly even with higher noise . in the mrpdm case ,the results are always better than with pdm .5 . in all cases with fwhm=2 ,the mrpdm performance is much better than mrclean , because the signal is clearly non - sinusoidal . in the fwhm=6 cases , mrpdm is only slightly better than mrclean , since the signals are closer to a sinusoidal profile .6 . for a given amplitude of the gaussian signal , the maximum noise - to - signal ratio achieved with mrpdm and mrclean increases with the phase duration of the fwhm .the only exception to [ iv ] is for the most extreme of the simulated cases , i.e. , the one with the lower amplitude , lower fwhm value and longer period .however , we note that the period detected by clean is slightly different than the simulated one .all these results are very similar to those shown in table 2 of paper i. nevertheless the maximum noise - to - signal ratios achieved in the present cases are around 2.5 times higher .this can be explained because , although the time span of the data sets used here is around 1.5 times smaller , the number of points per unit time is around 12 times higher than in paper i. finally , and for illustrative purposes , we show in fig .[ fig : simulated ] the simulated data set generated with the following : 13.13-day period , fwhm=6.0 days and amplitude=0.1 with .the outputs of pdm and clean , are also shown .none of these methods is able to detect the 13.13-day period .we show in fig .[ fig : mrpdm ] the outputs from mrpdm .the simulated period is detected , with its corresponding subharmonics , in the multiresolution planes and .the mrclean outputs , shown in fig . 
[ fig : mrclean ] , reveal a marginal detection of the 13.13-day period in multiresolution plane .we note that we would not consider this detection as a true one when taken alone .however , since the same period is clearly detected in two multiresolution planes of mrpdm , we can establish the existence of this period in the analysed data set .in this paper we have presented a multiresolution - based method for period determination able to deal with unevenly sampled data .this constitutes a significant improvement with respect to the wavelet - based method presented in paper i , which is unable to deal with unevenly sampled data .the overall performance of the present method is similar to the wavelet - based one , in the sense that it allows us to detect superimposed periodic signals with lower signal - to - noise ratios than in classical methods .we stress that one advantage of the present method over classical methods is the simultaneous detection of a period in more than one multiresolution plane , allowing to improve the confidence of a given detection .moreover , since here we are not forced to lose or modify the information when averaging or interpolating the original data , we can reach higher noise - to - signal ratios than in the wavelet - based method described in paper i. we note that the multiresolution decomposition scheme that we have used can be interpreted as a particular case of scale - space filtering . in order to improve isolation of periodic features ,more general approaches could be used to perform this decomposition . in this context, anisotropic diffusion schemes proposed by could be useful if properly tuned .we thank useful comments and suggestions from an anonymous referee . we acknowledge partial support by dgi of the ministeriode ciencia y tecnologa ( spain ) under grant aya2001 - 3092 , as well as partial support by the european regional development fund ( erdf / feder ) .this research has made use of facilities of cesca and cepba , coordinated by c4 ( centre de computaci i comunicacions de catalunya ) .xo is a researcher of the programme _ ramn y cajal _ funded by the spanish ministery of science and technology andcentre de visi per computador .mr acknowledges support by a marie curie fellowship of the european community programme improving human potential under contract number hpmf - ct-2002 - 02053 .mp is a researcher of the programme _ ramn y cajal _ funded by the spanish ministery of science and technology and universitat de girona .rib m. , peracaula m. , paredes j.m . ,nez j. , otazu x. , 2001 , in gimnez a. , reglero v. , winkler c. , eds , proc .fourth integral workshop , exploring the gamma - ray universe .esa publications division , noordwijk , p. 333
In this paper we present a multiresolution-based method for period determination that is able to deal with unevenly sampled data. The method allows us to detect superimposed periodic signals at lower signal-to-noise ratios than classical methods. It generalizes the wavelet-based method for period detection presented previously by the authors, which is unable to deal with unevenly sampled data. This new method is a useful tool for the analysis of real data and, in particular, of astronomical data. Keywords: methods: data analysis -- methods: numerical -- stars: variables: other -- X-rays: binaries.
quantum correlation is one of the most striking features in quantum theory .entanglement is by far the most famous and best studied kind of quantum correlation , and leads to powerful applications .another kind of quantum correlation , called quantum discord , captures more correlations than entanglement in the sense that separable states may also possess nonzero quantum discord .quantum discord has been attracted much attention in recent years , due to its theoretical interest to quantum theory , and also due to its potential applications . up to now, the studies on quantum correlations , like entanglement and quantum discord , are mainly focused on the bipartite case .quantifying the multipartite correlations is a fundamental and very intractable question .the direct idea is that we can properly generalize the quantifiers of bipartite correlations to the case of multipartite correlations .recently , generalizing the quantum discord of bipartite states to multipartite states has been discussed in different ways . as an important measure of bipartite correlations , the geometric quantum discord , proposed in , has been extensively studied . in this paper, we generalize the geometric quantum discord to multipartite states .this paper is organized as follows . in sec.2, we review the definition of geometric quantum discord for bipartite states . in sec.3 , we give the definition of geometric global quantum discord ( ggqd ) for multipartite states , and give two equivalent expressions for ggqd . in sec.4 , we provide a lower bound for ggqd by using the high order singular value decomposition of tensors . in sec.5, we obtain the analytical expressions of ggqd for three classes of states .sec.6 is a brief summary .the original quantum discord was defined for bipartite systems over all projective measurements performing only on one subsystem .that is , the quantum discord ( with respect to ) of a bipartite state of the composite system ( we suppose ) was defined as in eq.(1 ) , is von neumann entropy , , is a projective measurement performing on , is the abbreviation of without any confusion , here is the identity operator of system .note that =tr_{b}[\pi _ { a}(\rho _ { ab})]$ ] , that is , taking partial trace and performing local projective measurement can exchange the ordering .it can be proved that where , , is any orthonormal basis of system , are density operators of system , , .the original definition of quantum discord in eq.(1 ) is hard to calculate , even for 2-qubit case , by far we only know a small class of states which allow analytical expressions .dakic , vedral , and brukner proposed the geometric quantum discord , as :d_{a}(\sigma _ { ab})=0\}.\end{aligned}\ ] ] obviously , for many cases is more easy to calculate than since avoided the complicated entropy function .for instance , allows analytical expressions for all 2-qubit states , and also for all states .in , the authors generalized the original definition of quantum discord to multipartite states , called global quantum discord ( gqd ) .consider an -partite system , each subsystem corresponds hilbert space with dim ( we suppose ) .the gqd of an -partite state is defined as ( here we use an equivalent expression for gqd ) ,\end{aligned}\ ] ] where , is a locally projective measurement on .similar to eqs.(2 , 3 ) , we have lemma 1 below . _ lemma 1 ._ where , is any orthonormal basis of , , , . 
__ eq.(7 ) is proved in .eq.(8 ) can be proved as follows .noting that , then by eq.(3 ) and induction , eq.(8 ) can be proved .with lemma 1 , in the same spirit of defining geometric quantum discord for bipartite states in eq.(4 ) , we now define the ggqd below . _ definition 1 . _the ggqd of state is defined as ^{2}:d(\sigma _{ a_{1}a_{2} ... a_{n}})=0\}.\end{aligned}\ ] ] with this definition , it is obvious that in , two equivalent expressions for eq.(4 ) were given ( theorem 1 and theorem 2 in ) , and they are very useful for simplifying the calculation of eq.(4 ) and yielding lower bound of eq.(4 ) .inspired by this observation , we now derive the corresponding versions of these two equivalent expressions for ggqd .these are theorem 1 and theorem 2 below . _ theorem 1 ._ is defined as in eq.(9 ) , then ^{2}\ } \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\nonumber \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ = tr[\rho _ { a_{1}a_{2} ... a_{n}}]^{2}-\max_{\pi } \{tr[\pi ( \rho _ { a_{1}a_{2} ... a_{n}})]^{2}\},\end{aligned}\ ] ] where , is any locally projective measurement performing on _ proof . _ in eq.(9 ) , for any satisfying , can be expressed in the form latexmath:[\ ] ] and the minimum can be achieved by taking , .we then proved eq.(41 ) .we make some remarks . for states in eq.(29 ) and states in eq.(36 ) , the gqd can also be analytically obtained , we then can compare the gqd and ggqd for these two classes of states . for states in eq.(36 ) and states in eq.(40 ) , when , the ggqd in eq.(37 ) and eq.(41 ) recover the corresponding results in .we also remark that , from eq.(37 ) , let the state eq.(36 ) undergo a locally phase channel performing on any qubit , similar discussions as in show that ggqd may also manifest the phenomena of sudden transition and freeze .in summary , we generalized the geometric quantum discord of bipartite states to multipartite states , we call it geometric global quantum discord ( ggqd ) . we gave different characterizations of ggqd which provided new insights for calculating ggqd .as demonstrations , we provided a lower bound for ggqd by using the high order singular value decomposition of tensors , and obtained the analytical expressions of ggqd for three classes of multipartite states .we also pointed out that ggqd can also manifest the phenomena of sudden transition and freeze .understanding and quantifying the multipartite correlations is a very challenging question , we hope that the ggqd proposed in this paper may provide a useful attempt for this issue .this work was supported by the fundamental research funds for the central universities of china ( grant no.2010scu23002 ) .the author thanks qing hou for helpful discussions .100 s. luo and s. fu , phys . rev .a * 82 * , 034302 ( 2010 ) .s. rana and p. parashar , phys .a * 85 * , 024102 ( 2012 ) .a. s. m. hassan , b. lari and p. s. joag , phys .a * 85 * , 024302 ( 2012 ) .f. t. hioe and j. h. eberly , phys .lett . * 47 * , 838 ( 1981 ) ; j. schlienz and g. mahler , phys .a * 52 * , 4396 ( 1995 ) ; r. a. bertlmann and p. krammer , j. phys .a * 41 * , 235303 ( 2008 ) .l. d. lathauwer , b.d .moor , and j. vandewalle , siam j. matrix anal .* 21 * , 1253 ( 2000 ) .j. xu , phys .a * 376 * , 320 ( 2012 ) .w. song , l. b. yu , d. c. li , p. dong , m. yang and z. l. cao , arxiv:1203.3356 .
Geometric quantum discord, proposed by Dakic, Vedral, and Brukner [Phys. Rev. Lett. 105 (2010) 190502], is an important measure of bipartite correlations. In this paper we generalize it to multipartite states; we call the generalized version the geometric global quantum discord (GGQD). We characterize GGQD in several equivalent ways, derive a lower bound for it, and identify some special classes of states for which GGQD admits an analytical expression.
evolutionary game theory is used on different levels of biological systems , ranging from the genetic level to ecological systems . the language of game theory allows to address basic questions of ecology , related to the emergence of cooperation and biodiversity , as well as the formation of coalitions with applications to social systems .darwinian dynamics , in particular frequency - dependent selection , can be formulated in terms of game - theoretic arguments .the formation of dynamical patterns is considered as one of the most important promoters of biodiversity .here we consider games of competition , where the competition is realized as predation among species , where each species preys on others in a cyclic way .a subclass of these -games are cyclic games , that is with being the famous rock - paper - scissors game .an extensive overview on cyclic games is given in .the -game has been studied in various extensions ( spatial , reproduction and deletion , swapping or diffusion , mutation ) .one of the first studies of a -game without spatial assignment , but in a deterministic and stochastic realization revealed that fluctuations due to a finite number of agents can drastically alter the mean - field predictions , including an estimate of the extinction probabilities at a given time .this model was extended to include a spatial grid in , where the role of stochastic fluctuations and spatial diffusion was analyzed both numerically and analytically .the influence of species mobility on species diversity was studied in , pattern formation close to a bifurcation point was the topic of , see also and the impact of asymmetric interactions was considered in .an extension to four species , first without spatial assignment , shows interesting new features as compared to : already in the deterministic limit the trajectories show a variety of possible orbits , and from a certain conserved quantity the late - time behavior can be extrapolated .the four species can form alliance pairs similarly to the game of bridge .under stochastic evolution various extinction scenarios and the competition of the surviving set of species can be analyzed .domains and their separating interfaces were studied in . cyclic games on a spatial grid were the topic in .a phase transition as a function of the concentration of vacant sites is identified between a phase of four coexisting species and a phase with two neutral species that protect each other and extend their domain over the grid . for an extension of this model to long - range selectionsee . in this paperwe focus on the -game , including both spiral formation inside domains and domain formation .it is a special case of -games , which were considered for and by and more recently by .the authors of were the first to notice that for certain combinations of n and r one observes the coexistence of both spiral formation and domain formation .however , it should be noticed that our set of reactions , even if we specialize to the -game , is similar , but not identical with the versions , considered in or in - .the seemingly minor difference refers to the implementation of an upper threshold to the occupation number of single sites ( set to 1 or a finite fixed number ) , while we use a bosonic " version .we introduce a dynamical threshold , realized via deletion reactions , so that we need not explicitly restrict the occupation number per site . due to this difference , the bifurcation structure of the mean - field equations is changed . 
the reason why we are interested in the particular combination of and is primarily motivated by two theoretical aspects rather than by concrete applications . as to the first aspect ,this game is one of the simplest examples of games within games " in the sense that the domains effectively play a -game as transient dynamics on a coarse scale ( the scale of the domain diameter ) , while the actors inside the domains play a -game on the grid scale .finally , one of the domains gets extinct along with all its actors . as such, this game provides a simple , yet non - trivial example for a mechanism that may be relevant for evolution : in our case , due to the spatial segregation of species , the structural complexity of the system increases in the form of patterns of who is chasing whom , appearing as long - living transients , along with a seemingly change of the rules of the game that is played between the competing domains on the coarse scale , while the rules , which individuals use on the elementary grid sites , are not changed at all .as outlined by goldenfeld and woese , it is typical for processes in evolution , in particular in ecology , that the governing rules are themselves changed " , as the system evolves in time and the rules depend on the state . in our exampleit is spatial segregation , which allows for a change of rules from a coarse perspective , as we shall see .as to the second aspect , an interesting feature of such an arrangement is the multitude of time and spatial scales that are dynamically generated . concretely in the -game the largest of the reaction / diffusion rates sets the basic time unit . when the species segregate and form domains , the next scale is generated : it is the time it takes the two domains to form until both cover the two - dimensional grid or the one - dimensional chain .the domains are not static , but play the -game that has a winner in the end .so the extinction time of one of the domains sets the third scale .a single domain then survives , including the moving spirals from the remaining -game inside the domain .the transients can last very long , depending on the interaction rates and the system size . in the very end , however , in a stochastic realization as well as due to the finite accuracy in the numerical solutions even in the mean - field description , only one out of the three species will survive , and the extinction of the other two species sets the fourth scale . along with these events , spatial scalesemerge , ranging from the basic lattice constant to the radii of spirals and the extension of the domains .one of the challenges is to explore which of the observed features in the gillespie simulations can be predicted analytically .we shall study the predictions on the mean - field level , which is rather conclusive in our ultralocal implementation of reactions and reproduces the results of the gillespie simulations quite well , since fluctuations turn out to play a minor role for pattern formation .the deterministic equations are derived as the lowest order of a van kampen expansion .the eigenvalues of the jacobian are conclusive for the number of surviving species in a stable state , the composition of the domains , and transient behavior , which is observed in the gillespie simulations .the mean - field equations , including the diffusion term , will be integrated numerically and compared to the results of the gillespie simulations .the paper is organized as follows . 
in section [ sec_reactions ]we present the model in terms of basic reactions and the corresponding master equation . for generic games we summarize in section [ sec_vankampen ] the derivation of the mean - field equations from a van kampen expansion , followed by a stability analysis via the jacobian with and without spatial dependence for the specific game , and a derivation of the numerical solutions of the mean - field equations in section [ sec_jacobian ] . in section [ sec_numerical ]we present our results from the gillespie simulations in comparison to the mean - field results .section [ sec_conclusions ] summarizes our conclusions and gives an outlook to further challenges related to this class of games . for comparison, the supplementary material contains a detailed stability analysis for the -game with spiral formation and the -game with domain formation , as well as the numerical solutions of the mean - field equations and the corresponding gillespie simulations .we start with the simplest set of reactions that represent predation between individuals of different species , followed by reproduction , deletion and finally diffusion represents an individual of species at lattice site , while the total number of individuals of species at site will be denoted with .( we use small characters for convenience , although the meaning of is not a density , but the actual occupation number of a certain species at a certain site . ) in view of applications to ecological systems , each lattice site stands for a patch housing a subpopulation of a metapopulation , where the patch is not further spatially resolved .( [ pred ] ) represents the predation of species on species with rate , where the parameter does not stand for the physical volume , but parameterizes the distance from the deterministic limit in the following way : according to our set of reactions , larger values of lead to higher occupation numbers of species at sites , since predation and deletion events are rescaled with a factor , and therefore to a larger total rate .the fluctuations in occupation numbers , realized via the gillespie algorithm , are independent of or the occupation numbers of sites , since only relative rates enter the probabilities for a certain reaction to happen .therefore the size of the fluctuations relative to the absolute occupation numbers or to the overall gets reduced for large v , that is , in the deterministic limit .predation is schematically described in figure [ ( 6,3 ) ] .( [ repr ] ) represents reproduction events with rate , and eq .( [ anih ] ) stands for death processes of species with rate .death processes are needed to compensate for the reproduction events , since we do not impose any restriction on the number of individuals that can occupy lattice sites . herewe should remark why we implement death processes in the form of eq .[ anih ] rather than simpler as .the latter choice could be absorbed in a term in the mean - field equation ( [ eq : pde ] ) below with uniform couplings and .this choice would not lead to a stable coexistence - fixed point and therefore not to the desired feature of games within games - game we would have 40 fixed points , the sign of the eigenvalues would then only depend on the sign of the parameter . at fixed points collide and exchange stability through a multiple transcritical bifurcation . 
for ( the only case of interest ) , the system has no stable fixed points , and the numerical integration of the differential equations diverges .( similarly for the ( 3,1)-game , for , the trivial fixed point with zero species is always an unstable node , while the coexistence fixed point is always a saddle . ) ] .the species diffuse within a two - dimensional lattice , which we reduce to one dimension for simplicity if we analyze the behavior in more detail .we assume that there can be more than one individual of one or more species at each lattice site .individuals perform a random walk on the lattice with rate , where is the diffusion constant , the lattice constant and the dimension of the grid .diffusion is described by eq .( [ diff ] ) , where represents the site from which an individual hops , and is one of the neighboring sites to which it hops .it should be noticed that diffusion is the only place , which leads to a spatial dependence of the results , since apart from diffusion , species interact on - site , that is , within their patch . in summary ,the main differences to other related work such as references are the ultralocal implementation of prey and predation , no swapping , no mutations as considered in , and a bosonic version with a dynamically ensured finite occupation number of sites .even if qualitatively similar patterns like spirals or domains are generated in all these versions , the bifurcation diagram , that is , the stability properties and the mode of transition from one to another regime depend on the specific implementation .we can now write a master equation for the probability of finding particles at time in the system for reaction and diffusion processes , where stands for and is the number of species , the number of sites . \right.\nonumber \\ & + & \left .\underset{\alpha}{\sum } \frac{p_{\alpha}}{v } \left [ \left ( n_{\alpha , i}+1 \right ) n_{\alpha , i } p\left ( ... , n_{\alpha , i}+1, ... ;t\right ) - n_{\alpha , i}\left ( n_{\alpha , i}-1 \right ) p(\{n\};t ) \right ] \right . \nonumber \\ & + & \left .\underset{\alpha}{\sum } r_{\alpha } \left [ ( n_{\alpha , i}-1)p(n_{\alpha , i}-1, ...;t ) - n_{\alpha , i}p(\{n\};t ) \right ] \right\}\end{aligned}\ ] ] \end{aligned}\ ] ] with for all , and as uniform ( with respect to the grid ) random initial conditions we assume a poissonian distribution on each site where is the mean initial number of individuals of species per site .the master equation is continuous in time and discrete in space .the diffusion term is included as a random walk .next one takes the continuum limit in space , in which the random walk part leads to the usual diffusion term in the partial differential equation ( pde ) for the concentrations .the mean - field equations can then be derived by calculating the equations of motion for the first moments from the master equation , where the average is defined as with being a solution of the master equation , and factorizing higher moments in terms of first - order moments . alternatively , we insert the ansatz for the van kampen expansion according to in the reaction part . 
toleading order in we obtain the deterministic pde for the concentrations of the reaction part .combined with the diffusion part this leads to the full pde that is given as eq .[ eq : pde ] in the next section .while this leading order then corresponds to the mean - field level , the next - to - leading order leads a fokker - planck equation with associated langevin equation , from which one can determine the power spectrum of fluctuations . in our realization , the visible patterns are not fluctuation - induced , differently from noise - induced fluctuations as considered in . therefore our power spectrum of fluctuations is buried under the dominating spectrum that corresponds to patterns from the mean - field level .therefore we do not further pursue the van kampen expansion here .we perform a linear stability analysis of the mean field equations by finding the fixed points of the system of partial differential equations with the concentration of species , by setting .we will focus on the system with homogeneous parameters , if species preys on and 0 otherwise , , and , and consider the special case of the ( 6,3)-game .after finding the fixed points , we look at the eigenvalues of the jacobian of the system ( [ eq : pde ] ) to determine the stability of the fixed points .we then extend our analysis to a spatial component by analyzing a linearized system in fourier space , with jacobian where is the fourier transform of the laplacian and is the diffusion matrix evaluated at a given fixed point . in our case , the diffusion matrix is a diagonal matrix .this leads to a dependence of the stability of the fixed points on diffusion . in the following we focus on the special case to be considered .* stability analysis of the ( 6,3)-game . *the ( 6,3)-game is given by the system of mean field equations : in total there are 64 different fixed points to , of which some have the same set of eigenvalues and differ only by a permutation of the fixed - point coordinates of the eigenvalues , so that we can sort all fixed points in 12 groups - : for example , the fixed points and are in the same group .we will refer to all fixed points by the number of the group they belong to , that is to to , instead of to .+ the zero - fixed point , where all components are equal to zero , with all eigenvalues equal to for , and equal to for , is unstable for a system without spatial assignment , while it can become stable for a spatial system if , as in the cases of the ( 3,1 ) and ( 3,2 ) games , which are discussed in detail in the supplementary material . in the coexistence - fixed point , all components are equal to .it is stable for , three of the eigenvalues are always negative , the first one being and an the second and third one equal to , two are complex conjugates , and the last one is real . at , becomes a saddle , three of the six eigenvalues change sign , complex conjugates change sign of their real part , so a hopf bifurcation occurs , and the direction corresponding to the last eigenvalue becomes unstable .+ other fixed points include the survival of one species ( ) , two species ( for both and ) , three ( for and ) , four ( for , and ) , and five species ( for ) .all fixed points to are always saddles in the case of .+ for all eigenvalues get a -term , which can extend the stability regime in the parameter space , as long as , and lead to the coexistence of stable fixed points , which can not be found for . in view of pattern formation we shall distinguish three regimes . 
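As a quick numerical check of the stability analysis just described, the following sketch builds the reaction part of the mean-field equations for the (6,3)-game, evaluates the Jacobian at the symmetric coexistence fixed point, and scans its leading eigenvalue as the death rate is lowered. The functional form (reproduction minus pairwise death minus cyclic predation) is an assumption reconstructed from the reaction scheme, since the explicit equations are garbled in this extract, and the parameter values are placeholders; with these choices the symmetric fixed point is rho* = r/(p + 3*lambda).

```python
import numpy as np

N, R = 6, 3
A = np.zeros((N, N))                      # A[a, b] = 1 if species a preys on species b
for a in range(N):
    for k in range(1, R + 1):
        A[a, (a + k) % N] = 1.0

def reaction_rhs(rho, lam=1.0, rep=1.0, p=1.0):
    """Assumed reaction part: reproduction, pairwise death, cyclic predation."""
    return rep * rho - p * rho**2 - lam * rho * (A.T @ rho)

def jacobian(rho, eps=1e-7, **kw):
    """Jacobian of the reaction part by central finite differences."""
    J = np.zeros((N, N))
    for j in range(N):
        d = np.zeros(N); d[j] = eps
        J[:, j] = (reaction_rhs(rho + d, **kw) - reaction_rhs(rho - d, **kw)) / (2 * eps)
    return J

for p in (2.0, 1.5, 1.0, 0.5):            # scan the bifurcation parameter
    rho_star = np.full(N, 1.0 / (p + R))  # symmetric coexistence fixed point (lam = rep = 1)
    lead = np.linalg.eigvals(jacobian(rho_star, p=p)).real.max()
    print(f"p = {p}: leading Re(eigenvalue) = {lead:+.4f}")
```

Under these assumptions the leading real part crosses zero at p = lambda, consistent with the Hopf bifurcation of the six-species coexistence point described above; the fixed points with three surviving species can be examined in the same way by inserting the corresponding fixed-point vectors.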
before we go into detail ,let us first give an overview of the sequence of events , if we vary the bifurcation parameters and as .these are events , which we see both in the gillespie simulations and the numerical integration of the mean - field equations in space , described as a finite grid upon integration .* : the first regime with smaller than its value at the first hopf bifurcation at , where the 6-species coexistence fixed point becomes unstable .as long as this fixed point is stable , we see no patterns , as the system converges at each site of the grid to the 6-species fixed point without dominance of any species , so that the uniform color is gray .* : the second regime with chosen between the first and second hopf bifurcations , where the second one happens at for the fixed points .when is approached from below , that is from , two fixed points , belonging to the -group , become stable through a transcritical bifurcation until , where they become unstable through the second hopf bifurcation . each of the two predicts the survival of three species , the ones , which are found inside the domains .each of these fixed points is , of course , a single - site fixed point , so in principle a subset of the nodes of the grid can individually approach one of the two fixed points , while the complementary set of the nodes would approach the other fixed point . however , as a transient we see two well separated domains with either even or odd species . at the interfaces between them all six speciesare present and oscillate with small amplitude oscillations , caused by the first hopf bifurcation of the six - species coexistence fixed - point , where it became a saddle .which one of the domains wins the effective ( 2,1)-game in the end , where a single domain with all its three species survives , depends on the initial conditions and on the fact that diffusion is included ; the mere stability analysis only suggests that six species at a site destabilize the interface between domains with either even or odd species .in fact , the numerical integration and the gillespie simulations both show that one domain gets extinct if the lattice size is small enough and/or the diffusion fast enough .as long as the two fixed points are stable , the ( 3,1)-game is played at each site of a domain in the sense of coexisting three species , which are not chasing each other , related to the neighboring sites only via diffusion , without forming any patterns .patterns are only visible at the interface of the domains as a remnant of the unstable six - species coexistence fixed point .* : the third regime , which is of most interest for pattern formation .starting from random initial conditions , the species segregate first into two domains , each consisting of three species , one with species 1,3 , and 5 , the second one with species 2 , 4 , and 6 , and inside both domains the three species play a rock - paper - scissors game , chasing each other , since the two fixed points of the group became unstable at the second hopf bifurcation .due to the interactions according to an effective -game at the interfaces of the domains ( here with either two or four species coexisting ) , one of the domains will also here get extinct , including the involved three species , while the remaining three survive . 
which domain survives depends also here on the initial conditions .as we shall see , the temporal trajectories of the concentrations of the three species in the surviving domain show that they still explore the vicinity of the second hopf bifurcation from time to time , while they otherwise are attracted by the heteroclinic cycle .the three species in the surviving domain live the longer , the larger the grid size is , in which the species continue playing ( 3,1 ) .in contrast to the second regime , however , two of the three species in the surviving domain will get extinct as well , and a single one remains in the end .this extinction is caused by fluctuations in the finite population in the stochastic simulation or by the numerical integration on a spatial grid with finite numerical accuracy , respectively .so the linear stability analysis indicates options for when we can expect oscillatory trajectories : it is the hopf bifurcations in the ( 6,3)-game for the and fixed points that induce the creation of limit cycles , which here lead to the _ formation of spirals _ in space in the third regime and only temporary patterns at the interfaces in the second regime , before the system converges to one of the fixed points .moreover , it is the two fixed points in the ( 6,3)-game that correspond to the _ formation of two domains_. in both the ( 3,2 ) and the ( 6,3)-games , one of these fixed points will be approached as a collective fixed point ( shared by all sites of the grid ) , while the domain corresponding to the other one gets extinct , and patterns are seen if this fixed point is unstable .so in the ( 6,3)-game the existence of domains including their very composition is due to two stable ( second regime ) or unstable ( third regime ) fixed points .their coexistence is in both regimes transient . in the second regimethree species will survive in the end , because the three - species coexistence - fixed point is stable , and it would need a large fluctuation to kick it towards a 1-species unstable fixed point . in contrast , only one species will survive in the third regime , where the same fixed point is unstable . obviously hereit does not need a rare , large fluctuation to kick the system towards the 1-species unstable fixed point , as we always observed a single species to survive in the end , both in the gillespie simulations and the numerical integration in a relatively short time .we should mention , however , that from our gillespie simulations we can not exclude that after all , a large fluctuation would also kick the system in the second regime from its metastable state towards one of the unstable 1-species fixed points as well as in the first regime to either one of the two three - species fixed points , or to one of the six 1-species fixed points , when the six - species fixed point is stable in the deterministic limit .so far we have not searched for these rare events , in which two , three or five species would get extinct , respectively . *numerical solutions of the ( 6,3)-game .* , , and .the right and middle columns show the species of each domain separately . for further explanationssee the text . ] in the following we show evolutions of species concentrations in space and time for parameters , chosen from the second and third regime of the ( 6,3)-game .these solutions are obtained from the numerical integration of eq .[ eq : mf(6,3 ) ] . 
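The numerical integration itself can be sketched along the following lines: an explicit Euler scheme for six coupled reaction-diffusion equations on a one-dimensional ring. As in the sketch above, the reaction term is the form assumed from the reaction scheme, not a verbatim copy of eq. [eq:mf(6,3)], and all rates, the grid size and the time step are placeholder values chosen only so that the sketch runs.

```python
import numpy as np

N, R, L = 6, 3, 256
A = np.zeros((N, N))                           # A[a, b] = 1 if a preys on b (cyclically)
for a in range(N):
    for k in range(1, R + 1):
        A[a, (a + k) % N] = 1.0

def rhs(rho, lam=1.0, rep=1.0, p=0.5, D=0.1, dx=1.0):
    """Assumed mean-field right-hand side with a periodic 1d Laplacian."""
    predation = rho * (A.T @ rho)              # species b is removed by all a preying on b
    lap = (np.roll(rho, 1, axis=1) - 2 * rho + np.roll(rho, -1, axis=1)) / dx**2
    return rep * rho - p * rho**2 - lam * predation + D * lap

rng = np.random.default_rng(0)
rho = rng.random((N, L))                       # random, spatially uncorrelated initial state
dt = 0.05
for _ in range(100_000):                       # explicit Euler in time
    rho += dt * rhs(rho)

dominant = rho.argmax(axis=0)                  # which species dominates each site
print("sites dominated by each species:", np.bincount(dominant, minlength=N))
```

Plotting rho over time, for example with the rgb/cmy colouring introduced in the next paragraph, reproduces the qualitative sequence of the three regimes: segregation into odd and even domains, oscillations either confined to the interfaces or filling the domains depending on the death rate, and eventual extinction of one of the two domains.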
for the representation on a lattice we will use the following procedure to visualize site occupation :odd species are represented by the rgb-(red , green , blue ) color scheme , while even species are represented by cmy - colors ( cyan , magenta , yellow ) .the three numbers of species , or , divided by the total sum of all species at the site , give a color in the rgb- , or cmy - spectrum that results from a weighted superposition of individual colors , where the weights ( color intensities ) depend only on the ratios of occupation numbers , rather than on absolute numbers .moreover , we display the rgb - color scheme if odd species make up the majority at a site and the cmy - scheme otherwise .we should note that a well mixed occupation of odd ( even ) species leads to a dark ( light ) gray color in these color schemes .figure [ ( 6,3)_array_dom ] shows coexisting domains with oscillations at the interfaces in the second regime . to justify the visualization of data according to the majority rule ", we show even and odd species also separately in the two right panels .this way we can see the transitions at the interfaces of the domains between even and odd species more clearly .the light ( dark ) gray domain corresponds to a well mixed occupancy with even ( odd ) species , respectively . on the boundaries of the domains all six speciesare present , and if we zoom into the boundary , we can see small amplitude oscillations caused by the hopf bifurcation of the 6-species coexistence - fixed point , see figure [ ( 6,3)_tx_dom ] .figures [ ( 6,3)_array_dom ] ( a)-(c ) show the evolution of the system at the first 100 t.u . at which timethe domains are already starting to form . in panel ( a ) it is seen how the transient patterns , generated by transient small domains , shortly after the domains disappear also fade away , so that the transient patterns are generated by oscillations at the interfaces .the figure also reminds to the early time evolution of condensate formation in a zero - range process , where initially many small condensates form , which finally get absorbed in a condensate that is located at a single site with macroscopic occupation in the thermodynamic limit . hereinitially many small and short - lived domains form , which get first absorbed into four domains as seen in the figure , but later end up in a single domain with three surviving species .so we see a condensation " in species space , where three out of six species get macroscopically occupied as a result of the interaction , diffusion and an unstable interface , while the remaining three species get extinct , so that the symmetry between the species in the cyclic interactions with identical rates gets dynamically broken . at an interface .red ( 1 ) , green ( 3 ) , and blue ( 5 ) represent odd species , cyan ( 2 ) , magenta ( 4 ) , and yellow ( 6 ) even species , from smaller to larger labels , respectively .( a ) and ( b ) show temporal and spatial trajectories , respectively , at the beginning of the integration , corresponding to ( a)-(c ) in figure [ ( 6,3)_array_dom ] , while ( c ) and ( d ) refer to late times . for further explanationssee the text.,title="fig : " ] + panels ( d)-(f ) show the evolution from 10000 - 10100 time units ( t.u . ) .the displayed domains were checked to coexist numerically stable up to t.u . 
, while for smaller lattices and faster diffusion one domain gets extinct .figures [ ( 6,3)_array_dom ] ( a ) and ( d ) should be compared with figures [ ( 6,3)1d_2 ] ( a ) and ( b ) of the gillespie simulations , respectively .figure [ ( 6,3)_tx_dom ] shows the corresponding oscillating concentration trajectories at early ( a ) and late ( c ) times at a site of an interface ( x=124 ) , where all six species oscillate around the coexistence - saddle fixed point , as indicated by the horizontal black line in ( c ) , while the spatial dependence at ( b ) ( early ) and ( d ) ( late ) times displays the domain formation due to two stable fixed points , corresponding to figures [ ( 6,3)_array_dom ] ( a ) and ( d ) , respectively , so that the oscillations are restricted to the interfaces . , , and .the middle and right columns show the species of each domain separately . for further explanationssee the text . ]the evolution of the ( 6,3)-game in the third , oscillatory regime in one dimension is shown in figure [ ( 6,3)_array_osc ] .species are represented in the same way as in figure [ ( 6,3)_array_dom ] .panel ( a)-(c ) show the evolution of the system in the first 100 t.u . , ( d)-(f ) in the first 10000 t.u .the two stable fixed points from the second regime became unstable ( saddles ) through the second hopf bifurcation .as in the second regime , at the beginning of the integration there is a separation of odd and even species , but at the same time they start to chase each other , resulting in oscillatory behavior in space and time .here we see no longer traces of the limit cycle around the six - species coexistence fixed point as in the second regime , since no sites have six species coexisting , even not for a short period of time . at the interfaces between even and odd speciesusually three species coexist , either two odd and one even , or vice versa , two even and one odd , but these mixtures are not stable , as these 3-species coexistence - fixed points in the deterministic limit are saddles .it also happens that just two or four species coexist at the interface , but also their coexistence - fixed points are saddles .therefore also here the coexistence of domains is not stable , only one of them survives , and which one depends on the initial conditions , resulting in the extinction of three either odd or even species . in view of gillespie simulations , figures [ ( 6,3)_array_osc ] ( a ) ( early times ) and ( b ) ( late times )should be compared with figures [ ( 6,3)1d_1 ] ( a ) ( early ) and ( b ) ( late ) , respectively .figure [ ( 6,3)_tx_osc ] ( a ) shows the evolution in time at late times , when only one domain survives .all three species oscillate between zero and one , corresponding to the heteroclinic cycle . from time to time the trajectories are also attracted by the saddle - limit cycle , which is created by the second hopf bifurcation of the three species - fixed point ( black line ) as indicated by the small amplitude oscillations .apart from the amplitude , the heteroclinic and saddle - limit cycles differ in their frequency : the saddle - limit cycle has a higher frequency than the heteroclinic cycle .panel ( b ) shows the spatial trajectories at the beginning of the integration when both domains still coexist .yet we see no mixing of all six species at a single site , the 6-species coexistence - fixed point is no longer felt in this regime .( a ) temporal trajectories of the surviving domain at late times in the interval 10200 - 10600 t.u . 
when only even species exist , and ( b ) spatial trajectories at early times when still both domains exist . for further explanations see the text.,title="fig : " ] + as we see from figure [ ( 6,3)_array_oscend ] , the numerical integration evolves to one of the saddles after having spent a finite time on the heteroclinic cycle and not according to the analytical prediction , where it were only in the infinite - time limit that the trajectory would get stuck in one of the saddles , which are connected by the heteroclinic cycle .according to figure [ ( 6,3)_array_oscend](a ) all trajectories get absorbed in one ( the pink one ) of the 1-species saddles already at finite time as a result of the finite accuracy of the numerical integration . yet figure [ ( 6,3)_array_oscend ] ( b ) shows the characteristics of a heteroclinic cycle at finite time : the dwell time of the trajectory in the vicinity of the 1-species saddles gets longer and longer in each cycle , before it fast moves towards the next saddle in the cycle .this escape stops after a finite number of cycles , when the concentration of two of the three species are zero within the numerical accuracy , and therefore no resurrection " is possible .going back to the set of reactions , in this section we describe their gillespie simulations .we solve the system ( [ eq : rec_sys ] ) by stochastic simulations on a regular square lattice as well as on a one - dimensional ring , using the gillespie algorithm , combined with the so - called next - subvolume method .this method is one option to generalize gillespie simulations to spatial grids .we choose periodic boundary conditions on a square lattice or on a ring with nodes . in our case nodes , or synonymously sites , represent subvolumes .all reactions except of the diffusion happen between individuals on the same site ( in the same subvolume ) , and a diffusion reaction is a jump of one individual to a neighboring site .one event can change the state of the system of only one ( if a reaction happens ) or two neighboring ( if a diffusion event happens ) subvolumes . at each sitethe initial number of individuals of each species is chosen from a poisson distribution , with a mean , which is randomly chosen for each species . in the next - subvolume methodwe assign the random times of the gillespie algorithm to subvolumes rather than to a specific reaction . to each subvolume , or site ,we assign a time , at which one of the possible events , in our case reactions ( predation , birth or death ) , or diffusion , will happen .the time is calculated as , where is a random number generated from a uniform distribution between 0 and 1 . the total rate depends on the reaction rates and the number of individuals which participate in the event .events happen at sites in the order of the assigned times .once it is known at which subvolume the next event happens , the event ( reaction or diffusion ) is chosen randomly according to the specified reaction rates .we start the simulations with initial conditions from a poisson distribution such that each site of the entire lattice is well mixed with all species .we want to study the dynamics of the system in a parameter regime , where we expect pattern formation . 
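a minimal sketch of the gillespie / next - subvolume bookkeeping described above is given below for a one - dimensional ring : each site carries its own exponentially distributed event time , the site with the smallest time fires next , and the event at that site ( predation , birth , death or a diffusion jump ) is then drawn proportionally to its rate . the rates and the reduced reaction set are illustrative assumptions , and a full implementation would also refresh the event time of the neighbouring site after a diffusion jump , which is only indicated by a comment here .

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)
n_species, L = 6, 128                                  # ring of subvolumes
lam, sigma, delta, diff = 1.0, 1.0, 1.0, 0.1           # illustrative rates (assumptions)
counts = rng.poisson(5.0, size=(n_species, L))         # Poissonian initial occupation

def site_channels(x):
    """Event channels (rate, update) at site x: birth, death, a diffusion hop and
    cyclic predation of species i on the next three species (sketch only)."""
    ch = []
    for i in range(n_species):
        ni = counts[i, x]
        ch.append((sigma * ni, ('birth', i)))
        ch.append((delta * ni, ('death', i)))
        ch.append((diff * ni, ('hop', i)))
        for k in range(1, 4):
            j = (i + k) % n_species                    # i preys on i+1, i+2, i+3
            ch.append((lam * ni * counts[j, x], ('eat', j)))
    return ch

def next_time(x, now):
    total = sum(rate for rate, _ in site_channels(x))
    return now + rng.exponential(1.0 / total) if total > 0 else np.inf

events = [(next_time(x, 0.0), x) for x in range(L)]
heapq.heapify(events)

t, t_end = 0.0, 50.0
while t < t_end:
    t, x = heapq.heappop(events)                       # site with the earliest event fires
    channels = site_channels(x)
    total = sum(rate for rate, _ in channels)
    if total > 0:
        pick, acc = rng.uniform(0.0, total), 0.0
        for rate, (kind, i) in channels:               # choose the event proportional to its rate
            acc += rate
            if pick <= acc:
                if kind == 'birth':
                    counts[i, x] += 1
                elif kind in ('death', 'eat'):
                    counts[i, x] -= 1
                else:                                  # diffusion: one individual hops to a neighbour
                    y = (x + rng.choice((-1, 1))) % L
                    counts[i, x] -= 1
                    counts[i, y] += 1
                    # a complete next-subvolume implementation would redraw the
                    # event time of site y here, since its total rate has changed
                break
    heapq.heappush(events, (next_time(x, t), x))       # schedule the next event of site x
```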
from the linear stability analysis of the mean - field systemwe expect stable patterns in the regime without stable fixed points in the ( 6,3)-game , as long as three species are alive , and transient patterns in the ( 6,3)-game for coexisting stable fixed points .we use the same color scheme as we used before to visualize the numerical solutions of the mean - field equations . as one gillespie step ( gs )we count one integration step here .our results confirm the predictions from the mean - field analysis : there are on - site oscillations in time in a limit - cycle regime .there are also oscillations in space , which form spirals on two - dimensional lattices .if the evolution approaches the stochastic counterpart of a stable fixed point in the deterministic limit , we call it shortly noisy fixed point " , where the trajectories fluctuate around a value that is the mean field - fixed point value multiplied by the parameter as defined before . of particular interestis the influence of the diffusion in relation to the ratio on the patterns .it was the ratio of that determines the stability of the fixed points . as mentioned earlier, the value of , which enters the stability analysis , can extend the stability regime .so in the gillespie simulations it is intrinsically hard to disentangle the following two reasons for the absence of patterns in the case of fast diffusion : either the stability regime of a fixed point with only one surviving species is extended , or the diffusion is so fast , that the extension of visible patterns is larger than the system size , so that a uniform color may just reflect the homogeneous part within a large pattern .all the mean - field - fixed points are proportional to the value of the parameter .if this value is much larger than and , the fixed - point value is very large .this leads to a large occupation on the sites , which slows down the formation of patterns .the reason is that the number of reactions , which are needed for the system to evolve to stable trajectories , either to oscillations , or to fixed points , increases with the number of individuals in the system .we study the stochastic dynamics of a ( 6,3)-game in regimes , for which we expect pattern formation , i.e. for .when the coexistence - fixed point becomes unstable at , we find the formation of two domains , each consisting of three species , one domain containing odd species , in the figures represented by shared colors red , green , and blue in the rgb - color scheme .the other domain consists of even species , represented by shared colors of cyan , magenta , and yellow in the cmy - color representation , see figure [ ( 6,3)2d ] . inside the domainsthe three species play the ( 3,1)-game and form spiral patterns .we have checked that the domains in figure [ ( 6,3)2d ] are not an artefact of the visualization , and determined , for example , the occupancy on a middle column of the lattice ( not displayed here ) . on sites with oscillations of species from one domain, there is a very small or no occupation of species of the second domain , confirming the very existence of the domains .the time evolution of the six species on two sites , chosen , for example , from the middle column of the lattice confirm that the species trajectories oscillate in time , reflecting the stable limit cycles in the deterministic limit . herea remark is in order as to whether radii , propagation velocity or other features of the observed spiral patterns can be predicted analytically . 
while spiral patterns in spatial rock - paper - scissors games were very well predicted via a multi - scale expansion in the work of , we performed a multi - scale expansion ( see , for example ) to derive amplitude equations for the time evolution of deviations from the two unstable fixed points , which lose their stability at the two hopf bifurcations .however , the resulting amplitude equations differ from ginzburg - landau equations by a missing imaginary part , which can be traced back to the absence of an explicit constraint to the occupation numbers on sites and the absence of a conserved total number of individuals . as a result, the amplitude equations only predict the transient evolution as long as the trajectory is in the very vicinity of the unstable fixed point , but can not capture the long - time behavior , which here is determined by an attraction towards the heteroclinic cycle that is responsible for the spiral patterns in our case .so it seems to be this non - local feature in phase space that the multi - scale expansion about the hopf bifurcation misses . for a further discussion of how the patterns depend on the choice of parameters we shall focus on the results on a one - dimensional lattice , since the simulation times are much longer for two dimensions .( in two dimensions , the period of oscillations is as long as about one fifth of the gillespie steps . ) -lattice for weak diffusion and far from the bifurcation point .snap shots are taken at ( a ) , ( b ) , and ( c ) gs . two domains are formed , each containing three species , indicated by the different color groups .these species play a ( 3,1)-game inside the domains and evolve spiral patterns .the parameters are , , , and . ] , that is in the second regime , where the coexistence - fixed points are stable , for weak diffusion ( ( a ) and ( b ) ) , and strong diffusion ( ( c ) and ( d ) ) . for both strengths of the diffusion domains form . in the case of weak diffusionno extinction of domains is observed within the simulation time of gs . for strong diffusion , one domain goes extinct after gs .initially , oscillatory patterns appear as remnants of many interfaces between small domains , where within the interfaces six species oscillate due to the unstable 6-species coexistence - fixed point , which fade away with time .this confirms the analytical results that in this parameter regime the -fixed points with coexisting species are both stable , leading to the black color in ( c ) and ( d ) for the one surviving domain .panel ( a ) shows the time evolution on a lattice for the time interval , ( b ) for , ( c ) for , and ( d ) for gs . the parameters are , . ] , that is , in the third regime , where the coexistence - fixed points are unstable , weak diffusion ( ( a ) and ( b ) ) , and strong diffusion ( ( c ) and ( d ) ) .the parameters are and .two domains form , of which one goes extinct after gs in the case of weak diffusion and gs in the case of strong diffusion .the surviving domain keeps playing the ( 3,1)-game . for weak diffusion no further extinction is observed for the simulation time of gs , while for strong diffusion , an extinction of all but one species , here the red one , happens after gs .panel ( a ) shows patterns in a time interval of , ( b ) for , ( c ) for , and ( d ) for gs . 
] as to diffusion : for stronger diffusion the patterns are more homogeneous and extinction events happen faster ; sometimes they happen only for sufficiently strong diffusion , see figure [ ( 6,3)1d_2 ] . the extinction time also depends on the -ratio , i.e. on whether the ratio lies in the interval ( i ) , where is stable and is unstable , or ( ii ) in , where both fixed points are unstable . for case ( i ) , the -fixed points are stable , yet at the beginning of the simulations the dynamics shows oscillatory behavior , caused by the interfaces between the small domains , where six species feel the unstable coexistence - fixed point at ; but after about gillespie steps for weak diffusion , and gillespie steps for strong diffusion , and for our choice of parameters , the patterns fade away and the system evolves to a homogeneous state in both domains as long as they coexist , see figures [ ( 6,3)1d_2 ] ( b ) and [ ( 6,3)1d_2 ] ( d ) . the closer the system is to the bifurcation point , the longer the oscillatory patterns live and the more strongly the system feels the unstable 6-species fixed point . figure [ ( 6,3)1d_2 ] ( a ) should be compared with the corresponding mean - field solution of figure [ ( 6,3)_array_dom ] ( a ) at early times and figure [ ( 6,3)1d_2 ] ( b ) with figure [ ( 6,3)_array_dom ] ( d ) at late times , from which we see that the mean - field solutions reproduce the qualitative features , including the transient patterns . in case ( ii ) , the third regime , domains go extinct faster ; both domains play a ( 3,1)-game in their interior . after one domain gets extinct , the surviving one keeps on playing the ( 3,1)-game , until only one species survives . here one should compare figure [ ( 6,3)1d_1 ] ( a ) with the corresponding mean - field solution of figure [ ( 6,3)_array_osc ] ( a ) at early times and figure [ ( 6,3)1d_1 ] ( b ) with figure [ ( 6,3)_array_osc ] ( d ) at later times , where one is left with one domain and three species chasing each other ; the final extinction of two further species is not visible in this figure due to the weak diffusion and the larger extinction time . beyond the emerging , dynamically generated time and spatial scales , the most interesting feature of the ( 6,3)-game is the fact that the rules of the game , specified initially as ( 6,3 ) , dynamically change to effectively ( 2,1 ) and ( 3,1 ) as a result of spatial segregation . in view of evolution , here the rules of the game change while it is being played . they change as a function of the state of the system , where the state corresponds to the spatial distribution of coexisting species over the grid . in preliminary studies , we investigated the -game with the following set of coexisting games in a transient period : from a random start we observe segregation towards nine domains playing with each other and , inside the domains , again the game . from a superficial visualization of the gillespie simulations , this system looks like a fluid with whirling " vortices " , where the ( 3,1)-game is played inside the domains .
we expect a rich variety of games with new , emerging , effective rules on a coarser scale , if we not only increase the number of species , or release the restriction to cyclic predation , but allow for different time scales , defined via the interaction rates .so far we chose the same rates for all interactions and always started from random initial conditions .we performed a detailed linear stability analysis , which together with the numerical integration of the mean - field equations reproduced all qualitative features of the gillespie simulations , even extinction events .that the mean - field analysis worked so well is due to the ultralocal implementation of the interactions , so that the spatial dependence enters only via diffusion .the stability analysis revealed already a rather rich structure with 12 groups of in total 64 fixed points for the ( 6,3)-game .we focussed on coexistence - fixed points of six , three or one species .along with the fixed points repulsion or attraction properties we observed three types of extinction , whose microscopic realization is different and deserves further studies : \(i ) in the second regime of the ( 6,3)-game , both domains with either even or odd species are in principle stable , as long as they are not forced to coexist .we have seen a spatial segregation towards a domain with only even and one with only odd species , occupying the sites . at the interface between both domains , six species can not escape from playing the ( 6,3)-game . since the 6-species coexistence - fixed point is unstable , the unstable interface seems to be the driving force to initiate the extinction of one of the two domains including its three species , since interface areas should be reduced to a minimal size . from the coarse - grained perspective, one domain preys on the other domain , which is a ( 2,1)-game .\(ii ) in the third regime of the ( 6,3)-game , the domain structure in odd and even domains is kept , but in the interior of the domains the species follow heteroclinic cycles , which explain the patterns of three species , chasing each other , inside each domain . at the interface between the domains ,two to four species coexist at a site , but for small enough diffusion , coexistence - fixed points of the respective species are always saddles , so also here the instability of the interfaces seems to induce their avoidance , leading again to the extinction of one of the two domains .so from the coarse - grained perspective , again a ( 2,1)-game is played between the domains . 
+ it should be noticed that in contrast to systems , where the fate of interfaces between domains is explained in terms of the competition between free energy and interface tension , here the growth of domains and the reduction of interfaces are traced back to the linear stability analysis of the system in the deterministic limit , which is conclusive for the dynamics .\(iii ) the third type of extinction event was the extinction of two species , when the individual trajectories move either in the vicinity of , or along a heteroclinic cycle , and either a fluctuation from the gillespie simulations , or the finite numerical accuracy on the grid ( used for integration ) captured the trajectory in one of the 1-species saddles .we have not studied rare large fluctuations , which could induce other extinction events and kick the system out of the basin of attraction from the 6- or 3-species stable coexistence - fixed points when stochastic fluctuations are included .neither have we measured any scaling of the extinction times with the system size or of the domain growth with the system size .this is left for future work .+ furthermore , for future work it would be challenging to derive and predict the domain formation on the coarse scale from the underlying -game on the basic lattice scale in the spirit of the renormalization group , here , however , applied to differential equations rather than to an action .one of us ( d.l . ) is grateful to the german research foundation ( dfg)(me-1332/25 - 1 ) for financial support during this project .we are also indebted to the german exchange foundation ( daad)(id 57129624)for financial support of our visit at virginia tech blacksburg university , where we would like to thank michel pleimling for valuable discussions .we are also indebted to michael zaks ( potsdam university ) for useful discussions .100 nowak m a and sigmund k , 2004 , _ science _ * 303*(6 ) , 793 .we add a detailed bifurcation analysis of the ( 3,1)- and ( 3,2)-games with only spiral ( ( 3,1 ) ) or only domain ( ( 3,2 ) ) formation , including the numerical integration of the mean - field solutions and the results of the gillespie simulations .the mean field equations for the ( 3,1)-game with homogeneous parameters are given by with jacobian of the system ( [ eq : mf(3,1 ) ] ) the following table gives the list of all fixed points and the corresponding eigenvalues for the system ( [ eq : mf(3,1 ) ] ) without and with the spatial component .let us next analyze the stability of the fixed points , first without the spatial component .the zero - fixed point has all three eigenvalues positive for any choice of the positive parameters ( we only consider positive parameters for this set of reactions ) , so it is always unstable in all directions .the second fixed point , the coexistence - fixed point , has one real eigenvalue that is always negative , so it corresponds to a stable direction , and two complex eigenvalues whose real parts change sign at through a supercritical hopf bifurcation . for fixed point is stable , otherwise it is unstable in the directions corresponding to the pair of complex conjugate eigenvalues .+ the other fixed points are always saddles , but the number of stable / unstable directions depends on the ratio of and changes at through a transcritical bifurcation . 
fixed points - , for which only one species is different from zero have always one stable ( eigenvalue ) and one unstable ( eigenvalue ) direction .the third direction is stable for , otherwise it is unstable. three fixed points - , which correspond to the survival of two species , also have always one stable ( eigenvalue ) and one unstable ( eigenvalue ) direction .the third direction changes sign at the same point in phase space as in case of the previous fixed points , at .the direction corresponding to this eigenvalue is stable for , in contrast to the previous case .for example , for , the fixed point has two stable and one unstable direction , while the fixed point has one stable and two unstable directions . at the fixed points and collide and exchange the stability of one of the directions ,so that for has one stable and two unstable directions , while has two stable and one unstable direction .the same scenario happens for other fixed points of the same type , collides with and with .+ after the collision , the saddles are connected by a heteroclinic cycle . for heteroclinic cycle is repelling , the system is permanent . in our systemthis means that the heteroclinic cycle repels all trajectories towards the stable coexistence - fixed point and bounds the phase space towards this interior .all trajectories evolve to this fixed point , regardless of the initial conditions . for ,the heteroclinic cycle becomes an attractor , and for all initial conditions , except for those that lie on a diagonal of the phase space ( connecting the zero - species fixed point with the coexistence - fixed point ) , the system evolves to the heteroclinic cycle . both results( repelling property for and attracting for of the heteroclinic cycle ) follow from the proof in .+ the numerical integration shows the typical behavior for a heteroclinic cycle , the trajectory oscillates between the three saddles , staying longer and longer in their vicinity , as time passes by , until it gets absorbed by one of the saddles due to finite numerical accuracy .+ figure [ ( 3,1)traj ] shows the stability of the fixed points and typical trajectories , when the initial conditions are close to the zero - fixed point .+ trajectories first get attracted by a saddle fixed point ( pink ) , since there is a stable direction from a zero fixed point to saddles , and then get repelled to the stable coexistence fixed point .( b ) the coexistence fixed point for is still stable , but the two saddles collided and exchanged stability , now the trajectories first go close to the saddles on the axes and then spiral into the stable fixed point .the existence of the oscillatory regime is indicated by the spiralling nature of the trajectories in the stable fixed point regime .( c ) the coexistence - fixed point for is unstable , the system evolves to a stable heteroclinic cycle . before the trajectories evolve to the limit cycle , they approach the coexistence - fixed point , which indicates a stable direction from the unstable zero fixed to the saddle coexistence - fixed point . ]if we now include the spatial component , the jacobian gets an additional term on the diagonal , so that for the eigenvalues change , and so do the conditions for the stability of the fixed points .we can see that all eigenvalues are shifted by the value of .the first obvious consequence is that the zero - fixed point becomes stable for , a case , which is not of interest in view of applications . 
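the effect of the diffusive term on the local stability can be made explicit with a few lines of code : for a plane - wave perturbation with wavenumber q and equal diffusion constants for all species , the linearization is the jacobian with its diagonal shifted by the ( negative ) diffusive contribution , so every eigenvalue is shifted by the same amount . the jacobian entries below are placeholders , since the explicit matrix of eq . [ eq : mf(3,1 ) ] is not reproduced in the text .

```python
import numpy as np

# placeholder 3x3 Jacobian at a coexistence fixed point (illustrative values only;
# the actual entries of the (3,1) mean-field Jacobian are not reproduced here)
J = np.array([[-0.5,  1.0, -1.0],
              [-1.0, -0.5,  1.0],
              [ 1.0, -1.0, -0.5]])

D = 0.1                                    # diffusion constant (illustrative)
for q in (0.0, 0.5, 1.0, 2.0):             # wavenumber of the perturbation ~ exp(iqx)
    # with equal diffusion for all species, the spatial term only adds -D q^2 to
    # the diagonal, so all eigenvalues are shifted by -D q^2
    ev = np.linalg.eigvals(J - D * q**2 * np.eye(3))
    print(f"q = {q:3.1f}: max Re(eigenvalue) = {ev.real.max():+.3f}, "
          f"stable = {bool(np.all(ev.real < 0))}")
```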
in order to have stable solutions , which do not cause the extinction of all species , we are interested in the case . the coexistence fixed point still has one stable direction for any choice of positive parameters , and the hopf bifurcation happens for , which is larger than 2 when all parameters are positive , so the stability regime increases . fixed points , which were " exchanging their stability " by collision in a transcritical bifurcation , no longer undergo the bifurcation at the same point . the fixed point corresponding to the survival of one species changes its stability in one direction for , while the one with two species surviving goes through the transcritical bifurcation for . the dominant effect of the spatial dependence on the local stability of the fixed points is an extension of the parameter interval , in which the fixed points , or some directions , are stable . as a solution of the system ( [ eq : mf(3,1 ) ] ) we observe no patterns in the regime where the coexistence fixed point is stable : every site of the lattice is well mixed , and every species evolves to the same value of the coexistence fixed point . once we change the parameters such that this fixed point becomes unstable , the patterns observed on the lattice are spiral waves . these are oscillations both in space and time . our linear stability analysis predicts oscillations in time , since the fixed point goes through a supercritical hopf bifurcation , where a stable limit cycle is born . in contrast to similar models of rock - paper - scissors games as considered in - , we do not impose a restriction on the occupation numbers at sites , a feature that we had to compensate for with a deletion reaction . in the end , however , this results in mean - field equations for which one fixed point undergoes a supercritical hopf bifurcation , without the need for introducing mutation reactions as in . in order to compare how well the mean - field equations describe our original stochastic system , we compare results of the numerical solutions of the mean - field partial differential equations [ eq : mf(3,1 ) ] with gillespie simulations in both the fixed - point and the oscillatory regime , first for the ( 3,1)-game eq . [ eq : mf(3,1 ) ] , and later for the ( 3,2)- and ( 6,3)-games . at ( for ) the coexistence fixed point ( ) goes through a supercritical hopf bifurcation . due to this bifurcation a saddle limit cycle is created ; one of the three eigenvalues stays negative , so there is a stable direction in the vector field towards the coexistence fixed point . at the same critical value the heteroclinic cycle connecting the three one - species saddles becomes stable . so in the oscillatory regime the system evolves to the heteroclinic cycle . if is sufficiently close to the bifurcation point , the system will feel for a transient time the saddle - limit cycle , before it evolves to the heteroclinic cycle .
for the coexistence fixed point is stable , and independently of the initial conditions the system will evolve to it . we determine the numerical solutions of the mean - field equations of a ( 3,1)-game for parameters , , and ( fixed - point ) and ( oscillatory ) regime , where is chosen as the numerical grid constant and , if not otherwise stated , the grid size is 64 in units of dx , or 1 in dimensionless units . a plot in one dimension of the occupation of all sites x as a function of time t shows that the whole area of the -diagram is gray ( therefore not displayed ) when using the following color scheme , which we also use later in other figures for the ( 3,1)-game ( a short sketch of this mapping is given after this paragraph ) . each ( x , t)-square in a lattice is represented by an rgb - color scheme ( r , g , b ) , where the numbers r , g , b (0,1 ) are the values of species one , two , three , respectively , divided by the sum of the values of all three species . this way species 1 is represented by red , species 2 by green , and species 3 by blue . therefore the gray color corresponds to a well mixed distribution of all species throughout the lattice . in order to show that the gray color actually corresponds to the situation where the concentrations at all sites approach the fixed - point value , we have checked that the concentration at any site approaches the fixed - point value ( it actually does so within less than 20 time units ) , and , vice versa , that after 100 time units the three concentrations at all sites have approached the values of the coexistence - fixed point . , far off the bifurcation point . evolution in the phase space is represented by colored points at early ( b ) and late ( c ) times . the color of the points represents time . ( b ) evolution at lattice site 32 from 0 - 1700 t.u . , ( c ) 1700 - 3000 t.u . ] in figure [ ( 3,1)_array_beg ] we show the spatial evolution in one dimension with periodic boundary conditions of a ( 3,1)-game in the oscillatory regime at ; the other parameters are chosen as in the fixed - point regime . here the system is far off the bifurcation point . part ( a ) displays the spatio - temporal patterns until the system gets absorbed by one of the 1-species saddles , with the color scheme as indicated before . part ( b ) shows the phase portrait for the first 1700 t.u . at the middle of the chain . each dot represents a triple , where denotes the concentration of the species . the color of the dots represents the time instant upon time evolution , see the color bar in figure [ ( 3,1)_array_beg ] . large black dots represent the saddle - fixed points : the three in the corners of the box are the saddle - fixed points which are connected by the heteroclinic cycle , while the fixed point in the middle is the coexistence - fixed point after the loss of stability .
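the color mapping referred to above can be summarized in a short sketch ; the rgb part follows the description for the ( 3,1)-game , and the six - species variant with cmy colors and the majority rule is an assumed translation of the earlier prose description of the ( 6,3)-game visualization .

```python
import numpy as np

def rgb_image(c):
    """c[species, x, t] for a 3-species game: each color channel is the species
    value divided by the sum of all three species at that (x, t) square."""
    total = np.where(c.sum(axis=0) > 0, c.sum(axis=0), 1.0)   # avoid division by zero
    return np.stack([c[0] / total, c[1] / total, c[2] / total], axis=-1)

def six_species_image(c):
    """(6,3)-game variant: odd species (1,3,5) in RGB, even species (2,4,6) in CMY,
    showing the scheme of the locally dominant group (majority rule)."""
    odd, even = c[0::2], c[1::2]                    # indices 0,2,4 = species 1,3,5
    img_odd = rgb_image(odd)
    img_even = 1.0 - rgb_image(even)                # CMY weights rendered as RGB = 1 - CMY
    use_odd = (odd.sum(axis=0) >= even.sum(axis=0))[..., None]
    return np.where(use_odd, img_odd, img_even)
```

an ( x , t)-array returned by either function can be displayed directly , e.g. with matplotlib 's imshow ; a well mixed odd ( even ) occupation then indeed appears dark ( light ) gray , as stated earlier .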
as transient behavior the system oscillates between all four saddle - fixed points , also feeling the saddle - limit cycle that is created by the hopf bifurcation , even for far away from the bifurcation value . after a sufficiently long time , the system will evolve to a heteroclinic cycle , which is shown in the phase portrait of figure [ ( 3,1)_array_beg ] ( c ) . in figure [ ( 3,1)_tx_beg ] we show the evolution in time at a site in the middle of the lattice , as displayed in figure [ ( 3,1)_array_beg ] . the coexistence - fixed point is at 0.25 , indicated as the horizontal line at the fixed - point value . trajectories oscillate between zero and one , which corresponds to the heteroclinic cycle , but are also attracted at irregular intervals by the small - amplitude limit cycle around the coexistence - fixed point , since the coexistence - fixed point became a saddle at the supercritical hopf bifurcation point . as we have also seen for the ( 6,3)-game of the main text , the time evolution of shows at later times the increasing dwell time of the trajectories in the vicinity of the saddles . it should , however , be mentioned that the numerical integration on larger lattices also reveals regular large - amplitude oscillations between the three 1-species saddles for certain initial conditions , which are not characteristic of a stable heteroclinic cycle , for which the period of oscillations increases with every revolution . since our stability analysis does not predict this kind of attractor , it is most likely transient behavior that seems to depend on the algorithm used for the numerical integration of the mean - field equations . the figure corresponding to [ ( 3,1)_array_beg ] ( c ) in the gillespie simulations is figure [ ( 3,1)1d_2 ] ( b ) . as a function of time at lattice site 32 . the system oscillates between a small saddle limit cycle surrounding the coexistence fixed point and the one - species saddles until it reaches the heteroclinic cycle and eventually gets absorbed by one of the saddles due to the finite numerical accuracy . ] in the ( 3,2)-game , described by eqs . [ eq : mf(3,2 ) ] , we have eight different fixed points as in the ( 3,1)-case . none of the fixed points has any eigenvalues with an imaginary part , in agreement with the numerical absence of oscillatory behavior , see figs . [ ( 3,2)st ] and [ ( 3,2)ust ] for typical trajectories of the system . here we also have an unstable zero - fixed point with all three eigenvalues positive . the coexistence fixed point is a stable focus for . at two of the eigenvalues change sign . with increasing , the fixed point becomes a saddle , having two unstable directions and one stable direction corresponding to the eigenvalue . at the same time , the - fixed points become stable , and the system evolves to one of these three fixed points , where only one species survives , depending on the initial conditions . the three remaining fixed points - are always saddles , with two stable and one unstable direction . at fixed points also undergo a bifurcation , in which two of the eigenvalues change sign , but without changing the number of stable and unstable directions : one of the stable directions , corresponding to the eigenvalue , becomes unstable , while an unstable direction corresponding to the eigenvalue becomes stable . in summary , for the system evolves to a stable fixed point , where all species coexist , and for the system evolves to one of the three fixed points - , where only one species survives .
which fixed point ( species ) this is depends on the initial conditions . ( a ) arrows in lines point to the stability direction , green lines connect saddles with one stable ( gray ) and two stable ( pink ) directions , black lines the stability direction between the unstable fixed point ( black ) and saddles ( gray and pink ) , the red line between the unstable and the stable fixed point , and blue lines between the stable and saddle fixed points . ( b ) and ( c ) : starting with slightly different initial conditions in the vicinity of the unstable fixed point , trajectories take different paths in ( b ) and ( c ) : they first approach saddle fixed points , then follow stable directions to finally approach the stable fixed point . ] . the color code is the same as in figure [ ( 3,2)st ] . as the stability of the fixed points changes , so do the stable directions between the fixed points . the difference between ( b ) and ( c ) is the initial conditions . ] as in the ( 3,1 ) case , for the `` zero''-fixed point can become stable for if . the same happens with the fixed points - , which were always saddles in the case of . the parameter interval , in which other fixed points are stable , increases as in the case of the ( 3,1)-game . this leads to the possibility that all fixed points are stable for the same choice of parameters . on the lattice this may lead to the coexistence of different stable fixed points at different lattice sites . which one is then approached at a single site depends on the initial conditions . as we have seen , in the ( 3,2)-game we also have two regimes of stationary behavior , but both regimes are fixed - point regimes ; for the bifurcation point is at . for the system evolves to a coexistence - fixed point . in this regime we see no spatio - temporal pattern formation . the system evolves fast into the fixed point . : ( a ) time evolution of the concentrations at the middle ( x=32 ) of the lattice towards a 1-species fixed point with one species dominant and two species at almost vanishing concentration , ( b ) spatial dependence at time 10500 t.u . , showing the coexistence of domains . ] for the above choice of parameters , it is for that the coexistence fixed point becomes a saddle , with two eigenvalues positive ( unstable directions ) , while three fixed points become stable , so the system evolves into one of them ; which one it evolves into depends on the initial conditions . for sufficiently large lattice size , or weak diffusion , the transient to one of the fixed points is very long in comparison to the transient to the coexistence - fixed point . so we distinguish between the coexistence - fixed point regime and the domain - formation regime . * coexistence - fixed point regime . * the results for this regime are not displayed , as no patterns are seen in the ( x , t)-representation , while the concentrations approach the fixed - point value rather fast for all x at given t and for all time units at given x. * domain formation regime .
*the domain formation is displayed in figure[(3,2)_array_dom ] at earlier ( a ) and later ( b ) times , where the domains have stabilized and the system is far off from the bifurcation point .for a sufficiently small diffusion , the numerical solution of eq .( [ eq : mf(3,2 ) ] ) seems to be stable in one dimension , we checked it up to time time units .results of the corresponding gillespie simulations are displayed in figure [ ( 3,2)1d_1 ] ( a)-(c ) below .the patterns , which form in a -game on a two - dimensional lattice , are multiple spirals .the time for the patterns to form depends on the diffusion , the system size , the reproduction rate with respect to predation and deletion events , and on the volume . on the one and two - dimensional lattices ,the trajectories oscillate in time , while the patterns that form in space depend on the diffusion strength .for the two - dimensional lattice we present results for two values of the diffusion strength .for the one - dimensional lattice we analyze in addition the dependence on the ratio of , which determines the distance from the bifurcation point in parameter space ; this distance has also impact on the patterns . in the vicinity of the bifurcation points precursors of the other side " of the bifurcations are visible .the extension of the lattices that we show in the figures below are in two - dimensions and in one dimension .( a ) , ( b ) , and ( c ) gs . the patterns formed on the lattice are multiple spirals. the parameter values are , , , and , corresponding to slow diffusion . ] and for two diffusion strengths for ( a ) and ( b ) , and for ( c ) and ( d ) .the time evolution on the lattice is shown for a period of gs , starting at time 0 ( a ) , ( b ) , ( c ) gs . panel ( d ) shows an evolution from gs .other parameters are . for further comments see the text . ] , but for far off the bifurcation point .panels ( a ) and ( b ) show the evolution of patterns for weak diffusion , ( c ) and ( d ) for strong diffusion .extinction events happen for both strengths of diffusion , around gs for weak diffusion and around gs for strong diffusion .panel ( a ) shows the evolution in space for the time interval gs , panel ( b ) for gs , and ( c ) for gs . ] both on the one - dimensional and two - dimensional lattices , stronger diffusion causes wider patterns with respect to the system size , as can be seen from the comparison of the last panels of figures [ ( 3,1)2d_1 ] and [ ( 3,1)2d_2 ] , figure [ ( 3,1)1d_1 ] ( b ) and [ ( 3,1)1d_1 ] ( d ) , and figures [ ( 3,1)1d_2 ] ( b ) and [ ( 3,1)1d_2 ] ( c ) . in case of weak diffusion none of the species goes extinct for a period of gs , while for stronger diffusion an extinction of two species ( green and red ) happens already at gs . the patterns look qualitatively similar : almost vertical waves that propagate in space and time .figure [ ( 3,1)1d_2 ] ( b ) should be compared with the mean - field prediction of figure [ ( 3,1)_array_beg ] ( c ) , although the -values were chosen differently , but out of the same regime .the patterns qualitatively agree . for stronger diffusionthe spatial extension of waves is wider , the period of oscillations is larger , they are more homogeneous on the lattice at a given instant of time .as expected , the stronger the diffusion , the more homogeneous the lattice looks like .stronger diffusion also leads to a faster extinction of all but one species , this can be explained in the following way . 
in the stochastic simulations , extinctions happen after some time , and since the reproduction rate is proportional to the number of existing individuals of a certain kind , no " resurrection " is possible . extinctions are caused both by the finite spatial size and by the finite number of individuals . the finite size goes along with a smaller total number of individuals on the grid , while the finite number of individuals per site is tuned by the parameter v. whenever the number of individuals is smaller , the size of the demographic fluctuations gets larger in relation to the average occupations of the deterministic limit . these fluctuations in the trajectories in phase space can kick the system into a parameter regime , where only a subset of species survives as the final fate of the system ; the kick happens the faster , the stronger the fluctuations are . also for stronger diffusion , extinction happens faster than for weak diffusion , since the system feels the finite grid size faster . since extinction events do not happen coherently all over the grid , the extinction of a certain species all over the grid takes the longer , the larger the grid is . one may wonder whether strong diffusion can compensate for a large grid size with smaller demographic fluctuations and accelerate the extinction of species . for the range of parameter values which we have chosen , the effect of demographic fluctuations dominated the impact of diffusion , so that extinction events decreased with increasing system size for both weak and stronger diffusion . figure [ ( 3,1)1d_1 ] ( d ) shows the time evolution on a one - dimensional lattice for strong diffusion and close to the bifurcation point . this regime allows for more regular oscillations in time ( compare figure [ ( 3,1)1d_1 ] ( d ) with [ ( 3,1)1d_2 ] ( c ) ) and a homogeneous arrangement in space . it is the distance from the bifurcation point in parameter space that influences the patterns . the larger the distance from the bifurcation point is , that is , the larger the ratio , the more disordered the patterns are , see figure [ ( 3,1)1d_2 ] ( c ) . the reason is that the amplitudes of the `` noisy limit cycles '' are the larger , the farther away the hopf bifurcation to the stable fixed point is . the larger their amplitude , the closer the trajectories come to the unstable fixed points , which are saddles , so the trajectories first get attracted and later repelled by the saddles and are distorted in this way , as we see in figure [ ( 3,1)1d_2 ] ( c ) . although the width of the wave - like patterns , which are oscillations in space and time , is of the same order for both considered ratios of , for a larger ratio of this width fluctuates more , see figures [ ( 3,1)1d_1 ] and [ ( 3,1)1d_2 ] . the `` v''- and `` z''-like shapes of waves on an -lattice are analogous to the multiple spiral centers on a two - dimensional lattice : in 2d we find multiple sources , which emit wave fronts that propagate in ( t , x)-space and create `` v''- and `` z''-shapes when colliding with each other . the diffusion strength and the distance from the bifurcation point have a similar influence on the patterns as in the ( 3,1)-game . figures [ ( 3,2)2d_1 ] and [ ( 3,2)2d_2 ] show domain formation in the -game on a two - dimensional lattice for two different strengths of the diffusion constant . a domain consists of connected lattice sites on which one species is dominant . consider the number of individuals of the dominant species relative to the total number of individuals on a certain site within a domain . this
number is smaller on the edges of the domain than in its center . in the center it often happens that the dominant species is the only one occupying the sites . the interaction between domains , and therefore between species , happens only at the boundaries of the domains . the boundaries are distinguished as sites with a well mixed occupation , with individuals of two or sometimes all three species . , , that is far off the bifurcation point , and =0.1 ( for ( a ) , ( b ) and ( c ) ) , and ( for ( d ) ) . the panels show the evolution of patterns starting at 0 ( a ) , ( b ) , ( c ) , and 0 ( d ) gs . ] here the distance from the bifurcation parameter does not seem to influence the extinction times on one - dimensional lattices . the corresponding figures can be found in . the time it takes for one or more species to go extinct is of the same order of magnitude for different ratios of if the diffusion constant is the same . this is qualitatively different from the ( 3,1)-game , where the extinction times depend on the distance from the bifurcation point in parameter space . in the case of the ( 3,1)-game , the mean - field solutions are a heteroclinic cycle in the infinite volume limit , in which all species coexist , so the main mechanisms by which extinction can happen in a stochastic realization are demographic fluctuations and the finite lattice size . the situation is different in the ( 3,2)-game : after the 3-species coexistence - fixed point becomes unstable , one - species fixed points become stable , so we have an extinction of all but one species already on the mean - field level for all , and the diffusion in the continuum homogenizes the individual fixed - point values to one and the same collective one . before this happens in the stochastic realization , first domains form with different species occupying the sites , where the one - species fixed points on the sites may differ in the surviving species ; nor do the extinction events need to happen on all sites simultaneously . the faster the random walk on the grid , the more it resembles a strong diffusion in the space - continuum , and the more homogeneous the occupation becomes , so that all sites tend to evolve to the same fixed - point value that tells us which species survives . for weaker diffusion or a slower random walk , domains may coexist for a long time until a single species occupies the entire lattice . for comparison with the mean - field predictions of figure [ ( 3,2)_array_dom ] ( a ) and ( b ) we show figure [ ( 3,2)1d_1 ] ( a ) and ( b ) with ( c ) , respectively . the patterns forming on a ( t , x)-grid are horizontal stripes , consisting of sites on which only one species dominates . on the boundaries of the stripes , the sites are well mixed . with time , some of the stripes narrow and afterwards disappear , until the whole lattice is populated by only one species . as in the ( 3,1)-game , for weak diffusion no extinction of all but one species happens within gs , while for strong diffusion it happens after gs . for further details on the gillespie simulations of the ( 3,2)-game we refer to .
we consider games of prey and predation with species and prey and predators , acting in a cyclic way . further basic reactions include reproduction , decay and diffusion over a one- or two - dimensional regular grid , without a hard constraint on the occupation number per site , so in a bosonic " implementation . for special combinations of and and appropriate parameter choices we observe games within games , that is different coexisting games , depending on the spatial resolution . as concrete and simplest example we analyze the ( 6,3 ) game . once the players segregate from a random initial distribution , domains emerge , which effectively play a ( 2,1)-game on the coarse scale of domain diameters , while agents inside the domains play ( 3,1 ) ( rock - paper - scissors ) , leading to spiral formation with species chasing each other . as the ( 2,1)-game has a winner in the end , the coexistence of domains is transient , while agents inside the remaining domain coexist , until demographic fluctuations lead to extinction of all but one species in the very end . this means that we observe a dynamical generation of multiple space and time scales with emerging re - organization of players upon segregation , starting from a simple set of rules on the smallest scale ( the grid constant ) and changed rules from the coarser perspective . these observations are based on gillespie simulations . we discuss the deterministic limit derived from a van kampen expansion . in this limit we perform a linear stability analysis and numerically integrate the resulting equations . the linear stability analysis predicts the number of forming domains , their composition in terms of species ; it also explains the instability of interfaces between domains , which drives their extinction ; spiral patterns are identified as motion along heteroclinic cycles . the numerical solutions reproduce the observed patterns of the gillespie simulations including even extinction events , so that the mean - field analysis here is very conclusive , which is due to the specific implementation of rules .
it is one of the most fundamental consequences of the laws of quantum mechanics that non - orthogonal states can not be discriminated with certainty . this allows for applications such as quantum key distribution ( qkd ) , but it also ultimately limits the capacity of communication channels . in an optical communication protocol , a sender encodes information into one or more parameters of the light field . such a parameter could for instance be the light 's frequency , phase or amplitude . the prepared signal states are subsequently sent through an optical channel and directed to the receiver , where the information is retrieved via an adequate measurement . however , if the power of the received signals is small , i.e. on the order of single photons , quantum mechanics has to be taken into account . in this regime , the minimum error rate for the discrimination of the signals is not only limited by the shortcomings of the technical apparatus but also by the laws of quantum mechanics . these laws impose strict bounds , depending on the implemented type of encoding , which can not be overcome by any measurement device . a lot of attention has already been devoted to the development and characterization of optimal and near - optimal discrimination strategies for the elementary binary encoding into optical coherent states of the light field , which allows one bit of information to be transmitted per state . a more efficient encoding is provided by quadrature phase shift keying ( qpsk ) , a technique which is widely used in wireless networks for mobile phones and backbone fiber networks . the qpsk alphabet comprises four states equally separated by a phase of and allows for the transmission of two bits of information per signal state . the minimal error rates for the discrimination of the qpsk alphabet have been derived by helstrom . in the case of binary alphabets , it has been shown that the feasible secret key rates of quantum key distribution systems can be significantly improved by optimizing the receiver scheme . since qkd protocols with alphabets of four or more states are also investigated , optimized receivers for such alphabets are of great interest . in this paper , we present a novel discrimination scheme . we use a hybrid approach , which means that we consider both fundamental representations of our quantum states : the discrete and the continuous representation . we prove in theory and provide experimental evidence that the standard scheme - heterodyne detection - can be outperformed for any signal amplitude . let us discuss different discrimination strategies for the qpsk alphabet . besides heterodyne detection , where the received state is inferred from the beat signal between the signal and a local oscillator of slightly different frequency , there are two other advanced discrimination schemes , based on a photon counting detector and feedback , that were proposed by bondurant . in all these receivers , the measurement is performed by a single detection stage . in contrast , it is also possible to divide the state into parts which can be distributed among several detection stages . this method is for instance utilized in dual homodyne detection , where the received state is inferred by first splitting it on a balanced beam splitter and subsequently measuring the projections along two orthogonal quadratures via two homodyne detectors .
however , the retrieved information in a dual homodyne measurement and in a heterodyne detection is identical such that the error rate is not reduced by the additional detection stage .it is for this equivalence that the terms heterodyne detection and dual homodyne detection are commonly used in a synonymic way .recently , another receiver capable of achieving error rates below the heterodyne limit was proposed by becerra et al . . this scheme is based on successive measurements on parts of the state and feed forward .our strategy is to perform two successive measurements on parts of the quantum state .the result of the first measurement reveals partial information about the state and is used to optimally tune the receiver for the second measurement .a schematic of the discrimination procedure is presented in fig.[discriminationscheme3 ] .the first measurement is performed by a homodyne detector ( hd ) , best described by continuous variables .the homodyne measurement under a proper quadrature allows us to discard half of the possible states by making a binary decision based on the quadrature projections of the signal .the homodyne result is forwarded to a photon counting receiver , which finally identifies the input state by discriminating between the two remaining states .this task is performed near - optimally by an optimized displacement receiver , which is an advancement over the kennedy receiver .we implemented the hybrid scheme employing both the kennedy ( k ) receiver and the optimized displacement ( od ) receiver .the homodyne - kennedy receiver ( hd - k ) beats the heterodyne detection for signal powers above a threshold ( around ) .however the homodyne - optimized displacement receiver ( hd - od ) outperforms heterodyne detection for any signal power .suppose , we are given a quadrature phase - shift keyed ( qpsk ) coherent signal , where and each of the states in the mixture has an _ a priori _ probability of .the quantum limit - the helstrom bound - for the discrimination of these signals is asymptotically given by for .the input signal is divided by a beam splitter ( bs ) with transmittance and reflectivity .the transmitted and reflected parts are guided to the homodynde detector and the photon counting stage .first , one performs a homodyne detection along the p quadrature in phase space and makes a decision whether the signal is in the upper or the lower half plane .the result is forwarded to the photon counting receiver , which is then tuned for the discrimination of the remaining pair of states .let us recall the expression for the error probability in hypothesis testing : in the case of qpsk and the expression contains 12 terms corresponding to detection errors expressed by the conditional probabilities which correspond to choosing the hypothesis :state was sent " when the correct hypothesis is :state was sent " . in communications ,the bit error rate ( ber ) is of particular interest .it is defined as the ratio between the number of erroneous bits and the total number of sent bits . for the qpsk alphabetthe ber can explicitly be written as where for and otherwise .this means that higher bit errors are assigned to errors between distant states which will occur less frequently .however , in this work we will concentrate on the minimum error rate . 
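To illustrate the bookkeeping behind these two figures of merit, the following minimal sketch evaluates the average error probability and the BER from a matrix of conditional probabilities. The numerical values of the matrix and the Gray-coded bit mapping are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical conditional probabilities P[m, n] = P(decide "state m" | state n sent),
# with the four QPSK states ordered by phase; columns sum to one.
P = np.array([[0.90, 0.04, 0.02, 0.04],
              [0.04, 0.90, 0.04, 0.02],
              [0.02, 0.04, 0.90, 0.04],
              [0.04, 0.02, 0.04, 0.90]])

priors = np.full(4, 0.25)                          # equal a priori probabilities

# Average (symbol) error probability: every off-diagonal decision is an error.
p_err = 1.0 - np.sum(priors * np.diag(P))

# Bit error rate, assuming a Gray-coded mapping (00, 01, 11, 10 around the circle):
bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
hamming = (bits[:, None, :] != bits[None, :, :]).sum(axis=2)   # bit flips between states
ber = np.sum(priors[None, :] * P * hamming / 2.0)              # 2 bits per QPSK symbol

print(p_err, ber)
```

With Gray coding an error to an adjacent state flips one of the two bits and an error to the opposite state flips both, which is exactly the weighting described above.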
in order to calculate the error probability , it will be more convenient to first evaluate the success probabilities for the homodyne detector hd , the kennedy receiver k and the optimized displacement receiver od separately and then simply find : and , which have only 4 terms . in the case of the hybrid detectors analyzed here ,the probability of success of the individual binary receivers is independent , and we can write : where are the a priori probabilities .let us illustrate the procedure in more detail by assuming the signal is prepared in the state as indicated in fig.[discriminationscheme ] .the reflected part is directed to the homodyne detector , which discriminates between positive and negative values of the projection onto the p quadrature and is described by the povm elements the probability to observe the erroneous outcome is given by \right).\ ] ] note , that due to the projection onto the p quadrature , the effective signal amplitudes in the homodyne detection are reduced by a factor of . supposing the measurement yielded the correct hypothesis , the next task is to discriminate between and via the kennedy or the optimized displacement receiver . for simplicity ,let us first consider the kennedy receiver .the signal is displaced such that one of the remaining candidate states is shifted to the vacuum state , while the other state gets amplified to an amplitude of .the states are identified by observing whether or not a click occurs in the detector . in the scenariodepicted in fig.[discriminationscheme ] , the displacement was ( arbitrarily ) chosen to shift to the vacuum .therefore , the hypothesis is , whenever no click was detected , whereas the input state is identified as if a detection event is recognized .the corresponding povm elements of the kennedy receiver are where denotes the displacement operator .as the vacuum state is an eigenstate of the photon number operator ( ) , the state shifted to the vacuum state will never generate a click and the error probability ] , where the errors originate from the remaining overlap between the displaced state and the vacuum state .the total error rate for the detection of is given by note , that the same error rates follow for the other signals ( n = 3 , 4 ) .consequently , the average error probability for the hd - k hybrid receiver is ) ( 1- \frac{1}{2 } e^{-2|t\,\alpha|^2 } ) , \end{aligned}\ ] ] where is the average error rate of the kennedy receiver stage .the error rates of the kennedy receiver can however be lowered by optimizing the displacement , which leads to the optimized displacement ( od ) receiver .the error rates of the kennedy- and the optimized displacement receiver for the discrimination of binary states have been compared to the optimal gaussian approach ( homodyne detection ) in .the kennedy receiver is superior to homodyne detection for signals with a mean photon number , whereas the optimized displacement receiver outperforms the optimal gaussian approach for any signal power . to derive the optimal displacment parameter for the qpsk signal it is convenient to separate the total displacement into two elementary steps as illustrated in fig.[displacementsketch ] .first , the states are displaced to the x quadrature , which is described by the displacement operator .the situation is then equivalent to a binary state discrimination problem for two states with amplitude . 
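A minimal numerical sketch of this error budget, under stated assumptions: the homodyne-stage error is written as 0.5*erfc(r*alpha) (shot-noise-limited Gaussian statistics with the sqrt(2)-reduced effective amplitude; the exact prefactor depends on the quadrature convention), the Kennedy-stage error follows the expression quoted above, and the optimized-displacement stage is handled by direct numerical minimization over the displacement amplitude rather than by the transcendental optimality condition discussed next. The grouping of both terms in the exponent of the OD expression with a negative sign is reconstructed from the click/no-click statistics of two displaced coherent states, and T = 0.53 is an illustrative value close to the effective splitting ratio quoted for the experiment.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize_scalar

def hybrid_error_rates(alpha, T=0.53):
    """Average QPSK error of the hybrid receiver for signal amplitude alpha.
    The transmitted part t*alpha feeds the photon-counting stage (matching the
    t*alpha in the Kennedy expression above) and the reflected part r*alpha
    feeds the homodyne stage."""
    t, r = np.sqrt(T), np.sqrt(1.0 - T)

    # Homodyne stage: binary decision along p with effective amplitude r*alpha/sqrt(2).
    p_hd = 0.5 * erfc(r * alpha)

    # Kennedy stage: one candidate shifted to vacuum, the other to amplitude sqrt(2)*t*alpha.
    p_k = 0.5 * np.exp(-2.0 * (t * alpha) ** 2)

    # Optimized-displacement stage: minimize the click/no-click error over beta.
    a = t * alpha / np.sqrt(2.0)             # effective binary amplitude after projection
    p_od_of = lambda b: 0.5 - np.exp(-(a ** 2 + b ** 2)) * np.sinh(2.0 * a * b)
    p_od = minimize_scalar(p_od_of, bounds=(1e-6, a + 3.0), method="bounded").fun

    err = lambda p_pc: 1.0 - (1.0 - p_hd) * (1.0 - p_pc)
    return err(p_k), err(p_od)               # (HD-K, HD-OD) average error rates

print(hybrid_error_rates(alpha=0.7))
```

Setting the displacement equal to the effective amplitude a reduces the OD expression to the Kennedy error above, which is a useful sanity check of the sketch.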
in this configuration ,the optimal displacement is given by the solution of the transcendental equation which is obtained by requiring .as illustrated in fig.[displacementsketch ] , the optimal displacement amplitude and phase for the qpsk signal are then following as combining the two elementary displacements , the od receiver is finally described by the povms and , with .the error rates for the hd - od hybrid receiver follow directly by exchanging the kennedy error rates for the error rates of the od receiver .the total error rate is then given by + \mathrm{tr}\left[\hat{\pi}_{\mathrm{b}}^{od}\,|t\alpha_2 \rangle \langle t\alpha_2 |\right ] \right ) \nonumber \\ & = & \frac{1}{2 } - \exp\left(-t^2\frac{|\alpha|^2}{2 } + |\beta|^2 \right ) \sinh\left({\sqrt{2}\,t\,\alpha\beta } \right).\end{aligned}\ ] ] the optimal displacement parameters for the kennedy receiver and the od receiver are shown as a function of the transmitted signal in fig.[opt_beta ] . the displacement in the od receiver is clearly increased for small signal powers and has a minimum value of in the limit of very low signal powers .asymptotically , the displacement of the od receiver approaches the values of the kennedy receiver , which is identical to the transmitted signal power .the phase describes the direction of the displacement in phase space as sketched in fig.[displacementsketch ] . in case of bright signals ,both detectors displace the states towards the vacuum state , which corresponds to a phase of . with decreasing signal power the phase in the od receiver is asymptotically approaching , which corresponds to a displacement parallel to the x quadrature . andoptimal displacement phases in dependence of the transmitted part of the signal for both the kennedy and the optimized displacement receiver ., width=604 ] besides of the displacement parameter , the transmittance of the beam splitter can be optimized to minimize the error rate .the optimal parameters are shown in fig.[transmission ] . in case of small signal powers , the quantum state in the hd - od receiver is distributed nearly equally among the two receiver stages . with increasing signal power the share of the photon counting receiver is monotonically decreasing .in contrast , the optimized transmission for the hd - k receiver shows a distinct maximum around , but approaches the optimal transmission parameter of the hd - od receiver asymptotically with increasing signal power . in the limit of very high signal powers ( not shown in the figure ) ,the share of the photon counting receivers tends to .this reflects the increasing imbalance between the performance in binary state discrimination of the photon counting receivers compared to hd detection . in this regime ,the photon counting receivers performance is ( in theory ) exceedingly superior to the quadrature measurements .the homodyne detection thus constitutes the main source of errors .the total error is minimized by allocating the major share of the state to the hd detector . practically however, the performance of click detectors in the high amplitude regime is technically limited by dark counts .we proceed with a description of the experimental setup which is shown in fig .[ setup ] .our source is a grating - stabilized diode laser operating at a wavelength of nm . 
the laser has a coherence time of and is measured to be shot noise limited within the detection bandwidth .first , the beam passes a single mode fiber to purify the spatial mode profile .subsequently , the beam is split asymmetrically into two parts : a bright local oscillator ( lo ) , which is directed to the hd stage and a weak auxiliary oscillator ( ao ) , which is used both to prepare the signal states and to realize the displacement at the photon counting receiver stage .directly after the first beam splitter , the ao passes an attenuator ( att . ) to reduce its intensity to the few photon level .the use of of electro - optical modulators ( eoms ) and wave plates allows to generate signal states as pulses of and at a repetition rate of in the same spatial mode as the ao but with an orthogonal polarization .the signal is split on a beam splitter and the parts are guided to the homodyne detector and the photon counting receiver , respectively . in the hd path , the signal mode is separated from the ao via an optical isolator aligned to absorb the remaining ao .moreover , the isolator avoids back - propagation of photons from the lo to the photon counting receiver .subsequently , the signal is spatially superposed with the lo on a polarizing beam splitter ( pbs ) . up to this point signal and lo are still residing in orthogonal polarization modes . the required interference is achieved by a combination of a half - wave plate hwp and a pbs .the wave plate is aligned to rotate the polarization axis by an angle of . at this point , the signal and the lo have equal support on the principal axis of the subsequent pbs , such that they are split symmetrically and the interference is achieved .the measured quadrature in the hd is adjusted via a feed back controlled piezo - electric transducer in the lo path .the measured visibility between the signal and the lo is and the quantum efficiency of the photo diodes is measured to be . from this, the total quantum efficiency of the homodyne detection follows as . in the photon counting receiver paththe displacement is generated by coupling photons from the ao to the orthogonally polarized signal mode .this is achieved by first rotating the polarization of the signal and the ao via a hwp , followed by a projection onto the original signal polarization mode by a pbs .the angle of the hwp , and hence the displacement strength , is controlled by a stepper motor .if the required rotation angle is small , i.e. for a sufficiently bright ao , the disturbance of the signal states is small and the operation is equivalent to a perfect displacement operation .the displacement operation can be described as , where denotes the coherent state in the auxiliary oscillator mode . 
experimentally however , increasing the ao power results in an increased dark count rate originating from the limited extinction ratio of the eoms , which is measured to be .we therefore adjusted the mean photon number in the ao to optimize the trade off between state disturbance and dark count rate , which leads to an ao with about 20 photons .finally , the displaced signal is coupled to a multi - mode fiber connected to an avalanche photo diode ( apd ) .the apd is operated in an actively gated mode and has a measured quantum efficiency of .we probe the receiver with a sequence of test signals .each sequence is composed of an initial block of phase calibration pulses used to lock the quadrature in the homodyne measurement , followed by 9 blocks of probe pulses .each block contains the full qpsk alphabet for 34 different amplitudes in the range ] .the aim of the experiment is to provide a proof - of - principle demonstration of the hybrid receivers performance unaffected by any imperfections of the implemented hardware , but only limited by the physical concept . in the analysis of the experimental data ,we therefore assume unit quantum efficiencies for the individual receivers .losses and detection inefficiencies , which can also straightforwardly be described as loss , merely result in a linear rescaling of the states amplitudes . by combining this with the linearity of a beam splitter interaction, we can assign the detection inefficiencies to the state generation stage .this trick has proven to ease the understanding of the protocol by removing unnecessary prefactors .the assignment leads to a beam splitter with an effective splitting ratio : and .we measured the error rates for both the hd - k receiver and the hd - od receiver at an effective splitting ratio of t / r = 53/47 .the results are compared to the performance of an ideal heterodyne receiver in fig.[error_rate](left ) .the solid curves correspond to the theoretical error rates under ideal conditions , whereas the dashed curves include the detrimental effects of dark counts , which occurred with the probability of 2,72 .the error bars were derived by error propagation of the experimental uncertainties of the input amplitude and the displacement amplitude as well as the fluctuations among repeated realizations of the experiment which were around 0.5% . .the curves differ in the direction of the displacment in phase space ., width=604 ] we find the error rates for the hd - k receiver by evaluating the data where the signal power of the displaced state is minimal , i.e. 
when one state in the photon counting stage has been displaced to the vacuum .the error rates for the hd - od receiver are derived by minimizing the error rate over the range of measured displacements and displacement phases .the results for both receivers are in good agreement with the theoretical predictions .the measured error rates for the hd - od receiver are below the corresponding error rate of the ideal heterodyne detector for any input amplitude .moreover , most of the measurements beat the heterodyne receiver s performance with about one standard deviation .the essential difference between the hd - k and the hd - od receiver is illustrated in fig.[error_rate](right ) , where the dependence of the error rates over the displacement is shown for an input signal with mean photon number .the curves differ in the respective displacement angles in the two receivers .while the hd - k receiver was measured at , the phase in the hd - od receiver was adjusted to fulfill the optimality criterion ( see fig.[displacementsketch ] ) corresponding to .the configurations for the hd - k ( ) and the hd - od receiver ( minimal error rate ) are highlighted .obviously , the performance of the hd - k receiver can already be enhanced by increasing the displacement amplitude , however the minimal error rates are only achieved if both the displacement amplitude and phase are optimized .the corresponding error rate for the standard heterodyne receiver is shown as a reference and is surpassed by the hd - od receiver for a wide range of displacement amplitudes .the curvature of the error rate around the minimum is remarkably flat , such that the dependence on the absolute amplitude of the displacement is low .the relative error rates of the hybrid receivers , normalized to the error rates of heterodyne detection are shown fig.[comparison ] . additionally , the relative error rates of the before mentioned bondurant receiver is shown .bondurant had proposed two similar discrimination schemes which he termed type i and type ii , respectively .the curve shown in the figure correponds to the bondurant reveiver of type i , which provides the better performance in the considered region .while this receiver outperforms heterodyne detection and also our hybrid approaches for conventional signal amplitudes , it can not provide an enhanced performance in the domain of highly attenuated signals .the hd - od receiver provides to the best of our knowledge the hitherto minimal error rates for signals with mean photon numbers .we have proposed and experimentally realized a hybrid quantum receiver for the discrimination of qpsk coherent signals .we showed experimentally , that our novel receiver can outperform the standard scheme - heterodyne detection - for any signal amplitude .this work was supported by the dfg project le 408/19 - 2 and by the danish research agency ( project no .fnu 09 - 072632 ) .10 c w helstrom , _ detection theory and quantum mechanics _ inform . control 1967 * 10 * k tsujino , d fukuda , g fujii , s inoue , m fujiwara , m takeoka , and m sasaki _ sub - shot - noise - limit discrimination of on - off keyed coherent signals via a quantum receiver with a superconducting transition edge sensor _ , opt .express , 2010 , * 18 * , 8107 c wittmann , u l andersen , m takeoka , d sych and g leuchs , _ demonstration of coherent - state discrimination using a displacement - controlled photon - number - resolving detector _ phys .2010 * 104 * , 100505
we propose and experimentally demonstrate a near - optimal discrimination scheme for the quadrature phase shift keying ( qpsk ) protocol . we show theoretically that the performance of our hybrid scheme is superior to the standard scheme , heterodyne detection , for all signal amplitudes , and we underpin the predictions with our experimental results . furthermore , our scheme provides the hitherto best performance in the domain of highly attenuated signals . the discrimination is composed of a quadrature measurement , a conditional displacement and a threshold detector .
division multiplexing ( ofdm ) is a technique widely used in many digital communication systems such us digital television ( dtv ) , digital audio broadcasting ( dab ) , terrestrial digital video broadcasting ( dvb - t ) , digital suscriber line ( dsl ) broadband internet access , standards for wireless local area networks ( wlans ) , standards for wireless metropolitan area networks ( wmans ) , and 4 g mobile communications .it has many advantages such us high bit rate , strong immunity to multipath and high spectral efficiency . however , one of the most serious problems is the high peak - to - average power ratio ( papr ) of the transmitted ofdm signal , since this large peaks introduce a serious degradation in performance when the signal passes through a nonlinear high - power - amplifier ( hpa ) .the non - linearity of hap leads to in - band distortion which increases bit error rate ( ber ) , and out - of - band radiation , which causes adjacent channel interference .there are several proposals to deal with the papr problem in ofdm systems , .the different techniques can be classified into different groups according to their characteristics .the most general classification is : clipping techniques , coding techniques , the distortionless schemes with side information and distortionless techniques without side information .the simplest implementation method is clipping technique , which consists in to deliberately clip the ofdm signal before amplification .clipping can reduce papr but this is a nonlinear process and may cause both in - band and out - of - band interference while destroying the orthogonality among the subcarriers .then , coding techniques are found , which are introduced in .the key of those techniques is to select the codewords that minimize the papr . in the next groupwe have the techniques that cause no distortion and create no out - of - band radiation , but they may be require the transmission of the side information to the receiver . techniques that require the transmission of side information are for example partial transmit sequence ( pts ) , tone reservation ( tr ) , etc . on the other hand selected mapping ( slm ) , constellation extension or orthogonal pilot sequences ( ops )do not require the transmission of side information .in this paper we describe some relevant papr reduction techniques of the literature .therefore , the paper is organized as follows . section [ sec : signal - model ] briefly shows the ofdm signal model . in section [ sec : papr ] the papr problem of the ofdm systemis presented .the clipping techniques are exposed in section [ sec : clip ] . in section [ sec : coding ] coding schemes are described .the analysis of the different distortionless techniques are addressed in section [ sec : dstless ] .simulation results are provided in section [ sec : resul ] .finally conclusions are drawn in section [ sec : viii ] .the ofdm signal is the sum of independent signals modulated onto subchannels of equal bandwidth , which can be efficiently implemented by an inverse discrete fourier transform ( idft ) operation , as illustrated in figure [ fig : ofdm - signal ] .we denote as the frequency - domain complex data sequence , consisting of the frequency - domain complex symbols over subcarrier , to be transmitted in the ofdm symbol. then the time - domain signal \rbrace ] is defined as the ratio between the maximum instantaneous power and its average power , that is : where denotes expected value . 
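A minimal sketch of these definitions: QPSK data on N subcarriers, the time-domain OFDM symbol obtained via an oversampled IDFT, the resulting PAPR in dB, and an empirical estimate of the probability that the PAPR exceeds a threshold (the CCDF discussed next). The oversampling factor L = 4, the number of subcarriers and the symbol count are illustrative assumptions.

```python
import numpy as np

def papr_db(X, L=4):
    """PAPR of one OFDM symbol with frequency-domain data X (length N), oversampled by L."""
    N = len(X)
    # Zero-stuff the middle of the spectrum to interpolate the time-domain signal.
    Xz = np.concatenate([X[:N // 2], np.zeros((L - 1) * N, dtype=complex), X[N // 2:]])
    x = np.fft.ifft(Xz)                       # time-domain OFDM symbol (IDFT)
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# Empirical CCDF point: Pr(PAPR > PAPR0) over many random QPSK OFDM symbols.
rng = np.random.default_rng(0)
N, n_sym = 256, 10_000
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
paprs = np.array([papr_db(rng.choice(qpsk, size=N)) for _ in range(n_sym)])
papr0 = 10.5                                  # example threshold in dB
print(f"Pr(PAPR > {papr0} dB) ~ {np.mean(paprs > papr0):.1e}")
```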
in the literature ,the most common way to evaluate the papr is to determine the probability that this papr exceeds a certain threshold .this is represented by the complementary cumulative distribution function ( ccdf ) , which is a random variable , as : simplest papr reduction method consists basically in clipping the high parts of the signal amplitude that are outside the allowed region .if the ofdm symbol is clipped at a level , then the the clipped signal is : where is the phase of .this technique is the simplest of implementation but it has the following drawbacks : * clipping causes in - band distortion , which degrades the performance of the ber * clipping causes out - of - band radiation , resulting in adjacent interference .this can be reduced by filtering , and thus clipping and filtering ( cf ) operation is used in .when the signal passes through the low - pass filtering there is a peak power regrowth . in has been shown that the nyquist - rate clipping suffers from a much higher peak power regrowth compared to the clipping with oversampling .thus , the results suggest that for efficient reduction of the peak power , the ofdm should be sufficiently oversampled ( _ i.e. _ ) before clipping .* there is the possibility to use iterative cf , but it takes many iterations to reach a desired amplitude level .coding techniques consist in selecting the codewords that minimize or reduce the papr .initially the idea was introduced in .this scheme requires exhaustive computational load to search the best codewords and to store the large lookup tables for encoding and decoding , specially with large number of subcarriers .moreover , these techniques do not address the error correction problem . in , an optimum code set for achieving minimum papris introduced and , moreover , to deal with the error correction it uses an additive offset .it enjoys the twin benefits of power control and error correction , but requires extensive calculation to find good codes and offsets .also , there are approaches in which the use of complementary block coding ( cbc ) was proposed to reduce the papr without the restriction of the frame size , . 
in reed - solomon ( rs )codes over the galois field are employed to create a number of candidates , from which the best are selected .considering the characteristics of those coding techniques , the main disadvantage of those coding methods is the good performance at the cost of coding rate , and a high computational load to search the adequate codewords .as an alternative to combat the papr problem , there are several techniques without distortion such us partial transmit sequences ( pts ) , selected mapping ( slm ) , tone reservation ( tr ) , orthogonal pilot sequences ( ops ) and constellation extension among other .those methods may require the transmission of side information to the receiver .in the next lines we dedicate to explain the several distortionless techniques put into two groups according to if they require or not the transmission of side information .many techniques require the transmission of side information to the receiver in order to the receiver can determine the type of processing that has been employed at the transmitter side .we introduced the more significant schemes .the pts technique was originally introduced in , which consists in that the frequency - domain input data block is subdivided into disjoint carrier subblocks , which are represented by the vectors .in general , for pts scheme , the known subblock partitioning methods can be classified into three categories : adjacent partition , interleaved partition and pseudorandom partition .then , the subblocks are transformed into time - domain partial transmit sequences : = \mbox{idft } \lbrace{\mathbf{s}}^{(v)}\rbrace.\ ] ] these partial sequences are independently rotated by phase factors .the objective is to optimally combine the subblocks to obtain the time - domain ofdm signals with the lowest papr . therefore , there are two important issues that should be solved in pts technique : high computational complexity to reach optimal phase factors and the overhead of the phase factors as side information needed to be transmitted to the receiver for correct decoding .recently , some approaches have been proposed in order to reduce the complexity , such us , where an iterative pts ( ipts ) is presented .the performance of this scheme is not good although the ipts is very simple .the dual - layer phase sequencing ( dlps ) with different implementations also is proposed in .there are other techniques that have been introduced in , and where the authors propose to reduce the computational complexity of pts .for example , in is introduced the idea of the relationship between the weighting factors and the transmitted bit vectors .tr and ti are two efficient techniques to reduce the papr of ofdm signals and they are proposed in . in these schemesboth transmitter and receiver reserve a subset of tones that are not used for data transmission to generate papr reduction signals .the objective of tr is to find the time - domain signal to be added to the original time - domain signal to reduce the papr .denoting as a frequency - domain vector for tone reservation , that added to the data input symbols , then the new time - domain signal after tone reservation processing is : therefore , the main aim of the tr is to find out the proper to make the vector with lower papr . 
to find the value of , we must solve a convex optimization problem that can easily be cast as a linear programming problem .similarly , ti also uses an additive correction , although the idea is to increase the constellation size so that each of the points in the original basic constellation can be mapped into several equivalent points in the expanded constellation . each symbol in a data block can be mapped into one of several equivalent constellation points .this method is called tone injection because substituting a point in the basic constellation for a new point in the larger constellation is equivalent to injecting an appropriate tone .the ti technique is more problematic than the tr scheme since the injected signal occupies the frequency band as the information bearing signals .moreover , the alternative constellation points in ti technique have an increased energy and the implementation complexity increases due to the computation of the optimal translation vector . in the next lines of this subsection we expose several distortionless techniques , which do not require the transmission of side information to the receiver . in this technique ,the transmitter generates a sufficiently large number of alternative ofdm signal sequences , all representing the same information as the original symbol .then each of these alternative input data sequences is made the idft operation and the one with the lowest papr is selected for transmission .each input data symbol is multiplied by different phase sequences , each of length , , resulting in modified data symbol . after applying slm to ,the time - domain signal becomes : where .the original slm scheme has the next characteristics : * information about the selected phase sequence should be transmitted to the receiver as side information . at the receiver , the reverse operation is performed to recover the original data symbol . however , in and an slm algorithm without explicit side information is proposed .* for implementation , the slm technique requires a bank of idft operations to generate a set of candidate transmission signals , and this requirement usually results in high computational complexity .there are some approaches attempting to decrease the complexity like , and .* this approach is applicable with all types of modulation and any number of subcarriers .* the amount of papr reduction for slm depends on the number of phase sequences and the design of the phase sequences . in constellation extension techniquesthe key is to play intelligently with the outer constellation points , that are moved within the proper quater - plane as shown in figure [ fig : constellation ] , such that the papr is minimized .the main advantages of these techniques are enumerated next : * the minimum distance of the constellation points is not affected and it consequently guarantees no ber degradation . * these methods do no require the transmission of side information to the receiver .* there is no data rate loss .nevertheless , constellation expansion schemes introduce an increase in the energy per symbol . in active constellation extension ( ace )is presented , and in this method all symbols are expanded but its computational burden is high .more recent , a metric - based symbol predistortion scheme has been introduced in and . 
in this method , a metric , defined mathematically in ( [ eq : metrica ] ) ,is used to measure how much each frequency - domain symbol contributes to large peaks , and the frequency - domain symbols with the highest metric values are selected and predistorted with a constant scaling factor . where , is a function which gives an appropriate measure of the phase angle between the output sample ]can be separated into two parts , as : = \left\ { \begin{array}{lcr } x\left[n\right ] & = \frac{1}{\sqrt{n}}\sum_{k\notin\upsilon}{x(k)e^{j \frac{2\pi}{n}kn } } \\p\left[n\right ] & = \frac{1}{\sqrt{n}}\sum_{k\in\upsilon}{p(k)e^{j \frac{2\pi}{n}kn } } \end{array } \right.\ ] ] where ] refer to the time - domain data and pilot signals , respectively .ops technique proposes the use of a predetermined set of orthogonal pilot sequences of length ( ) in order to reduces complexity and avoids any side information since blind detection is possible at the receiver due to the orthogonality condition .the pilot symbols of each ofdm symbol can be collected in a sequence denoted as where , the element of this sequence is given by : }_k= \left\ { \begin{array}{lr } p(k ) , & k \in\upsilon \\ 0 & k \notin\upsilon \end{array } \right.\ ] ] as stated before , a set of pilot sequences are available so the alphabet of is .each pilot sequence of this finite set , contains the frequency - domain pilot symbols at pilot positions while zeros are inserted in the remaining ones .these pilots sequences are orthogonal so then the ortoghonality conditions is fulfilled where denotes the inner product . in particular , if the well - known walsh - hadamard sequences are employed where , then ] denotes the kronecker delta function .simulations are presented by averaging over randomly ofdm symbols with quaternary phase - shift keying ( qpsk ) modulation .the performance of the several papr schemes is presented in terms of ccdf .figure [ fig : ccdf - ns ] illustrates the papr of an ofdm system before applying any papr reduction technique .we present the ccdf for a set of different values of subcarriers .the horizontal and vertical axis represent the threshold ( ) for the papr and the probability that the papr of a certain ofdm symbol exceeds a threshold , respectively .it is shown that the unmodified ofdm signal has a papr that exceeds 10.5 db at the probability for .subcarriers.,scaledwidth=49.0% ] figure [ fig : ccdf - slm ] shows the performance comparison in terms of papr reduction with slm technique when we employ different values of the number of phase sequences .the solid line curve represents the ccdf of the ofdm symbols without any papr technique , and the solid marked line curves show the performance of the ofdm symbols after applying slm method for different values of .it can observed how a reduction close to 1.5 db with at a probability of .the solid - line curve corresponds to the conventional ofdm signal without any papr reduction scheme .the solid marked line curves represent the slm technique with different values of ,scaledwidth=51.0% ] simulations in figure [ fig : ccdf - ops ] depict the ccdf of the papr reduction after applying ops technique with different values of .we employ subcarriers .the ccdf of the ofdm symbols without any papr technique is represented by the solid line curve and the solid marked line curves show the performance of ops scheme for different values of .it can observed how the reduction is in the order of 1.5 db with at a probability of .the solid - line curve corresponds to the conventional 
ofdm signal without any papr reduction scheme .the solid marked line curves represent the ops technique with different values of .,scaledwidth=51.0% ] results of simulations of simple amplitude presitortion ( sap ) technique are presented in figure [ fig : ccdf - sap ] .we employ subcarriers .the ccdf of the ofdm symbols without any papr technique is represented by the solid line curve and the solid marked line curves illustrate the performance of sap scheme for different values of and .it can be noticed how the reduction is close to 2.5 db with at a probability of .the solid - line curve corresponds to the conventional ofdm signal without any papr reduction scheme .the solid marked line curves represent the sap technique with different set values of .,scaledwidth=50.0% ]ofdm is an attractive technique for digital communications systems due to its high bit rate , strong immunity to multipath and high spectral efficiency .however , one of the most serious problems is the high peak - to - average power ratio ( papr ) of the transmitted ofdm signal , since this large peaks introduce a serious degradation in performance when the signal passes through a nonlinear high - power - amplifier ( hpa ) . in this paper, we address the papr problem of ofdm systems and the more relevant techniques to achieve papr reduction .the more remarkable characteristics of those techniques are discussed as well as it is provided their mathematical description although there is an extensive state of the art , nowadays , the papr problem is still an active area of research with many open issues .this work has been partly funded by the spanish national projects gre3n - syst ( tec2011 - 29006-c03 - 03 ) and comonsens ( csd2008 - 00010 ) , and senescyt ( ecuador ) .t. jiang and y. wu , _ an overview : peak - to - average power ratio reduction techniques for ofdm signals " _ , ieee trans . on broadcasting , vol .54 , no . 2 , pp . 257 - 268 , jun . 2008 .seung hee han and jae hong lee , _ an overview of peak - to - average power ratio reduction techniques for multicarrier transmission " _ , ieee wireless communications , vol . 12 , no . 2 , pp . 56 - 65 , april 2005 . h. ochiai and h. imai , _ performance analysis of deliberately clipped ofdm signals " _, ieee trans . on communications , vol .50 , no . 1 ,89 - 101 , jan . 2002 .j. armstrong , _ peak - to - average power reduction for ofdm by repeated clipping and frequency domain filtering " _ , electronics letters , vol .38 , no . 8 , pp . 246 - 247 , feb . 2002 .s .- k . deng and m .- c .lin , _ recursive clipping and filtering with bounded distortion for papr reduction " _ , ieee trans . on communications ,55 , no . 1 ,pp . 227 - 230 , jan . 2007 .a. e. jones , t. a. wilkinson and s. k. barton , _ block coding scheme for reduction of peak to mean envelope power ratio of multicarrier transmission schemes " _ , electronics letters , vol .30 , no . 25 , pp .2098 - 2099 , dec . 1994 .a. e. jones and t. a. wilkinson , _ combined coding for error control and increased robustness to system nonlinearities in ofdm " _ , vehicular technology conference , 1996 . ` mobile technology for the human race ' , ieee 46th , vol . 2 , pp . 904 - 908 , 28 apr.-1 may 1996 . t. jiang and z. guangxi , _ complement block coding for reduction in peak - to - average power ratio of ofdm signals " _ , ieee communications magazine , vol .9 , pp . s17-s22 , 2005 . j. a. davis and j. 
jedwab , _ peak - to - mean power control for ofdm transmission using golay sequences and reed muller codes " _ , electronics letters , vol . 33 , no . 4 , pp . 267268 , feb .r. f. h .fischer and c. siegl , _ reed solomon and simplex codes for peak - to - average power ratio reduction in ofdm " _ , ieee trans . on information theory , vol .55 , no . 4 , pp . 1519 - 1528 ,s. h. muller and j. b. huber , _ ofdm with reduced peak - to - average power ratio by optimum combination of partial transmit sequences " _ , electronics letters , vol .33 , no . 5 , pp .368 - 369 , 1997 .l. j. cimini jr . and n. r. sollenberger , _ peak - to - average power ratio reduction of an ofdm signal using partial transmit sequences " _ , ieee communications letters , vol . 4 , no . 3 , pp . 8688 ,w. s. ho , a. s. madhukumar , and f. chin , _ peak - to - average power reduction using partial transmit sequences : a suboptimal approach based on dual layered phase sequencing " _ , ieee trans . on broadcasting ,49 , no . 2 , pp .225231 , jun .l. yang , r. s. chen , y. m. siu and k. k. soo , _ papr reduction of an ofdm signal by use of pts with low computational complexity " _ , ieee trans . on broadcasting , vol .52 , no . 1 ,83 - 86 , 2006 .dae - woon lim , seok - joong heo , jong - seon no and habong chung , _ a new pts ofdm scheme with low complexity for papr reduction " _ , ieee trans . on broadcasting , vol .52 , no . 1 ,77 - 82 , 2006 .a. ghassemi and t. a. gulliver , _ a low - complexity pts - based radix fft method for papr reduction in ofdm systems " _ , ieee trans . on signal processing , vol .56 , no . 3 , pp .1161 - 1166 , 2008 . j. tellado and j. m. cioffi , _ par reduction in multicarrier transmission systems " _ , ansi document ,t1e1.4 , pp . 114 , 1999 .x. b. wang , t. t. tjhung , and c. s. ng , _ reduction of peak - to - average power ratio of ofdm system using a companding technique " _ , ieee trans . on broadcasting ,45 , no . 3 , pp . 303307 ,sept . 1999 .r. w. bauml , r. f. h. fisher and j. b. huber , _ reducing the peak - to - average power ratio of multicarrier modulation by selected mapping " _ , electronics letters , vol .32 , no .22 , pp . 20562057 , oct .h. breiling , s. h. mller - weinfurtner , and j. b. huber , _ slm peak - power reduction without explicit side information " _ , ieee communications letters , vol . 5 , no . 6 , pp .23941 , jun . 2001 .chin - liang wang , _ low - complexity selected mapping schemes for peak - to - average power ratio reduction in ofdm systems " _ , ieee trans . on signal processing , vol .54 , no . 12 , pp .4652 - 4660 , dec . 2005 .s. j. heo , h. s. noh , j. s. no and d. j. shin _ a modified slm scheme with low complexity for papr reduction of ofdm systems " _ , ieee trans . on broadcasting , vol .53 , no . 4 , pp .802 - 808 , dec . 2007 .s. y. le goff , s. s. al - samahi , b. k. khoo , c. c. tsimenidis , and b. s. sharif,_selected mapping without side information for papr reduction in ofdm " _ , ieee trans . on wireless communications , vol . 8 , no . 7 , pp. 3320 - 3325 , jul .a. ghassemi t. a. gulliver , _ partial selective mapping ofdm with low complexity iffts " _ , ieee communications letters , vol .1 , pp . 4 - 6 , jan .b. s. krongold and d. l. jones , _ par reduction in ofdm via active constellation extension " _ , ieee trans .on broadcasting , vol .49 , no . 3 , pp . 258 - 268 , sept .s. sezginer , and h. sari , _ ofdm peak power reduction with simple amplitude predistortion " _ , ieee communications letters , vol .10 , no . 12 , pp . 
65 - 67 , feb . 2006 .s. sezginer , and h. sari , _ metric - based symbol predistortion techniques for peak power reduction in ofdm systems " _ , ieee trans . on wireless communications , vol . 6 , no . 7 , pp .2622 - 2629 , jul .m. j. fernandez - getino garcia , o. edfors , j. m. paez - borrallo , _ peak power reduction for ofdm systems with orthogonal pilot sequences " , _ieee trans . on wireless communications , vol . 5 , no . 1 ,47 - 51 , jan . 2006 .martha c. paredes paredes received the ingeniero en electrnica y redes de informacin degree from escuela politcnica nacional , quito , ecuador in 2008 and the m. sc . of multimedia and communications from carlos iii university of madrid ,spain in 2010 , with scholarship from fundacin carolina , spain .she is currently pursuing the ph.d .degree in the department of signal theory and communications at carlos iii university of madrid , where she is doing research on signal processing for multicarrier modulation and papr reduction in ofdm systems .m. julia fernndez - getino garca received the m. eng . and ph.d .degrees in telecommunication engineering , both from the polytechnic university of madrid , spain , in 1996 and 2001 , respectively .currently , she is with the department of signal theory and communications of carlos iii university of madrid , spain , as an associate professor . from 1996 to 2001 , she held a research position at the department of signals , systems and radiocommunications of polytechnic university of madrid .she was on leave during 1998 at bell laboratories , murray hill , nj , usa , visited lund university , sweden , during two periods in 1999 and 2000 , visited politecnico di torino , italy , in 2003 and 2004 , and visited aveiro university , portugal , in 2009 and 2010 .her research interests include multicarrier communications , coding and signal processing for wireless systems . in 1998 and 2003, she respectively received the best ` master thesis ' and ` ph.d .thesis ' awards from the professional association of telecommunication engineers of spain , and in 1999 and 2000 , she was respectively , awarded the ` student paper award ' and ` certificate of appreciation ' at the ieee international conferences pimrc99 and vtc00 .in 2004 , she was distinguished with the ` ph.d .extraordinary award ' from the polytechnic university of madrid . in 2012, she has received the ` excellence award ' to her research from carlos iii university of madrid .
orthogonal frequency division multiplexing ( ofdm ) is widely used in many digital communication systems due to its advantages such as high bit rate , strong immunity to multipath and high spectral efficiency , but it suffers from a high peak - to - average power ratio ( papr ) of the transmitted signal . dealing with papr reduction in ofdm systems is therefore important to avoid signal degradation . the papr problem is currently an active area of research , and in this paper we present several reduction techniques together with their mathematical description . moreover , their advantages and disadvantages are discussed in order to give the reader an overview of the current state of the papr problem . ofdm system , papr reduction .
principal component analysis ( pca ) is a standard tool in modern data analysis - in diverse fields from neuroscience to computer graphics - because it is a simple , non - parametric method for extracting relevant information from confusing data sets . with minimal effort pcaprovides a roadmap for how to reduce a complex data set to a lower dimension to reveal the sometimes hidden , simplified structures that often underlie it .the goal of this tutorial is to provide both an intuitive feel for pca , and a thorough discussion of this topic .we will begin with a simple example and provide an intuitive explanation of the goal of pca .we will continue by adding mathematical rigor to place it within the framework of linear algebra to provide an explicit solution .we will see how and why pca is intimately related to the mathematical technique of singular value decomposition ( svd ) .this understanding will lead us to a prescription for how to apply pca in the real world and an appreciation for the underlying assumptions .my hope is that a thorough understanding of pca provides a foundation for approaching the fields of machine learning and dimensional reduction .the discussion and explanations in this paper are informal in the spirit of a tutorial .the goal of this paper is to _ educate_. occasionally , rigorous mathematical proofs are necessary although relegated to the appendix . although not as vital to the tutorial , the proofs are presented for the adventurous reader who desires a more complete understanding of the math .my only assumption is that the reader has a working knowledge of linear algebra .my goal is to provide a thorough discussion by largely building on ideas from linear algebra and avoiding challenging topics in statistics and optimization theory ( but see discussion ). please feel free to contact me with any suggestions , corrections or comments .here is the perspective : we are an experimenter .we are trying to understand some phenomenon by measuring various quantities ( e.g. spectra , voltages , velocities , etc . ) in our system .unfortunately , we can not figure out what is happening because the data appears clouded , unclear and even redundant .this is not a trivial problem , but rather a fundamental obstacle in empirical science .examples abound from complex systems such as neuroscience , web indexing , meteorology and oceanography - the number of variables to measure can be unwieldy and at times even _ deceptive _ , because the underlying relationships can often be quite simple .take for example a simple toy problem from physics diagrammed in figure [ diagram : toy ] .pretend we are studying the motion of the physicist s ideal spring .this system consists of a ball of mass attached to a massless , frictionless spring .the ball is released a small distance away from equilibrium ( i.e. the spring is stretched ) .because the spring is ideal , it oscillates indefinitely along the -axis about its equilibrium at a set frequency .this is a standard problem in physics in which the motion along the direction is solved by an explicit function of time . in other words ,the underlying dynamics can be expressed as a function of a single variable . 
however , being ignorant experimenters we do not know any of this .we do not know which , let alone how many , axes and dimensions are important to measure .thus , we decide to measure the ball s position in a three - dimensional space ( since we live in a three dimensional world ) .specifically , we place three movie cameras around our system of interest . at each movie camerarecords an image indicating a two dimensional position of the ball ( a projection ) .unfortunately , because of our ignorance , we do not even know what are the real , and axes , so we choose three camera positions and at some arbitrary angles with respect to the system .the angles between our measurements might not even be ! now , we record with the cameras for several minutes .the big question remains : _ how do we get from this data set to a simple equation of ? _ we know a - priori that if we were smart experimenters , we would have just measured the position along the -axis with one camera .but this is not what happens in the real world .we often do not know which measurements best reflect the dynamics of our system in question .furthermore , we sometimes record more dimensions than we actually need .also , we have to deal with that pesky , real - world problem of noise . in the toy examplethis means that we need to deal with air , imperfect cameras or even friction in a less - than - ideal spring .noise contaminates our data set only serving to obfuscate the dynamics further ._ this toy example is the challenge experimenters face everyday . _keep this example in mind as we delve further into abstract concepts .hopefully , by the end of this paper we will have a good understanding of how to systematically extract using principal component analysis .the goal of principal component analysis is to identify the most meaningful basis to re - express a data set .the hope is that this new basis will filter out the noise and reveal hidden structure . in the example of the spring, the explicit goal of pca is to determine : `` the dynamics are along the -axis . '' in other words , the goal of pca is to determine that , i.e. the unit basis vector along the -axis , is the important dimension .determining this fact allows an experimenter to discern which dynamics are important , redundant or noise . with a more precise definition of our goal , we need a more precise definition of our data as well .we treat every time sample ( or experimental trial ) as an individual sample in our data set . at each time samplewe record a set of data consisting of multiple measurements ( e.g. voltage , position , etc . ) . in our dataset , at one point in time , camera _ a _ records a corresponding ball position .one sample or trial can then be expressed as a 6 dimensional column vector \ ] ] where each camera contributes a 2-dimensional projection of the ball s position to the entire vector .if we record the ball s position for 10 minutes at 120 hz , then we have recorded of these vectors . with this concrete example ,let us recast this problem in abstract terms .each sample is an -dimensional vector , where is the number of measurement types .equivalently , every sample is a vector that lies in an -dimensional vector space spanned by some orthonormal basis . from linear algebrawe know that all measurement vectors form a linear combination of this set of unit length basis vectors . 
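A minimal sketch of what such a recorded data set might look like for the toy spring: six measurement types (two per camera) and one column per time sample. The camera orientations, oscillation frequency and noise level below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, duration, f0 = 120.0, 10 * 60.0, 0.8        # 120 Hz for 10 minutes, assumed spring frequency
t = np.arange(0, duration, 1.0 / fs)
x_true = np.cos(2 * np.pi * f0 * t)             # true 1-D motion along the (unknown) x-axis

# Each camera projects the 3-D ball position onto its own arbitrary 2-D image plane.
ball_3d = np.vstack([x_true, np.zeros_like(t), np.zeros_like(t)])   # motion only along x
angles = [0.3, 1.2, 2.1]                                            # arbitrary camera orientations (rad)
rows = []
for th in angles:
    u = np.array([np.cos(th), np.sin(th), 0.0])                     # two image-plane axes per camera
    v = np.array([-np.sin(th), 0.2, 0.9]); v /= np.linalg.norm(v)
    rows.append(u @ ball_3d)
    rows.append(v @ ball_3d)

X = np.array(rows) + 0.05 * rng.standard_normal((6, t.size))        # m x n data matrix with noise
print(X.shape)   # (6, 72000): m = 6 measurement types, n = 72000 samples
```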
what is this orthonormal basis ?this question is usually a tacit assumption often overlooked .pretend we gathered our toy example data above , but only looked at camera .what is an orthonormal basis for ? a naive choice would be , but why select this basis over or any other arbitrary rotation ?the reason is that the _ naive basis reflects the method we gathered the data ._ pretend we record the position .we did not record in the direction and in the perpendicular direction . rather , we recorded the position on our camera meaning 2 units up and 2 units to the left in our camera window .thus our original basis reflects the method we measured our data .how do we express this naive basis in linear algebra ?in the two dimensional case , can be recast as individual row vectors .a matrix constructed out of these row vectors is the identity matrix .we can generalize this to the -dimensional case by constructing an identity matrix = \left [ \begin{array}{cccc } 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{array } \right ] = \mathbf{i}\ ] ] where each _ row _ is an orthornormal basis vector with components .we can consider our naive basis as the effective starting point .all of our data has been recorded in this basis and thus it can be trivially expressed as a linear combination of . with this rigor we may now state more precisely what pca asks : _ is there another basis , which is a linear combination of the original basis , that best re - expresses our data set ?_ a close reader might have noticed the conspicuous addition of the word _linear_. indeed , pca makes one stringent but powerful assumption : linearity .linearity vastly simplifies the problem by restricting the set of potential bases . with this assumption pcais now limited to re - expressing the data as a _ linear combination _ of its basis vectors .let be the original data set , where each is a single sample ( or moment in time ) of our data set ( i.e. ) .in the toy example is an matrix where and .let be another matrix related by a linear transformation . is the original recorded data set and is a new representation of that data set . also let us define the following quantities .and are _ column _ vectors , but be forewarned . in all other sections and are _ row _ vectors . ]* are the rows of * are the columns of ( or individual .* are the columns of .equation [ eqn : basis - transform ] represents a change of basis and thus can have many interpretations . 1 . is a matrix that transforms into .2 . geometrically , is a rotation and a stretch which again transforms into .the rows of , , are a set of new basis vectors for expressing the columns of .the latter interpretation is not obvious but can be seen by writing out the explicit dot products of . \left [ \begin{array}{ccc } \mathbf{x_1 } & \cdots & \mathbf{x_n } \\ \end{array } \right ] \\\mathbf{y } & = & \left [ \begin{array}{ccc } \mathbf{p_1 \cdot x_1 } & \cdots & \mathbf{p_1 \cdot x_n } \\\vdots & \ddots & \vdots \\\mathbf{p_m \cdot x_1 } & \cdots & \mathbf{p_m \cdot x_n } \\\end{array } \right ] \\\end{aligned}\ ] ] we can note the form of each column of .\ ] ] we recognize that each coefficient of is a dot - product of with the corresponding row in . in other words , the coefficient of is a projection on to the row of .this is in fact the very form of an equation where is a projection on to the basis of .therefore , the rows of are a new set of basis vectors for representing of columns of . 
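A short illustration of this change of basis: the rows of P are the new basis vectors, and each column of Y collects the projections of the corresponding sample onto those rows. The matrices here are random stand-ins for the real recordings.

```python
import numpy as np

m, n = 6, 72000
X = np.random.default_rng(0).standard_normal((m, n))   # stand-in for the recorded data matrix

P = np.eye(m)          # the naive basis: Y is just X again
Y_naive = P @ X

# Any orthonormal P re-expresses the same data in a rotated basis.
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((m, m)))
Y = Q @ X              # element (i, j) of Y is the dot product q_i . x_j
```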
by assuming linearity the problem reduces to findingthe appropriate _ change of basis_. the row vectors in this transformation will become the _ principal components _ of .several questions now arise . * what is the best way to re - express ? * what is a good choice of basis ? these questions must be answered by next asking ourselves _what features we would like to exhibit_. evidently , additional assumptions beyond linearity are required to arrive at a reasonable result .the selection of these assumptions is the subject of the next section .now comes the most important question : what does _ best express _ the data mean ?this section will build up an intuitive answer to this question and along the way tack on additional assumptions . for camera_ a_. the signal and noise variances and are graphically represented by the two lines subtending the cloud of data .note that the largest direction of variance does not lie along the basis of the recording but rather along the best - fit line.,scaledwidth=25.0% ] measurement noise in any data set must be low or else , no matter the analysis technique , no information about a signal can be extracted .there exists no absolute scale for noise but rather all noise is quantified relative to the signal strength .a common measure is the _ signal - to - noise ratio _ ( _ snr _ ) , or a ratio of variances , a high snr ( ) indicates a high precision measurement , while a low snr indicates very noisy data .let s take a closer examination of the data from camera _ a _ in figure [ fig : snr ] .remembering that the spring travels in a straight line , every individual camera should record motion in a straight line as well .therefore , any spread deviating from straight - line motion is noise .the variance due to the signal and noise are indicated by each line in the diagram .the ratio of the two lengths measures how skinny the cloud is : possibilities include a thin line ( snr ) , a circle ( snr ) or even worse . by positing reasonably good measurements, quantitatively we assume that directions with largest variances in our measurement space contain the dynamics of interest . in figure[ fig : snr ] the direction with the largest variance is not nor , but the direction along the long axis of the cloud .thus , by assumption the dynamics of interest exist along directions with largest variance and presumably highest snr .our assumption suggests that the basis for which we are searching is not the naive basis because these directions ( i.e. ) do not correspond to the directions of largest variance . maximizing the variance ( and by assumption the snr )corresponds to finding the appropriate rotation of the naive basis .this intuition corresponds to finding the direction indicated by the line in figure [ fig : snr ] . in the 2-dimensional case of figure [ fig : snr ]the direction of largest variance corresponds to the best - fit line for the data cloud .thus , rotating the naive basis to lie parallel to the best - fit line would reveal the direction of motion of the spring for the 2-d case .how do we generalize this notion to an arbitrary number of dimensions ?before we approach this question we need to examine this issue from a second perspective . and . 
the two measurements on the left are uncorrelated because one can not predict one from the other .conversely , the two measurements on the right are highly correlated indicating highly redundant measurements.,scaledwidth=47.0% ] figure [ fig : snr ] hints at an additional confounding factor in our data - redundancy .this issue is particularly evident in the example of the spring . in this casemultiple sensors record the same dynamic information .reexamine figure [ fig : snr ] and ask whether it was really necessary to record 2 variables . figure [ fig : redundancy ]might reflect a range of possibile plots between two arbitrary measurement types and .the left - hand panel depicts two recordings with no apparent relationship . because one can not predict from , one says that and are uncorrelated . on the other extreme, the right - hand panel of figure [ fig : redundancy ] depicts highly correlated recordings .this extremity might be achieved by several means : * a plot of if cameras _ a _ and _ b _ are very nearby . * a plot of where is in meters and is in inches .clearly in the right panel of figure [ fig : redundancy ] it would be more meaningful to just have recorded a single variable , not both .why ? because one can calculate from ( or vice versa ) using the best - fit line .recording solely one response would express the data more concisely and reduce the number of sensor recordings ( variables ) .indeed , this is the central idea behind dimensional reduction . in a 2 variable caseit is simple to identify redundant cases by finding the slope of the best - fit line and judging the quality of the fit .how do we quantify and generalize these notions to arbitrarily higher dimensions ?consider two sets of measurements with zero means where the subscript denotes the sample number .the variance of and are individually defined as , the _ covariance _ between and is a straight - forward generalization . the covariance measures the degree of the linear relationship between two variables .a large positive value indicates positively correlated data .likewise , a large negative value denotes negatively correlated data .the absolute magnitude of the covariance measures the degree of redundancy .some additional facts about the covariance .* is zero if and only if and are uncorrelated ( e.g. figure [ fig : snr ] , left panel ) . * if .we can equivalently convert and into corresponding row vectors . \\ \mathbf{b } & = & \left[b_1\;b_2\;\ldots\;b_n\right]\end{aligned}\ ] ] so that we may express the covariance as a dot product matrix computation .is calculated as .the slight change in normalization constant arises from estimation theory , but that is beyond the scope of this tutorial . ] finally , we can generalize from two vectors to an arbitrary number .rename the row vectors and as and , respectively , and consider additional indexed row vectors .define a new matrix .\ ] ] one interpretation of is the following . each _row _ of corresponds to all measurements of a particular type .column _ of corresponds to a set of measurements from one particular trial ( this is from section 3.1 ) .we now arrive at a definition for the _ covariance matrix _ . consider the matrix .the element of is the dot product between the vector of the measurement type with the vector of the measurement type .we can summarize several properties of : * is a square symmetric matrix ( theorem 2 of appendix a ) * the diagonal terms of are the _ variance _ of particular measurement types . 
* the off - diagonal terms of are the _ covariance _ between measurement types . captures the covariance between all possible pairs of measurements .the covariance values reflect the noise and redundancy in our measurements .* in the diagonal terms , by assumption , large values correspond to interesting structure . * in the off - diagonal terms large magnitudes correspond to high redundancy .pretend we have the option of manipulating .we will suggestively define our manipulated covariance matrix .what features do we want to optimize in ?we can summarize the last two sections by stating that our goals are ( 1 ) to minimize redundancy , measured by the magnitude of the covariance , and ( 2 ) maximize the signal , measured by the variance .what would the optimized covariance matrix look like ?* all off - diagonal terms in should be zero .thus , must be a diagonal matrix . or, said another way , is decorrelated . * each successive dimension in be rank - ordered according to variance .there are many methods for diagonalizing .it is curious to note that pca arguably selects the easiest method : pca assumes that all basis vectors are orthonormal , i.e. is an _orthonormal matrix_. why is this assumption easiest ? envision how pca works . in our simple example in figure [ fig : snr ] , acts as a generalized rotation to align a basis with the axis of maximal variance . in multiple dimensionsthis could be performed by a simple algorithm : 1 .select a normalized direction in -dimensional space along which the variance in is maximized . save this vector as .2 . find another direction along which variance is maximized , however , because of the orthonormality condition , restrict the search to all directions orthogonal to all previous selected directions .save this vector as 3 .repeat this procedure until vectors are selected .the resulting ordered set of s are the _principal components_. in principle this simple algorithm works , however that would bely the true reason why the orthonormality assumption is judicious .the true benefit to this assumption is that there exists an efficient , analytical solution to this problem .we will discuss two solutions in the following sections .notice what we gained with the stipulation of rank - ordered variance .we have a method for judging the importance of the principal direction .namely , the variances associated with each direction quantify how `` principal '' each direction is by rank - ordering each basis vector according to the corresponding variances.we will now pause to review the implications of all the assumptions made to arrive at this mathematical goal .this section provides a summary of the assumptions behind pca and hint at when these assumptions might perform poorly ._ linearity _ + linearity frames the problem as a change of basis .several areas of research have explored how extending these notions to nonlinear regimes ( see discussion ) ._ large variances have important structure . _+ this assumption also encompasses the belief that the data has a high snr .hence , principal components with larger associated variances represent interesting structure , while those with lower variances represent noise . note that this is a strong , and sometimes , incorrect assumption ( see discussion ) . _the principal components are orthogonal . 
_+ this assumption provides an intuitive simplification that makes pca soluble with linear algebra decomposition techniques . these techniques are highlighted in the two following sections . we have discussed all aspects of deriving pca - what remain are the linear algebra solutions . the first solution is somewhat straightforward while the second solution involves understanding an important algebraic decomposition . we derive our first algebraic solution to pca based on an important property of eigenvector decomposition . once again , the data set is , an matrix , where is the number of measurement types and is the number of samples . the goal is summarized as follows : find some orthonormal matrix in such that is a diagonal matrix . the rows of are the _ principal components _ of . we begin by rewriting in terms of the unknown variable . note that we have identified the covariance matrix of in the last line . our plan is to recognize that any symmetric matrix is diagonalized by an orthogonal matrix of its eigenvectors ( by theorems 3 and 4 from appendix a ) . for a symmetric matrix theorem 4 provides , where is a diagonal matrix and is a matrix of eigenvectors of arranged as columns . [ might have orthonormal eigenvectors where is the rank of the matrix . when the rank of is less than , is _ degenerate _ or all data occupy a subspace of dimension . maintaining the constraint of orthogonality , we can remedy this situation by selecting additional orthonormal vectors to `` fill up '' the matrix . these additional vectors do not affect the final solution because the variances associated with these directions are zero . ] now comes the trick . _ we select the matrix to be a matrix where each row is an eigenvector of . _ by this selection , . with this relation and theorem 1 of appendix a ( ) we can finish evaluating . it is evident that the choice of diagonalizes . this was the goal for pca . we can summarize the results of pca in the matrices and . * the principal components of are the eigenvectors of . * the diagonal value of is the variance of along . in practice computing pca of a dataset entails ( 1 ) subtracting off the mean of each measurement type and ( 2 ) computing the eigenvectors of . this solution is demonstrated in matlab code included in appendix b. this section is the most mathematically involved and can be skipped without much loss of continuity . it is presented solely for completeness . we derive another algebraic solution for pca and in the process , find that pca is closely related to singular value decomposition ( svd ) . in fact , the two are so intimately related that the names are often used interchangeably . what we will see though is that svd is a more general method of understanding _ change of basis _ . we begin by quickly deriving the decomposition .
in the following section we interpret the decomposition and in the last section we relate these results to pca . let be an arbitrary matrix [ the reason for this derivation will become clear in section 6.3 . ] and be a rank , square , symmetric matrix . in a seemingly unmotivated fashion , let us define all of the quantities of interest . * is the set of _ orthonormal _ eigenvectors with associated eigenvalues for the symmetric matrix . * are positive real and termed the _ singular values _ . * is the set of vectors defined by . the final definition includes two new and unexpected properties : * \mathbf{\hat{u}_i}\cdot\mathbf{\hat{u}_j } = \delta_{ij } * \left\|\mathbf{x}\mathbf{\hat{v}_i}\right\| = \sigma_i . these properties are both proven in theorem 5 . we now have all of the pieces to construct the decomposition . the scalar version of singular value decomposition is just a restatement of the third definition , \mathbf{x}\mathbf{\hat{v}_i } = \sigma_i\mathbf{\hat{u}_i } . this result says quite a bit . multiplied by an eigenvector of is equal to a scalar times another vector . the set of eigenvectors and the set of vectors are both orthonormal sets or bases in - dimensional space . we can summarize this result for all vectors in one matrix multiplication by following the prescribed construction in figure [ diagram : svd - construction ] . we start by constructing a new diagonal matrix \mathbf{\sigma } where are the rank - ordered set of singular values . likewise we construct accompanying orthogonal matrices \mathbf{v } = \left[\mathbf{\hat{v}_{\tilde{1}}}\;\mathbf{\hat{v}_{\tilde{2}}}\;\ldots\;\mathbf{\hat{v}_{\tilde{m } } } \right ] , \quad \mathbf{u } = \left[\mathbf{\hat{u}_{\tilde{1}}}\;\mathbf{\hat{u}_{\tilde{2}}}\;\ldots\;\mathbf{\hat{u}_{\tilde{n } } } \right ] where we have appended an additional and orthonormal vectors to `` fill up '' the matrices for and respectively ( i.e. to deal with degeneracy issues ) . figure [ diagram : svd - construction ] provides a graphical representation of how all of the pieces fit together to form the matrix version of svd , \mathbf{xv } = \mathbf{u\sigma } , where each column of and perform the scalar version of the decomposition ( equation [ eqn : value - svd ] ) . because is orthogonal , we can multiply both sides by \mathbf{v^{-1 } } = \mathbf{v^t } to arrive at the final form of the decomposition , \mathbf{x } = \mathbf{u\sigma v^t } . although derived without motivation , this decomposition is quite powerful . equation [ eqn : svd - matrix ] states that _ any _ arbitrary matrix can be converted to an orthogonal matrix , a diagonal matrix and another orthogonal matrix ( or a rotation , a stretch and a second rotation ) .
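the quantities defined above can be checked numerically . the sketch below uses an invented matrix and invented sizes ; it computes the eigenvectors of the symmetric matrix , the singular values and the accompanying vectors , and compares them with the factors returned by matlab 's built - in svd . it only illustrates the definitions and is not part of the original derivation .

% numerical check of the defined quantities on an invented 5 x 3 matrix .
x = randn(5,3);
[v,d] = eig(x'*x);                          % orthonormal eigenvectors of the symmetric matrix x'*x
[lam,order] = sort(diag(d),'descend');      % rank - order the eigenvalues
v = v(:,order);
sigma = sqrt(lam);                          % singular values
u = (x*v) ./ repmat(sigma',size(x,1),1);    % each u_i is (1/sigma_i) * x * v_i
disp(u'*u)                                  % ~ identity : the u_i are orthonormal
disp(sqrt(sum((x*v).^2,1)) - sigma')        % ~ zero : the length of x*v_i equals sigma_i
[uu,ss,vv] = svd(x);
disp(diag(ss)' - sigma')                    % ~ zero : matlab 's svd returns the same singular values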
making sense of equation [ eqn : svd - matrix ] is the subject of the next section . the scalar form of svd is expressed in equation [ eqn : value - svd ] . the mathematical intuition behind the construction of the matrix form is that we want to express all scalar equations in just one equation . it is easiest to understand this process graphically . drawing the matrices of equation [ eqn : value - svd ] looks like the following . we can construct three new matrices \mathbf{u } , \mathbf{\sigma } and \mathbf{v } . all singular values are first rank - ordered , and the corresponding vectors are indexed in the same rank order .
each pair of associated vectors and is stacked in the column along their respective matrices . the corresponding singular value is placed along the diagonal ( the position ) of . this generates the equation \mathbf{xv } = \mathbf{u\sigma } , which looks like the following . the matrices and are and matrices respectively and is a diagonal matrix with a few non - zero values ( represented by the checkerboard ) along its diagonal . solving this single matrix equation solves all `` value '' form equations . the final form of svd is a concise but thick statement . instead let us reinterpret equation [ eqn : value - svd ] as where and are column vectors and is a scalar constant . the set is analogous to and the set is analogous to . what is unique though is that and are orthonormal sets of vectors which _ span _ an or dimensional space , respectively . in particular , loosely speaking these sets appear to span all possible `` inputs '' ( i.e. ) and `` outputs '' ( i.e. ) . can we formalize the view that and span all possible `` inputs '' and `` outputs '' ? we can manipulate equation [ eqn : svd - matrix ] to make this fuzzy hypothesis more precise . where we have defined . note that the previous columns are now rows in . comparing this equation to equation [ eqn : basis - transform ] , perform the same role as . hence , is a _ change of basis _ from to . just as before we were transforming column vectors , so we can again infer that we are transforming column vectors . the fact that the orthonormal basis ( or ) transforms column vectors means that is a basis that spans the columns of . bases that span the columns are termed the _ column space _ of . the column space formalizes the notion of what are the possible `` outputs '' of any matrix . there is a funny symmetry to svd such that we can define a similar quantity - the _ row space _ . where we have defined . again the rows of ( or the columns of ) are an orthonormal basis for transforming into . because of the transpose on , it follows that is an orthonormal basis spanning the _ row space _ of . the row space likewise formalizes the notion of what are possible `` inputs '' into an arbitrary matrix . we are only scratching the surface for understanding the full implications of svd .
for the purposes of this tutorialthough , we have enough information to understand how pca will fall within this framework .it is evident that pca and svd are intimately related .let us return to the original data matrix .we can define a new matrix as an matrix .is of the appropriate dimensions laid out in the derivation of section 6.1 .this is the reason for the `` flipping '' of dimensions in 6.1 and figure 4 . ] where each _ column _ of has zero mean .the choice of becomes clear by analyzing . by construction equals the covariance matrix of . from section 5we know that the principal components of are the eigenvectors of .if we calculate the svd of , the columns of matrix contain the eigenvectors of ._ therefore , the columns of are the principal components of . this second algorithm is encapsulated in matlab code included in appendix b. what does this mean ? spans the row space of .therefore , must also span the column space of .we can conclude that finding the principal components amounts to finding an orthonormal basis that spans the _ column space _ of .then we can calculate it directly without constructing . by symmetry the columns of produced by the svd of also be the principal components . ]principal component analysis ( pca ) has widespread applications because it reveals simple underlying structures in complex data sets using analytical solutions from linear algebra .figure [ fig : summary ] provides a brief summary for implementing pca .a primary benefit of pca arises from quantifying the importance of each dimension for describing the variability of a data set .in particular , the measurement of the variance along each principle component provides a means for comparing the relative importance of each dimension .an implicit hope behind employing this method is that the variance along a small number of principal components ( i.e. less than the number of measurement types ) provides a reasonable characterization of the complete data set .this statement is the precise intuition behind any method of _ dimensional reduction _ a vast arena of active research . in the example of the spring, pca identifies that a majority of variation exists along a single dimension ( the direction of motion ) , eventhough 6 dimensions are recorded .although pca `` works '' on a multitude of real world problems , any diligent scientist or engineer must ask _ when does pca fail ?_ before we answer this question , let us note a remarkable feature of this algorithm .pca is completely _ non - parametric _ : any data set can be plugged in and an answer comes out , requiring no parameters to tweak and no regard for how the data was recorded . from one perspective, the fact that pca is non - parametric ( or plug - and - play ) can be considered a positive feature because the answer is unique and independent of the user . from another perspectivethe fact that pca is agnostic to the source of the data is also a weakness .for instance , consider tracking a person on a ferris wheel in figure [ fig : failures]a .the data points can be cleanly described by a single variable , the precession angle of the wheel , however pca would fail to recover this variable ., a non - linear combination of the naive basis . 
( b ) in this example data set , non - gaussian distributed data and non - orthogonal axes causes pca to fail .the axes with the largest variance do not correspond to the appropriate answer.,scaledwidth=47.0% ] a deeper appreciation of the limits of pca requires some consideration about the underlying assumptions and in tandem , a more rigorous description of the source of data . generally speaking, the primary motivation behind this method is to decorrelate the data set , i.e. remove second - order dependencies .the manner of approaching this goal is loosely akin to how one might explore a town in the western united states : drive down the longest road running through the town .when one sees another big road , turn left or right and drive down this road , and so forth . in this analogy , pca requires that each new road explored must be perpendicular to the previous , but clearly this requirement is overly stringent and the data ( or town ) might be arranged along non - orthogonal axes , such as figure [ fig : failures]b .figure [ fig : failures ] provides two examples of this type of data where pca provides unsatisfying results .to address these problems , we must define what we consider optimal results . in the context of dimensional reduction ,one measure of success is the degree to which a reduced representation can predict the original data . in statistical terms, we must define an error function ( or loss function ) .it can be proved that under a common loss function , mean squared error ( i.e. norm ) , pca provides the optimal reduced representation of the data .this means that selecting orthogonal directions for principal components is the best solution to predicting the original data .given the examples of figure [ fig : failures ] , how could this statement be true ?our intuitions from figure [ fig : failures ] suggest that this result is somehow misleading .the solution to this paradox lies in the goal we selected for the analysis .the goal of the analysis is to decorrelate the data , or said in other terms , the goal is to remove second - order dependencies in the data . in the data sets of figure [ fig : failures ] ,higher order dependencies exist between the variables . therefore, removing second - order dependencies is insufficient at revealing all structure in the data .multiple solutions exist for removing higher - order dependencies .for instance , if prior knowledge is known about the problem , then a nonlinearity ( i.e. _ kernel _ ) might be applied to the data to transform the data to a more appropriate naive basis .for instance , in figure [ fig : failures]a , one might examine the polar coordinate representation of the data .this parametric approach is often termed _kernel pca_. another direction is to impose more general statistical definitions of dependency within a data set , e.g. requiring that data along reduced dimensions be _ statistically independent_. this class of algorithms , termed , _ independent component analysis _( ica ) , has been demonstrated to succeed in many domains where pca fails .ica has been applied to many areas of signal and image processing , but suffers from the fact that solutions are ( sometimes ) difficult to compute . 
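as a concrete illustration of the ferris - wheel failure mode , the short sketch below generates synthetic points on a circle ( the radius , noise level and sample size are invented ) and shows that the two variances found by pca are comparable , even though a single variable , the angle , describes the data .

% pca applied to ferris - wheel - like data : invented points on a circle .
theta = 2*pi*rand(1,500);
data = [cos(theta) ; sin(theta)] + 0.02*randn(2,500);
data = data - repmat(mean(data,2),1,size(data,2));
variances = eig(data*data'/(size(data,2)-1));
disp(variances')   % both values ~ 0.5 : no single principal direction summarizes the circle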
writingthis paper has been an extremely instructional experience for me .i hope that this paper helps to demystify the motivation and results of pca , and the underlying assumptions behind this important analysis technique .please send me a note if this has been useful to you as it inspires me to keep writing !this section proves a few unapparent theorems in linear algebra , which are crucial to this paper . + * 1 .the inverse of an orthogonal matrix is its transpose . *+ let be an orthogonal matrix where is the column vector .the element of is therefore , because , it follows that . + * 2 . for any matrix , and symmetric . *+ * 3 .a matrix is symmetric if and only if it is orthogonally diagonalizable . *+ because this statement is bi - directional , it requires a two - part `` if - and - only - if '' proof .one needs to prove the forward and the backwards `` if - then '' cases .let us start with the forward case .if is orthogonally diagonalizable , then is a symmetric matrix . by hypothesis ,orthogonally diagonalizable means that there exists some such that , where is a diagonal matrix and is some special matrix which diagonalizes .let us compute . evidently , if is orthogonally diagonalizable , it must also be symmetric. the reverse case is more involved and less clean so it will be left to the reader . in lieu of this ,hopefully the `` forward '' case is suggestive if not somewhat convincing . + * 4 .a symmetric matrix is diagonalized by a matrix of its orthonormal eigenvectors . *+ let be a square symmetric matrix with associated eigenvectors . let } ] be the matrix of eigenvectors placed in the columns .let be a diagonal matrix where the eigenvalue is placed in the position .we will now show that .we can examine the columns of the right - hand and left - hand sides of the equation . \\\mathsf{right\;hand\;side : } & \mathbf{ed } & = & [ \lambda_{1}\mathbf{e_1}\:\lambda_{2}\mathbf{e_2}\:\ldots\:\lambda_{n}\mathbf{e_n } ] \end{array}\ ] ] evidently , if then for all .this equation is the definition of the eigenvalue equation .therefore , it must be that .a little rearrangement provides , completing the first part the proof . for the second part of the proof ,we show that a symmetric matrix always has orthogonal eigenvectors . for some symmetric matrix ,let and be distinct eigenvalues for eigenvectors and . by the last relation we can equate that .since we have conjectured that the eigenvalues are in fact unique , it must be the case that .therefore , the eigenvectors of a symmetric matrix are orthogonal .let us back up now to our original postulate that is a symmetric matrix .by the second part of the proof , we know that the eigenvectors of are all orthonormal ( we choose the eigenvectors to be normalized ) .this means that is an orthogonal matrix so by theorem 1 , and we can rewrite the final result . .thus , a symmetric matrix is diagonalized by a matrix of its eigenvectors .+ * 5 . for any arbitrary matrix , the symmetric matrix has a set of orthonormal eigenvectors of and a set of associated eigenvalues .the set of vectors then form an orthogonal basis , where each vector is of length .* + all of these properties arise from the dot product of any two vectors from this set . the last relation arises because the set of eigenvectors of is orthogonal resulting in the kronecker delta . 
in simpler terms , the last relation states : this equation states that any two vectors in the set are orthogonal . the second property arises from the above equation by realizing that the length squared of each vector is defined as : the code in appendix b is written for matlab 6.5 ( release 13 ) from mathworks . the code is not computationally efficient but explanatory ( terse comments begin with a % ) . this first version follows section 5 by examining the covariance of the data set .
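the appendix listings themselves are not reproduced in this copy of the text . the two sketches below indicate what such routines might look like ; the function names , argument names and exact interfaces are invented here and need not match the original appendix .

% version 1 ( follows section 5 ) : pca through the covariance matrix .
function [signals, pc, variances] = pca_cov(data)
% data - m x n matrix : m measurement types , n samples
[m,n] = size(data);
data = data - repmat(mean(data,2),1,n);    % subtract off the mean of each measurement type
covariance = (1/(n-1)) * (data*data');     % covariance matrix
[pc,v] = eig(covariance);                  % eigenvectors (columns of pc) and eigenvalues
[v,order] = sort(diag(v),'descend');       % rank - order by variance
pc = pc(:,order);
variances = v;
signals = pc' * data;                      % project the original data set

a second version , following section 6.3 , reaches the same result through the svd of y :

% version 2 ( follows section 6.3 ) : pca through the svd of y = data'/sqrt(n-1) .
function [signals, pc, variances] = pca_svd(data)
[m,n] = size(data);
data = data - repmat(mean(data,2),1,n);
y = data' ./ sqrt(n-1);
[u,s,v] = svd(y);
pc = v;                                    % columns of v are the principal components
variances = diag(s).^2;                    % variance along each principal component
signals = pc' * data;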
principal component analysis ( pca ) is a mainstay of modern data analysis - a black box that is widely used but ( sometimes ) poorly understood . the goal of this paper is to dispel the magic behind this black box . this manuscript focuses on building a solid intuition for how and why principal component analysis works . this manuscript crystallizes this knowledge by deriving from simple intuitions , the mathematics behind pca . this tutorial does not shy away from explaining the ideas informally , nor does it shy away from the mathematics . the hope is that by addressing both aspects , readers of all levels will be able to gain a better understanding of pca as well as the when , the how and the why of applying this technique .
polar plume / interplume regions and extended fan loop structures in active regions are often found to host outward propagating slow magneto - acoustic waves . besides their contribution to coronal heating and solar wind acceleration , they are important for their seismological applications .the observed periods of the slow waves are of the order of few minutes to few tens of minutes .these waves cause periodic disturbances in intensity and doppler shift and are mostly identified from the alternate slanted ridges in the time - distance maps in intensity .however , spectroscopic studies by some authors indicate periodic asymmetries in the line profiles , suggesting the presence of high - speed quasi - periodic upflows , which also produce similar signatures in time - distance maps .this led to an ambiguity in the interpretation of observed propagating features as slow waves .but , later studies found that flow - like signatures are dominantly observed close to the foot points and no obvious blueward asymmetries were observed in the line profiles higher in the loops .results from the recent 3d magneto - hydrodynamic ( mhd ) simulations by and , who report the excitation of slow waves by impulsively generated periodic upflows at the base of the coronal loop , were in agreement to this . also , the propagation speeds were found to be temperature dependent for both sunspot and non - sunspot related structures , in agreement with the slow mode behaviour .so , the propagating disturbances observed in the extended loop structures and polar regions can be interpreted as due to slow waves .one of the important observational characteristics of these waves is that they tend to disappear after travelling some distance along the supporting ( guiding loop ) structure .their amplitude rapidly decays as they propagate .thermal conduction , compressive viscosity , optically thin radiation , area divergence , and gravitational stratification , were identified to be some of the physical mechanisms that can alter the slow wave amplitude .the gravitational stratification leads to an increase in the wave amplitude whereas the other mechanisms cause a decrease ( see the review by * ? ? ?* and the references therein ) . using forward modelling to match the observed damping, it was found that for a slow mode with shorter ( 5 min ) periodicity , thermal conduction is the dominant damping mechanism and when combined with area divergence it can account for the observed damping even when the density stratification is present .they also found that the contribution of the compressive viscosity and radiative dissipation to this damping was minimal .another study on oscillations with longer periods ( 12 minutes ) travelling along sub - million - degree cool loops , suggested that area divergence has the dominant effect over thermal conduction .recently , had shown that this damping is dependent on frequency .these authors constructed powermaps in three different period ranges from which they conclude that longer period waves travel larger distances along the supporting structure while the shorter period waves get damped more heavily .such frequency - dependent damping was earlier reported by and for standing slow waves observed in hot coronal loops . 
in the present work , we aim to study the quantitative dependence of damping length of the wave on its frequency . details on the observations are presented in the next section followed by the analysis method employed and the results obtained . related theory and the physical implications of the results obtained are discussed in the subsequent sections . data used in this study are comprised of images taken by the atmospheric imaging assembly ( aia ) on - board the solar dynamics observatory ( sdo ) in two different extreme ultra - violet ( euv ) channels centred at 171 and 193 . full - disk images of three hours duration , starting from 21:10 ut on 2011 october 8 , were considered . the cadence of the data is 12 s. the initial data at level 1.0 were processed to correct the roll angles and the data from different channels were brought to a common centre and common plate scale following the standard procedure using the ` aia_prep.pro ` routine ( version 4.13 ) . the final spatial extent of each pixel is 0.6 . subfield regions were chosen to cover loop structures over a sunspot , an on - disk plume - like structure , and the plume / interplume structures at the south pole . the imaging sequence in each of these regions was co - aligned using intensity cross - correlation , taking the first image as the reference . a snapshot of each of the selected on - disk regions and the polar region is shown in fig . [ fig1 ] . four loop structures , two from a sunspot region and another two from an on - disk plume - like structure , were selected to represent the on - disk region , and several plume and interplume regions at the south pole were selected to represent the polar region for this study . the selection of these structures was made on the basis of cleanliness of the propagating oscillations by looking at the time - distance maps . fig . [ fig1 ] displays the selected loop structures on - disk and the plume / interplume structures at the south pole . the width of the selected loop structures varied from 7 to 19 pixels and that of the plume / interplume structures was fixed at 30 pixels . [ figure caption : the two panels in the bottom row display the region outlined with a box in the top two panels to present a zoomed - in view . ] the enhanced time - distance maps for the loop structure labelled ` 1 ' in the top panel of fig . [ fig1 ] are shown in fig . [ fig2 ] for both the aia channels . a background constructed from the 300 point ( 60 min ) running average in time has been subtracted from the original and the resultant is normalized with the same background to produce these enhanced time - distance maps . these maps clearly show alternate slanted ridges of varying intensity due to outward propagating slow waves . ridges are not visible throughout the length of the chosen loop segment due to rapid decay in the slow wave amplitude as it propagates along the structure .
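the enhancement described above can be sketched as follows ; the array sizes , variable names and placeholder data are invented , the 300 - point window corresponds to the 60 min running average mentioned in the text , and the sketch is written in matlab purely for illustration .

% sketch of the enhancement of a time - distance map : running - average background
% subtraction and normalisation . td is a placeholder nt x ns intensity array with
% time along rows and position along the slit in columns .
nt = 900; ns = 100; win = 300;                         % 300 points = 60 min at 12 s cadence
td = 1000 + randn(nt,ns);                              % invented intensities
bg = zeros(nt,ns);
for s = 1:ns
    bg(:,s) = conv(td(:,s), ones(win,1)/win, 'same');  % running average in time
end
enhanced = (td - bg) ./ bg;                            % subtract the background and normalise with it

with real data the slanted ridges discussed above stand out clearly in such an enhanced map .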
however , they are present for the entire duration of the dataset .another interesting feature visible in these maps is the presence of multiple periodicities .two different periods , one less frequent ( longer period ) and another more frequent ( shorter period ) , are apparent from these maps which can be more clearly seen from the zoomed - in view presented in the bottom panels of fig .these shorter and longer periods roughly correspond to a periodicity of 3 and 22 minutes .there can be additional periods present in the signal which may not be not visually evident from these maps .our main aim is to measure the damping lengths of these waves at different periods and study the relation between them .ideally one would look for measuring the damping lengths directly from the decaying wave amplitude along the loop , at a particular instant .but the damping in these waves is so rapid that we hardly get to observe more than a cycle .this makes the direct measurements difficult .the simultaneous presence of multiple periods is another hurdle for doing this . to overcome these issues , we transformed the original time - distance maps into period - distance maps by replacing the time series at each spatial position with its power spectrum .these maps contain the oscillation power at different periods for each spatial position . in this way, we can not only isolate the power in different periods , but can also trace the spatial decay in amplitude from the corresponding variation in power .[ fig3 ] displays the period - distance maps generated from the time - distance maps for loop 1 .a notable feature in these maps is the presence of more power in longer periods up to larger distances as observed by .values are listed for each period in the plot legend .top and bottom panels correspond to the data from 171 and 193 channels respectively . ] now to identify all the periods present , we constructed an average light curve from the bottom 5 pixels of the structure and used it to generate a template power spectrum ( see fig .[ fig4 ] ) . the peak periods and their respective widths were then estimated using a simple routine ( ` gt_peaks.pro ` ) available with the solar software . at each peakidentified , we constructed a bin of width determined by the width of the peak and computed the spatial variation of the total power in that bin from period - distance maps . taking the square root of the power as amplitude of the oscillation , the amplitude decay at a particular period is fitted with a function of the form , to compute the damping length at that period . 
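the steps just described - computing a power spectrum at each position , summing the power in a period bin , and fitting the spatial decay of the amplitude - can be sketched as below . the bin limits , the assumed fit form a0*exp(-x/ld)+c and all variable names are illustrative choices rather than values taken from the paper .

% sketch : period - distance map , power in one period bin , and an exponential fit .
% td is a placeholder nt x ns enhanced time - distance map ; cadence is in seconds .
nt = 900; ns = 100; cadence = 12;
td = randn(nt,ns);                                    % invented data
nf = floor(nt/2);
pwr = zeros(nf,ns);
for s = 1:ns
    spec = abs(fft(td(:,s) - mean(td(:,s)))).^2;      % power spectrum of the light curve
    pwr(:,s) = spec(2:nf+1);                          % keep the positive frequencies
end
period = (nt*cadence) ./ (1:nf)';                     % period axis in seconds
inbin = period > 150 & period < 210;                  % an illustrative bin around a 3 min peak
amp = sqrt(sum(pwr(inbin,:),1))';                     % oscillation amplitude versus position
x = (0:ns-1)' * 0.6;                                  % distance along the structure in arcsec
chi2 = @(p) sum((amp - p(1)*exp(-x./p(2)) - p(3)).^2);
pbest = fminsearch(chi2, [max(amp) 10 0]);            % pbest(2) is the damping length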
the template power spectrum constructed for loop 1 in the 171 channel ,is shown in the left panel of fig .all the identified peaks and their respective widths are marked with solid and dashed lines in this plot .note the routine we used to estimate the widths of peaks gives very rough estimates which can be significantly different from the actual widths .but this is good enough to isolate the power in individual periods and is far better than the regular way of summing the power in predefined period ranges without having the knowledge of the peak frequencies present in the data .the amplitude decay and the fitted function corresponding to all the identified periods , are shown in the right panel of the figure .different symbols ( colors ) are used to show the data for different periods .corresponding plots for 193 channel are shown in the bottom panel .in the plots depicting the amplitude decay , the data for each period are offset by a constant value ( 50 for 171 and 5 for 193 channels ) from the preceding period , to avoid cluttering .the computed damping lengths from each period are listed in the plot legend along with the respective errors obtained from the fit .the exact fit parameters ( , , and ) estimated for all the periods obtained from the data are listed in tables [ par171 ] & [ par193 ] of appendix ( section [ appendix ] ) .the exponential fits are quite good for the amplitude decay in most of the cases , but occasionally we find some random variations ( bumps ) in the amplitude leading to very high damping lengths .we found that the contamination from background structures is causing this . to eliminate such data , we considered the damping lengths larger than the length of the supporting structure as unreliable . thus , we measured the damping lengths at different periods .only those periods between 2 min and 30 min were considered , keeping the total duration ( 3 hours ) and cadence ( 12 s ) of the dataset in mind . .] we combined the results from all the four loop structures on - disk and plotted the measured damping lengths against the period .this allows to evenly populate the frequency spectrum since the loops with different physical conditions support different frequencies .a similar procedure had been followed for the plume / interplume regions at the south pole except that the time - distance maps in the polar region are constructed by making artificial slits of 30 pixels ( fixed ) width to avoid the effect of jets .the time - distance maps and the corresponding period - distance maps constructed from the interplume region denoted by slit 10 ( see fig .[ fig1 ] ) are shown in fig .[ fig5 ] for both the channels .the propagating intensity disturbances are clearly seen in these images , but for some of the slit locations in 193 ( slits 1 , 2 , 6 , 7 , & 8 , in fig .[ fig1 ] ) , the signal is very poor and we do not see any clear signature of these disturbances . 
this is possible because the 193 channel looks at relatively hotter plasma ( 1.25 mk ) compared to the typical temperatures of plume / interplume regions ( mk ) .so the data from these locations are discarded in our final analysis .none of the data are discarded from 171 channel .+ fig .[ fig6 ] displays the plots for damping length versus period in log - log scale .the top two panels correspond to the results from on - disk structures and the bottom two from polar regions .different symbols ( colors ) are used to separate the data from sunspot & plume - like structure and plume & interplume regions .damping lengths are measured in arcseconds and periods are measured in seconds . in all the panels ,the overplotted solid lines represent a linear fit to the data . the slope of the line and the uncertainty in estimating it are written in the respective panels .the number of data points for the on - disk region are less because of the limited data .clearly , the on - disk and polar regions show a different dependence of damping length on frequency .in this section , we study the theoretical dependence of the damping length on the frequency of the slow wave by considering different damping mechanisms separately . to perform this , we follow the one - dimensional linear mhd model of and extend it to discuss the frequency dependence .this model is applicable under the assumptions that the magnetic field lines are straight , plasma- is much less than unity , and the amplitude of the oscillations are small .the one - dimensional form of the basic mhd equations for the slow waves can be written as \label{eq : energy}\\ & p = \frac{1}{\tilde{\mu}}\rho r t \label{eq : state}\end{aligned}\ ] ] where and are pressure , density , velocity , and temperature respectively . is the gas constant and is the mean molecular weight .the second and third terms on the right hand side of eq .[ eq : force ] represent the gravitational and viscous forces and those of eq .[ eq : energy ] represent the energy losses due to thermal conduction and optically thin radiation . in these terms, is acceleration due to gravity , is coefficient of compressive viscosity , is thermal conductivity parallel to the magnetic field , and are constants under the approximation of a piece - wise continuous function for optically thin radiation , and is the coronal heating function . in the following subsections ,we use appropriate forms of these equations to study the effect of individual damping mechanisms on slow waves and investigate the frequency dependence of damping length . as the slow wave propagates energyis lost due to thermal conduction which results in a decay of its amplitude . by considering the thermal conduction as the only damping mechanism, we linearised the basic mhd equations and assumed the perturbations in the form exp ] is given by the reciprocal of the imaginary part of .so , to solve for , we simplify the dispersion relation by approximating the thermal conduction at its lower and upper limits . 
in the lower thermal conduction limit ( 1 )[ eq : tcond ] reduces to which gives the damping length .this implies in the lower thermal conduction limit , that the damping length of slow waves increases with the square of the wave period .similarly , if we consider the higher thermal conduction limit ( 1 ) , the solution becomes which gives the imaginary part of independent of .thus , in the limit of higher thermal conduction the damping in slow waves is frequency independent .the viscous forces lead to dissipation of energy and therefore reduce the slow wave amplitude .to understand the effect of compressive viscosity quantitatively , we solved the relevant linearised mhd equations assuming all the perturbations are in the form exp ] . here is the radiation parameter defined as .the reciprocal of this parameter has the dimension of time and gives the radiation time scale .according to eq .[ eq : rad ] , the damping in slow waves due to optically thin radiation is frequency independent .in contrast to the other mechanisms so far discussed , the gravitational force stratifies the atmosphere which leads to an increase in the slow wave amplitude as it propagates outwards . assuming the initial perturbations of the form exp $ ] , we solved the linearised mhd equations to get here is the gravitational scale height given by and is the cut - off frequency defined as .this relation indicates that for slow waves with frequencies above the cut - off value , the velocity amplitude grows exponentially as and the growth rate is independent of frequency .the corresponding amplitude of density perturbations , however , varies as considering the equilibrium density fall due to stratification .note that this variation still represents a growth in _ relative _ amplitude as , similar to that of velocity perturbations and is independent of frequency .studied the effect of the radial divergence and area divergence of the magnetic field on slow waves .the amplitude of slow waves was found to decrease in both the cases as they propagate outwards .however , it is important to note that it is purely a geometric effect and there is no real dissipation mechanism involved .we solved the linearised mhd equations in the presence of radial divergence and obtained the following expression for the evolution of velocity perturbations , \label{eq : magdiv}\ ] ] . here is the radial coordinate in the spherical coordinate notation with the sun at the centre , and and are first order spherical bessel functions . substituting the spherical bessel functions with their standard definition , eq .[ eq : magdiv ] can be written as .\ ] ] the constants and can be determined from the boundary conditions .we chose these constants such that the amplitude of oscillations at the surface ( ) is independent of frequency similar to that we assumed for other cases . 
substituting and ,the velocity becomes \ ] ] it can be shown that the amplitudes of expressions in the numerator varies as and that in denominator varies as .this gives the overall amplitude variation as which is frequency independent .following the same treatment , area divergence can be shown to behave similarly .therefore , we can conclude that the damping in slow waves due to magnetic field divergence is frequency independent ..dependence of damping length on period of slow waves [ cols="^,^,^ " , ] a summary on the derived frequency dependence of damping length in slow waves is presented in table [ tab : theory ] for different physical mechanisms .the table also lists the amplitude growth of density perturbations .it may be noted that although the derivations were primarily done for the velocity perturbations , the density ( intensity ) perturbations due to slow waves are proportional to the velocity perturbations ( as can be derived from eq .[ eq : cont ] ) , and hence the same growth is expected except for the case of gravitational stratification as mentioned in section [ gravstrat ] .we did not explore the frequency dependence due to other geometrical effects like loop curvature , offset , and inclination , and other damping mechanisms like phase mixing , and resonant absorption , as we believe the damping in slow waves due to these effects is secondary . for instance , studied the damping of slow waves due to phase mixing and mode coupling to the fast wave , using a two dimensional model and found that their contributions are not significant enough to explain the observed damping .fitted with a gaussian decay model .the damping lengths ( in arcseconds ) and the corresponding reduced values are listed in the plot legend .the left and right panels show the results for 171 and 193 channels respectively . ] however , it may be interesting to note that the amplitude decay for some of the periods can be fit better with a gaussian decay function ( ) rather than an exponential function ( see fig . [ fig7 ] ) .a similar behaviour was found by in their numerical simulations for propagating kink waves .it was found analytically that the gaussian damping for kink modes is a result of the excitation phase . in this phase other modes ( than the kink mode ) are excited and they gradually leak away before the system evolves to the `` eigenvalue '' state ( when it oscillates with the pure kink mode ) .a consequence of this is that longer wavelengths show the gaussian damping to greater heights .this fits also with some of our observations , where we find that the amplitude decay for the longer periods ( and wavelengths ) is quite well explained with gaussian damping .it is unclear however , if the theory of for kink modes also holds for slow waves and what physical ingredients are essential for showing this behaviour .damping in slow waves has been studied extensively in polar plumes and active region loops both theoretically and observationally since their first detection . however , studies on the frequency dependence of their damping are limited . 
and studied the frequency - dependent damping in standing slow magneto - acoustic waves observed in hot ( mk ) coronal loops and found a good agreement between the observed scaling of the dissipation time with the period using their model .they concluded that thermal conduction is the dominant damping mechanism for these waves and the contribution of compressive viscosity is less significant .theoretical investigations on frequency - dependent damping in propagating slow waves were made by a few authors .recently , report an observational evidence of this using powermaps constructed in three different period ranges . as a follow - up of that work , in this article we studied the quantitative dependence of damping lengths on frequency of the slow waves using period - distance maps .we selected four loop structures on - disk and about 10 plume / interplume structures in the south polar region that show clear signatures of propagating slow waves .damping lengths were measured and plotted against the period of the slow wave to find the relation between them .[ fig6 ] displays the observed dependence of damping lengths on periodicity for the on - disk loop structures and the polar plume / interplume regions in two aia channels .the slopes estimated from the linear fits are ( 171 channel ) and ( 193 channel ) for the on - disk regions and are ( 171 channel ) and ( 193 channel ) for the polar regions .the negative slopes obtained for the polar region means the damping lengths for the longer period waves observed in this region are shorter than those for the shorter period waves .note , however , in both the regions the longer period waves are visible up to relatively larger distances due to the availability of more power . considering thermal conduction , magnetic field divergence , and density stratification , as the dominant mechanisms that alter the slow wave amplitude , linear theory ( see table [ tab : theory ] ) predicts the variation of damping length as square of the time period . in a log - log scale , used in fig .[ fig6 ] , this would mean a slope of 2 .but as we find here , the slopes estimated from the observations are positive but less than 2 for the on - disk region and are negative for the polar region .it may be noted that similar negative slopes were found for the polar region , even when the data from plume and interplume regions were plotted separately .this mismatch between the observed values and those expected from the linear theory , suggests some missing element in the current theory of damping in slow waves .perhaps , the linear description does not hold good and the slow waves undergo non - linear steepening that causes enhanced viscous dissipation .this can be effective for the long period waves whose amplitudes are relatively larger and possibly can even explain the negative slopes observed in the polar regions .further studies are required to explore such possibilities and understand the observed frequency dependence .nevertheless , the discrepancy in the results from the on - disk and the polar regions , indicates the existence ( or dominance ) of different damping mechanisms in these two regions possibly due to different physical conditions .it is also possible that the sunspot loops and the on - disk plume - like structures also behave differently , but the current data is limited to make any such conclusions .we thank the anonymous referee for useful comments .the authors would also like to thank i. 
de moortel for helpful discussions. the aia data used here are courtesy of sdo ( nasa ) and the aia consortium. this research has been made possible by the topping-up grant charm+top-up cor-seis of the belspo and the indian dst. it was partly funded by the iap p7/08 charm and an fwo vlaanderen odysseus grant.

the exponential fit parameters for all the periods identified in the on-disk and polar data are listed in tables [par171] & [par193] for the 171 and 193 channels, respectively. the corresponding reduced chi-square values are also listed as a goodness-of-fit statistic.

table [par171]: exponential fit parameters, 171 channel ( structure | period | fit parameter 1 | fit parameter 2 | fit parameter 3 | reduced chi-square )

loop1 | 3.0 | 195.32 ± 1.99 | 3.77 ± 0.07 | 11.95 ± 0.53 | 8.32
loop1 | 12.1 | 373.84 ± 13.66 | 14.14 ± 1.57 | -30.07 ± 15.95 | 437.93
loop1 | 17.1 | 573.04 ± 46.96 | 21.71 ± 4.01 | -128.19 ± 54.29 | 1071.35
loop1 | 22.2 | 549.66 ± 51.02 | 23.65 ± 4.63 | -107.21 ± 58.02 | 885.18
loop2 | 2.8 | 163.46 ± 2.70 | 3.59 ± 0.11 | 9.04 ± 0.69 | 14.83
loop3 | 3.9 | 63.75 ± 1.30 | 13.95 ± 0.89 | 0.96 ± 1.36 | 6.83
loop3 | 7.2 | 155.38 ± 3.76 | 19.21 ± 1.31 | -9.08 ± 4.51 | 25.39
loop3 | 22.2 | 275.87 ± 4.35 | 11.39 ± 0.52 | 19.04 ± 3.54 | 85.67
loop4 | 3.9 | 13.69 ± 0.54 | 11.09 ± 1.32 | 7.20 ± 0.51 | 1.12
loop4 | 6.6 | 32.25 ± 0.80 | 6.77 ± 0.37 | 10.30 ± 0.37 | 2.09
loop4 | 22.2 | 140.94 ± 7.89 | 21.51 ± 2.81 | -3.37 ± 9.20 | 37.76
slit1 | 6.1 | 15.70 ± 0.23 | 23.72 ± 0.59 | 1.50 ± 0.05 | 0.55
slit1 | 13.2 | 39.83 ± 0.30 | 28.33 ± 0.38 | 1.60 ± 0.08 | 1.11
slit1 | 20.4 | 64.69 ± 0.90 | 26.30 ± 0.64 | 2.42 ± 0.23 | 9.23
slit2 | 2.1 | 7.59 ± 0.07 | 61.83 ± 1.52 | 0.86 ± 0.05 | 0.11
slit2 | 5.1 | 32.99 ± 0.14 | 39.73 ± 0.34 | 1.49 ± 0.05 | 0.34
slit2 | 13.2 | 57.21 ± 0.31 | 39.62 ± 0.43 | 0.92 ± 0.12 | 1.61
slit2 | 22.2 | 53.16 ± 0.99 | 46.75 ± 1.95 | 0.44 ± 0.47 | 19.26
slit3 | 13.2 | 60.78 ± 0.79 | 15.68 ± 0.32 | 2.55 ± 0.13 | 4.45
slit3 | 20.4 | 86.50 ± 2.54 | 10.63 ± 0.48 | 4.23 ± 0.32 | 31.98
slit3 | 28.8 | 103.80 ± 2.08 | 24.27 ± 0.82 | 4.89 ± 0.46 | 46.10
slit4 | 3.3 | 7.24 ± 0.10 | 42.02 ± 1.26 | 1.17 ± 0.04 | 0.17
slit4 | 5.1 | 9.55 ± 0.15 | 34.41 ± 1.08 | 1.28 ± 0.05 | 0.35
slit4 | 6.6 | 10.44 ± 0.16 | 32.16 ± 0.91 | 1.31 ± 0.05 | 0.33
slit4 | 13.2 | 24.97 ± 0.47 | 17.73 ± 0.54 | 2.02 ± 0.09 | 1.76
slit4 | 26.4 | 54.65 ± 0.48 | 23.02 ± 0.34 | 2.02 ± 0.11 | 2.28
slit5 | 3.0 | 7.02 ± 0.07 | 52.24 ± 1.33 | 1.20 ± 0.04 | 0.11
slit5 | 5.1 | 11.39 ± 0.14 | 41.03 ± 1.06 | 1.81 ± 0.06 | 0.35
slit5 | 18.7 | 41.99 ± 0.24 | 36.01 ± 0.40 | 1.51 ± 0.08 | 0.88
slit6 | 3.9 | 7.43 ± 0.09 | 41.63 ± 1.09 | 1.10 ± 0.04 | 0.15
slit6 | 6.6 | 13.09 ± 0.20 | 22.84 ± 0.58 | 1.48 ± 0.04 | 0.39
slit6 | 18.7 | 57.04 ± 0.30 | 27.03 ± 0.25 | 1.94 ± 0.08 | 1.04
slit6 | 28.8 | 40.15 ± 0.39 | 33.55 ± 0.62 | 2.50 ± 0.13 | 2.17
slit7 | 6.6 | 20.37 ± 0.17 | 34.12 ± 0.53 | 1.28 ± 0.06 | 0.40
slit7 | 20.4 | 62.27 ± 1.44 | 29.08 ± 1.21 | 4.00 ± 0.41 | 26.00
slit8 | 3.3 | 5.58 ± 0.07 | 61.22 ± 2.17 | 0.90 ± 0.06 | 0.09
slit8 | 5.6 | 7.75 ± 0.10 | 41.73 ± 1.28 | 1.20 ± 0.05 | 0.19
slit8 | 7.9 | 9.90 ± 0.17 | 34.48 ± 1.21 | 1.38 ± 0.06 | 0.44
slit8 | 13.2 | 18.98 ± 0.17 | 38.47 ± 0.73 | 1.12 ± 0.07 | 0.46
slit8 | 20.4 | 21.98 ± 0.30 | 27.80 ± 0.70 | 1.86 ± 0.09 | 1.10
slit8 | 28.8 | 35.96 ± 0.84 | 10.61 ± 0.39 | 3.60 ± 0.12 | 3.51
slit9 | 2.8 | 6.51 ± 0.06 | 56.43 ± 1.37 | 1.37 ± 0.04 | 0.08
slit9 | 5.6 | 8.24 ± 0.11 | 36.46 ± 0.97 | 1.30 ± 0.04 | 0.20
slit9 | 11.1 | 18.76 ± 0.32 | 26.31 ± 0.77 | 1.66 ± 0.08 | 1.17
slit9 | 20.4 | 50.68 ± 0.30 | 26.99 ± 0.28 | 1.63 ± 0.08 | 1.07
slit10 | 4.7 | 7.03 ± 0.10 | 46.45 ± 1.78 | 1.33 ± 0.07 | 0.17
slit10 | 11.1 | 17.83 ± 0.39 | 23.94 ± 0.96 | 2.44 ± 0.11 | 1.60
slit10 | 18.7 | 45.65 ± 0.81 | 28.99 ± 1.03 | 3.92 ± 0.30 | 8.27

table [par193]: exponential fit parameters, 193 channel ( structure | period | fit parameter 1 | fit parameter 2 | fit parameter 3 | reduced chi-square )

loop1 | 3.0 | 81.62 ± 2.39 | 1.87 ± 0.10 | 9.90 ± 0.43 | 7.84
loop1 | 12.1 | 58.75 ± 2.72 | 10.89 ± 1.53 | 25.23 ± 2.60 | 27.39
loop1 | 17.1 | 121.21 ± 7.62 | 13.48 ± 2.61 | 45.37 ± 8.72 | 154.00
loop2 | 2.8 | 63.19 ± 2.82 | 2.15 ± 0.17 | 8.05 ± 0.54 | 11.76
loop3 | 4.7 | 22.28 ± 0.47 | 8.85 ± 0.45 | 5.72 ± 0.26 | 0.89
loop3 | 7.2 | 37.66 ± 1.31 | 18.37 ± 1.86 | 6.90 ± 1.57 | 3.61
loop4 | 4.7 | 13.32 ± 0.65 | 15.48 ± 2.23 | 4.98 ± 0.77 | 0.88
slit3 | 2.5 | 4.54 ± 0.04 | 62.96 ± 1.72 | 0.81 ± 0.03 | 0.05
slit3 | 7.9 | 13.74 ± 0.20 | 27.08 ± 0.68 | 1.30 ± 0.05 | 0.48
slit3 | 14.4 | 26.11 ± 0.65 | 28.98 ± 1.26 | 1.60 ± 0.17 | 5.27
slit3 | 28.8 | 70.20 ± 1.39 | 19.46 ± 0.62 | 3.47 ± 0.26 | 16.59
slit4 | 2.3 | 3.48 ± 0.05 | 48.70 ± 1.59 | 0.98 ± 0.03 | 0.04
slit4 | 5.6 | 5.01 ± 0.10 | 34.91 ± 1.38 | 1.11 ± 0.04 | 0.15
slit4 | 8.6 | 7.31 ± 0.13 | 30.40 ± 1.03 | 1.17 ± 0.04 | 0.24
slit4 | 14.4 | 14.76 ± 0.31 | 20.76 ± 0.72 | 1.48 ± 0.07 | 0.87
slit4 | 28.8 | 29.39 ± 0.67 | 25.74 ± 1.02 | 1.87 ± 0.17 | 4.97
slit5 | 2.5 | 2.55 ± 0.03 | 60.29 ± 2.05 | 0.93 ± 0.02 | 0.02
slit5 | 6.6 | 7.11 ± 0.13 | 28.61 ± 0.92 | 1.77 ± 0.03 | 0.21
slit5 | 18.7 | 17.68 ± 0.26 | 28.62 ± 0.74 | 1.84 ± 0.07 | 0.82
slit9 | 2.3 | 2.11 ± 0.03 | 59.64 ± 2.32 | 0.90 ± 0.02 | 0.02
slit9 | 6.1 | 4.13 ± 0.10 | 23.80 ± 1.00 | 1.11 ± 0.02 | 0.11
slit9 | 11.1 | 9.54 ± 0.23 | 15.69 ± 0.59 | 1.28 ± 0.04 | 0.36
slit9 | 14.4 | 8.02 ± 0.26 | 19.00 ± 1.01 | 1.37 ± 0.05 | 0.58
slit9 | 24.3 | 9.74 ± 0.19 | 47.24 ± 2.14 | 1.04 ± 0.10 | 0.74
slit10 | 3.6 | 3.33 ± 0.07 | 31.00 ± 1.44 | 1.19 ± 0.03 | 0.07
slit10 | 6.1 | 4.92 ± 0.12 | 20.04 ± 0.82 | 1.33 ± 0.03 | 0.12
slit10 | 18.7 | 30.96 ± 0.87 | 8.47 ± 0.37 | 3.21 ± 0.12 | 3.09
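to make the two-step procedure concrete, a minimal python sketch is given below: an exponential fit of amplitude versus distance for each period, followed by a slope estimate of damping length versus period in log-log space. it uses synthetic amplitude profiles and an assumed scaling rather than the aia measurements, and is only meant to illustrate the fitting steps described above.

    # illustrative sketch: exponential damping fit followed by a log-log slope estimate.
    # the amplitude profiles below are synthetic stand-ins, not the aia data.
    import numpy as np
    from scipy.optimize import curve_fit

    def damped(d, a0, ld, c):
        # amplitude model: a0 * exp(-d / ld) + c, with ld the damping length
        return a0 * np.exp(-d / ld) + c

    rng = np.random.default_rng(0)
    distance = np.linspace(0.0, 60.0, 120)       # distance along the structure (arbitrary units)
    periods = np.array([3.0, 6.0, 13.0, 20.0])   # hypothetical wave periods
    true_ld = 10.0 * np.sqrt(periods)            # assumed scaling, for demonstration only

    damping_lengths = []
    for p, ld in zip(periods, true_ld):
        amp = damped(distance, 20.0, ld, 1.0) + rng.normal(0.0, 0.2, distance.size)
        popt, _ = curve_fit(damped, distance, amp, p0=(amp[0], 15.0, 0.0))
        damping_lengths.append(popt[1])

    # slope of log(damping length) versus log(period); linear theory would give 2
    slope, intercept = np.polyfit(np.log(periods), np.log(damping_lengths), 1)
    print(f"estimated slope = {slope:.2f}")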
propagating slow magneto-acoustic waves are often observed in polar plumes and active region fan loops. the observed periodicities of these waves range from a few minutes to a few tens of minutes, and their amplitudes were found to decay rapidly as they travel along the supporting structure. previously, thermal conduction, compressive viscosity, radiation, density stratification, and area divergence were identified as some of the causes of change in the slow-wave amplitude. our recent studies indicate that the observed damping in these waves is frequency dependent. we used imaging data from sdo/aia to study this dependence in detail and, for the first time from observations, attempted to deduce a quantitative relation between the damping length and the frequency of these oscillations. we developed a new analysis method to obtain this relation. the observed frequency dependence does not seem to agree with the current linear wave theory, and it was found that the waves observed in the polar regions show a different dependence from those observed in the on-disk loop structures despite the similarity in their properties.
a _network_ consists of a multigraph, sometimes called a _supply graph_, a set of _terminals_ and a _demand graph_ with . let denote the collection of inclusion-maximal anticliques of the graph ( such a collection is called a _clutter_ ). divides the pairs of terminals into three classes: pairs not covered by a member of ( this is precisely ), pairs covered by a single member of ( we denote this class ) and pairs covered by more than one member of ( called _equivalent pairs_ ). therefore, a network can also be defined by specifying and , and denoted . in this paper we study a subclass of _eulerian networks_ in which all nodes in , called _inner nodes_, have even degrees. we refer to the paths with end-pair in as _-paths_, and to the paths with end-pair in as _-paths_. a collection of edge-disjoint paths with ends in is called a _multiflow_. various multiflow optimization problems are studied in the literature. a major multiflow optimization problem, called the _( integer ) path packing problem_, is to find the maximum number of edge-disjoint paths connecting the prescribed terminal pairs; in this paper we refer to this problem as the _strong problem_. for a multiflow , we denote by and the number of -paths and -paths in . an optimization problem called the _weak problem_ is to maximize $f[{\mathcal{k}}]+{\frac{1}{2}}f[w]$ over multiflows $f$ in ${(g,t,{\mathcal{k}})}$, and two splits preserving and . then ${\hat{h}_{\mathcal{y}}}=[a]=\beta(a)$ by at most ( by turning back into a -path ), we have . then $=\eta{(g,t,{\mathcal{k}})}$ , a contradiction. the same arguments apply when or . note that is integral by construction. let us now assume that contains more than one inner node. again, from claim [no-inner-nodes] it follows that solves the weak problem in . since is not a trident in and by corollary [y < x], we have . let us select an inner node such that . since is a minimal dual solution, unlocks in , thus the paths of can be switched so as to obtain a trident with a pivot in some inner node. we have and thus we select . therefore, any split of preserves , and we can split so that for the resulting network . by our assumption, . let denote a common solution in . since , gluing back leaves a weak problem solution in , and therefore locks and and is integral. additionally, any common solution in contains the same number of -paths for all , thus a common solution in can not have more compound -paths than a common solution in . then , as required. in this section, we prove that the dual combinatorial structure of section [dual-structure-section] gives a _good characterization for the fractional strong problem_ in flat networks and a _good characterization for the strong problem_ in integral flat networks. it is worth noting that no adequate criterion for integrality in k-networks is known to date. let us observe a flat network , a flat clutter and a dual solution in where . we define a multigraph by taking as node set and adding an edge of multiplicity for every pair . we denote by the line graph of and define a vector on the vertices of . we denote by the size of a maximum b-matching in the graph w.r.t. the vector . figure [line-graph-figure] shows an example of a flat network with expansion , its corresponding graph and the line graph together with the vector .
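the construction just described, a multigraph with one parallel edge per unit of multiplicity, its line graph, and a matching on that line graph, can be prototyped directly. the sketch below ( python / networkx, with made-up edge multiplicities and terminal names ) builds the line graph and, for the special case where the vector b is identically one, falls back to an ordinary maximum matching; a genuine b-matching would require an integer-programming or dedicated solver.

    # illustrative sketch of the line-graph construction used in the text.
    # edge multiplicities and terminal names are made up for demonstration.
    import networkx as nx

    # multigraph: one parallel edge per unit of multiplicity between terminal pairs
    h = nx.MultiGraph()
    multiplicity = {("t1", "t2"): 2, ("t2", "t3"): 1, ("t1", "t3"): 1}
    for (u, v), m in multiplicity.items():
        for _ in range(m):
            h.add_edge(u, v)

    # line graph: its vertices are the edges of h, adjacent when they share an endpoint
    lg = nx.line_graph(h)
    print("line-graph vertices:", list(lg.nodes()))

    # with the b-vector identically one, the b-matching reduces to a maximum matching
    matching = nx.max_weight_matching(lg, maxcardinality=True)
    print("matching size:", len(matching))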
we can now define a function on a network , expansion and graph : to show that the path packing problem in integral flat networks is in co-np, we prove the following. [gc-theorem] for multiflows in a flat network , flat clutters and expansions in , the equation $$\cdots=\min_{{\mathcal{r}}\succeq{\mathcal{k}},\,{\mathcal{x}}}\varphi({\mathcal{x}},{\mathcal{r}})$$ gives a good characterization for the fractional strong problem, and a good characterization for the strong problem if is integral. * let us first show that $\leq\varphi({\mathcal{x}},{\mathcal{r}})$ . let us now show by induction on the number of -paths in that . if $=\beta(a)$ , we remove the edges of an -path of from the network so that remains intact. the inequality holds for the resulting network. the right side of the inequality does not change when we add the edges of back, while . indeed, the number of simple -paths in a common solution in is , because is a dual solution and locks and . the maximum number of compound -paths in is precisely , and since $=\beta[a]$ . otherwise, _unlocks_ . in other words, locks if it contains the smallest possible number of -paths. a. karzanov and m. lomonosov have introduced in the following application of the ford-fulkerson augmenting path procedure, assuming that a multiflow traverses each edge. a maximum multiflow unlocks if and only if it contains an _augmenting sequence_ of paths ( an -path ), ( -paths ), ( an -path ), and inner nodes so that for and is located on between and the -end of . figure [augmenting-sequence-figure] shows an example of an augmenting sequence. in this paper, we use the fact that unlocking a member of and the existence of the alternating sequence are equivalent. when is a k-clutter, there exists a series of switches of in creating a maximum multiflow with a cross and having . if solves the -problem and unlocks , switching in creates a multiflow with a trident with pivot , where is an -path and is an -path. let be an expansion of in a network , where each member of contains a single terminal. the set taken as terminals together with the clutter on gives a network . let us call a pair of terminals in a clutter _weak_ or _strong_ if that pair belongs to class or w.r.t. that clutter. an _-path_ in is an -path with lying in distinct members of . an _-flow_ is a multiflow in the network consisting of -paths. the maxima of the strong problem and the weak problem in are denoted by and respectively. let and be expansions. note that for every , every -flow is also a -flow ( but the converse may not be true ). since every -flow is also a -flow, . since every -flow is also an -flow, . we call an expansion _critical_ if for every . a critical with is called a _dual solution_. the triangle theorem ( ) ensures that we may limit ourselves to k-networks with simple . the results of this section that hold for simple clutters hold for general k-networks as well, because compressing equivalent terminals does not change , by the triangle theorem from . then implies that for a maximum -flow ( even when ). we aim to prove the following max-min theorem for the fractional -problem. [wp-maxmin-theorem] in a k-network , the maximum is taken over the fractional multiflows in , and the minimum is taken over all expansions in . moreover, holds as an equality for every dual solution. to prove this theorem, we state the following inequality for an expansion and a -flow : . we aim to show that it holds as an inequality for every expansion and as an equality for every critical expansion.
( a ) follows directly from the definition of . ( b ) holds because is also an -flow. ( c ) holds because there exists a maximum -flow that solves the weak problem in . for such , the minimum of and , by the lovász-cherkassky theorem ( ). we need the following two claims to show that ( c ) is an equality. [saturation] let be a dual solution in a simple k-network . a maximum fractional -flow that satisfies ( that is, solves the weak problem in ) locks for all . * proof. * first, let us show that saturates every -edge. let be an -edge with and . let be an expansion where for terminal and . since is critical, and there exists a -flow such that . let us denote the unused capacity of by and let , since and . the subpath does not have common nodes with any other -path whose ends do not lie in . if it were so, then the above -operation could be applied to both and , and a flow with and $h[w]-2\varepsilon$ could be created, which contradicts the maximality of . therefore, there exists an edge of which is not saturated by , a contradiction to claim [saturation]. a _mixed flow_ among flows is a flow where are positive rational numbers and . a mixed flow among _all fractional maximum flows solving the weak problem_ is called the _mixed solution_. * proof. * let be a mixed solution and let be a subset of edges not saturated by and reachable from . let be the expansion of consisting of all and . clearly, all the -edges , , are saturated by . let us assume that unlocks . then contains an augmenting sequence for and there exists a series of switches that produces a maximum flow maximizing . has an -path and an -path with a common node , where and , . applying a -operation to these paths, we obtain a maximum flow that maximizes and does not saturate one of the -edges, a contradiction. therefore, is a dual solution. let be a dual solution that does not include and let be a maximum -flow. since is not saturated by , a combined flow is a maximum -flow that does not saturate either. it follows from the definition of that there exists and a path from to whose edges lie in . by the choice of , the edges of are not saturated by , and one of those edges is an -edge, a contradiction. * proof. * let be a trident of and assume that does not contain . let be a -path, . suppose first that is an ordinary trident. then a -transformation of in creates a fractional multiflow with and unsaturated edges in , which contradicts claim [saturation]. assume now that is a simple trident, which means that and are - and -paths respectively, with and while , . we can obtain a new flow from by replacing and with , and the edges of and are not saturated by , a contradiction.

r. p. anstee, _a polynomial algorithm for b-matching: an alternative approach_, information processing letters, pp. 153-157, 1987.
e. barsky and h. ilani, _np-hardness of one class of path packing problem_, unpublished manuscript.
b. v. cherkassky, _solution of a problem on multicommodity flows in a network_, ekon. mat. metody (russian), pp. 143-151, 1977.
e. a. dinic, _algorithm for solution of a problem of maximum flow in networks with power estimation_ (russian), soviet math. dokl., pp. 1277-1280, 1970.
j. edmonds, _minimum partition of a matroid into independent subsets_, j. research nat. bureau of standards, section b 69, pp. 67-72, 1965.
l. r. ford, jr. and d. r. fulkerson, _maximal flow through a network_, canadian journal of math., vol.
8, pp. 399-404, 1956.
t. c. hu, _multi-commodity network flows_, operations research, pp. 344-360, 1963.
h. ilani, e. korach, m. lomonosov, _on extremal multiflows_, j. combin. theory ser. b, pp. 183-210, 2000.
a. karzanov, _polyhedra related to undirected multicommodity flows_, linear algebra and its applications, vol. 114-115, pp. 293-328, 1989.
a. karzanov and m. lomonosov, _systems of flows in undirected networks_, in: math. programming. problems of social and economical systems. operations research models. work collection, issue 1 (russian), moscow, pp. 59-66, 1978.
m. lomonosov, _combinatorial approaches to multiflow problems_, appl. discrete math. 1, pp. 1-93, 1985.
m. lomonosov, _on return path packing_, european journal of combinatorics, pp. 35-53, 2004.
l. lovász, _on some connectivity properties of eulerian graphs_, acta math. hungaricae, pp. 129-138, 1976.
w. mader, _über die maximalzahl kreuzungsfreier -wege_, arch. math. (basel), vol. 31, pp. 382, 1978.
karl menger, _zur allgemeinen kurventheorie_, fund. math., pp. 96-115, 1927.
n. vanetik, _path packing and a related optimization problem_, journal of combinatorial optimization, springer, 2007, accepted for publication.
the path packing problem is the problem of finding the maximum number of edge-disjoint paths between predefined pairs of nodes in an undirected multigraph. such a multigraph together with predefined node pairs is often called a network. while in general the path packing problem is np-hard, there exists a class of networks for which there is hope of a better solution to the path packing problem. in this paper we prove a combinatorial max-min theorem ( also called a good characterization ) for a wide class of such networks, thus showing that the path packing problem for this class of networks is in co-np. integer path packing
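as a small illustration of packing edge-disjoint paths, the sketch below uses networkx to pack paths between a single terminal pair on an invented toy graph ( a menger-type computation for one pair ); it does not solve the joint packing over several demand pairs, which is the hard problem addressed in this paper.

    # toy illustration: edge-disjoint paths between one terminal pair with networkx.
    # this is a single-pair (menger-type) computation, not the multi-pair packing problem.
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("s", "a"), ("s", "b"), ("a", "c"), ("b", "c"),
        ("a", "t"), ("b", "t"), ("c", "t"),
    ])

    paths = list(nx.edge_disjoint_paths(g, "s", "t"))
    print(f"{len(paths)} edge-disjoint s-t paths:")
    for p in paths:
        print("  ", " -> ".join(p))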
altruistic behavior among agents in evolving systems, both biological and social, has been widely observed in nature. the fact that cooperative behavior can emerge between unrelated individuals in the competitive landscape of natural selection, however, is paradoxical. evolutionary game theory is widely employed to address this question, and models of evolutionary systems that can exhibit realistic phenomena are of great interest to researchers across disciplines including physics, biology, and the social sciences. complex networks have played a central role in the study of evolutionary systems. in particular, network-based models, where evolution is driven by the discrete replicator dynamics, have been shown to support robust cooperation that is unsustainable in traditional models built on unstructured populations. in a network-based model, agents occupy the vertices of the network and interact only within their immediate neighborhood, consisting of those agents to whom they are connected by network edges. interactions take the form of a mathematical game, often the prisoner's dilemma ( pd ), which captures in a precise framework the temptation to selfishly advance one's own fitness at the expense of a cooperating neighbor. evolution is implemented using the replicator dynamics. in , nowak and may pioneered the network-based approach by showing that cooperation in the pd could become evolutionarily sustainable on a lattice. in , santos and pacheco further showed that cooperation could even become the dominant population trait on certain _heterogeneous_ networks like those with scale-free degree distributions, where the network's vertices have degrees that follow an inverse power law. a great deal of subsequent research has focused on the interplay between complex networks and evolutionary dynamics. in this paper we show that the equilibrium success of cooperation in an evolutionary pd on a network ( defined precisely below ) can be effectively predicted from a quantitative network parameter derived using a mean field system analysis. our approach is rooted in the ideas of generating functions associated to random networks, and uses results from empirical studies of the dynamics. we compare our theoretical predictions to monte-carlo simulations and find excellent agreement across networks with varying topologies and varying average degrees. given the inherent complexity of these dynamical systems, the accuracy with which the simple criterion here predicts actual dynamics is especially appealing. finally, we give an interpretation of the criterion derived here as an analogue of hamilton's rule for kin selection, a classical result in genetics that explains emerging cooperative behavior among related individuals. in doing so, we are able to apply techniques from the study of complex networks to a modeling problem in evolutionary biology, and arrive at a result that brings insight both to the factors that drive the model and to the social and biological questions that originally inspired it. the pd is widely studied as a framework in which to model problems involving conflict and cooperation. two players choose between cooperation ( c ) and defection ( d ). players' strategy choices determine ( normalized ) payoffs, interpreted as fitness in evolutionary biology, as follows: mutual cooperation gives r to each player, a defector exploiting a cooperator gets t, and an exploited cooperator gets s.
two defectors each give and receive nothing, and the payoffs satisfy . a rational player will choose to defect since the payoffs for defection strictly dominate those of cooperation regardless of the co-player's strategy. the result is a nash equilibrium where both players defect; the dilemma arises from the inefficiency of this equilibrium: both agents would fare better by cooperating. define the cost of cooperation in the pd to be the payoff forgone ( from a defector's perspective ) by choosing to cooperate, or . let represent the benefit provided to a co-player by a cooperator, so that . thus, the cost-to-benefit ratio of the game is . the cost-to-benefit ratio indicates the temptation to defect inherent in the game, with larger values corresponding to a stronger temptation to defect. a widely adopted payoff normalization sets and , so that the game depends on the single parameter indicating the temptation toward defection in the game. taking it close to zero amounts to an assumption that social interactions are inexpensive. with this normalization, the game lies on the boundary between the pd and the snowdrift game ( sg ), another commonly studied game of cooperation. in the sg, the bottom two pd game payoffs are reversed so that cooperation is a better unilateral response to defection: . in that case, and setting , , and , the nash equilibrium is , so it is close to zero as long as is sufficiently close to zero. qualitatively, the case of ( the so-called weak pd ) addresses both games when social interactions are inexpensive, so it is the focus from here on. while the assumption is both plausible and widely adopted, it is significant, and dropping it has a considerable effect on system dynamics. evolution is introduced through repeated interactions between agents with respect to the replicator dynamics. the replicator dynamics model natural selection using agent fitness comparisons that result in stochastic imitation of fitter strategies by less fit strategies ( details below ). in the repeated pd, payoffs are further required to satisfy in order to ensure that full cooperation in the population remains pareto optimal. when a population of agents is unstructured and agent interactions are random, the replicator dynamics favor defection and cooperation is driven to extinction. as mentioned in the introduction, the situation is strikingly different when the population is structured by a network. consider a network consisting of vertices and undirected edges, where neither loops nor multiple edges are allowed. agents occupy the network's vertices and are constrained to interact only with their immediate neighbors, those agents with whom they are connected by an edge. a round consists of each agent playing a pure strategy in a pd with all neighbors and accumulating the resulting payoffs. following a round of play, agents simultaneously update strategies using discrete replicator dynamics: if agent has accumulated payoff and compares her payoff to that of agent , then will adopt the strategy of with probability , with equal to the larger of the degrees of vertices and . we perform simulations on various specific networks ( details below ) with vertices, in each case starting from a random strategy assignment where the probability of an agent cooperating is . a _series_ is defined to consist of rounds of play and updating. the _series mean_ is taken to be the average cooperation level over the last 1000 rounds of the series.
for a given network, 100 series are run, and the equilibrium cooperation level is taken to be the average of these 100 series means. it is well documented, and summarized below, that cooperation can become evolutionarily stable in network models of this kind. moreover, the extent of the evolutionary success of cooperation has been shown to depend greatly on the particular network topology involved. in order to explore this phenomenon further, we recall some basic tools in the study of networks. let denote the probability that a random vertex from the network has degree , and let be the random variable that takes values in the set of possible degrees of vertices in the network. the probability generating function for the distribution of is given by and gives a first-order approximation of network topology. the degree distribution ignores any other contact information present in the network, so represents a generic network chosen randomly from among all those with the fixed degree distribution. the average vertex degree in the network is given by . if an edge is randomly chosen from the network and followed to a vertex at one end, it is times more likely to lead to a vertex of degree than a vertex of degree 1. therefore, if is the random variable whose values are the degrees of vertices reached along random edges, then the probability generating function of is . [uncorrelated neighbor] define a _random neighbor_ to be the vertex reached by first choosing a random vertex in the network, followed by a random edge emanating from that vertex. if no degree-degree correlations are present in the network, then it follows that is the probability generating function for the degree distribution of random neighbors. the average degree of a random neighbor is therefore the expected value of , so that . note that when the probability of an edge leading from a degree vertex to a degree vertex is not independent of , then need not equal . the mean field parameters and , of course, require sufficiently large systems to be meaningful. a critical factor emerging from studies of cooperation phenomena is network heterogeneity. in heterogeneous networks, a broad diversity of vertex degrees is represented. in the context of the evolutionary pd, network heterogeneity has been shown to be strongly correlated with increased success of cooperators. on certain scale-free networks, for example, cooperation can be the dominant population trait for the full range of pd game parameters. heterogeneity can be naturally quantified by the variance of the degree distribution. with denoting the expected value of the random variable , and denoting the expected value of , one has $\mathrm{var}=\langle k^2\rangle-\langle k\rangle^2$. using the notation above, $\mathrm{var}=g'(1)\,t'(1)-g'(1)^2=v(n-v)$. if we fix the average network degree, then , or the size difference between an average neighbor and an average vertex in the network, dictates network heterogeneity. since cooperation thrives on heterogeneous networks, emerges as a critical network parameter and has been studied in . in the following, we introduce a framework that explains this fact in the context of the specifics of the evolutionary pd. in the evolutionary pd, payoffs flow through connections to cooperators, and agents benefit from maximizing their access thereto. it is well known that cooperation can thrive through the formation of clusters of cooperating agents.
moreover, when social interactions are inexpensive ( close to zero ), cooperators of large degree are especially stable. by contrast, as a large defector converts her neighborhood to defectors, she significantly reduces her own payoff and becomes susceptible to takeover by a cooperator. for this reason, larger degree vertices have been shown to disproportionately favor cooperation. a more detailed picture of the dynamics emerges in . for low temptation to defect, cooperation is the social norm. as the temptation to defect increases, the dynamics are governed by three populations: a core ( or cores ) of cooperating agents, a core ( or cores ) of defecting agents, and a critical fluctuating population of sometime-cooperators and sometime-defectors. the resilience of cooperators, as discussed in , is determined by interactions between agents on the border of a cooperator core. when the temptation to defect is too high, defectors eventually invade the core by stripping off layer upon layer of exposed cooperators until they are largely eradicated from the population. since the growth or breakdown of a cooperator core is determined by the core's exposure to fluctuating nodes, that is where we focus our analysis. consider an interaction on the frontier of a cooperating cluster. using the mean field network parameters and , we address the question of predicting the particular value of the cost-to-benefit ratio, call it , at which neither a cooperator nor a defector has an advantage ( on average ). at this value one expects each strategy to be equally successful, with a resulting equilibrium where cooperation and defection are approximately equally prevalent. thus, it is a predicted threshold at which the system transitions between dominant ( defined as more than ) cooperation and dominant defection. given the description of the dynamics in ( and outlined above ), we consider interactions between agents on the boundary of a cluster of cooperators with the goal of deriving a criterion to calculate it. of course, within a cooperator or defector core, no strategy changes occur as a cooperator ( resp. defector ) considers the success of another cooperator ( resp. defector ). the dynamics are determined by interactions between differing strategists. first we analyze a defecting vertex, representing a chain of potentially advancing defectors, connected to a cooperating neighbor on the border of a core of cooperators. our assumptions give the defector , , the average network degree, while her cooperating neighbor , , has the average network neighbor size. finally, assume that has defecting neighbors while has defecting neighbors. we consider the relative costs and benefits associated with the strategies ( as opposed to pure payoffs ) in this situation, with costs and benefits as in eq. ( [cbr] ). the cooperator perceives the collective value of defection in a round of play to be . the defector, meanwhile, sees the cooperator receiving a benefit from each of her cooperative neighbors, but also sees this benefit mitigated by the cost of each cooperative act. from the defector's perspective, the value of cooperation is . fig. [graph_sit] shows the situation when , that is, when both agents are maximally cooperator connected and thus have maximal strength in the sense of evolutionary fitness.
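since the paper's symbols for these two mean-field quantities were lost in extraction, the sketch below simply writes them as the mean degree <k> = g'(1) and the mean neighbor degree <k_nn> = t'(1) = <k^2>/<k>, and computes them, together with the degree variance, from a made-up degree sequence.

    # sketch: the two mean-field quantities used above, computed from a degree sequence.
    # <k> is the average vertex degree and <k_nn> = <k^2>/<k> is the average degree of a
    # random neighbor; the toy degree sequence is made up.
    import numpy as np

    def mean_field_parameters(degrees):
        k = np.asarray(degrees, dtype=float)
        k_mean = k.mean()                    # g'(1)
        knn_mean = (k ** 2).mean() / k_mean  # t'(1) = <k^2>/<k>
        return k_mean, knn_mean

    degrees = [2, 2, 3, 3, 3, 4, 4, 6, 10, 15]
    k_mean, knn_mean = mean_field_parameters(degrees)
    print(f"<k> = {k_mean:.2f}, <k_nn> = {knn_mean:.2f}")
    print(f"degree variance = {knn_mean * k_mean - k_mean ** 2:.2f}")  # <k^2> - <k>^2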
[ figure: the critical frontier between a defector chain ( blue ) and a cooperator cluster ( red ). the darker colored vertices are labeled with the respective costs and benefits associated with those strategies, collected over one round of play. comparison of these values in the context of strategy updating leads to the criterion of eq. ( [normrule] ). ]

since is the cost-to-benefit ratio at which neither agent perceives an advantage in the other's strategy, we can predict it by equating the collective benefits of the two strategies. this gives . rearranging, we get . substituting and , and inserting the normalized payoffs , , and gives . finally, we add the assumption that near , the term . this amounts to an assumption that when an average cooperator and average defector are connected by an edge, their respective numbers of defecting neighbors are comparable. that this assumption is true in actual simulations is verified below. conversely, consider a randomly chosen cooperator on a frontier, compared to her randomly chosen neighbor. as above, let and be the numbers of defector neighbors of and , respectively. the same argument this time yields the condition . now, however, the condition is statistically unsustainable since , and . during the system's evolution, smaller cooperators not inside a cluster are stripped away. as a result, cooperators must migrate to larger vertices in order to survive, leaving the first scenario already analyzed above. this framework suggests the following: natural selection favors cooperation when the cost-to-benefit ratio of the cooperative act is smaller than the relative size difference between an average neighbor and an average vertex, divided by the average neighbor size: . the term , which is purely a network parameter as well as a natural measure of network heterogeneity on a zero-to-one scale, should mark the phase transition from dominant cooperation ( more than half the population cooperating ) to dominant defection ( more than half the population defecting ).

[ figure: the actual probability distribution of , the difference in the number of -neighbors over all -edges. simulations were run on a network with vertices, with average degree and average neighbor degree . the data in the histogram were collected from simulations performed with , and an equilibrium cooperation level of after rounds of play. at , cooperation levels are below . ]

to test the accuracy of eq. ( [normrule] ), we perform simulations on networks with various degree distributions. networks with vertices and average degree are constructed via a two-step process introduced in . first, a network is generated using the algorithm in . this algorithm uses a single parameter to interpolate between an erdős-rényi random network ( er ) and a barabási-albert scale-free network ( ba ). starting from a complete graph on vertices, one of the remaining vertices is chosen. this vertex has edges to attach as follows. with probability , the vertex attaches an edge to an existing vertex with a probability proportional to the existing vertex's degree ( i.e., by preferential attachment ).
with probability , the edge is connected to any of the existing network vertices with a fixed probability .this procedure is repeated times , once for each edge .when , one obtains a ba network with a power law degree distribution , and when one obtains an er random graph .intermediate give hybrid distributions with intermediate levels of heterogeneity between the heterogeneous ba networks and the essentially homogeneous er random networks .networks are generated with vertices , and average degree .for each value of , networks are generated with .finally , each network is distilled down to its degree distribution by throwing away all other contact information , and a new uncorrelated network is reconfigured , consistent with that degree distribution , using the configuration model .the result is a maximally random network with the specified degree distribution .the point of this choice of networks is not any particular topology , but rather , to give a range of varied topologies with varied heterogeneity as measured by .the results of the simulations described above are consistent with previous work , and are summarized in fig . [ fogs ] below . the temperature plot in fig .[ fogs ] shows that the data are in excellent agreement with the theoretical predictions , proving the effectiveness of the mean field parameters and , and validating the framework leading to eq .( [ normrule ] ) .in particular , the black lines in each panel mark the predicted ( by eq . [ normrule ] ) , as a function of , where cooperators and defectors are expected to be at equal strength , and so , are predicted to be equally prevalent .actual transitions from dominant cooperation ( darker red ) to dominant defection ( darker blue ) occur in the neutral tan colored regions between red and blue .indeed , the black prediction line passes through the neutral , or nearly neutral , regions of the temperature plot .statistical fluctuations are most prominent when , while for and , the predictions are extremely accurate .this is not surprising in light of eq .( [ normrule_nbr ] ) , and the fact that when , values of are smallest , and the term most affects the value of .we note that the mean field framework leading to eq .( [ normrule ] ) is extremely versatile , giving accurate predictions across networks with very different distributions , different levels of heterogeneity , and different average degrees .additionally , we have checked that the criterion also holds on random regular graphs with average degrees , , and , where cooperation is virtually eliminated immediately , as predicted by eq . ( [ normrule ] ) .we turn to the assumption that simplifies eq .( [ normrule_nbr ] ) ; namely , that the numbers of -neighbors of the average agents at either end of a - edge are comparable . 
while the accuracy of the predictions in fig. [fogs] serves as a partial justification, we can consider the assumption directly in the simulations. fig. [hist] shows a histogram of the distribution of differences in the numbers of -neighbors over all edges connecting a cooperator and a defector, normalized by . the data were collected after the ten-thousandth round of play and updating on a network with vertices, average degree , average neighbor degree , and for . note that , at which point cooperation levels are below . the distribution of is sharply peaked at . we then let the system run for another rounds, computing the average value of after each round. finally, averaging over these data points gives an overall average value of . there is also a nice connection between the criterion of eq. ( [normrule] ) and the results in . in that paper the authors showed that the weighted ( by the cost-to-benefit ratio ) average equilibrium cooperation level, call it , on the network depended on the network parameter in a linear way. the regression line in that paper was given by . notice that the regression line is very close to . inserting , one gets , and the network parameter of eq. ( [normrule] ) appears again. this seems to be a kind of mean value relationship, where the global average cooperation level taken over all values of the cost-to-benefit ratio also gives the local transition value. finally, notice the similarity between the criterion of eq. ( [normrule] ) and hamilton's rule for kin selection. despite arising in a completely separate context, hamilton's rule gives a genetic criterion for the emergence of altruistic behavior between individuals when the _genetic relatedness_ of the individuals exceeds the cost-to-benefit ratio of the altruistic act. genetic relatedness is measured by the probability that two genes randomly selected from each individual at the same locus are identical by descent. in the context of a social network, the parameter serves as a natural definition for a notion of _social relatedness_. like genetic relatedness, social relatedness lies in the interval , with larger values indicating increased relatedness. if two networks have the same fixed average agent size, then there is more social cohesion in networks with larger, more influential neighbors. as a result, emerges as the parameter governing social viscosity, where larger neighbors increasingly facilitate relatedness and, through this, cooperation. similar parallels have been drawn before, particularly in , where a weak selection model was considered, but that model does not appear relevant to the case of strong selection considered here as it fails to distinguish between networks with different topologies but the same average degree. in conclusion, we have shown that a simple analytical framework, using basic ideas from the theory of complex networks, can effectively predict the success of cooperation in an evolutionary pd on varied network topologies. moreover, the analysis suggests a network-based evolutionary rule that nicely parallels hamilton's classical genetic rule for kin selection.
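a rough, self-contained sketch of the whole comparison is given below: it generates a scale-free graph, computes the mean-field threshold from the degree sequence, and runs a small replicator-dynamics simulation of the weak pd on either side of that threshold. the network size, the number of rounds, and the normalization of the imitation probability are illustrative assumptions, not the exact protocol used in the paper.

    # rough simulation sketch of the weak prisoner's dilemma with replicator updating on a
    # scale-free graph; sizes, round counts and the update normalization are illustrative.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    g = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
    deg = np.array([d for _, d in g.degree()], dtype=float)
    k_mean, knn_mean = deg.mean(), (deg ** 2).mean() / deg.mean()
    r_star = (knn_mean - k_mean) / knn_mean          # predicted transition value of c/b

    def run(r, rounds=300):
        b = 1.0 / (1.0 - r)                          # weak pd payoffs: R=1, S=P=0, T=b
        coop = rng.random(g.number_of_nodes()) < 0.5
        nbrs = [list(g.neighbors(i)) for i in g.nodes()]
        for _ in range(rounds):
            payoff = np.zeros(len(coop))
            for i, ns in enumerate(nbrs):
                for j in ns:
                    if coop[j]:
                        payoff[i] += 1.0 if coop[i] else b
            new = coop.copy()
            for i, ns in enumerate(nbrs):
                j = ns[rng.integers(len(ns))]
                diff = payoff[j] - payoff[i]
                if diff > 0:
                    k_big = max(len(nbrs[i]), len(nbrs[j]))
                    if rng.random() < diff / (k_big * b):   # assumed normalization
                        new[i] = coop[j]
            coop = new
        return coop.mean()

    print(f"predicted threshold r* = {r_star:.2f}")
    for r in (0.5 * r_star, 0.5 * (1.0 + r_star)):
        print(f"r = {r:.2f}: cooperation fraction ~ {run(r):.2f}")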
we show that the success of cooperation in an evolutionary prisoner's dilemma on a complex network can be predicted by a simple, quantitative network analysis using mean field parameters. the criterion is shown to be accurate on a wide variety of networks with degree distributions ranging from regular to poisson to scale-free. the network analysis uses a parameter that allows for comparisons of networks with both similar and distinct topologies. furthermore, we establish the criterion here as a natural network analogue of hamilton's classical rule for kin selection, despite arising in an entirely different context. the result is a network-based evolutionary rule for altruism that parallels hamilton's genetic rule.
stochastic differential equations, sdes, have been an important tool in the description of many phenomena in different areas of knowledge. the main results as to their existence, uniqueness, and qualitative properties are found in the classical literature, such as [ ] and [ ]. in recent years, a line of research has been exploring problems related to stochastic differential equations with pulses ( see [ ] ).

this paper focuses on determining the finiteness of the expectation of the pulse times that occur when the solution process of an sde reaches a control function, producing a pulse that sends the process to a second control function.

to this end, consider a one-dimensional brownian motion defined over a stochastic basis, two real and continuous functions and , and a stochastic process that solves an sde of the following type: , with and . at this time, the process is sent to the value .

the functions and represent the lower and upper control curves, respectively. let and be two continuous and positive functions such that for all , let , and let be a standard brownian motion. then the model can be written as
$$\left\{\begin{array}{lcl}
dx(t)&=&\cdots,\qquad t\in\,]\tau_k,\tau_{k+1}]\\
x(\tau_k^+)&=&q(\tau_k)\\
\tau_{k}&=&\inf\big\{t>\tau_{k-1};\ x(t)=s(t)\big\},\qquad\tau_0=0
\end{array}\right.$$

[ figure 1: a possible path of evolution of the proposed system; the times shown correspond to the first pulse times of the system. ]

the main goal is to prove that it is possible to obtain a behaviour as in figure 1, that a local solution of the equation exists on the stochastic interval $]\tau_{k-1},\tau_k]$ , where and are positive constants. then there exists a unique stochastic process such that on the stochastic interval , where . therefore the application of the classical theorem on the existence of a solution to a stochastic differential equation with locally lipschitz coefficients ( see baldi or øksendal ) implies that has a unique solution process on the interval . the fact that implies that , where is the stopping time defined in proposition [prop:exist-1] taking . therefore $\cdots<\mathbb{e}[\tau]<\infty$ , which is equivalent to the solution of equations with $]\cdot,\tau_2]$ . note that between there is no relationship of order. under hypotheses _(a)_, _(b)_ and _(c)_, there exists a stochastic process that is the unique solution, in the a.s. sense, of the system , and the pulse times are random variables with finite expectations. using corollary [coro:stepp-1] and corollary [coro:shift-t], it is possible to obtain , a solution of equation on the interval $]\tau_0,\tau_1]$ . then the process $x_1(t)\,\mathbb{1}_{]\tau_0,\tau_1]}(t)$ and $x_2(t)\,\mathbb{1}_{]\tau_1,\tau_2]}(t)$ . moreover, $x_1(t)\,\mathbb{1}_{]\tau_0,\tau_1]}(t)+x_2(t)\,\mathbb{1}_{]\tau_1,\tau_2]}(t)$ . therefore, using the same argument recursively, we find
$$x(t)=\sum_{k\ge 1} x_k(t)\,\mathbb{1}_{]\tau_{k-1},\tau_k]}(t),$$
a global solution of the system. its uniqueness a.s.
is due to the uniqueness on each interval.

the expectation of the stopping time is given by the following recurrence equation:
$$\mathbb{e}[\tau_k]=\mathbb{e}[\tau_{k-1}]+\mathbb{e}[\delta\tau_k],$$
where is the stopping time defined in proposition [prop:exist-1] taking . the random variable corresponds to the time between two successive pulses, called the _timeout_. in order to have more precise information about the asymptotic behaviour of the process, it is necessary to impose additional conditions on the functions and . more precisely, if is increasing and is decreasing, or vice versa, then it is possible to improve the estimates for the timeout.

[prop:q-in s-de] assume that hypotheses _(a)_, _(b)_ and _(c)_ are satisfied and, further, let be a decreasing function and be an increasing function. then , completing the proof.

[prop:q-de s-in] assume that hypotheses _(a)_, _(b)_ and _(c)_ are satisfied, and that is an increasing function and is a decreasing function. then , which completes the proof.

note that if and are monotone, then under hypothesis _(c)_, with . this allows the following theorem.

[theo:delta convergence] if hypotheses _(a)_, _(b)_ and _(c)_ are satisfied, if is increasing and is decreasing, or if is decreasing and is increasing, then the expectation of the timeout converges on the interval , and is a decreasing sequence. in the expression for the expectation of in equation , the initial and final values can be replaced and updated at time by and . therefore the expectation of the timeout is bounded by
$$\mathbb{e}[\delta\tau_k]<\frac{1}{\alpha-\frac{\sigma^2}{2}}\,\ln\!\left(\frac{s(\tau_{k-1})}{q(\tau_{k-1})}\right),$$
and taking limits,
$$\le\frac{1}{\alpha-\frac{\sigma^2}{2}}\,\ln\!\left(\frac{\tilde{s}}{q}\right),$$
and so . the case where is increasing and is decreasing is analogous to the previous case, but then is constant on the interval .

in the fisheries application, is the fraction of fishing and is a temporal scale factor. note that if is constant, then is a geometric progression. a reasonable choice for is, as in the previous example, , and therefore the model can be written as
$$\left\{\begin{array}{lcl}
dx(t)&=&\cdots,\qquad t\in\,]\tau_k,\tau_{k+1}]\\
x(\tau_k^+)&=&s\left(1-\gamma^{1+\frac{\tau_{k-1}}{t}}\right)\\
\tau_{k}&=&\inf\big\{t>\tau_{k-1};\ x(t)=s\big\},\qquad\tau_0=0.
\end{array}\right.$$
as $\gamma\in\,]0,1[$ and , as a consequence of the equation, one obtains that $\cdots=0$ .

platen, e. (1985), on first exit time of diffusions, _lecture notes in control and information sciences_, 69, 192-195.
sakthivel, r.; luo, j. (2009), asymptotic stability of nonlinear impulsive stochastic differential equations, _statistics & probability letters_ 79(9), 1219-1223.
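a minimal euler-maruyama sketch of this impulsive model is given below, with geometric brownian motion between pulses and constant control levels; the drift, volatility, and control values are illustrative assumptions, and the printed bound is the expression quoted above for the expectation of the timeout.

    # euler-maruyama sketch of the impulsive model: geometric brownian motion between
    # pulses, reset to the lower control level q whenever the path reaches the upper
    # control level s. constant controls and all numerical values are illustrative.
    import numpy as np

    rng = np.random.default_rng(2)
    alpha, sigma = 0.8, 0.3          # drift and volatility, with alpha > sigma**2 / 2
    s_level, q_level = 1.0, 0.4      # upper control s and reset level q, with q < s
    dt, t_end = 1e-3, 20.0

    x, t = q_level, 0.0
    pulse_times = []
    while t < t_end:
        x += alpha * x * dt + sigma * x * np.sqrt(dt) * rng.normal()
        t += dt
        if x >= s_level:             # pulse: the process is sent back to q
            pulse_times.append(t)
            x = q_level

    waits = np.diff([0.0] + pulse_times)
    print(f"{len(pulse_times)} pulses; mean waiting time ~ {waits.mean():.3f}")
    bound = np.log(s_level / q_level) / (alpha - sigma ** 2 / 2)
    print(f"bound ln(s/q)/(alpha - sigma^2/2) = {bound:.3f}")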
the main objective of this paper is to construct the solution of an impulsive stochastic differential equation, subject to control conditions at the pulse times, and to give sufficient conditions for the pulse times to be random variables with finite expectations. such equations are useful in modeling diverse phenomena, such as biological control and pressure-regulating mechanisms. the article ends with an application to fisheries.
during the last three decades, the study of integrability, invariants, symmetries, and exact solutions of nonlinear ordinary and partial differential equations ( odes, pdes ) and differential-difference equations ( ddes ) has been the topic of major research projects in the dynamical systems and the soliton communities. various techniques have been developed to determine whether or not pdes belong to the privileged class of completely integrable equations. one of the most successful and widely applied techniques is the painlevé test, named after the french mathematician paul painlevé ( 1863-1933 ), who classified second-order differential equations that are globally integrable in terms of elementary functions, by quadratures, or by linearization. in essence, the painlevé test verifies whether or not solutions of differential equations in the complex plane are single-valued in the neighborhood of all their movable singularities. to a large extent, the painlevé test is algorithmic, yet very cumbersome when done by hand. in particular, the verification of the compatibility conditions is assessed by many practitioners of the painlevé test as a painstaking computation. in addition to the tedious verification of self-consistency ( or compatibility ) conditions, computer programs are helpful at exploring _all_ possibilities of balancing singular terms. indeed, the omission of one or more choices of "dominant behavior" can lead to wrong conclusions. we therefore developed the programs _painsing.max_ and _painsys.max_ ( both in _macsyma_ syntax ), and _painsing.m_ and _painsys.m_ in the _mathematica_ language, that perform the painlevé test for polynomial systems of odes and pdes. in this paper we demonstrate the above codes by analyzing a few prototypical nonlinear equations and systems, such as the boussinesq and nonlinear schrödinger equations, a class of fifth-order kdv equations, and the hirota-satsuma and lorenz systems. our computer code does not deal with the theoretical shortcomings of the painlevé test as identified by kruskal and others. thus far, we have implemented the traditional painlevé test, and not yet incorporated the latest advances in painlevé-type methods, such as the poly-painlevé test or other generalizations. neither did we code the weak painlevé test or other variants. furthermore, we do not have code for the singularity confinement method, i.e. an adaptation of the painlevé test that allows one to test the integrability of difference equations.
among the various alternatives to establish the integrability of nonlinear pdes and ddes, the search for conserved densities and higher-order symmetries is particularly appealing. indeed, in this paper we will give algorithms that apply to both the continuous and semi-discrete cases. we implemented these algorithms in _mathematica_, but they are fairly simple to code in other computer algebra languages ( see ). our algorithms are based on the concept of dilation ( scaling ) invariance. that inherently limits their scope to polynomial conserved densities and higher-order symmetries of polynomial systems. although the existence of a sequence of conserved densities predicts integrability, the nonexistence of polynomial conserved quantities does not preclude integrability. indeed, integrable pdes or ddes could be disguised by a coordinate transformation into equations that no longer admit conserved densities of polynomial type. the same care should be taken in drawing conclusions about non-integrability based on the lack of higher-order symmetries, or on equations failing the painlevé test. apart from integrability testing, the knowledge of the explicit form of conserved densities and higher-order symmetries is useful. for instance, with higher-order symmetries of integrable systems, one can build new completely integrable systems, or discover connections between integrable equations and their group theoretic origin. explicit forms of conserved densities are useful in the numerical solution of pdes or ddes. in solving ddes, which may arise from integrable discretizations of pdes, one should check that conserved quantities indeed remain constant. in particular, the conservation of a positive definite quadratic quantity may prevent nonlinear instabilities in the numerical scheme. our integrability package _invariantssymmetries.m_ in _mathematica_ automates the tedious computation of closed-form expressions for conserved densities and higher-order symmetries for both pdes and ddes. applied to systems with parameters, the package determines the conditions on these parameters so that a sequence of conserved densities or symmetries exists. the software can thus be used to test the integrability of classes of equations that model various wave phenomena. our examples include a vector modified kdv equation, the extended lotka-volterra and relativistic toda lattices, and the heisenberg spin model. the conserved densities and symmetries presented in this paper were obtained with this package. we focus on pdes. as originally formulated by ablowitz _et al._, the painlevé conjecture asserts that all similarity reductions of a completely integrable pde should be of painlevé type, i.e. its solutions should have "no movable singularities other than poles" in the complex plane. a later version of the painlevé test due to weiss _et al._ allows testing of the pde directly, without recourse to the reduction(s) to an ode. a pde is said to have the painlevé property if its solutions in the complex plane are single-valued in the neighborhood of all its movable singularities. in other words, the equation must have a solution without any branching around the singular points whose positions depend on the initial conditions.
for odes, it suffices to show that the general solution has no worse singularities than movable poles, or that no branching occurs around movable essential singularities. a three-step algorithm, known as the painlevé test, allows one to verify whether or not a given nonlinear system of odes or pdes with ( real ) polynomial terms fulfills the necessary conditions for having the painlevé property. such equations are prime candidates for being completely integrable. there is a vast amount of literature about the test and its applications to specific odes and pdes. several well-documented surveys and books discuss subtleties and pathological cases of the test that are far beyond the scope of this article. other survey papers deal with the many interesting by-products of the painlevé test. for example, they show how a truncated laurent series expansion of the type introduced below allows one to construct lax pairs, bäcklund and darboux transformations, and closed-form particular solutions of pdes. we briefly outline the three steps of the painlevé test for a single pde in two independent variables and . our software can handle the four independent variables . throughout the paper we will use the notations . in the approach proposed by weiss , the solution, expressed as a laurent series , should be single-valued in the neighborhood of a non-characteristic, movable singular manifold , which can be viewed as the surface of the movable poles in the complex plane. in ( [laurent] ), is a negative integer, and are analytic functions in a neighborhood of . note that for odes the singular manifold is , where is the initial value for . for pdes, if has simple zeros, one may apply the implicit function theorem near the singularity manifold and set for an arbitrary function . this so-called kruskal simplification considerably reduces the length of the calculations. the painlevé test proceeds in three steps: * step 1: determine the dominant behavior * determine the negative integer and from the leading-order ansatz . this is done by balancing the minimal power terms after substitution of into the given pde. there may be several branches for , and for each the next two steps must be performed. * step 2: determine the resonances * for a selected and , calculate the non-negative integer powers, called the _resonances_, at which arbitrary functions enter the expansion. this is done by requiring that is arbitrary after substitution of into the equation, only retaining its most singular terms. the coefficient will be arbitrary if the factor multiplying it equals zero. the integer roots of the resulting polynomial must be computed. the number of roots, including , should match the order of the given pde. the root corresponds to the arbitrariness of the manifold . * step 3: verify the correct number of free coefficients * verify that the correct number of arbitrary functions indeed exists by substituting the truncated expansion into the pde, where is the largest resonance. at non-resonance levels, determine all . at resonance levels, should be arbitrary, and since we are dealing with a nonlinear equation, a _compatibility condition_ must be verified.
an equation for which the above steps can be carried out consistently and unambiguously is said to have the painlevé property and is conjectured to be completely integrable. this entails that the solution has the necessary number of free coefficients and that the compatibility condition at each of these resonances is unconditionally satisfied. the reader should be warned that the above algorithm does not detect essential singularities and therefore can not determine whether or not branching occurs about these. so, for an equation to be integrable it is _not sufficient_ that it passes the painlevé test. neither is it _necessary_. indeed, there are integrable equations, such as the dym-kruskal equation, that do not pass the painlevé test, yet, by a complicated change of variables, can be transformed into an integrable equation. the generalization of the algorithm to systems of odes and pdes is obvious. yet, it is non-trivial to implement. one of the reasons is that the major symbolic packages do not handle inequalities well. with respect to systems, our code is based on the above three-step algorithm but generalized to systems, as can be found in . in these papers there is an abundance of worked examples that served as test cases. for example, given a system of first-order odes, one introduces a laurent series for every dependent variable . the computer program must carefully determine all branches of dominant behavior corresponding to various choices of and/or . for each branch, the single-valuedness of the corresponding laurent expansion must be tested ( i.e. the resonances must be computed and the compatibility conditions must be verified ). all the details can be found in . singularity analysis for pdes is nontrivial and the painlevé test should be applied with extreme care. notwithstanding, our software automatically performs the _formal_ steps of the painlevé test for systems of odes and pdes. the examples in section 3.2 illustrate how the code works. careful analysis of the output and drawing conclusions about integrability should be done by humans. some subtleties of the mathematics of the painlevé test for systems of pdes were also dealt with in . numerous examples of the painlevé test for odes can be found in the review papers . we turn our attention to pdes. using our software package _painsing.max_ or _painsing.m_, one can determine the conditions under which the equation passes the painlevé test. for ( [cylindricalkdv] ), and , apart from , the roots are , and the latter three are resonances. furthermore, , and are indeed arbitrary since the compatibility conditions at resonances and are satisfied identically. the compatibility condition at resonance is . ignoring the trivial solution , we get . without loss of generality, we set , and equation ( [cylindricalkdv] ) becomes the _cylindrical_ kdv equation , which is indeed completely integrable.
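to make steps 1 and 2 concrete, a short sympy sketch is given below for the standard kdv equation u_t + 6 u u_x + u_xxx = 0 with the kruskal simplification; it only illustrates the leading-order balance and the resonance computation, and is not the painsing.m / painsys.m implementation itself.

    # minimal sketch of steps 1 and 2 of the painleve test for the kdv equation
    #   u_t + 6 u u_x + u_xxx = 0
    # with the kruskal simplification phi(x, t) = x - psi(t), so phi_x = 1.
    import sympy as sp

    x, t, r = sp.symbols('x t r')
    u0, u_r = sp.symbols('u0 u_r')
    phi = x - sp.Function('psi')(t)

    # step 1: dominant behavior u ~ u0 * phi**(-2); balance u*u_x against u_xxx
    u_lead = u0 * phi ** (-2)
    dominant = 6 * u_lead * sp.diff(u_lead, x) + sp.diff(u_lead, x, 3)
    lead_coeff = sp.simplify(dominant * phi ** 5)
    print('u0 candidates:', sp.solve(lead_coeff, u0))        # nontrivial root: u0 = -2

    # step 2: resonances; perturb with u_r * phi**(r - 2) and keep terms linear in u_r
    u_pert = u_lead.subs(u0, -2) + u_r * phi ** (r - 2)
    expr = 6 * u_pert * sp.diff(u_pert, x) + sp.diff(u_pert, x, 3)
    linear = sp.expand(expr).coeff(u_r, 1)
    res_poly = sp.powsimp(sp.expand(linear * phi ** (5 - r)), force=True)
    print('resonances:', sp.solve(sp.Eq(sp.simplify(res_poly), 0), r))   # expected: -1, 4, 6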
painlev based investigations for integrable pdes with space and time dependent coefficients are given in .our painlev programs can not automatically test a class of equations such as with arbitrary ( non - zero and real ) parameters and the parameters affect the lowest coefficient in the laurent expansion in such as way that the roots can not be computed , and the integrability conditions can no longer be tested .in ( [ classfifthkdv ] ) there are four cases that are of particular interest : * \(i ) and ( lax equation ) , * \(ii ) and ( sawada - kotera equation ) , * \(iii ) and ( kaup - kupershmidt equation ) , and * \(iv ) and ( ito equation ) . in table 1we list the results of the painlev test applied to these cases . for the first three equations the compatibility conditions are satisfied at all the resonances .these equations pass the test . for the ito equationthe compatibility conditions are only satisfied at some of the resonances .the ito equation fails the test .the first three equations are known to be completely integrable .ito s equation is not completely integrable .the two other algorithms presented in this paper can determine the conditions ( i ) , ( ii ) and ( iii ) that assure the complete integrability of ( [ classfifthkdv ] ) .the conserved densities and higher - order symmetries of ( [ classfifthkdv ] ) can be found in and .0.01pt [ cols= " < , < , < , < " , ] table 4 : conserved densities for the heisenberg spin model 0.01pt furthermore , the sum of two conserved densities is a conserved density .hence , after adding ( [ ssquare ] ) , can be replaced by note that and recall that densities are equivalent if they only differ by a total .so , ( [ contspinhamiltonian2 ] ) is equivalent with which can be compactly written as where consequently , the hamiltonian of ( [ orgheisenberg ] ) is constant in time .the dot refers to the standard inner product of vectors .itoh studied this extended version of the lotka - volterra equation ( [ volterra ] ) : for ( [ extvolterra ] ) is ( [ volterra ] ) , for which three conserved densities and one symmetry are listed in table 3 . in , we gave two additional densities and in we listed two more symmetries .for ( [ extvolterra ] ) , we computed 5 densities and 2 higher - order symmetries for through here is a partial list of the results : 0.01pt for + u_n [ u_{n+1 } u_{n+3 } + u_{n+2 } u_{n+3 } + u_{n+2 } u_{n+4 } \nonumber \\ & & - ( u_{n-4 } u_{n-2 } + u_{n-3 } u_{n-2 } + u_{n-3 } u_{n-1 } ) ] .\end{aligned}\ ] ] for + u_n [ ( u_{n+1 } + u_{n+2 } + u_{n+3})^2 \nonumber \\ & & - u_n ( u_{n-3 } + u_{n-2 } + u_{n-1})^2 ] + u_n^2 [ u_{n+1 } + u_{n+2 } + u_{n+3 } \nonumber \\ & & - ( u_{n-3 } + u_{n-2 } + u_{n-1})].\end{aligned}\ ] ] our last example involves the integrable relativistic toda lattice : we computed the densities of rank through the first three are .\end{aligned}\ ] ]we computed the symmetries for ranks through the first two are : , \\! 
& v_n^2 ( u_n - u_{n-1 } ) + v_n ( u_n^2 - u_{n-1 } u_{n-2 } - u_{n-1}^2 + u_n u_{n+1 } \nonumber \\ & & - u_{n-1 } v_{n-1 } + u_n v_{n+1}).\end{aligned}\ ] ] conserved densities and symmetries of other relativistic lattices are in .we presented three methods to test the integrability of differential equations and difference - differential equations .one of these methods is the painlev test , which is applicable to polynomial systems of odes and pdes .the two other methods are based on the principle of dilation invariance .thus far , they can only be applied to polynomial systems of evolution equations . as shown , it is easy to adapt these methods to the dde case .although restricted to polynomial equations , the techniques presented in this paper are algorithmic and have the advantage that they are fairly easy to implement in symbolic code .applied to systems with parameters , the codes allow one to determine the conditions on the parameters so that the systems pass the painlev test , or have a sequence of conserved densities or higher - order symmetries . given a class of equations , the software can thus be used to pick out the candidates for complete integrability .currently , we are extending our algorithms to the symbolic computation of recursion operators of evolution equations . in the futurewe will investigate generalizations of our methods to pdes and ddes in multiple space dimensions .the potential use of lie - point symmetries other than dilation ( scaling ) symmetries will also be studied .we acknowledge helpful discussions with profs .b. herbst , m. kruskal .s. mikhailov , c. nucci , j. sanders , e. van vleck , f. verheest , p. winternitz , and t. wolf .we also thank c. elmer and g. erdmann for help with parts of this project .m. j. ablowitz , a. ramani and h. segur , j. math .( 1980 ) 715 .m. j. ablowitz , a. ramani and h. segur , j. math .( 1980 ) 1006 .f. calogero and a. degasperis , _ spectral transform and solitons .( north - holland , amsterdam , 1982 ) .f. cariello and m. tabor , physica d 39 ( 1989 ) 77 .d. v. chudnovsky , g. v. chudnovsky and m. tabor , phys .lett . a 97 ( 1983 ) 268 .p. a. clarkson , i m a j. appl . math .44 ( 1990 ) 27 .p. a. clarkson , ed ., _ applications of analytic and geometrical methods to nonlinear differential equations _ ,c v. 413 ( kluwer , dortrecht , the netherlands , 1993 ) .r. conte , in : introduction to methods of complex analysis and geometry for classical mechanics and non - linear waves , eds .d. benest and c. frschl ( ditions frontires , gif - sur - yvette , france , 1993 ) p. 49 .r. conte , ed ., _ the painlev property , one century later _ , crm series in mathematical physics ( springer , berlin , 1998 ) .r. conte , a. p. fordy and a. pickering , physica d 69 ( 1993 ) 33 .g. contopoulos , b. grammaticos and a. ramani , j. phys . a : math .28 ( 1995 ) 5313 .e. v. doktorov and s. yu .sakovich , j. phys . a : math .gen . 18 ( 1985 ) 3327 .l. d. faddeev and l. a. takhtajan , _ hamiltonian methods in the theory of solitons _( springer , berlin , 1987 ) .h. flaschka , a. c. newell and m. tabor , in : what is integrability .v. e. zakharov , ed .( springer , new york , 1991 ) p. 73 . j. d. gibbon , p. radmore , m. tabor and d. wood , stud . appl . math .72 ( 1985 ) 39 .a. goriely , j. math .33 ( 1992 ) 2728 .. gkta and w. hereman , j. symb .24 ( 1997 ) 591 .. gkta and w. hereman , _ computation of conservation laws for nonlinear lattices _ , physica d ( 1998 ) in press .. gkta and w. 
hereman , package _invariantssymmetries.m _ is available at http://www.mathsource.com/cgi-bin/msitem?0208-932 ._ mathsource _ is the electronic library of wolfram research , inc .( champaign , illinois ) .w. hereman , in : advances in computer methods for partial differential equations vii , r. vichnevetsky , d. knight and g. richter , eds .( imacs , new brunswick , new jersey , 1993 ) p. 326. w. hereman , the macsyma programs _painsing.max_ , _painsys.max_ , and the mathematica programs _ painsing.m _ , _painsys.m _ are available via anonymous ftp from mines.edu in directory pub / papers / math_cs_dept / software ; or via internet url : http://www.mines.edu / fs_home / whereman/. w. hereman and s. angenent , macsyma newsletter * 6 * ( 1989 ) 11 .w. hereman and w. zhuang , acta appl .( 1995 ) 361 .r. hirota , b. grammaticos and a. ramani , j. math .( 1986 ) 1499 .y. itoh , prog .78 ( 1987 ) 507 .m. d. kruskal , in : painlev transcendents .d. levi and p. winternitz , eds .( plenum press , new york , 1992 ) p. 187 .m. d. kruskal and p. a. clarkson , stud .appl . math .86 ( 1992 ) 87 . m. d. kruskal , n. joshi and r. halburd , in : proc .cimpa winter school on nonlinear systems , eds .b. grammaticos and k. m. tamizhmani , january 3 - 22 , 1996 , pondicherry , india ( 1996 ) .m. d. kruskal , a. ramani and b. grammaticos , in : partially integrable evolution equations in physics , r. conte and n. boccara , eds .( kluwer , dortrecht , the netherlands , 1990 ) p. 321 . m. lakshmanan and r. sahadevan , physics reports 224 ( 1993 ) 1 .d. levi and p. winternitz , eds . ,painlev transcendents : their asymptotics and physical applications , nato adv .b ( phys . ) v. 278 ( plenum press , new york , 1992 ) . c. r. menyuck , h. h. chen and y. c. lee , phys . rev .a 27 ( 1983 ) 1597 .m. musette and r. conte , j. math .32 ( 1991 ) 1450 .m. musette and r. conte , j. phys . a : math .( 1994 ) 3895 .a. c. newell and m. tabor and y. b. zeng , physica d 29 ( 1987 ) 1 .j. m. nunes da costa and c .- m .marle , j. phys . a : math .( 1997 ) 7551 .p. j. olver , _ applications of lie groups to differential equations , _edition ( springer , new york , 1993 ) . a. ramani , b. grammaticos and t. bountis , physics reports 180 ( 1989 ) 159 .r. l. sachs , physica d 30 ( 1988 ) 1 .a. b. shabat and r. i. yamilov , leningrad math .j. 2 ( 1991 ) 377 .c. sparrow , _ the lorenz equation : bifurcations , chaos , and strange attractors _ , applied mathematical sciences 41 ( springer , new york , 1982 ) .k. m. tamizhmani and r. sahadevan , j. phys . a : math .. 18 ( 1985 ) l1067 .f. verheest , in : proc .fourth int . conf .on mathematical and numerical aspects of wave propagation , j. a. desanto , ed .( siam , philadelphia , 1998 ) p. 398 .f. verheest , w. hereman , and h. serras , mon . not .( 1990 ) 392 .j. weiss , m. tabor and g. carnevale , j. math .phys . 24 ( 1983 ) 522 .s. wolfram , the _ mathematica _ book .edition ( wolfram media , urbana - champaign , illinois & cambridge university press , london , 1996 ) .
three symbolic algorithms for testing the integrability of polynomial systems of partial differential and differential - difference equations are presented . the first algorithm is the well - known painlev test , which is applicable to polynomial systems of ordinary and partial differential equations . the second and third algorithms allow one to explicitly compute polynomial conserved densities and higher - order symmetries of nonlinear evolution and lattice equations . the first algorithm is implemented in the symbolic syntax of both _ macsyma _ and _ mathematica_. the second and third algorithms are available in _ mathematica_. the codes can be used for computer - aided integrability testing of nonlinear differential and lattice equations as they occur in various branches of the sciences and engineering . applied to systems with parameters , the codes can determine the conditions on the parameters so that the systems pass the painlev test , or admit a sequence of conserved densities or higher - order symmetries . keywords : integrability ; painlev test ; conservation law ; invariant ; symmetry ; differential equation ; lattice
there is an obvious and wide spread interest in predicting extreme events in a variety of contexts .particularly well known examples are the insurance risks related to large tropical storms , human and property risks in the context of large earthquakes , financial risks caused by large movements of the markets , and dangers to passenger planes due to extremely intermittent turbulent air velocities .obviously , any improvement in the predictability of any of these extreme events is highly desirable for a number of reasons .accordingly , there exists a large body of work focusing on the statistics of such events , small , intermediate and large , with the aim of studying the ensuing probability distribution functions ( pdf ) .if one can model properly the pdf , one can in principle predict at least the frequency of extreme events . yet, there is one fundamental question that arises that needs to be confronted first : are the extreme events sharing the same statistical properties as the small and intermediate events , or are they `` outliers '' ? if the latter is true , then no analysis of the core of the pdf , clever as it may be , could yield a proper answer to the desire to predict the probability of extreme events .indeed , in a number of context it had been proposed recently that extreme events are outliers " .for example in financial markets the largest draw - downs appear to exhibit properties that differ from the bulk of the fluctuations .in general one would refer to outliers " when the rate of occurrence of small and intermediate events lies on a pdf with some given properties , while the extreme events appear to exhibit statistical properties that differ from the bulk in a significant way .the aim of this paper is to present a detailed analysis of the fluctuations in a turbulent dynamical system that shows that such a point of view can be substantiated .clearly , this type of considerations must be conducted with great care .the danger is that on small time horizons the largest events appear so rarely , once or twice , that their rate of occurrence is not statistically significant , and no conclusion about their relation to the statistics of small and intermediate events is possible .nevertheless , we offer in this paper a positive outlook .we will show that in the context of the bulk of this paper , which is the analysis of a shell model of turbulence , one can analyze _ within the short time horizon _ the dynamics of the extreme events .this analysis reveals their special dynamical scaling properties , allowing us to make interesting predictions about the tails of the distribution functions even before the full statistics is available .these predictions can be checked in our case by considering much longer time horizons .the conclusion for the extreme events community is that it may very well pay to look very carefully at the detailed dynamics of the extreme events if one wants to claim anything about their probability of occurrence .the model that we treat in detail in this paper is a so - called shell " model of turbulence .shell models of turbulence are simplified caricatures of the equations of fluid mechanics in wave - vector representation ; typically they exhibit anomalous scaling even though their nonlinear interactions are local in wavenumber space .the wavenumbers are represented as shells , which are chosen as a geometric progression where is the shell spacing " .there are degrees of freedom where is the number of shells .the model specifies the dynamics of the 
velocity " which is considered a complex number , .their main advantage is that they can be studied via fast and accurate numerical simulations , in which the values of the scaling exponents can be determined very precisely .we employ our own home - made shell model which had been christened the sabra model .it exhibits similar anomalies of the scaling exponents to those found in the previously popular goy model , but with much simpler correlation properties , and much better scaling behavior in the inertial range .the equations of motion for the sabra model read : where the star stands for complex conjugation , is a forcing term which is restricted to the first shells and is the viscosity " . in this paperwe restrict the forcing to the first and and second shells only ( . the coefficients and are chosen such that this sum rule guarantees the conservation of the energy " in the inviscid ( ) limit .the main attraction of this model is that it displays multiscaling in the sense that moments of the velocity depend on as power laws with nontrivial exponents : where the scaling exponents exhibit non linear dependence on .we expect such scaling laws to appear in the inertial range " with shell index larger than the largest shell index that is effected by the forcing , denoted as , and smaller than the shell indices affected by the viscosity , the smallest of which will be denoted as .the scaling exponents were determined with high numerical accuracy better than in ref. . to introduce the issue behind the title of this paper , we present in fig .[ fig1 ] a typical time series for .the parameters of the model are detailed in the figure legend .one can see the typical appearance of rare events with amplitude that exceeds the mean by a factor of 68 . to pose the question in its clearest way we display in fig .[ fig2 ] a distribution function which is the normalized rate of occurrence ( i.e. the number of times ) that a given amplitude has been observed in the time window of time steps .this apparent relative frequency of events is very similar to findings in real data , see for example fig . 1 of ref . . which deals with draw downs in the dow jones average .similarly to the analysis there , we can pass an approximate straight line through the points representing small and intermediate events .such an exponential law would mean that the events of with amplitudes larger than , say , 4 are clear outliers .their probability is so low that they should not have appeared in the short time horizon at all .we could conclude , like in the analysis of ref . , that the extreme events can not be dealt with the same distribution function as the small and intermediate events .-1.2 cm on the other hand , it is very possible that the low rate of occurrence of the extreme events in fig .[ fig2 ] means simply that they are statistically irrelevant and that no conclusion can be drawn . how to overcome this difficulty ?the purpose of this paper is to show that indeed the extreme events may have dynamical scaling properties that are all their own , and that they affect crucially the tails of the distributions functions , making them very broad indeed .the main new point is that detailed analysis of the extreme events _ in the short time horizon _ suffices to make lots of predictions about the tails of the pdf s , predictions that in our case can be easily confirmed by considering much longer time horizons . 
in explaining our ideas we will try to distinguish aspects which are general , and that in our view may have applications to other systems with extreme events , and aspects which are particular to the example of the shell model of turbulence .thus we start in sect.2 with an analysis of the temporal shape of the extreme events .we believe that this analysis is very general , leading to an important relation between the amplitude of the event and its time scale ( the time elapsing from rise up to demise ) . in sect .3 we employ the dynamical scaling form of the extreme events to present a theory of the tails of the distribution functions .we can relate the tails of pdf s belonging to different scales . in sect .4 we discuss numerical studies of the pdf s , distinguishing the core and the tails . in sect.5the main numerical findings are rationalized theoretically on the basis of universal pulse " solutions of the dynamics of the sabra model .section 6 contains the bottom line : we make use of the scaling relations to _ predict _ the tails of pdf s from data collected within short time horizons .direct measurements of these tails give nonsense unless the time horizons are increased a hundred fold . yet with the help of the theoretical forms we can offer predicted tails that agree very well with the data collected with much longer time horizons .in turbulence in general and in our shell model in particular the energy that is injected by the forcing at the largest scales ( and 2 ) is transferred on the average to smaller scales .it is advantageous to analyze the extreme events of a given scale ( or given shell ) and also to follow the cascade of extreme events from scale to scale .we first consider a given shell .we focus here on the detailed dynamics of the largest events of a given scale .we considered for example the time series of the 20th shell ( ) and isolated the 5 largest events ( in terms of their amplitude ) as they occurred in a time window of time steps . in the first step of analysis we normalized these 5 events by the amplitude at their maximum .next we plotted these normalized events as a function of time , subtracting the time at which they have reached their maximum value .the result of this replotting is shown in fig. [ fig3 - 1 ] .obviously a similar replotting can be done for any time series , and by itself is contentless .-1.2 cm = 8.3 cm -0.8cm = 8.3 cm the next step of analysis will reveal already something interesting .building on the normalized events of fig .[ fig3 - 1 ] we attempt to rescale the time axis for each event in order to collapse the data together .of course , each event calls for a different rescaling factor , which we denote ( in frequency units ) as .the fact that such a rescaling factors exist , and that they leads to data collapse as shown in fig .[ fig3 - 2 ] , is a not trivial fact which may or may not exist in different cases .but we will show that if such a rescaling is found , it can serve as a starting point for very useful considerations .the third step of the present analysis is a search of meaning to the rescaling factors .we hope that has a simple relation to the amplitude of the extreme events . to test this we can plot the individual values of found in fig .[ fig3 - 2 ] as a function of the amplitude at the peak .the resulting plot is shown as fig .[ fig3 - 3 ] . 
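The replotting just described is straightforward to script. In the sketch below the time series is synthetic (a few pulses whose width shrinks with their amplitude) and the choice gamma = amplitude is only a trial rescaling factor, so nothing here is measured data; the point is to show the mechanics of isolating the largest events, normalising each by its peak value and rescaling its time axis before overplotting.

```python
# Hedged sketch of the event-collapse procedure: find the largest maxima of a
# time series, normalise each event by its peak value, and rescale time by a
# per-event factor gamma.  The synthetic signal (sech-shaped pulses whose
# width shrinks with amplitude) is an assumption used only to exercise the code.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0.0, 100.0, 20001)
signal = np.zeros_like(t)
rng = np.random.default_rng(1)
for t0, amp in zip(rng.uniform(5, 95, 8), rng.uniform(1.0, 6.0, 8)):
    signal += amp / np.cosh(amp * (t - t0))       # width ~ 1 / amplitude

peaks, _ = find_peaks(signal, height=1.0)
largest = peaks[np.argsort(signal[peaks])[-5:]]   # five largest events

collapsed = []
for p in largest:
    amp = signal[p]
    gamma = amp                                   # trial rescaling factor
    window = slice(max(p - 400, 0), min(p + 400, len(t)))
    collapsed.append(((t[window] - t[p]) * gamma, signal[window] / amp))

# each tuple in `collapsed` is (rescaled time, normalised amplitude);
# overplotting them should show the data collapse if gamma ~ amplitude.
for tau, v in collapsed:
    print(f"peak kept {v.max():.3f}; rescaled window [{tau[0]:.1f}, {tau[-1]:.1f}]")
```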
in passing the straight line through the data points we included the point in the analysis , as we search for a simple scaling form with a scaling exponent .we conclude that in this case we have a satisfactory scaling law with .the meaning of this scaling law is quite apparent in the present case .looking back at the equation of motion we realize that from the point of view of power counting ( not to be confused with actual dynamics ) it can be written as with .it is thus acceptable that a rescaling of by should collapse all the extreme events as shown above .if the equation of motion were cubic in we could expect etc .obviously , the rescaling analysis in this case revealed the type of dynamics underlying the process . whether this can be done effectively in other case where extreme events are crucialis an open question for future research .= 8.6 cm = 8.6 cm to gain further understanding of the extreme events we focus now on the transfer from scale to scale .consider for example a particular large amplitude event in the shell , and its future fate as time proceeds .this is shown in fig .the event reached its highest amplitude at shell 15 around . at a slightly later time it appeared as a large event in shell 16 , and with a shorter delay at shell 17 where it started to split into a doublet . at even shorter delays this event emerges as a triplet and a multiplet at shells 18,19 and 20 respectively .a very important characteristic of the dynamics of large events can be obtained from finding how to relate the maximal amplitudes of the first peak in the different shells . as was done above, we first replot all the first peaks as a function of time minus the time of their maximal amplitude .we then glue all the maxima together by rescaling the peaks amplitudes relative to the peak of a chosen shell .denote by the relative amplitude of the peak in the shell to the shell .choosing in our example we then seek a single exponent such that where is the shell spacing defined by eq .( [ kn ] ) .the value of is obtained by plotting _ vs _ where /\ln\lambda = y\ , ( 20-n)\ .\ ] ] the best fit is obtained with , see fig .[ interfit ] .the peaks which are now glued at their maxima as shown in fig .[ fig5 ] still have very different time - width .next , as before , we want to collapse all these curves by rescaling the time axis according to . expecting the scaling law it is natural to consider /\ln\lambda = z\ , ( 20-n)\ .\ ] ] the exponent is found by computing `` the best '' linear fit of _ vs _ , see fig .[ interfit ] .the quality of the resulting data collapse can be seen in fig .note , that within the error bars .this sum rule will be rationalized theoretically in sec .[ s : theory ] . the bottom line of this analysis can be summarized in a dynamical scaling form for the extreme events : \ . \label{scform}\ ] ] here is a characteristic velocity amplitude associated with the cascade of a particular large event which starts at small and reaches eventually large values of .as such is not universal .we stress that the scaling form was derived on the basis of a time series in the short time horizon , i.e. the the same one that gave rise to the apparent pdf shown in fig .we will see that these findings suffice to make rather strong predictions about the expected form of the _ converged _ pdf .a theoretical understanding of the origin of the scaling form ( [ scform ] ) will be presented in sec .[ s : theory ] . 
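Extracting the exponents entering this dynamical scaling form amounts to a least-squares fit of a straight line. In the sketch below the per-shell amplitude and width ratios are synthetic, generated with assumed exponents y = 0.3 and z = 0.7 plus a little noise, so the fit simply recovers those values; with measured first-peak data the same two lines would give the exponents discussed above, and their sum can be compared with unity.

```python
# Hedged sketch of extracting the inter-shell scaling exponents y and z from
# per-shell peak amplitudes and widths.  The "data" are synthetic, generated
# with assumed exponents (y, z) = (0.3, 0.7) plus noise, so the fit should
# simply recover those numbers.
import numpy as np

lam, n_ref = 2.0, 20
shells = np.arange(12, 21)
rng = np.random.default_rng(2)

y_true, z_true = 0.3, 0.7
amp_ratio = lam**(y_true * (n_ref - shells)) * np.exp(0.02 * rng.standard_normal(shells.size))
width_ratio = lam**(z_true * (n_ref - shells)) * np.exp(0.02 * rng.standard_normal(shells.size))

x = (n_ref - shells).astype(float)
y_fit = np.polyfit(x, np.log(amp_ratio) / np.log(lam), 1)[0]
z_fit = np.polyfit(x, np.log(width_ratio) / np.log(lam), 1)[0]
print(f"y = {y_fit:.3f},  z = {z_fit:.3f},  y + z = {y_fit + z_fit:.3f}")
```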
-0.1cm = 8.6 cm -1 cm = 8.6 cm = 8.6 cm = 8.6 cmhaving a scaling form for the large events means a great deal for the structure functions [ cf .( [ scaling ] ) ] for high values of .in fact for high the structure functions are dominated by the large events . to demonstrate this we show in fig .[ fig10 ] the relative contribution to that arise from velocity amplitudes that exceed a threshold . in this plot is the structure function eq .( [ scaling ] ) where only events with are considered , whereas contains all the data .obviously the higher is the higher is the contribution of large events .for any time window there exists the largest event , and when exceeds its value , necessarily vanishes . if we accept the scaling form ( [ scform ] ) we can use it to predict the scaling exponent for high values of . by definitions for large enoughthe structure functions are dominated by the well separated events . instead of the integral in the interval ] and write comparing the exponents of here and in the previous equation we find the scaling exponents of course this prediction is valid only for high values of for which the contributions of the isolated peaks are domninant .we turn now to the prediction of the tails of the pdfs assuming that these tails are dominated by well separated peaks with self - similar evolution ( [ scform ] ) .we will see below [ and cf .( [ pred1-pdf ] ) ] , that the tails of the predicted pdf are very sensitive to the _ exponents _ in eq .( [ scform ] ) , but rather insensitive to the precise form of the universal function in eq.([pred - sp ] ) .assume then for simplicity that for and for .there is the free parameter in eq .( [ scform ] ) ; for the chaotic realizations we consider it as a random parameter .define then the variable according to consider now a run with a total time horizon .denote as the number of peaks measured in this run in which the value of fell in the window ] with three free parameters , and .the results of our fits showed that the parameters are close to for all values of .therefore we fixed the value and optimized the values of of and to get the best fits in the tail regions .now the fit formula reads =a _n u_n \ .\ ] ] the corresponding fits for the tails of the pdfs for the 11th , 15th and 18th shells are shown in fig .[ fig : sh11 - 18 ] , lower panel .the fits are excellent for but not surprisingly they fail for smaller values of , especially for larger value of . to collapse the tails together we need to choose a reference shell ; we show the results for . replotting -a_n + a_{11} ] .it was shown in ref . 
that the eqs .( [ eq : univ ] , [ st ] ) can be considered as a nonlinear eigenvalue problem .they have trivial solutions , but they may have nonzero solutions for particular values of and .for example , the nonzero solution .requires .nevertheless the constant solution fails to fulfill the requirement that .we expect that a nontrivial solution that satisfied the boundary conditions will force into the observed value which lies between 2/3 to 1 .the actual calculations that demonstrate this are outside the scope of this paper .we just reiterate our numerical finding that for the particular set of parameters , , and that were employed in this study .-0.6 cm = 8.4truecm -0.3 cm = 8.4truecm -0.3 cm = 8.6truecm-0.3 cm in this section we demonstrate that the analysis presented above can be used to predict the tails of the pdf s of large scale phenomena ( relatively low values of ) using only data measured in the short time horizon .we focus on the example shown in fig .[ fig2 ] , i.e. with times steps .we first fit the pdf shown in fig .[ fig2 ] , using a fit formula which is inspired by eq .( [ eq : fits ] ) : =\tilde a_n+\tilde b_n u_n^{\tilde c_n } \ , \label{fit11}\ ] ] [ sec : conclusion ] and found , . , .the data and the best fit are shown in fig.[predict ] panel a. next we want to continue the pdf of into event values that are too rare in the short time horizon . to this aim we measured , in the same time window of time steps , the tail of the pdf of the 18th shell . in doingso we use the fact that the small scale events have a much shorter turn over time , and the short " time horizon is sufficiently long to provide a good estimate of the tail .we fitted the tail with eq .( [ eq : fits ] ) and found , . from this value and ( eq .[ eq : par - scale ] ) we can predict . we employ the value which is taken from eq.([eq : u2 ] ) with the known value of ( from the intershell collapse ) and of .the resulting prediction is . rather than attempting to also predict in eq .( [ eq : fits ] ) ( knowing the inaccuracies of intercepts ) we glued the tail with the predicted value of to the core pdf function ( [ fit11 ] ) by finding the unique point of continuity with same first derivative .the way that the predicted tail hangs onto the pdf is shown in fig .[ predict ] panel b. to test the quality of the prediction we ran now the simulation for a time horizon that is a hundred times longer ( i.e time steps ) .such a run can resolve the events that belong to the tail , and indeed the comparison is surprisingly good , as seen in fig .[ predictc ] .the main aims of this paper are twofold : on the one hand we aimed at understanding the detailed dynamical scaling properties of the largest events in our system . 
on the other hand we wanted to employ these properties to _ predict _ the probability of these events even in situations in which they are very rare .the first aim was achieved by focusing on the largest events , following their cascade down the the scales ( or up the shells ) , and learning how to collapse them on each other by rescaling their amplitudes and their time arguments .this exercise culminated in eq.([scform ] ) which represents the largest events in terms of a universal " function where is a properly rescaled time difference from the peak time of the event .this dynamical scaling form is characterized by two exponents , a static " one denoted and a dynamic " one denoted .we argued theoretically for a scaling relation , and determined the values of the these exponents on the basis of the analysis of isolated events in _ short _ time horizons .the second aim was accomplished by developing a scaling theory for the tails of the pdf s in different shells .we have learned how to translate information from the tail of a pdf in a high shell to the tail of a pdf of a low shell . in doingso we made use of the fact the high shells ( small length scales ) have much shorter characteristic times scales .thus even short time horizons are sufficient to accumulate _ reliable_ statistics on the tails of the pdf s of high shells . having a theory to translate the information to low shells in which the tails are extremely sparse ( or even totally absent ), we could overcome the meager statistics .we could present predicted tails that were populated only in time horizons that were a hundred fold longer than those in which the analysis was performed .we demonstrated the existence of scaling properties of the extreme events that are in distinction from the bulk of the fluctuations that make the core of the pdf . in this sensethe extreme events are outliers .we can not , on the basis of the present work , claim that this approach has a general applicability to a large class of physical systems in which extreme events are important .we certainly made a crucial and explicit use of the scale invariance of the underlying equation of motion .this scale invariance translates here to an intimate connection between extreme events appearing on one length scale at one time to extreme events appearing on smaller length scales at later ( and predictable ) times ( cf .we are pretty confident that similar ideas can ( and should ) be implemented to fluid turbulence ; whether or not such techniques will be applicable to broader issues like geophysical phenomena or financial markets is a question that we pose to the community at large .our interest in the issue was ignited to a large extent by the meeting on extreme events organized by anne and didier sornette in villefranche - sur - mer , summer 2000 .we thank didier sornette for his comments on the manuscript , and for clarifying the notion of outliers " .this work has been supported in part by the european commission under the tmr program , the israel science foundation , the german israeli foundation and the naftali and anna backenroth - bronicki fund for research in chaos and complexity .
extreme events have an important role , which is sometimes catastrophic , in a variety of natural phenomena including climate , earthquakes and turbulence , as well as in man - made environments like financial markets . statistical analysis and predictions in such systems are complicated by the fact that on the one hand extreme events may appear as " outliers " whose statistical properties do not seem to conform with the bulk of the data , and on the other hand they dominate the ( fat ) tails of probability distributions and the scaling of high moments , leading to " abnormal " or " multi " - scaling . we employ a shell model of turbulence to show that it is very useful to examine in detail the dynamics of onset and demise of extreme events . doing so may reveal dynamical scaling properties of the extreme events that are characteristic to them , and not shared by the bulk of the fluctuations . as the extreme events dominate the tails of the distribution functions , knowledge of their dynamical scaling properties can be turned into a prediction of the functional form of the tails . we show that from the analysis of relatively short time horizons ( in which the extreme events appear as outliers ) we can predict the tails of the probability distribution functions , in agreement with data collected in very much longer time horizons . the conclusion is that events that may appear unpredictable on relatively short time horizons are actually a consistent part of a multiscaling statistics on longer time horizons .
the semantic web builds upon xml as the common machine - readable syntax to structure content and data , upon rdf as a simple language to express property relationships between arbitrary resources identified by uris , and ontology languages such as rdfs or owl as a means to define rich vocabularies ( ontologies ) which are then used to precisely describe resources , their relations and their semantics .this prepares an infrastructure to share the relevant meaning of content and leads to a more machine - processable and relevant web .many bioinformatics projects , such as uniprot , tambis , fungalweb , yeasthub , biopax have meanwhile adopted the semantic web approach ( in particular the rdf standard ) and large ontologies such as the gene ontology are provided as rdfs or owl ontologies .this has been utilized by several bioinformatics projects , such as w3c hcls rdf or bio2rdf , to solve the old problem of distributed heterogeneous data integration in health care and life sciences .the goal of this article is to show how the rule responder approach can be used to build a flexible , loosely - coupled and service - oriented escience infrastructure which allows wrapping the existing web data sources , services and tools by rule - based agents which access and transform the existing information into relevant information of practical consequences for the end - user .a virtual escience infrastructure consists of a community of independent and often distributed ( sub- ) organizations which are typically represented by an organizational agent and a set of associated individual agents .the organizational agent might act as a single agent towards other internal and external individual or organizational agents .in particular , a virtual organization s agent can be the single ( or main ) point of entry for communication with the `` outer '' world . in the architecture of the escience agent web model(figure [ fig : usecaseszenario ] ) , the syntactic level controls the appearance and access of syntactic information resources such as html pages .the representation languages such as xml , rdf and owl on the semantic level make these web - based resources more readable and processable not only to humans , but also to computers to infer new knowledge .finally , the pragmatic and behavioral level defines the rules that how information is used and describes the actions in terms of its pragmatic aspects .these rules e.g. transform existing information into relevant information of practical consequences , trigger automated reactions according to occurred complex events , and derive answers from the existing syntactic and semantic information resources . 
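As a concrete illustration of the semantic level, the snippet below builds a tiny RDF graph and queries it with SPARQL using the rdflib library. The example namespace, properties and resources are invented for illustration only; in a real deployment a source agent would instead parse RDF exposed by services such as UniProt or Bio2RDF.

```python
# Hedged sketch: querying a small RDF graph with SPARQL via rdflib.
# The ex: namespace, properties and resources are invented for illustration;
# a real source agent would parse RDF published by services such as
# UniProt or Bio2RDF instead of this inline snippet.
import rdflib

data = """
@prefix ex: <http://example.org/hcls#> .
ex:ADDL  ex:isFormOf         ex:BetaAmyloid ;
         ex:therapeuticTarget "Alzheimer's disease" .
ex:paper42 ex:discusses ex:ADDL ;
           ex:author    "W. Klein" .
"""

g = rdflib.Graph()
g.parse(data=data, format="turtle")

query = """
PREFIX ex: <http://example.org/hcls#>
SELECT ?paper ?author WHERE {
    ?paper ex:discusses ex:ADDL ;
           ex:author    ?author .
}
"""
for paper, author in g.query(query):
    print(paper, author)
```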
in this paper we focus on the pragmatic and behavioral layer and build it upon existing technologies and common language formats of the semantic web such as html / xml web pages , rdf / rdfs , owl and etc .we assume that there is already a critical mass of such data sources on the semantic and syntactic layer .furthermore , we integrate data and functionality from legacy applications .the core parts of the distributed rule responder architecture for the escience agent web are the common platform - independent rule interchange format ( ruleml ) , the communication middleware ( esb ) and the execution environments ( prova ) .the rule markup language ( ruleml ) is a modular , interchangeable rule specification on standard to express both forward and backward rules for deduction , reaction , rewriting , and further inferential - transformational tasks .reaction ruleml is a sublanguage of ruleml and incorporates various kinds of production , action , reaction , and kr temporal / event / action logic rules as well as ( complex ) event / action messages . to seamlessly handle message - based interactions between the responder agents and with other applications , an enterprise service bus ( esb ) , the mule open - source esb is used .the esb allows deploying the rule - based agents as highly distributable rule inference services installed as web - based endpoints in the mule object broker and supports the reaction ruleml based communication between them .mule is based on ideas from esb architectures , but goes beyond the typical definition of an esb as a transit system for carrying data between applications .prova , which is a highly expressive semantic web rule engine to the reference implementation for complex agents with complex reaction workflows , decision logic and dynamic access to external semantic web data sources.the current version of prova follows the spirit and design of the recent w3c semantic web initiative and combines declarative rules , ontologies and inference with dynamic object - oriented java api calls and access to external data sources such as relational databases or enterprise applications and it services .the discovery process for a researcher to find the alzheimer s drug target candiates is very complex and time-consuming.he/she first discovers from uniprot , the w3c hcls kb and the swan data that beta amyloidal in various forms , and in particular addls , which are good therapeutic targets .she then searches the pubmed database about articles on addls and ranks the results to find the top location , which is evanston , and the top author , who is william klein . 
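The ranking step of this scenario is easy to sketch in code. The records, field names and counts below are invented placeholders (a real agent would obtain them from a PubMed query); the snippet only illustrates how the top location and top author would be extracted from such results before the scenario continues.

```python
# Hedged sketch of the ranking step just described: given (hypothetical)
# PubMed-style records about ADDLs, count affiliations and authors to find
# the top location and the top author.  The records and field names are
# invented; a real agent would obtain them from a PubMed/Entrez query.
from collections import Counter

records = [
    {"authors": ["W. Klein", "M. Lambert"], "city": "Evanston"},
    {"authors": ["W. Klein"],               "city": "Evanston"},
    {"authors": ["A. Smith", "W. Klein"],   "city": "Chicago"},
    {"authors": ["B. Jones"],               "city": "Boston"},
]

author_counts = Counter(a for rec in records for a in rec["authors"])
city_counts = Counter(rec["city"] for rec in records)

top_author, _ = author_counts.most_common(1)[0]
top_city, _ = city_counts.most_common(1)[0]
print(f"top author: {top_author}, top location: {top_city}")
```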
from this, the researcher makes the hypothesis that william klein works in evanston , and simply proves it using google .finally , the researcher queries the embi - ebi database for the patents addressing addls as therapeutic target for ad and concludes that william klein who also holds two patents is one of the top experts in addls research.implicitly , the researcher executes the following rule : if a person has most publications in the field and one or more patents in the field then the person is an expert for this field .figure 6 shows how this rule can be implemented in terms of rule responder agents .the hcls rule responder agent service ( figure [ fig : az_architecture ] ) implements the main logic of the escience infrastructure and acts as the main communication endpoint for external agents .it s the rule code defines the public interfaces to receive requests ( queries , tasks ) to the escience infrastructure and the logic to look up the respective source agents and delegate requests to them in order to answer the queries and fulfill the tasks .each existing legacy data sources / service is wrapped by a rule responder source agent which runs a prova rule engine .the agent s rule base comprises the local rule interface descriptions , i.e. the rule functions which can be queried by other agents of the escience infrastructure , the respective transformation rules to issue queries to the platform - specic services and access the heterogeneous local data sources , and the rule logic to process incoming requests and derive answers / information from the local knowledge .with rule responder hcls we have evolved a rule - based approach which facilitates easy heterogeneous systems integration and provides computation , database access , communication , web services , etc .this approach preserves local anonymity of local agent nodes including modularity and information hiding and provides much more control to users with respect to the relatively easy declarative rule - based programming techniques .the rules allow specifying where to access and process information , how to present information and automatically react to it , and how to transform the general information available from existing data sources on the web into personally relevant information accessible via the escience infrastructure .the rule responder escience infrastructure is available online at responder.ruleml.org .the rule - ml family of web rule languages . in 4th int. workshop on principles and practice of semantic web reasoning , budva , montenegro , 2006 .a. paschke , et al .reaction ruleml , http://ibis.in.tum.de/research/reactionruleml/ mule .mule enterprise service bus , http://mule.codehaus.org/display/mule/home,2006 .a. kozlenkov , et al .prova , http://prova.ws , 2006 .
to a large degree , information and services for chemical e - science have become accessible anytime and anywhere , but not necessarily useful . the rule responder escience middleware provides information consumers with rule - based agents that transform existing information into relevant information of practical consequences . it thereby gives end - users control to express , in a declarative rule - based way , how to turn existing information into personally relevant information and how to react or make automated decisions on top of it .
since the fifth edition of protostars and planets in 2005 ( ppv ) the number of extrasolar planets has increased from about 200 to nearly 1000 , with several thousand transiting planet candidates awaiting confirmation .these prolific discoveries have emphasized the amazing diversity of exoplanetary systems .they have brought crucial constraints on models of planet formation and evolution , which need to account for the many flavors in which exoplanets come .some giant planets , widely known as the hot jupiters , orbit their star in just a couple of days , like 51 pegasus b .some others orbit their star in few ten to hundred years , like the four planets known to date in the hr 8799 planetary system . at the time of ppv, it was already established that exoplanets have a broad distribution of eccentricity , with a median value . since then, measurements of the rossiter mclaughlin effect in about 50 planetary systems have revealed several hot jupiters on orbits highly misaligned with the equatorial plane of their star ( e.g. , * ? ? ?* ) , suggesting that exoplanets could also have a broad distribution of inclination . not only should models of planet formation and evolution explain the most exotic flavors in which exoplanets come , they should also reproduce their statistical properties .this is challenging , because the predictions of such models depend sensitively on the many key processes that come into play .one of these key processes is the interaction of forming planets with their parent protoplanetary disc , which is the scope of this chapter .planet - disc interactions play a prominent role in the orbital evolution of young forming planets , leading to potentially large variations not only in their semi - major axis ( a process known as planet migration ) , but also in their eccentricity and inclination .observations ( i ) of hot jupiters on orbits aligned with the equatorial plane of their star , ( ii ) of systems with several coplanar low - mass planets with short and intermediate orbital periods ( like those discovered by the kepler mission ) , ( iii ) and of many near - resonant multi - planet systems , are evidence that planet - disc interactions are one major ingredient in shaping the architecture of observed planetary systems .but , it is not the only ingredient : planet - planet and star - planet interactions are also needed to account for the diversity of exoplanetary systems .the long - term evolution of planetary systems after the dispersal of their protoplanetary disc is reviewed in the chapter by davies et al .this chapter commences with a general description of planet - disc interactions in section [ sec : theory ] .basic processes and recent progress are presented with the aim of letting non - experts pick up a physical intuition and a sense of the effects in planet - disc interactions .section [ sec : app ] continues with a discussion on the role played by planet - disc interactions in the properties and architecture of observed planetary systems .summary points follow in section [ sec : summary ] .embedded planets interact with the surrounding disc mainly through gravity . disc material , in orbit around the central star , feels a gravitational perturbation caused by the planet that can lead to an exchange of energy and angular momentum between planet and disc . 
in this section , we assume that the perturbations due to the planet are small , so that the disc structure does not change significantly due to the planet , and that migration is slow , so that any effects of the radial movement of the planet can be neglected . we will return to these issues in sections [ sec : type2 ] and [ sec : type3 ] . if these assumptions are valid , we are in the regime of type i migration , and we are dealing with low - mass planets ( typically up to neptune s mass ) . the perturbations in the disc induced by the planet are traditionally split into two parts : ( i ) a wave part , where the disc response comes in the form of spiral density waves propagating throughout the disc from the planet location , and ( ii ) a part confined in a narrow region around the planet s orbital radius , called the planet s horseshoe region , where disc material executes horseshoe turns relative to the planet .an illustration of both perturbations is given in fig .[ fig : overview ] .we will deal with each of them separately below . for simplicity, we will focus on a two - dimensional disc , characterised by vertically averaged quantities such as the surface density .we make use of cylindrical polar coordinates centred on the star .it has been known for a long time that a planet exerting a gravitational force on its parent disc can launch waves in the disc at lindblad resonances .these correspond to locations where the gas azimuthal velocity relative to the planet matches the phase velocity of acoustic waves in the azimuthal direction .this phase velocity depends on the azimuthal wavenumber , the sound speed and the epicyclic frequency , that is the oscillation frequency of a particle in the disc subject to a small radial displacement ( e.g. , * ? ? ?* ) . at large azimuthal wavenumber, the phase velocity tends to the sound speed , and lindblad resonances therefore pile up at , where is the semi - major axis of the planet and is the pressure scaleheight of the disc .the superposition of the waves launched at lindblad resonances gives rise to a one - armed spiral density wave , called the wake ( see fig .[ fig : overview ] ) .it is possible to solve the wave excitation problem in the linear approximation and calculate analytically the resulting torque exerted on the disc using the wkb approximation .progress beyond analytical calculations for planets on circular orbits has been made by solving the linear perturbation equations numerically , as done in 2d in .the three - dimensional case was tackled in , which resulted in a widely used torque formula valid for isothermal discs only .it is important to note that 2d calculations only give comparable results to more realistic 3d calculations if the gravitational potential of the planet is softened appropriately .typically , the softening length has to be a sizeable fraction of the disc scaleheight . usingthis 2d softened gravity approach , found that the dependence of the wave torque ( ) on disc gradients in a non - isothermal , adiabatic disc is where is the ratio of specific heats , and are negatives of the ( local ) power law exponents of the surface density and temperature ( , ) , and the torque is normalised by where is the planet - to - star mass ratio , is the aspect ratio and quantities with subscript refer to the location of the planet . note that in general we expect , i.e. both surface density and temperature decrease outward . 
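The piling up of Lindblad resonances near the planet can be made concrete with the Keplerian resonance condition alone. The sketch below assumes pure Keplerian rotation (so the epicyclic frequency equals the orbital frequency), ignores the pressure correction, and uses an assumed aspect ratio h = 0.05 only to express distances in units of the scaleheight; it lists the nominal inner and outer resonance radii for a few azimuthal wavenumbers m and shows them accumulating towards the planet's orbit.

```python
# Hedged sketch: nominal Lindblad resonance radii for a Keplerian disc
# (kappa = Omega, no pressure correction).  The aspect ratio h = 0.05 is an
# assumed value used only for comparison; with pressure included the
# resonances would pile up at roughly 2H/3 from the orbit instead of
# converging onto it.
import numpy as np

a = 1.0            # planet semi-major axis (code units)
h = 0.05           # assumed disc aspect ratio, for comparison only

for m in (2, 5, 10, 20, 50):
    r_out = a * ((m + 1) / m) ** (2.0 / 3.0)   # Omega(r) = m/(m+1) * Omega_p
    r_in = a * ((m - 1) / m) ** (2.0 / 3.0)    # Omega(r) = m/(m-1) * Omega_p
    print(f"m = {m:3d}:  r_in = {r_in:.4f}  r_out = {r_out:.4f}  "
          f"(outer resonance ~ {abs(r_out - a) / (h * a):.2f} H from the orbit)")
```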
for reasonable values of ,the wave torque on the planet is negative : it decreases the orbital angular momentum of the planet , and thus its semi - major axis ( the planet being on a circular orbit ) , leading to inward migration . the linear approximation remains valid as long as . for a disc around a solar mass star with means that the planet mass needs to be much smaller than .the factor in eq .( [ eqtl ] ) is due to the difference in sound speed between isothermal and adiabatic discs . for discs that can cool efficiently ,we expect the isothermal result to be valid ( ) , while for discs that can not cool efficiently , the adiabatic result should hold .it is possible to define an effective that depends on the thermal diffusion coefficient so that the isothermal regime can be connected smoothly to the adiabatic regime .a generalized expression for the lindblad torque has been derived by for 2d discs where the density and temperature profiles near the planet are not power laws , like at opacity transitions or near cavities .this generalized expression agrees well with eq .( [ eqtl ] ) for power - law discs .we stress that there is to date no general expression for the wave torque in 3d non - isothermal discs .the analytics is involved and it is difficult to measure the wave torque independently from the corotation torque in 3d numerical simulations of planet - disc interactions . the above discussion neglected possible effects of self - gravity . showed that in a self - gravitating disc , lindblad resonances get shifted towards the planet , thereby making the wave torque stronger .this was confirmed numerically by .the impact of a magnetic field in the disc and of possibly related mhd turbulence will be considered in section [ sec : turb ] .the normalisation factor sets a time scale for type i migration of planets on circular orbits : where denotes the mass of the central star .assuming a typical gas surface density of , , and , the migration time scale in years at 1 astronomical unit ( au ) is given approximately by .this means that an earth - mass planet at 1 au would migrate inward on a time scale of years , while the time scale for neptune would only be years .all these time scales are shorter than the expected disc life time of years , making this type of migration far too efficient for planets to survive on orbits of several au .a lot of work has been done recently on how to stop type i migration or make it less efficient ( see sections [ sec : coro ] and [ sec : coro_satu ] ) .most progress since ppv has been made in understanding the torque due to disc material that on average corotates with the planet , the corotation torque .it is possible , by solving the linearised disc equations in the same way as for the wave torque , to obtain a numerical value for the corotation torque .one can show that in the case of an isothermal disc , this torque scales with the local gradient in specific vorticity or vortensity .it therefore has a stronger dependence on background surface density gradients than the wave torque , with shallower profiles giving rise to a more positive torque .it was nevertheless found in that , except for extreme surface density profiles , the wave torque always dominates over the linear corotation torque ( ) . obtained , in the 2d softened gravity approach , for a non - isothermal disc where is the negative of the ( local ) power law exponent of the specific entropy . 
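To put numbers on this, the sketch below evaluates the torque normalisation and the corresponding semi-major-axis decay time for an Earth-mass and a Neptune-mass planet at 1 au. The disc parameters (surface density, aspect ratio, power-law indices) and the wave-torque coefficients are assumed, commonly quoted values rather than the ones stripped from the extracted text, so the printed timescales are indicative only; they do, however, reproduce the ordering discussed above, with heavier planets migrating faster.

```python
# Hedged sketch: order-of-magnitude type I migration timescale at 1 au.
# Disc parameters and the wave-torque coefficients below are assumed,
# commonly used values (a 2D-type fit Gamma ~ -(2.5 - 0.1*alpha + 1.7*beta)
# * Gamma_0 / gamma_eff); treat the output as indicative only.
import numpy as np

G, Msun, Mearth, AU = 6.674e-11, 1.989e30, 5.972e24, 1.496e11
yr = 3.156e7

def migration_time(Mp, Mstar=Msun, a=1.0 * AU, Sigma=1.7e4, h=0.05,
                   alpha=0.5, beta=1.0, gamma_eff=1.0):
    """Semi-major-axis decay time a/|da/dt| from the wave torque (seconds)."""
    Omega = np.sqrt(G * Mstar / a**3)
    q = Mp / Mstar
    Gamma0 = (q / h)**2 * Sigma * a**4 * Omega**2          # torque normalisation
    Gamma = -(2.5 - 0.1 * alpha + 1.7 * beta) / gamma_eff * Gamma0
    L = Mp * np.sqrt(G * Mstar * a)                        # orbital angular momentum
    return L / (2.0 * abs(Gamma))                          # da/dt = 2 a Gamma / L

for name, Mp in (("Earth", Mearth), ("Neptune", 17.15 * Mearth)):
    print(f"{name:8s}: tau_a ~ {migration_time(Mp) / yr:.2e} yr")
```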
for an isothermal disc , andthe corotation torque is proportional to the vortensity gradient .an alternative expression for the corotation torque was derived by by considering disc material on horseshoe orbits relative to the planet .this disc material defines the planet s horseshoe region ( see bottom panel in fig . [fig : overview ] ) .the torque on the planet due to disc material executing horseshoe turns , the horseshoe drag , scales with the vortensity gradient in an isothermal disc , just like the linear corotation torque .it comes about because material in an isothermal inviscid fluid conserves its vortensity .when executing a horseshoe turn , which takes a fluid element to a region of different vorticity because of the keplerian shear , conservation of vortensity dictates that the surface density should change . in a gas disc , this change in surface density is smoothed out by evanescent pressure waves .nevertheless , this change in surface density results in a torque being applied on the planet . for low - mass planets , for which , the half - width of the horseshoe region , , is it is only a fraction of the local disc thickness .the horseshoe drag , which scales as , therefore has the same dependence on and as the wave torque and the linear corotation torque .the numerical coefficient in front is generally much larger than for the linear corotation torque , however . since both approaches aim at describing the same thing , _ the _ corotation torque , it was long unclear which result to use .it was shown in that whenever horseshoe turns occur , the linear corotation torque gets replaced by the horseshoe drag , unless a sufficiently strong viscosity is applied .it should be noted that horseshoe turns do not exist within linear theory , making linear theory essentially invalid for low - mass planets as far as the corotation torque is concerned .the corotation torque in the form of horseshoe drag is very sensitive to the disc s viscosity and thermal properties near the planet . 
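Putting the wave torque and an unsaturated horseshoe drag together, the following sketch evaluates the total normalised torque as a function of the (negative) temperature gradient and reports where its sign changes. The numerical coefficients follow commonly quoted 2D expressions and are assumptions here (the exact values having been lost in extraction), and saturation, cooling and 3D effects are ignored, so this is an illustration of how the competition between the two torques works rather than a definitive formula.

```python
# Hedged sketch: sign of the total (wave + unsaturated horseshoe) torque as a
# function of the negative temperature gradient beta, at fixed surface
# density gradient alpha.  Coefficients follow commonly quoted 2D adiabatic
# expressions and are assumptions; saturation, cooling and 3D effects ignored.
import numpy as np

def total_torque_over_Gamma0(alpha, beta, gamma=1.4):
    """Normalised torque Gamma/Gamma_0 for an adiabatic 2D disc model."""
    xi = beta - (gamma - 1.0) * alpha             # negative entropy gradient
    wave = -(2.5 - 0.1 * alpha + 1.7 * beta) / gamma
    vortensity = 1.1 * (1.5 - alpha) / gamma      # barotropic horseshoe drag
    entropy = 7.9 * xi / gamma**2                 # entropy-related horseshoe drag
    return wave + vortensity + entropy

alpha = 0.5                                       # Sigma ~ r**-0.5 (assumed)
betas = np.linspace(0.0, 2.0, 201)
tq = np.array([total_torque_over_Gamma0(alpha, b) for b in betas])
crossing = betas[np.where(np.diff(np.sign(tq)) != 0)[0]]
print("torque changes sign (outward migration beyond) near beta =", crossing)
```

With these assumed coefficients a sufficiently steep temperature (and hence entropy) profile flips the total torque positive, which is the outward-migration regime discussed above.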
found that in 3d radiation hydrodynamical simulations , migration can in fact be directed _ outwards _ due to a strong positive corotation torque counterbalancing the negative wave torque over the short duration ( 15 planet orbits ) of their calculations .this was subsequently interpreted as being due to a radial gradient in disc specific entropy , which gives rise to a new horseshoe drag component due to conservation of entropy in an adiabatic disc .the situation is , however , not as simple as for the isothermal case .it turns out that the entropy - related horseshoe drag arises from the production of vorticity along the downstream separatrices of the planet s horseshoe region .conservation of entropy during a horseshoe turn leads to a jump in entropy along the separatrices whenever there is a radial gradient of entropy in the disc .this jump in entropy acts as a source of vorticity , which in turn leads to a torque on the planet .crucially , the amount of vorticity produced depends on the streamline topology , in particular the distance of the stagnation point to the planet .an analytical model for an adiabatic disc where the background temperature is constant was developed in , while used a combination of physical arguments and numerical results to obtain the following expression for the horseshoe drag : where the first term on the right hand side is the vortensity - related part of the horseshoe drag , and the second term is the entropy - related part .the model derived in , under the same assumptions for the stagnation point , yields a numerical coefficient for the entropy - related part of the horseshoe drag of instead of . comparing the linear corotation torque to the non - linear horseshoe drag see eqs .( [ eqtclin ] ) and ( [ eqtchs ] ) we see that both depend on surface density and entropy gradients , but also that the horseshoe drag is always stronger . in the inner regions of discs primarily heated by viscous heating , the entropy profile should decrease outward ( ) .the corotation torque should then be positive , promoting outward migration .the results presented above were for adiabatic discs , while the isothermal result can be recovered by setting and ( which makes as well ) .real discs are neither isothermal nor adiabatic .when the disc can cool efficiently , which happens in the optically thin outer parts , the isothermal result is expected to be valid ( or rather the _ locally _ isothermal result : a disc with a fixed temperature profile , which behaves slightly differently from a truly isothermal disc ; see * ? ? ?* ) . in the optically thick inner parts of the disc, cooling is not efficient and the adiabatic result should hold .an interpolation between the two regimes was presented in and . while density waves transport angular momentum away from the planet , the horseshoe region only has a finite amount of angular momentum to play with since there are no waves involved .sustaining the corotation torque therefore requires a flow of angular momentum into the horseshoe region : unlike the wave torque , the corotation torque is prone to _ saturation_. in simulations of disc - planet interactions , sustaining ( or unsaturating ) the corotation torque is usually established by including a navier - stokes viscosity . in a real disc , angular momentum transport is likely due to turbulence arising from the magneto - rotational instability ( mri ; see chapter by turner et al . 
and simulations of turbulent discs give comparable results to viscous discs as far as saturation is concerned (see also Section [sec:turb]). The main result for viscous discs is that the viscous diffusion timescale across the horseshoe region has to be shorter than the libration timescale for the horseshoe drag to be unsaturated. This also holds for non-isothermal discs, and expressions for the corotation torque have been derived in 2D disc models for general levels of viscosity and thermal diffusion or cooling. Results from 2D radiation hydrodynamic simulations, where the disc is self-consistently heated by viscous dissipation and cooled by radiative losses, confirm this picture. The available torque expressions were derived using 2D disc models, and 3D disc models are still required to obtain definitive, accurate predictions for the wave and corotation torques. Still, the predictions of these torque expressions are in decent agreement with the results of 3D simulations of planet-disc interactions. Simulations exploring the dependence of the total torque on density and temperature gradients in 3D locally isothermal discs found a total normalized torque consistent with this picture; for the planet mass and disc viscosity adopted in those simulations, the corotation torque reduces to the linear corotation torque, and summing Eqs. ([eqtl]) and ([eqtclin]) yields a total torque in good agreement with the numerical results, except for the coefficient in front of the temperature gradient. Other simulations were carried out for non-isothermal radiative discs. Varying the temperature power-law profile shows that the total torque does not always exhibit a linear dependence on the temperature gradient. Substantial discrepancies have been highlighted between published torque predictions; these originate from the larger half-width of the planet's horseshoe region adopted in one of the torque formulae, which was suggested as a proxy to assess the corotation torque in 3D. Adopting the same, standard value of the half-width given by Eq. ([eq_xs]), one can show that the different total torque expressions are actually in very good agreement. Both are less positive than the 3D numerical results; the remaining discrepancies originate from inherent torque differences between 2D and 3D disc models, and possibly from a non-linear boost of the positive corotation torque in the simulations.
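As a rough numerical illustration of the saturation condition stated above (viscous diffusion across the horseshoe region must be faster than libration), the sketch below compares the two timescales for an alpha-viscosity disc. It is an assumption-laden toy estimate, not a torque formula from this chapter; the half-width prefactor and disc parameters are the same illustrative choices as in the previous sketch.

import numpy as np

def unsaturated(q, h, alpha, r=1.0, Omega=1.0):
    """True if viscous diffusion across the horseshoe region beats libration."""
    x_s = 1.1 * r * np.sqrt(q / h)                 # horseshoe half-width (assumed prefactor)
    nu = alpha * h**2 * r**2 * Omega               # alpha prescription for the viscosity
    t_visc = x_s**2 / nu                           # viscous diffusion time across the horseshoe region
    t_lib = 8.0 * np.pi * r / (3.0 * Omega * x_s)  # horseshoe libration time
    return t_visc < t_lib, t_visc, t_lib

print(unsaturated(q=3e-5, h=0.05, alpha=1e-3))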
The thermal structure of protoplanetary discs is determined not only by viscous heating and radiative cooling, but also by stellar irradiation. The effects of stellar irradiation on the disc structure have been widely investigated using a 1+1D numerical approach, with the goal of fitting the spectral energy distributions of observed discs. Stellar heating dominates in the outer regions of discs, viscous heating in the inner regions. The disc's aspect ratio should then slowly increase with increasing distance from the star. This increase may have important implications for the direction and speed of type I migration which, as we have seen above, are quite sensitive to the aspect ratio (local value and radial profile). This is illustrated in Fig. [fig:migration], which displays the torque acting on type I migrating planets sitting in the midplane of their disc, with and without inclusion of stellar irradiation. The disc structure was calculated for standard opacities and a constant disc viscosity (thereby fixing the density profile). Two regions of outward migration originate from opacity transitions at the silicate condensation line (near 0.8 au) and at the water ice line (near 5 au). In this model, heating by stellar irradiation becomes prominent at large orbital radii. The resulting increase in the disc's aspect ratio profile gives a shallower entropy profile, and therefore a smaller (though still positive) entropy-related horseshoe drag (see Eq. [eqtchs]). Inclusion of stellar irradiation therefore reduces the range of orbital separations at which outward migration may occur. The outer edge of a region of outward migration is a location where type I planetary cores converge, which may lead to resonant captures and could enhance planet growth if enough cores are present to overcome the trapping process. Note in Fig. [fig:migration] that planets below a certain mass do not migrate outwards; for such planet masses, and for the disc viscosity in this model, the corotation torque reduces to the (smaller) linear corotation torque. Fig. [fig:migration] provides a good example of how sensitive the predictions of planet migration models can be to the structure of protoplanetary discs. Future observations of discs, for example with ALMA, will give precious information that will help constrain migration models. So far we have considered the migration of a planet in a purely hydrodynamical laminar disc where turbulence is modelled by an effective viscosity. As stressed in Section [sec:coro], a turbulent viscosity is essential to unsaturate the corotation torque. The most likely (and best studied) source of turbulence in protoplanetary discs is the magneto-rotational instability (MRI), which can amplify the magnetic field and drive MHD turbulence by tapping energy from the Keplerian shear. Furthermore, the powerful jets observed to be launched from protoplanetary discs are thought to arise from a strong magnetic field (likely through magneto-centrifugal acceleration). Magnetic fields and turbulence thus play a crucial role in the dynamics and evolution of protoplanetary discs, and need to be taken into account in theories of planet-disc interactions. _Stochastic torque driven by turbulence._ MHD turbulence excites non-axisymmetric density waves (see left panel in Fig. [fig:turbulence_magnetic]), which cause a fluctuating component of the torque on a planet in a turbulent disc.
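The fluctuating torque described next can be pictured with a toy random-walk model: stochastic kicks of either sign accumulate as the square root of time, while the steady type I torque produces a drift growing linearly with time, so the latter wins for massive planets and long times. The sketch below uses entirely made-up amplitudes and is only meant to illustrate this scaling.

import numpy as np

rng = np.random.default_rng(42)

def angular_momentum_change(nsteps=10000, dt=0.1, drift=-1e-4, kick=5e-3):
    """Cumulative torque impulse: steady type I drift plus stochastic kicks."""
    dJ = drift * dt + kick * np.sqrt(dt) * rng.standard_normal(nsteps)
    return np.cumsum(dJ)

J = angular_momentum_change()
print(J[-1], -1e-4 * 0.1 * 10000)   # net change vs. the deterministic drift alone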
This fluctuating torque changes sign stochastically, with a typical correlation time of a fraction of an orbit. Because the density perturbations are driven by turbulence rather than by the planet itself, the specific torque due to turbulence is independent of planet mass, while the (time-averaged) specific torque driving type I migration is proportional to the planet mass. Type I migration should therefore outweigh the stochastic torque for sufficiently massive planets and over long-term evolution, whereas stochastic migration arising from turbulence should dominate the evolution of planetesimals and possibly small-mass planetary cores. The stochastic torque adds a random-walk component to planet migration, which can be represented in a statistical sense by a diffusion process acting on the probability distribution of planets. A consequence of this is that a small fraction of planets may migrate to the outer parts of their disc even if the laminar type I migration is directed inwards. Note that the presence of a dead zone around the disc's midplane, where MHD turbulence is quenched due to the low ionization, reduces the amplitude of the stochastic torque. _Mean migration in a turbulent disc._ 2D hydrodynamical simulations of discs with stochastically forced waves have been carried out to mimic MHD turbulence and to study its effects on planet migration. These studies showed that, when averaged over a sufficiently long time, the torque converges toward a well-defined average value, and that the effects of turbulence on this average torque are well described by an effective turbulent viscosity and heat diffusion. In particular, the wave torque is little affected by turbulence, while the corotation torque can be unsaturated by this wake-like turbulence. A similar conclusion holds in simulations with fully developed MHD turbulence arising from the MRI, which, however, revealed an additional component of the corotation torque attributed to the effect of the magnetic field (see below). Note that mean migration in a turbulent disc has been conclusively studied only for fairly massive planets, as less massive planets require better resolution and longer averaging times. It is an open question whether the migration of smaller planets is affected by turbulence in a similar way, as a diffusion process. One may wonder indeed whether the diffusion approximation remains valid when the width of the planet's horseshoe region is a small fraction of the disc scale height, and therefore of the typical correlation length of the turbulence. _Wave torque with a magnetic field._ In addition to driving turbulence, the magnetic field has a direct effect on planet migration by modifying the response of the gas to the planet's potential. In particular, wave propagation is modified by the magnetic field and three types of waves exist: fast and slow magneto-sonic waves as well as Alfvén waves. For a strong azimuthal magnetic field, slow MHD waves are launched at the so-called magnetic resonances, which are located where the gas azimuthal velocity relative to the planet matches the phase velocity of slow MHD waves.
The angular momentum carried by the slow MHD waves gives rise to a new component of the torque. If the magnetic field strength decreases steeply outwards, this new torque is positive and may lead to outward migration. A vertical magnetic field also impacts the resonances, but its effect on the total torque remains to be established. In the inner parts of protoplanetary discs, the presence of a strong vertical magnetic field is needed to explain the launching of observed jets. A better understanding of the strength and evolution of such a vertical field, and of its effect on planet migration, will improve the description of planet migration near the central star. _Corotation torque with a magnetic field._ A strong azimuthal magnetic field can prevent horseshoe motions, so that the corotation torque is replaced by the torque arising from the magnetic resonances discussed in the previous paragraph. For weak enough magnetic fields, when the Alfvén speed is less than the shear velocity at the separatrices of the planet's horseshoe region, horseshoe motions take place and suppress the magnetic resonances. 2D laminar simulations with effective viscosity and resistivity show that advection of the azimuthal magnetic field along horseshoe trajectories leads to an accumulation of magnetic field near the downstream separatrices of the horseshoe region. This accumulation in turn leads to an under-density at the same location to ensure approximate pressure balance (see right panel in Fig. [fig:turbulence_magnetic]). The results of these laminar simulations agree very well with those of MHD turbulent simulations. A rear-front asymmetry in the magnetic field accumulation inside the horseshoe region gives rise to a new component of the corotation torque, which may cause outward migration even if the magnetic pressure is less than one percent of the thermal pressure. This new magnetic corotation torque could take over from the entropy-related corotation torque to sustain outward migration in the radiatively efficient outer parts of protoplanetary discs. Future studies should address the behaviour of the corotation torque in the dead zone, and in regions of the disc threaded by a vertical magnetic field. Also, other non-ideal MHD effects, such as the Hall effect and ambipolar diffusion, can have a significant impact on MRI turbulence and on the disc structure. These non-ideal MHD effects still need to be explored in the context of planet-disc interactions. We have so far considered planet-disc interactions for low-mass planets on circular orbits. Interaction between two or more planets migrating in their parent disc may increase eccentricities and inclinations (see Section [sec:multi]). We examine below the orbital evolution of protoplanets on eccentric or inclined orbits due to planet-disc interactions. In the limit of small eccentricities, it can be shown that the effect of the disc is to damp the eccentricity of type I migrating planets. Similar arguments can be made for inclined low-mass planets, for which planet-disc interactions damp the inclination in time.
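A rough numerical illustration of how fast this damping is, compared with migration, is sketched below. The characteristic time corresponds to the normalization introduced in the next paragraph (Eq. [eq:tau0]); the numerical coefficients (about 0.78 for eccentricity and 0.54 for inclination) are the commonly quoted 3D linear values and are assumptions here, as is the h^-2 scaling of the migration time.

import numpy as np

def t_wave(q, sigma_a2, h, Omega=1.0):
    """Characteristic time (M*/Mp)*(M*/(Sigma a^2))*h^4/Omega, with sigma_a2 = Sigma a^2 / M*."""
    return h**4 / (q * sigma_a2 * Omega)

q, h = 3e-5, 0.05
t0 = t_wave(q, sigma_a2=2e-3, h=h)
t_ecc = t0 / 0.78          # eccentricity damping time (assumed coefficient)
t_inc = t0 / 0.54          # inclination damping time (assumed coefficient)
t_mig = t0 / h**2          # order-of-magnitude type I migration time
print(t_ecc / t_mig, t_inc / t_mig)   # both of order h^2: damping is much faster than migration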
Analytic expressions have been derived for the eccentricity damping timescale in 2D, and for the eccentricity and inclination damping timescales in 3D, in units of the characteristic time given by Eq. ([eq:tau0]). Note that, the disc aspect ratio being small, the damping of eccentricity and inclination is much faster than migration. A single low-mass planet should therefore migrate on a circular and coplanar orbit. The above results were confirmed by hydrodynamical simulations. Eccentricity and inclination damping rates can alternatively be derived using a dynamical friction formalism. We have stressed above that the migration of low-mass planets can be directed outwards if the disc material in the horseshoe region exerts a strong positive corotation torque on the planet. Numerical simulations have shown that, for small eccentricities, the magnitude of the corotation torque decreases with increasing eccentricity, which restricts the possibility of outward migration to planets with eccentricities below a few percent. A consequence of this restriction is to shift regions of convergent migration to smaller radii for mildly eccentric low-mass planets (see the last two paragraphs in Section [sec:coro]). A very recent study has explored in detail the influence of orbital eccentricity on the corotation torque for a range of disc and planet parameters. It indicates that the corotation torque decreases with increasing eccentricity because the width of the horseshoe region narrows as the eccentricity increases. Furthermore, by fitting the results of a suite of simulations with an analytic function, it finds that the corotation torque decays with eccentricity from its circular-orbit value, with an e-folding eccentricity that scales linearly with the disc aspect ratio at the planet's orbital radius (the fitted expression provides a good overall fit to the simulations). In addition, the Lindblad torque becomes more positive with increasing eccentricity. This sign reversal does not necessarily lead to outward migration, as the torque on an eccentric planet changes both the planet's semi-major axis and its eccentricity. Orbital migration also changes when a low-mass planet acquires some inclination. The larger the inclination, the less time the planet interacts with the dense gas near the disc midplane, and the smaller the corotation torque and the migration rate. Thus, inclined low-mass planets can only undergo outward migration if their inclination remains below a few degrees. The wave torque described in Section [sec:lindblad] is the sum of a positive torque exerted on the planet by its inner wake, and a negative torque exerted by its outer wake. Equivalently, the planet gives angular momentum to the outer disc (the disc beyond the planet's orbital radius), and takes some from the inner disc. If the torque exerted by the planet on the disc is larger in absolute value than the viscous torque responsible for disc spreading, an annular gap is carved around the planet's orbit. In this simple one-dimensional picture, the gap width is the distance from the planet at which the planet torque and the viscous torque balance each other. It has been shown, however, that the disc material near the planet also feels a pressure torque that comes about because of the non-axisymmetric density perturbations induced by the planet.
In a steady state, the torques due to pressure, viscosity and the planet all balance, and this condition determines the gap profile. The half-width of a planetary gap hardly exceeds about twice the planet's Hill radius. A gap should therefore be understood as a narrow depleted annulus between an inner disc and an outer disc; the width of the gap carved by a Jupiter-mass planet does not exceed a small fraction of the star-planet orbital separation. Based on a semi-analytic study of the above torque balance, it was shown that a planet opens a gap, with a bottom density less than about a tenth of the background density, if the dimensionless quantity defined in Eq. ([eq:gap-criterion]) is of order unity or less. In Eq. ([eq:gap-criterion]), the Reynolds number involves the disc's kinematic viscosity. Adopting the widely used alpha prescription for the disc viscosity, the gap-opening criterion takes the form of Eq. ([eq:gap-criterion2]). This criterion is essentially confirmed by simulations of MRI-turbulent discs, although the width and depth of the gap can be somewhat different from an equivalent viscous disc model. We point out that Eq. ([eq:gap-criterion]) can be solved analytically: the minimum planet-to-star mass ratio for opening a deep gap as defined above is given by Eq. ([eq:qcrit]), where all quantities are to be evaluated at the planet's orbital radius. Taking a Sun-like star, Eq. ([eq:qcrit]) shows that in the inner regions of protoplanetary discs, planets with a mass on the order of that of Jupiter, or larger, will open a deep gap. In the dead zone of a protoplanetary disc, where the viscosity can be one or two orders of magnitude smaller, smaller planet masses will open a gap. At larger radii, where planets could form by gravitational instability, the aspect ratio is probably near 0.1, and only planets above about 10 Jupiter masses could open a gap (but see Section [sec:long]). Eqs. ([eq:gap-criterion2]) and ([eq:qcrit]) give an estimate of the minimum planet-to-star mass ratio for which a deep gap is carved. Recent simulations indicate that similarly deep gaps could be opened by mass ratios smaller than given by these equations in discs with very low viscosities, such as those expected in dead zones. Note also that planets that only marginally satisfy the criterion can open a gap with a density contrast of a few tens of percent. This may concern planets of a few Earth to Neptune masses in dead zones.
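The gap-opening criterion can be evaluated quickly for given parameters. The sketch below assumes the usual form of the criterion, P = 3H/(4 R_H) + 50/(q Re) of order unity or less, with Re = r^2 Omega / nu; this should be checked against the chapter's Eqs. ([eq:gap-criterion]) and ([eq:gap-criterion2]), and the sample numbers are illustrative.

import numpy as np

def gap_parameter(q, h, alpha, r=1.0, Omega=1.0):
    """Gap-opening parameter; values of order unity or below indicate a deep gap."""
    H = h * r
    R_H = r * (q / 3.0)**(1.0 / 3.0)      # Hill radius
    nu = alpha * h**2 * r**2 * Omega      # alpha viscosity
    Re = r**2 * Omega / nu                # Reynolds number
    return 0.75 * H / R_H + 50.0 / (q * Re)

# A Jupiter-mass planet (q = 1e-3) with h = 0.05 and alpha = 1e-3:
print(gap_parameter(1e-3, 0.05, 1e-3))    # ~0.7, i.e. a deep gap is expected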
The formation of an annular gap around a planet splits the protoplanetary disc into an inner disc and an outer disc, which both repel the planet towards the centre of the gap. While the planet is locked in its gap, it continues to migrate as the disc accretes onto the star. Said differently, the planet follows the migration trajectory imposed by the disc, and the migration timescale is then the viscous accretion time. However, if the planet is much more massive than the gas outside the gap, the planet will slow down the disc's viscous accretion. This occurs when the planet mass exceeds the mass of the outer disc just outside the gap. When this happens, the inner disc still accretes onto the star, while the outer disc is held back by the planet. This leads to the partial (or total) depletion of the inner disc (see below). The migration timescale is then set by the balance between the viscous torque and the planet's inertia. The above considerations show that the timescale for type II migration is given by Eq. ([eq:tau2]), which applies when a planet carves a deep gap. However, when the gap-opening criterion is only marginally satisfied and the density inside the gap remains an appreciable fraction of the background density, the planet and the gas are no longer decoupled. The gas in the gap exerts a corotation torque on the planet, which is usually positive. Therefore, the migration of planets that marginally satisfy the gap-opening criterion can be slower than standard type II migration. In particular, if the gap density is large enough, and depending on the local density and temperature gradients, the corotation torque can overcome the viscous torque and lead to outward migration. Note that the drift of the planet relative to the gas may lead to a positive feedback on migration (see Section [sec:type3]). Recently, gap structures similar to those predicted by planet-disc interactions have been observed in the discs around HD 169142 and TW Hya. The gap around HD 169142 is located about 50 au from the star, and seems to be quite deep (the surface brightness at the gap location is substantially decreased). The gap around TW Hya is located several tens of au from the star, and is much shallower (the decrease in surface brightness is modest). If confirmed, these would be the first observations of gaps in protoplanetary discs that could be carved by a planet. Cavities have been observed in several circumstellar discs in the past few years. Contrary to a gap, a cavity is characterised by the absence of an inner disc. In each of these transition discs, observations indicate a lack of dust below some threshold radius that extends from a few au to a few tens of au. This lack of dust is sometimes considered to track a lack of gas, but observational evidence for accretion onto the star in some cases shows that the cavities may be void of dust but not of gas. A narrow ring of hot dust is sometimes detected in the innermost regions of these discs, and this structure is often claimed to be the signpost of a giant planet carving a big gap in the disc.
It should be kept in mind, however, that the gap opened by a planet is usually much narrower than these observed depletions (see Section [sec:gapopening]), and rarely (completely) gas proof. There is here a missing ingredient between the numerical simulations of gas discs and the observations. The outer edge of the gap opened by a giant planet corresponds to a pressure maximum. Dust decoupled from the gas tends to accumulate there, and does not drift through the gap. For typical disc densities and temperatures between 1 and 10 au, decoupling is most efficient for dust particles of a few centimetres to a metre in size. Consequently, gaps appear wider in the dust component than in the gas component, and the inner disc could be void of dust even if not of gas. Note that the smallest dust grains, which are well coupled to the gas, should have a distribution identical to that of the gas and be present inside the cavity. Therefore, the observation of a cavity should depend on the size of the tracked dust, that is, on the wavelength. Interpreting observations thus requires decoupling the dynamics of the gas and dust components. The coming years should see exciting, high-resolution observations of protoplanetary discs, with for instance ALMA and MATISSE. The formation of a circumplanetary disc accompanies the formation of a gap. The structure of a circumplanetary disc and the gas accretion rate onto the planet have been investigated through 2D hydrodynamical simulations, 3D hydrodynamical simulations, and more recently through 3D MHD simulations. The MHD simulations find accretion rates in good agreement with previous 3D hydrodynamical calculations of viscous laminar discs. Also, in agreement with previous assessments in non-magnetic disc environments, they find that the accretion flow in the planet's Hill sphere is intrinsically three-dimensional, and that the flow of gas toward the planet moves mainly from high latitudes, rather than along the midplane of the circumplanetary disc. Another issue is how a circumplanetary disc impacts migration. Being bound to the planet, the circumplanetary disc migrates at the same drift rate as the planet. Issues arise in hydrodynamical simulations that discard the disc's self-gravity, as in this case the wave torque can only apply to the planet, and not to its circumplanetary material. This causes the circumplanetary disc to artificially slow down migration, akin to a ball and chain. This issue can be particularly important for type III migration (see Section [sec:type3]). A simple workaround in simulations of non self-gravitating discs is to exclude the circumplanetary disc from the calculation of the torque exerted on the planet.
Another suggested solution is to imprint on the circumplanetary disc the acceleration felt by the planet. The early evolution of the solar system in the primordial gaseous solar nebula should have led the four giant planets to be in a compact resonant configuration, on quasi-circular and coplanar orbits. Their small but non-zero eccentricities and relative inclinations are supposed to have been acquired after dispersal of the nebula, during a late global instability in which Jupiter and Saturn crossed their 2:1 mean-motion resonance. This is the so-called Nice model. Many massive exoplanets have much higher eccentricities than the planets in the solar system. Also, recent measurements of the Rossiter-McLaughlin effect have revealed several hot Jupiters with large obliquities, which indicates that massive planets could also acquire a large inclination during their evolution. Planet-disc interactions usually tend to damp the eccentricity and inclination of massive gap-opening planets, and expressions for the corresponding damping timescales are available in the literature. In particular, this would indicate that type II migration should only produce hot Jupiters on circular and non-inclined orbits. There are, however, circumstances in which planets may acquire fairly large eccentricities and obliquities while embedded in their disc, which we summarise below. _Eccentricity._ The 3:1 mean-motion resonance between a planet and the disc excites eccentricity. Thus, if a planet carves a gap that is wide enough for the eccentricity-pumping effect of the 3:1 resonance to overcome the damping effect of all closer resonances, the planet eccentricity will grow (the disc eccentricity will grow as well). Hydrodynamical simulations show that planet-disc interactions can efficiently increase the eccentricity of sufficiently massive giant planets, and fairly large eccentricity values have been obtained in such calculations. _Obliquity._ Planets formed in a disc could have non-zero obliquities if the rotation axes of the star and the disc are not aligned. Several mechanisms causing misalignment have been proposed. One possibility is that the protoplanetary disc had material with differing angular momentum directions added to it at different stages of its life. Alternatively, in dense stellar clusters, the interaction with a temporary stellar companion could tilt the disc's rotation axis. However, both mechanisms should be extended by including the interaction between the disc and the magnetic field of the central star. This interaction might tilt the star's rotation axis, and lead to misalignments even in discs that are initially aligned with their star. Under most circumstances, such as those presented in the previous sections, the migration rate of a planet is set by the value of the disc torque, which depends on the local properties of the underlying disc, but not on the migration rate itself. There are some circumstances, however, in which the torque also depends on the drift rate, in which case one has the constituting elements of a feedback loop, with potentially important implications for migration. This, in particular, is the case for giant or sub-giant planets embedded in massive discs, which deplete (at least partially) their horseshoe region. The corotation torque comes from material that executes horseshoe U-turns relative to the planet. Most of this material is trapped in the planet's horseshoe region.
However, if there is a net drift of the planet with respect to the disc, material outside the horseshoe region will execute a single horseshoe U-turn relative to the planet, and by doing so will exchange angular momentum with the planet. This drift may come about because of migration, and/or because the disc has a radial viscous drift. The torque arising from orbit-crossing material naturally scales with the mass flow rate across the orbit, which depends on the relative drift of the planet and the disc. It thus depends on the migration rate. For the sake of definiteness, we consider hereafter a planet moving inwards relative to the disc, but it should be kept in mind that the processes at work here are essentially reversible, so that they also apply to an outward-moving planet. The picture above shows that the corotation torque on a planet migrating inwards has three contributions:
* (i) The contribution of the inner disc material flowing across the orbit. As this material gains angular momentum, it exerts a negative torque on the planet which scales with the drift rate. It therefore tends to increase the migration rate, and yields a positive feedback on the migration.
* (ii) The contribution of the coorbital material that exerts a horseshoe drag on the planet; this corresponds to the same horseshoe drag as if there were no drift between the disc and the planet (see Section [sec:coro]).
* (iii) Furthermore, as the material in the horseshoe region can be regarded as trapped in the vicinity of the planetary orbit, it has to move inward at the same rate as the planet. The planet then exerts on this material a negative torque, which scales with the drift rate. By the law of action and reaction, this trapped material exerts a second, positive component of the horseshoe drag on the planet that scales with the drift rate. The contribution of the drifting trapped horseshoe material thus yields a negative feedback on the migration.
If the surface density profile of the disc is unaltered by the planet, that is, if the angular momentum profile of the disc is unaltered, then contributions (i) and (iii) exactly cancel out. In that case, the net corotation torque reduces to contribution (ii), and the corotation torque expressions presented in Section [sec:coro], which were derived assuming the planet is on a fixed circular orbit, are valid regardless of the migration rate. Conversely, if the planet depletes, at least partly, its horseshoe region, contributions (i) and (iii) do not cancel out, and the net corotation torque depends on the migration rate. The above description shows that the coorbital dynamics causes a feedback on migration when planets open a gap around their orbit. There are two key quantities to assess in order to determine when the feedback causes the migration to run away. The first quantity is the coorbital mass deficit: the mass that should be added to the planet's horseshoe region so that it has the average surface density of the orbit-crossing flow. The second quantity is the sum of the planet mass and the circumplanetary disc mass. Two regimes may occur: if the coorbital mass deficit is smaller than this combined mass, the coorbital dynamics accelerates the migration, but there is no runaway; on the contrary, if the coorbital mass deficit exceeds it, migration runs away.
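The two regimes above amount to a single comparison, which the trivial sketch below encodes; it is only a restatement of the criterion in code, with the coorbital mass deficit treated as a given input.

def migration_regime(coorbital_mass_deficit, M_planet, M_cpd=0.0):
    """Compare the coorbital mass deficit with the planet + circumplanetary disc mass."""
    if coorbital_mass_deficit < M_planet + M_cpd:
        return "accelerated but bounded migration"
    return "runaway (type III) migration"

print(migration_regime(coorbital_mass_deficit=2e-4, M_planet=1e-3))   # masses in stellar units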
A more rigorous derivation shows that the coorbital mass deficit actually features the inverse vortensity in place of the surface density, but the same qualitative picture holds. When migration runs away, the drift rate grows exponentially until the so-called fast regime is reached, in which the planet migrates a sizable fraction of the horseshoe width in less than a libration time. When that occurs, the drift rate settles to a finite, large value, which defines the regime of type III migration. As stressed earlier, this drift can be either outward or inward. The occurrence of type III migration as a function of planet mass, disc mass, aspect ratio and viscosity has been discussed in detail in the literature. The typical migration timescale associated with type III migration, which depends on the disc mass, is of the order of a few horseshoe libration times. For the large planetary masses prone to type III migration, which have wide horseshoe regions and hence short libration times, this typically amounts to a few tens of orbits. Type III migration can in principle be outwards. For this to happen, an initial seed of outward drift needs to be applied to the planet. Nevertheless, all outward episodes of type III migration reported so far have been found to stall and revert back to inward migration. Interestingly, gravitationally unstable outer gap edges may provide a seed of outward type III migration and can bring massive planets to large orbital distances. An alternative launching mechanism for outward type III migration is the outward migration of a resonantly locked pair of giant planets (see also Section [sec:ms01]), which is found to trigger outward runaways at later times. The migration regime depends on how the coorbital mass deficit compares with the mass of the planet and its circumplanetary disc. It is thus important to describe correctly the build-up of the circumplanetary material and the effects of this mass on the dynamics of the gas and the planet. Nested-grid calculations find that the mass reached by the circumplanetary disc (CPD) depends heavily on the resolution for an isothermal equation of state. This problem can be circumvented by adopting an equation of state that depends not only on the distance to the star but also on the distance to the planet, in order to prevent an artificial flooding of the CPD and to obtain numerical convergence at high resolution. Furthermore, as we have seen in Section [sec:cpd], in simulations that discard self-gravity, the torque exerted on the planet should exclude the circumplanetary disc.
Indeed, taking that torque into account may inhibit type III migration. Type III migration has made it possible to rule out a recent model of the solar nebula that is more compact than the standard model: Jupiter would be subject to type III migration in Desch's model and would not survive over the disc's lifetime. The occurrence of type III migration may thus provide an upper limit to the surface density of disc models in systems known to harbour giant planets at sizable distances from their host stars. It has been pointed out above that the exact expression of the coorbital mass deficit involves the inverse vortensity rather than the surface density across the horseshoe region. This has some importance in low-viscosity discs: vortensity can be regarded as materially conserved except during the passage through the shocks triggered by a giant planet, where vortensity is gained or destroyed. The corresponding vortensity perturbation can be evaluated analytically. Eventually, the radial vortensity distribution around a giant planet exhibits a characteristic two-ring structure at the edges of the gap, which is unstable and prone to the formation of vortices. The resulting vortensity profile determines the occurrence of type III migration. If vortices form at the gap edges, the planet can undergo non-smooth migration with episodes of type III migration that bring the planet inwards over a few Hill radii (a distance that is independent of the disc mass), followed by a stalling and a rebuilding of the vortensity rings. The above results have been obtained for fixed-mass planets. However, for the high gas densities required by the type III migration regime, rapid growth may be expected. 3D hydrodynamical simulations with simple prescriptions for gas accretion find that a planetary core undergoing rapid runaway gas accretion does not experience type III migration, but goes instead from the type I to the type II migration regime. Future progress will be made using more realistic accretion rates, like those obtained with 3D radiation-hydrodynamics calculations. It has been shown that type I migration in adiabatic discs could feature a kind of feedback reminiscent of type III migration. The reason is that in adiabatic discs, the corotation torque depends on the position of the stagnation point relative to the planet (see Section [sec:coro]). This position, in turn, depends on the migration rate, so that here as well there is a feedback of the coorbital dynamics on migration. This feedback on type I migration is found to be negative, and to have only a marginal impact on the drift rate for typical disc masses. So far, we have examined the orbital evolution of a single planet in a protoplanetary disc, while a substantial fraction of confirmed exoplanets reside in multi-planetary systems (see exoplanets.org). In such systems, the gravitational interaction between planets can significantly influence the planet orbits, leading, in particular, to important resonant processes which we describe below. A fair number of multi-planetary systems are known to have at least two planets in mean-motion resonance.
For example, 32 resonant or near-resonant systems have been listed, with many additional Kepler candidate systems. The mere existence of these resonant systems is strong evidence that dissipative mechanisms changing planet semi-major axes must have operated. The probability of forming resonant configurations in situ is likely small. We consider a system of two planets that undergo migration in their disc. If the migration drift rates are such that the mutual separation between the planets increases, i.e. when divergent migration occurs, the effects of planet-planet interactions are small and no resonant capture occurs. Conversely, resonant capture occurs for convergent migration under quite general conditions, which we discuss below. Planets can approach each other from widely separated orbits if they have fairly different migration rates, or if they form in close proximity and are sufficiently massive to carve a common gap (see Fig. [fig:multi-planet]). In the latter case, the outer disc pushes the outer planet inwards and the inner disc pushes the inner planet outwards, causing convergence. If the planets approach a commensurability, where the orbital periods are in the ratio of two integers, orbital eccentricities will be excited and resonant capture may occur. Whether or not resonant capture occurs hinges primarily on the time the planets take to cross the resonance. Capture requires the convergent migration timescale to be longer than the libration timescale associated with the resonance width. Otherwise, the two-planet system does not have enough time for the resonance to be excited: the two planets will pass through the resonance and no capture will occur. Due to the sensitivity of resonant capture to the migration process, the interpretation of observed resonant planetary systems provides important clues about the efficiency of disc-planet interactions. A mean-motion resonance between two planets occurs when their orbital frequencies satisfy the commensurability condition of Eq. ([eq:resonance]), where subscripts 1 and 2 refer to quantities of the inner and outer planets, respectively. In Eq. ([eq:resonance]), the two integers are positive, and their difference denotes the order of the resonance. The condition for a mean-motion resonance can be recast in terms of the planets' semi-major axes. Formally, a system is said to be in a given mean-motion resonance if at least one of the resonant angles is librating, i.e. has a dynamical range smaller than a full circle. The resonant angles are combinations of the planets' longitudes and the longitudes of their pericentres. The difference in pericentre longitudes is often used to characterise resonant behaviour. For instance, when that quantity librates, the system is said to be in apsidal corotation. This means that the two apsidal lines of the resonant planets are always nearly aligned, or maintain a constant angle between them. This is the configuration that the planets end up in when they continue their inward migration after capture into resonance. Several bodies in our solar system are in mean-motion resonance. For example, the Jovian satellites Io, Europa and Ganymede are engaged in the so-called 4:2:1 Laplace resonance, while Neptune and Pluto (as well as the Plutinos) are in a 3:2 mean-motion resonance. However, out of the eight planets in the solar system, not a single pair is presently in a mean-motion resonance.
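For concreteness, the following sketch writes down one common convention for a first-order resonant angle and a simple proximity check on the period ratio; the exact definition and notation should be read off Eq. ([eq:resonance]) and the resonant-angle definitions above, so treat this as an illustrative convention rather than the chapter's.

import numpy as np

def resonant_angle(lam_in, lam_out, varpi, p, q):
    """One resonant angle of a (p+q):p resonance: (p+q)*lambda_out - p*lambda_in - q*varpi."""
    return np.mod((p + q) * lam_out - p * lam_in - q * varpi, 2.0 * np.pi)

def near_commensurability(P_in, P_out, p, q, tol=0.02):
    """True if the period ratio lies within `tol` of the nominal (p+q)/p value."""
    return abs(P_out / P_in * p / (p + q) - 1.0) < tol

print(near_commensurability(P_in=10.0, P_out=20.3, p=1, q=1))   # close to the 2:1 resonance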
According to the Nice model for the early solar system, the situation might have been different in the past (see Section [sec:type2ei]). The question of which resonance the system may end up in depends on the masses, the relative migration speed, and the initial separation of the planets. Because the 2:1 resonance is the first first-order resonance that two initially well-separated planets encounter during convergent migration, it is common for planets to become locked in that resonance, provided convergent migration is not too rapid. After a resonant capture, the eccentricities increase and the planets generally migrate as a joint pair, maintaining a constant orbital period ratio (see, however, the last two paragraphs in Section [sec:johnlow]). Continued migration in resonance drives the eccentricities up, and they would increase to very large values in the absence of damping agents, possibly rendering the system unstable. The eccentricity damping rate by the disc is often parametrized in terms of the migration rate through a dimensionless constant, usually denoted K. For low-mass planets, typically below 10 to 20 Earth masses, eccentricity damping occurs much more rapidly than migration: K is of order a few hundred in locally isothermal discs (see Section [sec:ei]) and may take even larger values in radiative discs. High-mass planets create gaps in their disc (see Section [sec:type2]) and the eccentricity damping is then strongly reduced. For the massive planets in the GJ 876 planetary system, three-body integrations in which migration is applied to the outer planet only showed that a suitable value of K can reproduce the observed eccentricities. If massive planets orbit in a common gap, as in Fig. [fig:multi-planet], the disc parts on each side of the gap may act as damping agents, and 2D hydrodynamical simulations show that such a configuration gives K of about 5 to 10. Disc turbulence adds a stochastic component to convergent migration, which may prevent resonant capture or the maintenance of a resonant configuration. This has been examined in N-body simulations with prescribed models of disc-planet interactions and of disc turbulence, and in hydrodynamical simulations of planet-disc interactions with simplified turbulence models. Application of the above considerations leads to excellent agreement between theoretical evolution models of resonant capture and the best observed systems, in particular GJ 876.
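The parametrization just described is easy to emulate in the kind of N-body modelling mentioned above. The sketch below evolves a single orbit with exponential migration and eccentricity damping tied to it through the constant K (t_e = t_a / K); the numbers and the purely exponential treatment are assumptions for illustration.

import numpy as np

def migrate_and_damp(a0, e0, t_a, K, dt, nsteps):
    """Exponential decay of a and e with timescales t_a and t_e = t_a / K."""
    a, e = a0, e0
    t_e = t_a / K
    for _ in range(nsteps):
        a *= np.exp(-dt / t_a)
        e *= np.exp(-dt / t_e)
    return a, e

# A low-mass planet with K ~ 100: the eccentricity is damped long before the orbit shrinks much.
print(migrate_and_damp(a0=1.0, e0=0.1, t_a=1e5, K=100.0, dt=10.0, nsteps=1000))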
Because resonant systems most probably echo an early formation via disc-planet interactions, the present dynamical properties of observed systems can serve as an indicator of their evolutionary history. This has been noticed recently in the system HD 45364, where two planets in 3:2 resonance have been discovered. Fits to the data give the planets' semi-major axes and eccentricities. Non-linear hydrodynamic simulations of disc-planet interactions have been carried out for this system. For suitable disc parameters, the planets enter the 3:2 mean-motion resonance through convergent migration. After the planets reached their observed semi-major axes, a theoretical radial-velocity curve was calculated. Surprisingly, even though the simulated eccentricities differ significantly from the data fits, the theoretical model fits the observed data points as well as the published best-fit solution. The pronounced dynamical differences between the two orbital fits, which both match the existing data, can only be resolved with more observations. Hence, HD 45364 serves as an excellent example of a system in which a greater quantity and quality of data will constrain theoretical models of this interacting multi-planetary system. Another interesting observational aspect where convergent migration due to disc-planet interactions may have played a prominent role is the high mean eccentricity of extrasolar planets. As discussed above, for single planets, disc-planet interactions nearly always lead to eccentricity damping, or, at best, to modest growth for planets of a few Jupiter masses (see Section [sec:type2ei]). Strong eccentricity excitation may occur, however, during convergent migration and resonant capture of two planets. Convergent migration of three massive planets in a disc may lead to close encounters that significantly enhance planet eccentricities and inclinations. In the latter case, a sufficiently large inclination between the planetary orbit and the disc may drive Kozai cycles. Under disc-driven Kozai cycles, the eccentricity increases to large values and undergoes damped oscillations with time in anti-phase with the inclination. As the disc slowly dissipates, damping will be strongly reduced. This may leave a resonant system in an unstable configuration, triggering dynamical instabilities. Planet-planet scattering may then pump eccentricities to much higher values. This scenario has been proposed to explain the observed broad distribution of exoplanet eccentricities. Note, however, that the initial conditions taken in these studies are unlikely to result from the evolution of planets in a protoplanetary disc. N-body simulations with two or three giant planets and prescribed convergent migration, but no damping, have found that, as the eccentricities rise to about 0.4 due to resonant interactions, planet inclinations could also be pumped under some conditions. The robustness of this mechanism needs to be checked by including eccentricity and inclination damping by the disc. Disc-planet interactions of several protoplanets undergoing type I migration lead to crowded systems. Such simulations find that protoplanets often form resonant groups with first-order mean-motion resonances having commensurabilities between 3:2 and 8:7. Strong eccentricity damping allows these systems to remain stable during their migration.
In general terms, these simulated systems are reminiscent of the low-mass planet systems discovered by the Kepler mission, like Kepler-11. The proximity of the planets to the star in that system, and their near coplanarity, hint strongly at a scenario of planet formation and migration in a gaseous protoplanetary disc. It has been shown that a pair of close giant planets can migrate outwards. In this scenario, the inner planet is massive enough to open a deep gap around its orbit and undergoes type II migration. The outer, less massive planet opens a partial gap and migrates inwards faster than the inner planet. If convergent migration is rapid enough for the planets to cross the 2:1 mean-motion resonance and to lock into the 3:2 resonance, the planets will merge their gaps and start migrating outwards together. The inner planet being more massive than the outer one, the (positive) torque exerted by the inner disc is larger than the (negative) torque exerted by the outer disc, which results in the planet pair moving outwards. Note that to maintain joint outward migration in the long term, gas in the outer disc has to be funnelled to the inner disc upon embarking on horseshoe trajectories relative to the planet pair. Otherwise, gas would pile up at the outer edge of the common gap, much like a snow-plough, and the torque balance as well as the direction of migration would ultimately reverse. The above mechanism of joint outward migration of a resonant planet pair relies on an asymmetric density profile across the common gap around the two planets. This mechanism is therefore sensitive to the disc's aspect ratio, its viscosity, and the mass ratio between the two planets, since they all impact the density profile within and near the common gap. If, for instance, the outer planet is too small, the disc density beyond the inner planet's orbit will be too large to reverse the migration of the inner planet (and, thus, of the planet pair). Conversely, if the outer planet is too big, the torque imbalance on each side of the common gap will favour joint inward migration. Numerical simulations show that joint outward migration works best when the mass ratio between the inner and outer planets is comparable to that of Jupiter and Saturn. In particular, it was found that the Jupiter-Saturn pair could avoid inward migration and stay beyond the ice line during the gas disc phase. Their migration rate depends on the disc properties, but could be close to stationary for standard values. More recently, it has been proposed that Jupiter first migrated inwards in the primordial solar nebula down to 1.5 au, where Saturn caught up with it. Near that location, after Jupiter and Saturn had merged their gaps and locked into the 3:2 mean-motion resonance, both planets would have initiated joint outward migration until the primordial nebula dispersed. This scenario is known as _the Grand Tack_, and seems to explain the small mass of Mars and the distribution of the main asteroid belt. The previous section has reviewed the basics of, and recent progress on, planet-disc interactions. We continue in this section with a discussion of the role played by planet-disc interactions in the properties and architecture of observed planetary systems. Section [sec:short] starts with planets on short-period orbits. Emphasis is put on hot Jupiters, including those with large spin-orbit misalignments (Section [sec:johngiant]), and on the many low-mass candidate systems uncovered by
the Kepler mission (Section [sec:johnlow]). Section [sec:long] then examines how planet-disc interactions could account for the massive planets recently observed at large orbital separations by direct imaging techniques. Finally, Section [sect:migrat-obs] addresses how well global models of planet formation and migration can reproduce the statistical properties of exoplanets. The discovery of 51 Pegasi b, on a close-in orbit of about four days, as the first example of a hot Jupiter led to the general view that giant planets, which are believed to have formed beyond the ice line at a few au, must have migrated inwards to their present locations. Possible mechanisms for this include type II migration (see Section [sec:type2]) and either planet-planet scattering, or Kozai oscillations induced by a distant companion leading to a highly eccentric orbit which is then circularized as a result of tidal interaction with the central star. The relative importance of these mechanisms is a matter of continuing debate. The relationship between planet mass and orbital period for confirmed exoplanets on short-period orbits is shown in Fig. [fig:jp1]. The hot Jupiters are seen to be clustered in circular orbits at periods of a few days. For planets of lower mass, there is no corresponding clustering of orbital periods, indicating that this is indeed a feature associated with hot Jupiters. Measurements of the Rossiter-McLaughlin effect indicate that around one third of hot Jupiters orbit in planes that are significantly misaligned with the equatorial plane of the central star. This is not expected from disc-planet interactions leading to type II migration, and so has led to the alternative mechanisms being favoured. It has thus been proposed that hot Jupiters are placed in an isotropic distribution through dynamical interactions, and are then circularized by tides, with disc-driven migration playing a negligible role. In this picture, the large fraction of misaligned hot Jupiters around stars with effective temperatures above about 6250 K, and the large fraction of aligned hot Jupiters around cooler stars, are attributed to a very large increase in the effectiveness of the tidal processes causing alignment for the cooler stars. However, there are a number of indications that the process of hot Jupiter formation does not work in this way, and that a more gentle process such as disc-driven migration has operated on the distribution. We begin by remarking on the presence of significant eccentricities beyond the shortest periods in Fig. [fig:jp1]. The period range over which tidal effects can circularize the orbits is thus very limited. Giant planets on circular orbits with periods greater than 10 days, which are interior to the ice line, must have been placed there by a different mechanism. For example, 55 Cnc b, a 0.8 Jupiter-mass planet on a 14-day, near-circular orbit exterior to the hot super-Earth 55 Cnc e, is a good case for type II migration having operated on that system. There is no reason to suppose that a smooth delivery of hot Jupiters through type II migration would not function at shorter periods. There are also issues with the effectiveness of the tidal process: it has to align inclined circular orbits without producing inspiral into the central star.
To avoid the latter, the components of the tidal potential that act with non-zero forcing frequency in an inertial frame when the star does not rotate have to be ineffective. Instead one has to rely on components that appear stationary in this limit. These have a frequency that is a multiple of the stellar rotation frequency, expected to be significantly less than the orbital frequency, as viewed from the star when it rotates. As such components depend only on the time-averaged orbit, they are insensitive to the sense of rotation in the orbit. Accordingly there is a symmetry between prograde and retrograde aligning orbits with respect to the stellar equatorial plane. This is a strict symmetry when the angular momentum content of the star is negligible compared to that of the orbit; otherwise there is a small asymmetry. Notably, a significant population of retrograde orbits with aligned orbital planes, which is expected in this scenario, is not observed. The dependence of the relationship between mass and orbital period on the metallicity of the central star has recently been examined. The pile-up at the short orbital periods characteristic of hot Jupiters is only seen at high metallicity. In addition, high eccentricities, possibly indicative of dynamical interactions, are also predominantly seen at high metallicity. This indicates that multi-planet systems in which dynamical interactions lead to close orbiters occur at high metallicity, and that disc-driven migration is favoured at low metallicity. Finally, misalignments between stellar equators and orbital planes may not require strong dynamical interactions. Several mechanisms may produce misalignments between the protoplanetary disc and the equatorial plane of its host star (see Section [sec:type2ei]). Another possibility is that internal processes within the star, such as the propagation of gravity waves in hot stars, lead to different directions of the angular momentum vector in the central and surface regions. The Kepler mission has discovered tightly packed planetary systems orbiting close to their star. Several have been determined to be accurately coplanar, which is a signature of having formed in a gaseous disc. These include KOI 94 and KOI 25, KOI 30, and Kepler 50 and Kepler 55. We remark that if one supposed that formation through in situ gas-free accumulation had taken place, then for planets of a fixed type the formation timescale would be proportional to the product of the local orbital period and the reciprocal of the surface density of the material making up the planets. For a fixed mass this increases steeply with orbital distance; scaling from the inner solar system, the corresponding timescale becomes much shorter at the small orbital distances of these close-in systems. Note that this timescale is even shorter for more massive and more compact systems. This points to a possible formation during the disc lifetime, although the formation process should be slower and quieter in a disc, as protoplanets are constrained to be on non-overlapping, near-circular orbits.
under this circumstance ,disc - planet interactions can not be ignored .notably , found that a significant number of kepler multiplanet candidate systems contain pairs that are close to first - order resonances .they also found a few multi - resonant configurations .an example is the four planet system koi-730 which exhibits the mean motion ratios 8:6:4:3 .more recently , confirmed the kepler 60 system which has three planets with the inner pair in a 5:4 commensurability and the outer pair in a 4:3 commensurability .however , most of the tightly packed candidate systems are non - resonant .the period ratio histogram for all pairs of confirmed exoplanet systems is shown in the upper panel of fig .[ fig : jp3 ] .this shows prominent spikes near the main first - order resonances .however , this trend is biased because many of the kepler systems were validated through observing transit timing variations , which are prominent for systems near resonances. the lower panel of this figure shows the same histogram for kepler candidate systems announced at the time of writing ( quarters 1 - 8 ) . in this case , although there is some clustering in the neighbourhood of the 3:2 commensurability and an absence of systems at exact 2:1 commensurability , with there being an overall tendency for systems to have period ratios slightly larger than exact resonant values , there are many non resonant systems . at first sight, this appears to be inconsistent with results from the simplest theories of disc - planet interactions , for which convergent migration is predicted to form either resonant pairs or resonant chains .however , there are features not envisaged in that modelling which could modify these results , which we briefly discuss below .these fall into two categories : ( i ) those operating while the disc is still present , and ( ii ) those operating after the disc has dispersed .an example of the latter type is the operation of tidal interactions with the central star , which can cause two planet systems to increase their period ratios away from resonant values .however , this can not be the only process operating as fig .[ fig : jp3 ] shows that the same period ratio structure is obtained for periods both less than and greater than days .we also note that compact systems with a large number of planets can be close to dynamical instability through the operation of arnold diffusion ( see , e.g. , the analysis of kepler 11 by * ? ? ?thus it is possible that memory of the early history of multi - planet systems is absent from their current configurations .when the disc is present , stochastic forcing due to the presence of turbulence could ultimately cause systems to diffuse out of resonance ( e.g. , * ? ? ?* ; * ? ? ?when this operates , resonances may be broken and period ratios may both increase and decrease away from resonant values . employed stochastic fluctuations in order to enable lower order resonances to be diffused through under slow convergent migration . in this way , they could form a 7:6 commensurability in their modelling of the kepler 36 system .another mechanism that could potentially prevent the formation of resonances in a disc involves the influence of each planet s wakes on other planets .the dissipation induced by the wake of a planet in the coorbital region of another planet causes the latter to be effectively repelled .this repulsion might either prevent the formation of resonances , or result in an increase in the period ratio from a resonant value . 
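as an aside , the notion used above of a pair being `` close to '' a first - order resonance can be made concrete with a short sketch . the code below is illustrative only : the planet periods and the 2% proximity threshold are assumptions , not values taken from the kepler catalogue .

```python
# flag adjacent planet pairs whose period ratio lies near a first-order
# mean-motion resonance (q+1):q, e.g. 2:1, 3:2, 4:3, 5:4.
def near_first_order_resonance(p_inner, p_outer, q_max=6, tol=0.02):
    """return (q+1, q, fractional offset) if the pair lies within `tol`
    of a first-order commensurability, else None."""
    ratio = p_outer / p_inner
    for q in range(1, q_max + 1):
        nominal = (q + 1) / q
        offset = ratio / nominal - 1.0      # > 0 means "wide of resonance"
        if abs(offset) < tol:
            return q + 1, q, offset
    return None

# hypothetical system with orbital periods in days (illustrative numbers only)
periods = [5.0, 7.55, 11.4, 23.2]
for p_in, p_out in zip(periods, periods[1:]):
    res = near_first_order_resonance(p_in, p_out)
    if res:
        j, q, off = res
        print(f"{p_in}-{p_out} d: near {j}:{q}, offset {off:+.3f}")
    else:
        print(f"{p_in}-{p_out} d: non-resonant (ratio {p_out / p_in:.3f})")
```

for the toy periods chosen here , the flagged pairs sit slightly wide of the 3:2 and 2:1 commensurabilities ( positive offsets ) , mimicking the tendency noted above for near - resonant pairs to have period ratios slightly larger than the exact resonant values .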
considered a super - earth migrating in a disc towards a giant planet .the upper panel in fig .[ fig : jp2 ] shows the time evolution of the super - earth s semi - major axis when orbiting exterior to a planet of one jupiter or two jupiter masses .we see that the super - earth s semi - major axis attains a minimum and ultimately increases . thus a resonance is not maintained .the lower panel shows the disc s surface density at one illustrative time : the super - earth ( grey arrow ) feels a head wind from the outer wake of the hot jupiter , which leads to the super earth being progressively repelled .this mechanism could account for the observed scarcity of super - earths on near - resonant orbits exterior to hot jupiters .a similar effect was found in disc - planets simulations with two partial gap - opening planets by .this is illustrated in fig .[ fig : jp4 ] .the orbital period ratio between the planets initially decreases until the planets get locked in the 3:2 mean - motion resonance .the period ratio then increases away from the resonant ratio as a result of wake - planet interactions .the inset panel shows a density contour plot where the interaction of the planets with each other s wakes is clearly seen. divergent evolution of a planet pair through wake - planet interactions requires some non - linearity , and so will not work with pure type i migration ( ) , but not so much that the gaps become totally cleared .this mechanism works best for partial gap - opening planets ( a few , ) , which concern super - earth to neptune mass planets in discs with aspect ratio ( expected in inner disc regions ) , or saturn - mass planets if .these results show circumstances where convergent migration followed by attainment of stable strict commensurability may not be an automatic consequence of disc - planet interactions .wake - planet interactions could explain why near - resonant planet pairs amongst kepler s multiple candidate systems tend to have period ratios slightly greater than resonant . in the past decade, spectacular progress in direct imaging techniques have uncovered more than 20 giant planets with orbital separations ranging from about ten to few hundred au .the four planets in the hr 8799 system or b are remarkable examples .the discoveries of these _ cold jupiters _ have challenged theories of planet formation and evolution .we review below the mechanisms that have been proposed to account for the cold jupiters . in the core - accretion scenario for planet formation, it is difficult to form jupiter - like planets in isolation beyond au from a sun - like star ( * ? ? ?* ; * ? ? ?* and see the chapter by helled et al . 
) .could forming jupiters move out to large orbital separations in their disc ?outward type i migration followed by rapid gas accretion is possible , but the maximum orbital separation attainable through type i migration is uncertain ( see sections [ sec : coro ] and [ sec : turb ] ) .planets in the jupiter - mass range are expected to open an annular gap around their orbit ( see section [ sec : gapopening ] ) .if a deep gap is carved , inward type ii migration is expected .if a partial gap is opened , outward type iii migration could occur under some circumstances ( see section [ sec : type3 ] ) , but numerical simulations have shown that it is difficult to sustain this type of outward migration over long timescales .it is therefore unlikely that a _ single _ massive planet formed through the core - accretion scenario could migrate to several tens or hundreds of au .it is possible , however , that a pair of close giant planets may migrate outwards according to the mechanism described in section [ sec : ms01 ] . for non - accreting planets ,this mechanism could deliver two near - resonant giant planets at orbital separations comparable to those of the cold jupiters . however , this mechanism relies on the outer planet to be somewhat less massive than the inner one .joint outward migration may stall and eventually reverse if the outer planet grows faster than the inner one .numerical simulations by these authors showed that it is difficult to reach orbital separations typical of the cold jupiters .the fraction of confirmed planets known in multi - planetary systems is about ( see , e.g. , exoplanets.org ) , from which nearly have an estimated minimum mass ( the remaining comprises kepler multiple planets confirmed by ttv , and for which an upper mass estimate has been obtained based on dynamical stability ; see for example ) .more than half of the confirmed multiple planets having a lower mass estimate are more massive than saturn , which indicates that the formation of several giant planets in a protoplanetary disc should be quite common .smooth convergent migration of two giant planets in their parent disc should lead to resonant capture followed by joint migration of the planet pair ( e.g. , * ? ? ?dispersal of the gas disc may trigger the onset of dynamical instability , with close encounters causing one of the two planets to be scattered to large orbital separations . a system of three giant planets is more prone to dynamical instability , and disc - driven convergent migration of three giant planets may induce planet scattering even in quite massive protoplanetary discs .planet scattering before or after disc dispersal could thus be a relevant channel for delivering one or several massive planets to orbital separations comparable to the cold jupiters. 
it could also account for the observed free - floating planets . giant planets could also form after the fragmentation of massive protoplanetary discs into clumps through the gravitational instability ( gi ) . the gi is triggered when the well - known toomre q parameter is of order unity or less and the disc s cooling timescale approaches the dynamical timescale . the latter criterion is prone to some uncertainty due to the stochastic nature of fragmentation . the gi could trigger planet formation typically beyond 30 au from a sun - like star . how do planets evolve once fragmentation is initiated ? first , gi - formed planets are unlikely to stay in place in their gravito - turbulent disc . since these massive planets form in about a dynamical timescale , they rapidly migrate to the inner parts of their disc , having initially no time to carve a gap around their orbit ( * ? ? ? * ; * ? ? ? * ; * ? ? ? * see fig . [ fig : bmp ] ) . these inner regions should be too hot to be gravitationally unstable , and other sources of turbulence , like the mri , will set the background disc profiles and the level of turbulence . the rapid inward migration of gi - formed planets could then slow down or even stall , possibly accompanied by the formation of a gap around the planet s orbit . gap - opening may also occur if significant gas accretion occurs during the initial stage of rapid migration , which may promote the survival of inwardly migrating clumps . planet interactions , which may result in scattering events , mergers , or resonant captures in a disc , should also play a prominent role in shaping planetary systems formed by gi . the near resonant architecture of the hr 8799 planet system could point to resonant captures after convergent migration in a gravito - turbulent disc . furthermore , gas clumps progressively contract as they cool down . as clumps initially migrate inwards , they may experience some tidal disruption , a process known as tidal downsizing . this process could deliver a variety of planet masses in the inner parts of protoplanetary discs . all of the disc - driven migration scenarios discussed in this review have some dependence on the planet mass , so it is necessary to consider the combined effects of mass growth and migration when assessing the influence of migration on the formation of planetary systems . two approaches that have been used extensively for this purpose are planetary population synthesis and n - body simulations , both of which incorporate prescriptions for migration . population synthesis studies use monte - carlo techniques to construct synthetic planetary populations , with the aim of determining which combinations of model ingredients lead to statistically good fits to the observational data .
in principlethis allows the mass - period and mass - radius relation for gaseous exoplanets to be computed .input variables that form the basis of the monte - carlo approach include initial gas disc masses , gas - to - dust ratios , and disc photoevaporation rates , constrained by observational data .the advantages of population synthesis studies lie in their computational speed and ability to include a broad range of physical processes .this allows the models to treat elements of the physics ( such as gas envelope accretion , or ablation of planetesimals as they pass through the planet envelope , for example ) much more accurately than is possible in n - body simulations .a single realisation of a monte - carlo simulation consists of drawing a disc model from the predefined distribution of possibilities and introducing a single low - mass planetary embryo in the disc at a random location within a predefined interval in radius. accretion of planetesimals then proceeds , followed by gas envelope settling as the core mass grows . runaway gas accretion to form a gas giant may occur if the core mass reaches the critical value .further implementation details are provided in the chapter on planetary population synthesis by benz et al . in this volume .the main advantages of the n - body approach are that they automatically include an accurate treatment of planet - planet interactions that is normally missing from the ` single - planet - in - a - disc ' monte - carlo models , they capture the competitive accretion that is inherently present in the oligarchic picture of early planet formation , and they incorporate giant impacts between embryos that are believed to provide the crucial last step in terrestrial planet formation . at present , however , gas accretion has been ignored or treated in a crude manner in n - body models . as such, population synthesis models can provide an accurate description of the formation of a gas giant planet , whereas n - body models are well - suited to examining the formation of systems of lower mass terrestrial , super - earth and core - dominated neptune - like bodies .as indicated above , the basis of almost all published population synthesis models has been the core - accretion scenario of planetary formation , combined with simple prescriptions for type i and type ii migration and viscous disc evolution .a notable exception is the recent population synthesis study based on the disc fragmentation model .almost all studies up to the present time have adopted type i migration rates similar to those arising from eq .( [ eqtl ] ) , supplemented with an arbitrary reduction factor that slows the migration .the influence of the vortensity and entropy - related horseshoe drag discussed in sect .2.1.2 has not yet been explored in detail , although a couple of recent preliminary explorations that we describe below have appeared in the literature . , and consider the effects of type i and type ii migration in their population synthesis models .although differences exist in the modelling procedures , these studies all conclude that unattenuated type i migration leads to planet populations that do not match the observed distributions of planet mass and semimajor axis .models presented in , for example , fail to produce giant planets at all if full - strength type i migration operates . 
statistically acceptablegiant planet populations are reported for reductions in the efficiency of type i migration by factors of 0.01 to 0.03 , with type ii migration being required to form ` hot jupiters ' . with the type ii time scale of yr being significantly shorter than disc life times , numerous giant planets migrate into the central star in these models .the survivors are planets that form late as the disc is being dispersed ( through viscous evolution and photoevaporation ) , but just early enough to accrete appreciable gaseous envelopes .and present models with full - strength type i migration that are able to form a sparse population of gas giants .cores that accrete very late in the disc life time are able to grow to large masses as they migrate because they do not exhaust their feeding zones .type i migration of the forming planetary cores in this case , however , strongly biases the orbital radii of planetary cores to small values , leading to too many short period massive gas giants that are in contradiction of the exoplanet data .the above studies focused primarily on forming gas giant planets , but numerous super - earth and neptune - mass planets have been discovered by both ground - based surveys and the kepler mission ( e.g. * ? ? ?* ; * ? ? ?* ; * ? ? ?based on 8 years of harps data , the former publication in this list suggests that at least 50 % of solar - type stars hosts at least one planet with a period of 100 days or less .based on an analysis of the false - positive rate in kepler detections , suggest that 16.5 % of fgk stars have at least one planet between 0.8 and 1.25 r with orbital periods up to 85 days .these results appear consistent with the larger numbers of super - earth and neptune - like planets discovered by kepler . in a recent study, performed a direct comparison between the predictions of population synthesis models with radial - velocity observations of extrasolar planets orbiting within 0.25 au around 166 nearby g- , k- , and m - type stars ( the survey ) .the data indicate a high density of planets with 4 - 10 m with periods days , in clear accord with the discoveries made by kepler .this population is not present in the monte - carlo models because of rapid migration and mass growth . considered specifically the formation of super - earths using population synthesis , incorporating for the first time a treatment of planet - planet dynamical interactions . in the absence of an inner disc cavity ( assumed to form by interaction with the stellar magnetic field ) the simulations failed to form systems of short period super - earths because of type i migration into the central star .this requirement for an inner disc cavity to halt inward migration , in order to explain the existence of the observed short - period planet population , appears to be a common feature in planetary formation models that include migration .given that planets are found to have a wide - range of orbital radii , however , it seems unlikely that this migration stopping mechanism can apply to all systems .given the large numbers of planets that migrate into the central star in the population synthesis models , it would appear that such a stopping mechanism when applied to all planet - forming discs would predict the existence of a significantly larger population of short - period planets than is observed .this point is illustrated by fig . 
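to make the single - realisation procedure outlined above more tangible , here is a deliberately oversimplified sketch of one monte - carlo draw . every ingredient in it ( the drawn distributions , the accretion and migration rates , the 10 earth - mass critical core , the cap mimicking gap opening ) is a schematic placeholder standing in for the far more detailed physics used in actual population synthesis codes , and none of the numbers are taken from the studies cited above .

```python
import random

def one_realisation(dt=1.0e3, m_crit=10.0):
    """one schematic monte-carlo draw: a disc and a single embryo, evolved
    until the gas disc disperses. masses in earth masses, distances in au,
    times in yr. all rates and distributions are illustrative placeholders."""
    # (1) draw disc properties from assumed distributions
    disc_mass = 10 ** random.uniform(-2.3, -0.7)      # gas disc mass [M_sun]
    dust_to_gas = 10 ** random.gauss(-2.0, 0.3)
    t_disc = random.uniform(1.0e6, 6.0e6)             # disc lifetime [yr]
    # (2) seed one low-mass embryo at a random radius
    a = random.uniform(0.5, 20.0)
    m_core, m_env, t = 0.01, 0.0, 0.0
    while t < t_disc and a > 0.02:                     # 0.02 au ~ inner edge
        # (3) planetesimal accretion onto the core (schematic rate)
        m_core += dt * 3.0e-6 * (dust_to_gas / 0.01) * (disc_mass / 0.05) / a
        # (4) runaway gas accretion once the core exceeds the critical mass,
        #     capped at ~3 jupiter masses to mimic gap opening
        if m_core > m_crit and m_env < 1.0e3:
            m_env = min(1.0e3, m_env + dt * 1.0e-4 * (m_core + m_env))
        # (5) schematic inward (type-I-like) migration, faster for massive planets
        a -= dt * 3.0e-8 * (m_core + m_env) * a
        t += dt
    return a, m_core + m_env

random.seed(1)
for a, m in (one_realisation() for _ in range(5)):
    status = "reached the inner edge" if a <= 0.02 else f"a = {a:5.2f} au"
    print(f"{status}, final mass = {m:8.2f} M_earth")
```

depending on the draw , the outcome of such a toy model ranges from a stranded low - mass core to a giant that is delivered to the inner edge , which is the kind of behaviour the migration attenuation factors discussed above are meant to tame .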
7 which shows the mass - period relation for planets with masses m in the upper panel and m in the lower panel .although a clustering of giant planets between orbital periods 3 - 5 days is observed , there is no evidence of such a pile - up for the lower mass planets .this suggests that an inner cavity capable of stopping the migration of planets of all masses may not be a prevalent feature of planet forming discs .n - body simulations with prescriptions for migration have been used to examine the interplay between planet growth and migration .we primarily concern ourselves here with simulations that include the early phase of oligarchic growth when a swarm of mars - mass embryos embedded in a disc of planetesimals undergo competitive accretion .a number of studies have considered dynamical interaction between much more massive bodies in the presence of migration , but we will not consider these here . early work included examination of the early phase of terrestrial planet formation in the presence of gas , which showed that even unattenuated type i migration was not inconsistent with terrestrial planet formation in discs with a moderately enhanced solids abundance .n - body simulations that explore short period super - earth formation and demonstrate the importance of tidal interaction with the central star for disc models containing inner cavities have been presented by . examined the formation of hot super - earth and neptune mass planets using n - body simulations combined with type i migration ( full strength and with various attenuation factors ) .the motivation here was to examine whether or not the standard oligarchic growth picture of planet formation combined with type i migration could produce systems such as gliese 581 and hd 69830 that contain multiple short period super - earth and neptune mass planets . these hot and warm super - earth and neptune systems probably contain up to 30 40 earth masses of rocky or icy material orbiting within 1 au .the models incorporated a purpose - built multiple time - step scheme to allow planet formation scenarios in global disc models extending out to 15 au to be explored .the aim was to examine whether or not hot super - earths and neptunes could be explained by a model of formation at large radius followed by inward migration , or whether instead smaller building blocks of terrestrial mass could migrate in and form a massive disc of embryos that accretes _ in situ _ to form short period bodies .as such this was a precursor study to the recent _ in situ _ models that neglect migration of .the suite of some 50 simulations led to the formation of a few individual super - earth and neptune mass planets , but failed to produce any systems with more than 12 earth masses of solids interior to 1 au . 
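for orientation , migration `` prescriptions '' in such n - body models are often implemented as extra accelerations that damp the semi - major axis , eccentricity and inclination on imposed timescales . the sketch below shows one commonly used form of these damping terms ( variants in the literature differ in the exact numerical factors ) ; the timescales and the sample orbit are arbitrary illustrative inputs .

```python
import numpy as np

def disc_damping_acceleration(r, v, t_mig, t_ecc, t_inc):
    """extra acceleration mimicking disc torques in an n-body integration:
    a = -v/t_mig - 2 (v.r_hat) r_hat / t_ecc - 2 (v.k) k / t_inc,
    i.e. semi-major axis, eccentricity and inclination damping on the
    prescribed timescales (one common convention; factors vary by author)."""
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    r_hat = r / np.linalg.norm(r)
    k_hat = np.array([0.0, 0.0, 1.0])    # unit vector normal to the disc mid-plane
    acc = -v / t_mig
    acc += -2.0 * np.dot(v, r_hat) * r_hat / t_ecc
    acc += -2.0 * np.dot(v, k_hat) * k_hat / t_inc
    return acc

# illustrative call: planet near 1 au on a slightly eccentric, slightly inclined
# orbit (au, yr units; timescales chosen arbitrarily for the example)
print(disc_damping_acceleration(r=[1.0, 0.0, 0.01], v=[0.3, 6.3, 0.05],
                                t_mig=1.0e5, t_ecc=1.0e3, t_inc=1.0e3))
```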
presented a suite of simulations of giant planet formation using a hybrid code in which emerging embryos were evolved using an n - body integrator combined with a 1d viscous disc model .although unattenuated type i and type ii migration were included , a number of models led to successful formation of systems of surviving gas giant planets .these models considered an initial population of planetary embryos undergoing oligarchic growth extending out to 30 au from the star , and indicate that the right combination of planetary growth times , disc masses and life times can form surviving giant planets through the core - accretion model , provided embryos can form and grow at rather large orbital distances before migrating inward .the role of the combined vorticity- and entropy - related corotation torque , and its ability to slow or reverse type i migration of forming planets , has not yet been explored in detail .the survival of protoplanets with masses in the range m in global 1d disc models has been studied by .these models demonstrate the existence of locations in the disc where planets of a given mass experience zero migration due to the cancellation of lindblad and corotation torques ( zero - migration radii or planetary migration traps ) .planets have a tendency to migrate toward these positions , where they then sit and drift inward slowly as the gas disc disperses .preliminary results of population synthesis calculations have been presented by , and n - body simulations that examine the oligarchic growth scenario under the influence of strong corotation torques have been presented by .these studies indicate that the convergent migration that arises as planets move toward their zero - migration radii can allow a substantial increase in the rate of planetary accretion . 
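as an illustration of the zero - migration radii ( migration traps ) described above , the sketch below scans a toy net - torque profile for radii where the torque changes sign from positive ( outward migration ) to negative ( inward migration ) with increasing radius , which is the condition for a trap . the torque profile itself is a made - up function for illustration , not one of the published torque formulae .

```python
import math

def toy_net_torque(a):
    """made-up net torque (arbitrary units) on a low-mass planet at radius a [au]:
    a negative lindblad-like part plus a corotation-like part peaking near 3 au."""
    lindblad = -1.0
    corotation = 2.5 * math.exp(-(math.log(a) - math.log(3.0)) ** 2 / 0.5)
    return lindblad + corotation

def find_traps(torque, radii):
    """radii where the torque crosses zero from + to - with increasing a:
    planets just inside migrate outward, planets just outside migrate inward,
    so they converge on the crossing (a stable trap)."""
    traps = []
    for a_in, a_out in zip(radii, radii[1:]):
        if torque(a_in) > 0.0 >= torque(a_out):
            lo, hi = a_in, a_out
            for _ in range(60):                 # refine by bisection
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if torque(mid) > 0.0 else (lo, mid)
            traps.append(0.5 * (lo + hi))
    return traps

radii = [0.1 * 1.07 ** i for i in range(100)]   # ~0.1 to ~80 au, log-spaced
print([f"{a:.2f} au" for a in find_traps(toy_net_torque, radii)])
```

for this toy profile a single trap is found near 6 au ; planets starting on either side of it drift toward it , which is the convergence effect invoked above .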
under conditions where the disc hosts a strongly decreasing temperature gradient , computed models that led to outward migration of planetary embryos to radii au , followed by gas accretion that formed gas giants at these large distances from the star .the temperature profiles required for this were substantially steeper than those that arise from calculations of passively heated discs , however , so it remains to be determined whether these conditions can ever be realised in a protoplanetary disc .following on from the study of corotation torques experienced by planets on eccentric orbits by , incorporated a prescription for this effect and found that planet - planet scattering causes eccentricity growth to values that effectively quench the horseshoe drag , such that crowded planetary systems during the formation epoch may continue to experience rapid inward migration .further work is clearly required to fully assess the influence of the corotation torque on planet formation in the presence of significant planet - planet interactions .looking to the future , it is clear that progress in making accurate theoretical predictions that apply across the full range of observed exoplanet masses will be best achieved by bringing together the best elements of the population synthesis and n - body approaches .some key issues that require particular attention include the structure of the disc close to the star , given its influence in shaping the short - period planet population ( see section [ sec : short ] ) .this will require developments in both observation and theory to constrain the nature of the magnetospheric cavity and its influence on the migration of planets of all masses .significant improvements in underlying global disc models are also required , given the sensitivity of migration processes to the detailed disc physics .particular issues at play are the roles of magnetic fields , the thermal evolution and the nature of the turbulent flow in discs that sets the level of the effective viscous stress .these are all active areas of research at the present time and promise to improve our understanding of planet formation processes in the coming years .the main points to take away from this chapter are summarized below : * disc - planet interactions are a natural process that inevitably operates during the early evolution of planetary systems , when planets are still embedded in their protoplanetary disc .they modify all orbital elements of a planet . while eccentricity and inclination are usually damped quickly , the semi - major axis may increase or decrease more or less rapidly depending on the planet - to - star mass ratio ( ) and the disc s physical properties ( including its aspect ratio ) . 
*planet migration comes in three main flavors .( i ) type i migration applies to low - mass planets ( , which is the case if ) that do not open a gap around their orbit .its direction ( inwards or outwards ) and speed are very sensitive to the structure of the disc , its radiative and turbulent properties in a narrow region around the planet .while major progress in understanding the physics of type i migration has been made since ppv , robust predictions of its direction and pace will require more knowledge of protoplanetary discs in regions of planet formation .alma should bring precious constraints in that sense .( ii ) type ii migration is for massive planets ( , or given by eq .[ eq : qcrit ] ) that carve a deep gap around their orbit .type ii migrating planets drift inwards in a time scale comparable to or longer than the disc s viscous time scale at their orbital separation .( iii ) type iii migration concerns intermediate - mass planets ( ) that open a partial gap around their orbit .this very rapid , preferentially inward migration regime operates in massive discs . *planet - disc interaction is one major ingredient for shaping the architecture of planetary systems .the diversity of migration paths predicted for low - mass planets probably contributes to the diversity in the mass - semi - major axis diagram of observed exoplanets .convergent migration of several planets in a disc could provide the conditions for exciting planets eccentricity and inclination . *the distribution of spin - orbit misalignments amongst hot jupiters is very unlikely to have an explanation based on a single scenario for the large - scale inward migration required to bring them to their current orbital separations .hot jupiters on orbits aligned with their central star point preferentially to a smooth disc delivery , via type ii migration , rather than to dynamical interactions with a planetary or a stellar companion , followed by star - planet tidal re - alignment .* convergent migration of two planets in a disc does not necessarily result in the planets being in mean - motion resonance .turbulence in the disc , the interaction between a planet and the wake of its companion , or late star - planet tidal interactions , could explain why many multi - planet candidate systems discovered by the kepler mission are near- or non - resonant .wake - planet interactions could account for the observed scarcity of super - earths on near - resonant orbits exterior to hot jupiters .* recent observations of circumstellar discs have reported the existence of cavities and of large - scale vortices in millimetre - sized grains .these features do not necessarily track the presence of a giant planet in the disc .it should be kept in mind that the gaps carved by planets of around a jupiter mass or less are narrow annuli , not cavities .* improving theories of planet - disc interactions in models of planet population synthesis is essential to make progress in understanding the statistical properties of exoplanets .current discrepancies between theory and observations point to uncertainties in planet migration models as much as to uncertainties in planet mass growth , the physical properties of protoplanetary discs , or to the expected significant impact of planet - planet interactions . 
_ acknowledgments _ we thank cornelis dullemond and the anonymous referee for their constructive reports .cb was supported by a herchel smith postdoctoral fellowship of the university of cambridge , sjp by a royal society university research fellowship , jg by the science and technology facilities council , and bb by the helmholtz alliance planetary evolution and life .
the inter - domain routing in the internet takes place over the -globally adopted- border gateway protocol ( bgp ) . autonomous systems ( ases ) use bgp to advertise routing paths for ip prefixes to their neighboring ases . since bgp is a distributed protocol and authentication of advertised routes is not always feasible , it is possible for an as to advertise illegitimate route paths for ip prefixes .these paths can propagate and infect " many ases , or even the entire internet , impacting thus severely the internet routing system and/or economy .this phenomenon , called _ bgp prefix hijacking _ , is frequently observed , and usually caused by router misconfigurations or malicious attacks .some examples of real bgp hijacking cases include : ( a ) a pakistan s isp in 2008 , due to a misconfiguration , hijacked the youtube s prefixes and disrupted its services for more than hours ; ( b ) china telecom mistakenly announced ip prefixes ( corresponding to of the whole bgp table ) in 2010 , causing routing problems in the internet ; and ( c ) hackers performed several hijacking attacks , through a canadian isp , to redirect traffic and steal thousands dollars worth of bitcoins in 2014 . to prevent prefix hijackings , several proactive mechanisms for enhancing the bgp securityhave been proposed .these mechanisms need to be globally deployed to be effective .however , despite the standardization efforts , their deployment is slow due to political , technical , and economic challenges , leaving thus the internet vulnerable to bgp hijacks .therefore , currently , reactive mechanisms are used for defending against prefix hijackings : after a hijacking is detected , network administrators are notified ( e.g , through mailing lists , or dedicated services ) , in order to proceed to manual actions towards its mitigation ( e.g. , reconfigure routers , or contact other ases to filter announcements ) .a number of systems have been proposed for detecting prefix hijacking , based on control plane ( i.e. , bgp data ) and/or data plane ( i.e , .pings / traceroutes ) information .most of them , are designed to operate as third - party services ( external to an as ) that monitor the internet , and upon the detection of a suspicious incident , notify the involved ases .although this approach has been shown to be able to detect suspicious routing events in many cases , two main issues still remain unsolved : ( i ) the detection might not be accurate , since the suspicious routing events might not correspond to hijacks , but be caused by , e.g. , traffic engineering ; and ( ii ) the mitigation is not automated , increasing thus significantly the time needed to resolve a hijack . in this paper , we propose a reactive mechanism / system , which we call artemis ( _ automatic and real - time detection and mitigation system _ ) , that aims to be operated by an as itself , rather than a third - party , to timely detect and mitigate hijackings against its _ own _ prefixes in an automatic way .artemis ( i ) exploits the most recent advances in control - plane monitoring to detect in near real - time prefix hijackings , and ( ii ) immediately proceeds to their automatic mitigation ( section [ sec : artemis ] ). 
we then conduct several real hijacking experiments in the internet using the peering testbed and analyze the effect of various network parameters ( like , type of hijacking , hijacker / defender - as location and connectivity ) on the performance of artemis .we show that it is possible to detect and mitigate prefix hijacking within _ few seconds _ from the moment the offending announcement is first made .this is a major improvement compared to present approaches , which require slow procedures , like manual verification and coordination .the timely mitigation of artemis , prevents a hijacking from spreading to just , e.g. , 20%-50% of the ases that would be affected otherwise ( section [ sec : evaluation ] ) .finally , we discuss related work in hijacking detection systems and measurement studies , and compare it to our study , in order to highlight the new capabilities that are offered with artemis ( section [ sec : related ] ) .we conclude our paper by discussing the potential for future applications and extensions of artemis ( section [ sec : conclusion ] ) .in this section , we first present the different sources that are used by artemis for control - plane _ monitoring _( section [ sec : sources ] ) , and then describe the _ detection _ ( section [ sec : detection ] ) and _ mitigation _ ( section [ sec : mitigation ] ) services . for the monitoring service ,artemis combines multiple control - plane sources to ( a ) accelerate the detection of a hijacking ( i.e. , minimum time of all sources ) , and ( b ) have a more complete view of the internet ( i.e. , from the vantage points of all the sources ) .artemis receives control - plane information from publicly available sources , namely , the bgpmon tool , the live - streaming service of ripe - ris , and the periscope platform .artemis supports the bgpstream tool as well .however , during our experiments , the bgpstream service was unavailable , and , thus , we do not use it in this paper . in the following ,we present the main features of these control - plane sources .* bgpmon * is a tool that monitors bgp routing information in real - time .it is connected to , and collects bgp updates and routing tables ( ribs ) from bgp routers of : ( a ) the routeviews sites and ( b ) a few dozen of peers around the world ; at the time we conducted our study , bgpmon had vantage points , in total .bgpmon provides the live bgp data , as an xml stream .* ripe ris streaming service . *the ripe s routing information system ( ris ) is connected to route collectors ( rcs ) in several locations around the world , and collects bgp updates and ribs . in the standard ripe ris , the data can be accessed via the raw files ( in mrt format ) or ripestat .the delay for bgp updates is and for ribs .recently , ripe ris offers a streaming service that provides live information from rcs via websockets .the live streaming service of ripe ris , which we use in artemis , has currently rcs in europe and rc in africa ; all of them are located in large ixps .* periscope * is a platform that provides a common interface for issuing measurements from looking glass ( lg ) servers . 
through periscope ,a user can send a command to a number of chosen lgs to ask for control - plane ( _ show ip bgp _ ) or data - plane ( _ traceroute / ping _ ) information .the status and the output of the measurements can be retrieved at any time ( even before its completion ) .periscope currently provides access to lg servers .* bgpstream * is an open - source framework for live ( and historical ) bgp data analysis .it enables users to quickly inspect raw bgp data from the command - line , or through a python and c / c++ api .bgpstream provides live access to routeviews and ripe ris data archives . while the delay of acquiring the data from these two services is considerable ( and , respectively , for bgp updates ) , bgpstream recently introduced a service for live access to a stream of bgp data from bmp - enabled routeviews collectors ( with only delay ) . in total, bgpstream receives data from route collectors , from all its providers .the detection service of artemis aims to detect hijacks in ( i ) _ real - time _ and ( ii ) _ without false positives _ , while monitoring the ( iii ) _ entire internet _ in a ( iv ) _ light - weight _ fashion .the detection service continuously receives from the control - plane sources ( see section [ sec : sources ] ) information about the bgp route paths for the monitored prefixes , as they are seen at the different vantage points ( e.g. , route collectors , lg servers ) .this routing information is compared with a local file that defines the legitimate origin - asns for each ip prefix that is owned by the operator of artemis ; any violation denotes a hijacking .since operator has full knowledge on the legitimate origin - asns for its prefixes , the detection service returns _ no false positives_. with the combination of sources , the detection can take place when an illegitimate route path is received by any of the sources .this is always faster than using only one source , and can decrease the time needed for detection .using multiple sources gives also the possibility to benefit from the _ large number of vantage points _ they have around the globe .this is important , because a hijacking might affect only a part of the internet , due to bgp policies and shortest - path routing .finally , artemis aims to impose limited load on the used third - party services , so that potentially 100s ases ( that run artemis ) could use them in parallel .artemis needs to receive only the data ( i.e. , the part of the bgp tables , or specific bgp updates ) that correspond to the local prefixes . as a result , _the imposed load is low _ , since ( a ) bgpmon and ripe ris ( as well as , bgpstream ) are services / tools designed and optimized to provide streams of live data to many users simultaneously , ( b ) and periscope has already a limit in the rate of requests to avoid overloading of lg servers .similarly , _ the consumption of network resources is very low _ , allowing thus a single as to monitor many prefixes .the goals of a mitigation mechanism are to be ( i ) _ fast _ and ( ii ) _ efficient _ , and ( iii ) have _ low impact _ on the internet routing system .currently mitigation relies on manual actions , e.g. , after a network administrator is notified for a prefix hijacking , she proceeds to reconfiguration of the bgp routers , or contacts other administrators to filter the hijacker s announcements .as it becomes evident , this manual intervention introduces a significant delay ( e.g. 
, in the youtube hijacking incident in 2008 , a couple of hours were needed for the mitigation of the problem ) .hence , our primary focus is to accelerate the mitigation . to this end, we implement an automatic mitigation mechanism , which starts the mitigation _ immediately _ after the detection , i.e. , without manual intervention .specifically , when artemis detects a hijacking in a prefix , let _ 10.0.0.0/23 _ , it proceeds to its de - aggregation : it sends a command to the bgp routers of the as to announce the two more - specific prefixes , i.e. , _ 10.0.0.0/24 _ and _ 10.0.1.0/24_. the sub - prefixes will disseminate in the internet and re - establish legitimate route paths , since more specific prefixes are preferred by bgp .prefix de - aggregation , as described above , is efficient for _ /23 _ or less specific ( i.e. , _ /22 , /21 , ..._ ) prefixes . however , when it comes to hijacking of _ /24_ prefixes , the de - aggregation might not be always efficient , since prefixes more specific than _ /24_ are filtered by most routers .although this is a shortcoming of the de - aggregation mechanism , it is not possible to overcome it in an automatic way ( manual actions are needed ) ; to our best knowledge , only solutions that require the cooperation of more than one ases could be applied . the de - aggregation mechanism of artemis , increments the number of entries in the bgp routing table by per hijacked prefix .however , since the number of concurrent hijackings is not expected to be large , and the duration of a hijacking is limited , the imposed overhead is low . finally , since artemis monitors continuously the control - plane of the internet , from many vantage points , it becomes possible to monitor in real - time the process of the mitigation .this enables a network administrator to see how efficient the mitigation is , and if needed to proceed to further ( e.g. , manual ) actions or to rely exclusively on the de - aggregation mechanism .in this section , we conduct experiments in the internet , to investigate ( a ) the overall performance of artemis , and ( b ) the efficiency of the different sources presented in section [ sec : sources ] for monitoring the control - plane of the internet . in section [ sec : experimental - setup ] we provide the details for the setup of our experiments , and present the results in section [ sec : experiments - results ] .in our experiments , we conduct _ real _ hijackings in the internet .we use the peering testbed , which provides the possibility to announce ip prefixes from real asns to the internet ; both the ip prefixes and the asns are owned by peering , hence , our experiments have no impact on the connectivity of other ases . specifically , we create a virtual as in peering , and connect it to one or more real networks .this as ( which we call `` legitimate '' as ) announces an ip prefix and uses artemis to continuously monitor this prefix .we also create another virtual as ( the `` hijacker '' as ) in peering , connect it to a real network in a different location , and hijack the prefix of the legitimate as .peering is a testbed that enables researchers to interact with the internet s routing system .it connects with several real networks at universities and internet exchange points around the world .the users of peering can announce ip prefixes using multiple asns owned by peering as the origin - as . 
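before turning to the experiments , the detection and mitigation logic described in the previous section can be made concrete with a minimal sketch . it only illustrates the idea : a local map of owned prefixes to legitimate origin - asns is checked against incoming announcements ( delivered by whichever monitoring source sees them first ) , and , upon a violation , the affected prefix is split into its two more - specific halves for announcement . the prefixes , asns , update format and the announce ( ) hook are hypothetical placeholders ; a real deployment would drive the as s actual bgp routers .

```python
import ipaddress

# local configuration: owned prefixes and their legitimate origin ASNs
LEGIT_ORIGINS = {
    ipaddress.ip_network("10.0.0.0/23"): {64500},     # placeholder prefix / ASN
}

def check_update(update):
    """return (hijacked_prefix, bogus_origin) if `update` violates the local
    configuration, else None. `update` is a hypothetical dict
    {"prefix": str, "origin_asn": int} coming from any monitoring feed."""
    announced = ipaddress.ip_network(update["prefix"])
    for owned, origins in LEGIT_ORIGINS.items():
        # exact-prefix or sub-prefix announcement covering an owned prefix
        if announced == owned or announced.subnet_of(owned):
            if update["origin_asn"] not in origins:
                return announced, update["origin_asn"]
    return None

def deaggregate(prefix):
    """more-specific halves to announce in response, unless the prefix is
    already a /24 (more-specifics than /24 are widely filtered)."""
    if prefix.prefixlen >= 24:
        return []
    return list(prefix.subnets(prefixlen_diff=1))

def announce(prefixes):
    # placeholder: a real system would reconfigure the AS's BGP routers here
    print("announcing:", ", ".join(str(p) for p in prefixes))

# illustrative stream of updates, e.g. merged from several monitoring sources
for upd in ({"prefix": "10.0.0.0/23", "origin_asn": 64500},    # legitimate
            {"prefix": "10.0.0.0/23", "origin_asn": 64666}):   # hijack
    hit = check_update(upd)
    if hit is not None:
        hijacked, offender = hit
        print(f"hijack of {hijacked} by AS{offender} detected")
        announce(deaggregate(hijacked))
```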
in our experiments , we use the connections of peering to three real networks / sites ( table [ table : peering - sites ] ) ; statistics for the number of providers , customers , and peers for each as are from . we are given authorization to announce the prefix _ 184.164.228.0/23 _ ( as well as its sub - prefixes ) , and use the as numbers _ 61574 _ ( for the legitimate as ) and _ 61575 _ ( for the hijacker as ) . we test artemis in two different types of hijacking attacks : ( a ) exact prefix hijacks , and ( b ) sub - prefix hijacks . * exact prefix hijacking * is a common attack type where the hijacker announces the same prefix that is announced by the legitimate as . since shortest route paths are typically preferred , only _ a part of the internet _ that is closer to the hijacker ( in number of as - hops ) switches to route paths towards the hijacker . exact prefix hijacks typically infect a few tens or hundreds of ases , from small stub networks to large tier-1 isps . in our experiments , the legitimate as announces the prefix _ 184.164.228.0/23 _ ; then the hijacker announces the same prefix . to mitigate the attack , the legitimate as announces the sub - prefixes _ 184.164.228.0/24 _ and _ 184.164.229.0/24 _ . * sub - prefix hijacking * contributes around 10% of all stable hijackings in the internet . the hijacker announces a more specific prefix , which is covered by the prefix of the legitimate as . since in bgp more specific prefixes are preferred , _ the entire internet _ switches to routing towards the hijacker for the announced sub - prefix . we configure artemis to monitor the _ 184.164.228.0/22 _ prefix . the hijacker announces the prefix _ 184.164.228.0/23 _ . the attack is mitigated by de - aggregating the hijacked prefix , i.e. , the legitimate as announces the two _ /24 _ prefixes . the experiment process comprises the following steps : * ( 1 ) * the legitimate as ( _ as61574 _ ) announces the ip prefix , and we wait for bgp convergence . * ( 2 ) * the hijacker as ( _ as61575 _ ) announces the ip prefix ( or , sub - prefix ) . * ( 3 ) * artemis detects the hijacking . * ( 4 ) * artemis starts the mitigation , i.e. , the legitimate as announces the de - aggregated sub - prefixes . * ( 5 ) * we monitor the mitigation process for , and end the experiment by withdrawing all announcements . we conduct experiments for a number of different scenarios , varying the ( a ) _ location / site _ of the legitimate and hijacker ases , and ( b ) _ number of upstream providers _ of the legitimate as . we repeat each scenario / experiment times . the experiments took place in may - june 2016 . while normally artemis proceeds immediately after a hijacking detection to its mitigation , in some experiments we add a delay between steps 3 and 4 , i.e. , we defer the mitigation . this allows us to investigate the efficiency of the different control - plane sources , i.e. , how much time each of them needs to detect the hijacking . _ bgpmon _ provides to artemis a stream of all the updates it receives from its peers . hence , configuration is not needed ; filtering and detection are internal services of artemis . _ ripe ris _ needs only the information about the monitored prefix , and returns to artemis only the bgp messages that correspond to announcements for this prefix . in _ periscope _ , due to the limit on the rate of measurements per user , only a subset of the total 1691 lg servers can be used .
to conform to the rate - limit, we use lg servers , which we select based on their performance ( response time , availability ) and location .the selected set consists of lgs in europe , in asia , in north america , and in australia .the performance of artemis depends on the control - plane sources it uses . therefore , to obtain an initial understanding about the capabilities and limitations of artemis , we present in fig . [fig : detection - mititgation ] experimental results that demonstrate the efficiency and characteristics of the different control - plane sources in hijacking detection ( fig .[ fig : boxplots - detection - delay - per - tool ] ) and mitigation monitoring ( fig .[ fig : mitigation - as - vs - time ] ) . fig .[ fig : boxplots - detection - delay - per - tool ] shows how much time is needed by bgpmon , ripe ris , and periscope to observe an illegitimate route , after it has been announced from the hijacker as .we present the distribution of the times ( among different experiments ) for both attack types : prefix and sub - prefix hijacking .a first observation is that the streaming services ( bgpmon and ripe ris ) observe the hijack in in most cases , and are significantly faster than periscope ( - ) , which monitors the control - plane by periodically issuing measurements from lg servers .this is due to the response delay of the lgs , as well as , a limit in the minimum time interval between consequent measurements imposed by periscope .the detection delay in the sub - prefix attack case ( sp ) is -on average- lower than in prefix hijacking ( p ) .this is because a sub - prefix hijacking appears in the whole internet , whereas prefix hijacking affects only a fraction of it .this partial infection of the internet can be faster observed by bgpmon that has more vantage points than ripe ris , as is indicated by the lower mean value and variance of bgpmon in the prefix hijacking case . in fig .[ fig : mitigation - as - vs - time ] , we show the mitigation progress as it has been observed by the ases with a vantage point , i.e. , an rc feed or an lg server , in all sources .the average number of ases that have been infected by the hijacker and switched back to the legitimate routes , are and in the sp and p case , respectively . despite the differences ,both attacks can be quickly mitigated ; ( sp ) and ( p ) of the ases re - establish legitimate routes in after the mitigation was launched , while almost complete mitigation is achieved in less than .furthermore , figs .[ fig : boxplots - detection - delay - per - tool ] and [ fig : mitigation - as - vs - time ] hint to an interesting trade - off : more vantage points ( and , thus , ases ) can be monitored by periscope , however , this comes with an increase in the detection delay compared to bgpmon and ripe ris .we now proceed to test the efficiency of artemis under various scenarios of network connectivity .[ fig : detection - location ] illustrates the effect of the ( i ) hijacker site and ( ii ) number of upstream providers of the legitimate as . 
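the reason the de - aggregation eventually wins back the infected ases , as seen in fig . [ fig : mitigation - as - vs - time ] , is simply that bgp ( like ip forwarding in general ) prefers the most specific matching prefix . the toy route - selection sketch below illustrates this with the prefixes of our experiments ; the attached labels are ours and the function is not part of artemis itself .

```python
import ipaddress

def best_route(destination, routes):
    """pick the most specific route covering `destination`
    (longest-prefix match), as bgp / ip forwarding does."""
    dest = ipaddress.ip_address(destination)
    matching = [(pfx, who) for pfx, who in routes
                if dest in ipaddress.ip_network(pfx)]
    return max(matching, default=None,
               key=lambda route: ipaddress.ip_network(route[0]).prefixlen)

routes = [
    ("184.164.228.0/22", "AS61574 (legitimate, original announcement)"),
    ("184.164.228.0/23", "AS61575 (hijacker, sub-prefix announcement)"),
    ("184.164.228.0/24", "AS61574 (legitimate, de-aggregated)"),
    ("184.164.229.0/24", "AS61574 (legitimate, de-aggregated)"),
]
# once the de-aggregated /24s propagate, the legitimate routes win again
print(best_route("184.164.228.77", routes))
```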
in the prefix hijacking case ( fig . [ fig : boxplots - delay - vs - location - prefix ] ) , when the hijacking is triggered by a well connected site , as in the case of ams , which peers with real networks , the detection of the hijacking can be done in around . when the connectivity of the hijacker as is low , as in the gat case , where there are less than a dozen directly connected networks , the detection delay is always higher than and can need up to ( the average detection delay is around ) . these findings are intuitive and consistent with the conclusions of the simulation study in ; adding to this , they quantify for the first time the effects of the hijacker s connectivity with real experiments . in fig . [ fig : boxplots - delay - vs - location - prefix ] , we can also observe that when the connectivity of the legitimate as increases , i.e. , upstream providers , the detection delay ( slightly ) increases as well . this is due to the fact that with upstream providers , more ases are closer to the legitimate as ( in terms of as - hops ) than the hijacker , and thus the effect of prefix hijacking is lower ( and , consequently , its detection becomes more difficult ) . in contrast to the prefix hijacking case , when the hijacker announces a sub - prefix ( fig . [ fig : mboxplots - delay - vs - location - subprefix ] ) , the connectivity of the involved networks does not play a crucial role . the effect of the hijacking is large and the detection is always completed within , and on average it needs only ! in it is shown that the `` detection delay '' of argus ( a state - of - the - art hijacking detection system ) is less than for hijacks . however , this delay , let , refers to the time needed to infer that an observation of an illegitimate route corresponds to a hijacking attack ; i.e. , if argus uses the same control - plane sources as artemis , the total detection delay of argus is . after presenting the hijacking detection efficiency , we study the gains of the automatic mitigation of artemis . specifically , fig . [ fig : percentage - mititgation ] shows the percentage of infected ases in relation to the time since the hijacking has been launched . each curve corresponds to a different `` mitigation start time '' , i.e. , the time between the hijacker s announcement and the de - aggregation . the two bottom lines ( see fig . [ fig : salvaged - as - prefix ] and fig . [ fig : salvaged - as - subprefix ] ) correspond to the near real - time automatic mitigation with artemis ( we selected representative scenarios ; cf . fig . [ fig : detection - location ] ) . the two top lines are assumed to correspond to a timely ( but not real - time ) mitigation , e.g. , with manual actions . as can be seen , artemis can significantly decrease the impact of a hijacking . for instance , in scenarios where the detection delay of artemis is , the fraction of infected ases is and in the prefix and sub - prefix hijacking , respectively , while even a timely mitigation starting after the hijacking is not able to prevent the infection of all ases . moreover , with artemis the attack is completely mitigated in , whereas a mitigation that started after all ases are infected ( i.e. , top two lines ) needs around _ after the detection _ , i.e.
, .this fast and effective mitigation that artemis can achieve , is particularly important for short hijacking attacks , whose frequency increases , and which can still cause serious problems .detection mechanisms can be classified based on the type of information they use for detecting prefix hijackings as : ( i ) control - plane , ( ii ) data - plane , and ( iii ) hybrid approaches .control - plane approaches collect information , like bgp updates or tables , from route servers and/or looking glass servers ( lgs ) , from which they detect incidents that can be caused by prefix hijackings .when , for example , a change in the origin - as of a prefix , or a suspicious change in a route path , is observed , an alarm is raised .data plane approaches use ping / traceroute measurements to detect a prefix hijacking .they continuously monitor the data plane connectivity of a prefix and raise an alarm for hijacking , when significant changes in the reachability of the prefix or in the paths leading to it , are observed . a main shortcoming of data - plane mechanisms , is that a significant ( minimum ) number of active measurements is required to safely characterise an event as hijacking .hence , these systems can not be implemented in a light - weight fashion ; and if deployed by every as , they could introduce a large overhead . finally , hybrid approaches combine control and data plane information to detect , with higher accuracy , multiple types of prefix hijacking .argus is the most recent among the aforementioned detection systems , and has few false positives / negatives , and near real - time detection .however , argus is based only on bgpmon for control - plane information , whereas artemis receives data from multiple sources ( bgpmon , ripe - ris , bgpstream , periscope ) , which leads to a faster detection in more than of the cases ( as we observed in our experiments ) .the main difference between artemis and previous detection mechanisms is that most of the previous approaches are _notification systems_. they are designed to be operated by a third party and to monitor _ all _ the prefix the internet . upon detection of a suspicious event , they notify the involved ases about a possible hijacking .this process has two shortcomings : ( a ) it yields many false positives , since suspicious events can usually be due to a number of legitimate reasons , like traffic engineering , anycast , congestion of the data - plane , etc . ; and ( b ) it introduces significant delay , between the detection and mitigation of an event , as the network administrators of the involved as need to be notified and then have to manually verify the incident . on the contrary , artemis is designed to detect hijacks against owned prefixes ( for which the origin - as information is known ) and thus , overcomes the accuracy limitations , and eliminates the notification / verification delay . among previous works ,only is designed for detection of hijacks against owned prefixes .however , it is a pure data - plane mechanism , which as discussed above , introduces significant overhead ( especially , if deployed by many ases ) .in contrast , artemis , which is a pure control - plane mechanism , can be deployed simultaneously in many ases without significant overhead for the control - plane resources / tools ( see discussion in section [ sec : detection ] ) . at the timethe first of these mechanisms were proposed , the capability of the available bgp feeds for providing real - time information was limited . 
however , currently there exist several state - of - the - art _ publicly available _ control - plane sources / tools that enable pure control - plane mechanisms , like artemis , to detect a prefix hijacking event in near real - time ( a few seconds , or minutes , as we show in section [ sec : evaluation ] ) . to our best knowledge , artemis is the first approach to exploit the streaming interfaces of ripe ris and periscope for prefix hijacking detection . previous studies have taken measurements either over real hijacking incidents that happened in the internet or through simulations . while the former correspond to the behavior of the internet and capture the real effects of hijacking , they are limited to the investigation of a few known incidents , which do not span all possible cases . the latter are able to perform an investigation over a wider range of scenarios and study the effect of different parameters , but do not capture accurately real - world effects , since the topology , routing , and policies of the internet can not be perfectly replicated in simulations . moreover , due to the absence of ground - truth , i.e. , whether a detected routing change is indeed a hijacking or not , previous studies refer to third - party sources , e.g. , route origin authorizations ( roa ) or internet routing registries ( irr ) , for the validation of their results . however , such information is usually incomplete and/or inaccurate , and this might have an impact on the findings . closer to our study is the work that tested its performance in self - triggered hijacks ( for self - owned prefixes ) in the internet . nevertheless , only a few experiments ( 15 hijacks ) were conducted , whereas in this paper we conduct a large number of experiments ( spread over 4 weeks ) with varying network parameters ( location and connectivity of the hijacking / legitimate as ) and types of hijacks . we have presented artemis , a system for near real - time detection and automatic mitigation of bgp hijacking attacks . the evaluation with extensive real hijacking experiments showed that artemis can detect hijacks in a few seconds , and completely mitigate them in less than . in this initial implementation of artemis , we detect _ origin - as _ inconsistencies in route paths , and combat them using the _ prefix de - aggregation _ method . although not a panacea , prefix de - aggregation can also be effective for adjacency / policy or last - hop anomalies , or even path interception attacks . to extend artemis towards this direction , it suffices to modify only the detection algorithm ; the monitoring and mitigation services can remain intact . finally , since the detection service of artemis is built on top ( and independently ) of the monitoring service , the employed monitoring methodology and results ( e.g. , fig . [ fig : detection - mititgation ] ) are generic and can be useful in a number of applications related to control - plane monitoring , e.g. , to provide visibility to an as of the impact of the routing changes it triggers ( anycasting , traffic engineering , etc . ) .
the border gateway protocol ( bgp ) is globally used by autonomous systems ( ases ) to establish route paths for ip prefixes in the internet . due to the lack of authentication in bgp , an as can _ hijack _ ip prefixes owned by other ases ( i.e. , announce illegitimate route paths ) , thus impacting the internet routing system and economy . to address this threat , a number of hijacking detection systems have been proposed . however , existing systems are usually third - party services that inherently introduce a significant delay between the hijacking detection ( by the service ) and its mitigation ( by the network administrators ) . to overcome this shortcoming , in this paper we propose artemis , a tool that enables an as to _ timely _ detect hijacks on its own prefixes and _ automatically _ proceed to mitigation actions . to evaluate the performance of artemis , we conduct real hijacking experiments . to the best of our knowledge , this is the first time that a hijacking detection / mitigation system is evaluated through extensive experiments in the real internet . our results ( a ) show that artemis can detect ( mitigate ) a hijack within a few seconds ( minutes ) after it has been launched , and ( b ) demonstrate the efficiency of the different control - plane sources used by artemis towards monitoring routing changes .
liquidity is a notion that has gained increasing attention following the credit crisis that started in 2007 ( the crisis " in the following ) . as a matter of fact , this has been a liquidity crisis besides a credit crisis . for many market players ,problems have been aggravated by the lack of reserves when in need to maintain positions in order to avoid closing deals with large negative mark to markets .this lack of reserves forced fire sales at the worst possible moments and started a domino effect leading to the collapse of financial institutions .szego ( 2009 ) illustrates , among other factors , a negative loop involving illiquidity as fueling the crisis development .we can consider for example the following schematization : * ( further ) liquidity reduction on asset trade ; * ( further ) price contraction due to liquidity decline ; * ( further ) decline of value of bank assets portfolio ; * difficulty in refinancing , difficulty in borrowing , forced to ( further ) sale of assets ; * assets left ?if so , go back to 1 . if not : * impossibility of refinancing ; * bankruptcy .this sketchy and admittedly simplified representation highlights the three types of liquidity that generally market participants care about .one is the market / trading liquidity generally defined as the ability to trade quickly at a low cost ( ohara ( 1995 ) ) .this generally means low transaction costs coming from bid - ask spreads , low price impact of trading large volumes and considerable market depth . this notion of market liquiditycan be applied to different asset classes ( equities , bonds , interest rate products , fx products , credit derivatives etc . ) and to the overall financial markets .in addition to trading liquidity , banks and financial institutions also closely monitor the funding liquidity , which is the ease with which liabilities can be funded through different financing sources .market and funding liquidity are related since timely funding of liabilities relies on the market liquidity risk of its assets , given that a bank may need to sell some of its assets to match its liability - side obligations at certain points in time .the recent crisis prompted regulators and central banks to look very closely at both types of liquidity and to propose new guidelines for liquidity risk management ( see bis(2008 ) , fsa(2009 ) ) .a third kind of liquidity that is however implicit in the above schematization , is the systemic liquidity risk associated to a global financial crisis , characterized by a generalized difficulty in borrowing . as with other types of risks, liquidity needs to be analyzed from both a pricing perspective and a risk management one . 
in the pricing space ,amihud , mendelson , and pedersen ( 2005 ) provide a thorough survey of theoretical and empirical papers that analyze the impact of liquidity on asset prices for traditional securities such as stocks and bonds .other papers ( cetin , jarrow , protter , and warachka ( 2005 ) , garleanu , pedersen and poteshman ( 2006 ) ) investigated the impact of liquidity on option prices .more generally cetin , jarrow and protter ( 2004 ) extends the classical arbitrage pricing theory to include liquidity risk by considering an economy with a stochastic supply curve where the price of a security is a function of the trade size .this leads to a new definition of self - financing trading strategies and to additional restrictions on hedging strategies , all of which have important consequences in valuation .their paper also reports a good summary of earlier literature on transaction costs and trade restrictions , to which we refer the interested reader .morini ( 2009 ) analyzes the liquidity and credit impact on interest rate modeling , building a framework that consistently accounts for the divergence between market forward rate agreements ( fra ) rates and the libor replicated fra rates .he also accounts for the divergence between overnight indexed swap rates ( eonia ) and libor rates .the difference between the two rates can only be attributed to liquidity or counterparty risk , the latter being almost zero in eonia due to the very short ( practically daily ) times between payments . for illustration purposes ,we report in fig .[ fig : liboreonia ] the differences between eonia and libor rates for europe and the analogous difference for the united states .it is clear from the graphs in fig .[ fig : liboreonia ] that starting from the end of 2007 and till mid 2008 , there is a noticeable increase of the difference between 1 month libor and overnight index swap rate , which is instead very small in the beginning of the observation period .this is not surprising as the end of 2007 corresponds to the start of the subprime mortgage crisis , which then exacerbated and became the credit crunch crisis for the all period of 2008 .the analysis done by morini ( 2009 ) makes use of basis swaps between libor with different tenors , and takes into account collateralization .morini is able to reconcile the divergence in rates even under simplifying assumptions .his analysis however implicitly views liquidity as a consequence of credit rather than as an independent variable , although he does not exclude the possibility that liquidity may have a more independent component .several studies ( jarrow and subramanian(1997 ) , bangia et al . 
( 1999 ) , angelidis and benos ( 2005 ) , jarrow and protter(2005 ) , stange and kaserer(2008 ) , earnst , stange and kaserer(2009 ) among few others ) propose different methods of accounting for liquidity risk in computing risk measures .bangia et al .( 1999 ) classify market liquidity risk in two categories : ( a ) the exogenous illiquidity which depends on general market conditions , is common to all market players and is unaffacted by the actions of any one participant and ( b ) the endogenous illiquidity that is specific to one s position in the market , varies across different market players and is mainly related to the impact of the trade size on the bid - ask spread .bangia et al .( 1999 ) and earnst et al .( 2009 ) only consider the exogenous illiquidity risk and propose a liquidity adjusted var measure built using the distribution of the bid - ask spreads .the other mentioned studies model and account for endogenous risk in the calculation of liquidity adjusted risk measures . in the context of the coherent risk measures literature , the general axiomsa liquidity measure should satisfy are discussed in acerbi and scandolo ( 2008 ) .they propose a formalism for liquidity risk which is compatible with the axioms of coherency of the earlier risk measures literature .they emphasize the important but easily overlooked difference between coherent risk measures defined on portfolio values and coherent risk measures defined on the vector space of portfolios .the key observation is that in presence of liquidity risk the value function on the space of portfolios is not necessarily linear . from this starting pointa theory is developed , introducing a nonlinear value function depending on a notion of liquidity policy based on a general description of the microstructure of illiquid markets and the impact that this microstructure has when marking to market a portfolio . in this paperwe focus on liquidity modeling in the valuation space , to which we go fully back now , and more specifically , in the context of credit derivatives instruments , on the impact of liquidity on credit default swaps ( cds ) premium rates . cds represent the most liquid credit instruments and are highly standardized . the basic idea to include liquidity as a spread , leading to a liquidity stochastic discount factor , follows the approach adopted for example by chen , cheng and wu ( 2005 ) , buhler and trapp ( 2006 ) and ( 2008 ) ( bt06 and bt08 ) and chen , fabozzi and sverdlove ( 2008 ) ( cfs ) , among others . all approaches but bt08 are unrealistic inthat they assume the liquidity rate to be independent of the hazard rate associated to defaults of the relevant cds entities . bt08 and predescu et al ( 2009 )show , although in different contexts , that liquidity and credit are correlated .we discuss their results .we will then analyze a different approach , by bongaerts , de jong and driessen ( 2009 ) ( bdd ) who use capital asset pricing model ( capm ) like arguments to deduce liquidity from cds data .none of these works uses data in - crisis , i.e. 
after june 2007 .one exception is predescu et al ( 2009 ) ( ptglkr ) , where liquidity scores for cds data are produced starting from contributors bid ask or mid cds quotes across time .this is an ordinal liquidity measure , as opposed to a more attractive cardinal one , but it represents - to the best of our knowledge - the only work dealing with cds liquidity using also crisis data .after the ordinal model by predescu et al ( 2009 ) , we go back to cardinal models and briefly hint at tang and yan ( 2007 ) , that also includes bid ask information among other variables chosen as liquidity measures : volatility to volume , number of contracts outstanding , trade to quote ratio and volume .tang and yan is reviewed more in detail in brigo , predescu and capponi ( 2010 ) .we then conclude the paper by comparing the often contradictory findings of the above works , pointing out remaining problems in liquidity estimation and pricing in the credit market .the basic idea in this approach is to include liquidity as a ( possibly stochastic ) spread , leading to a liquidity ( possibly stochastic ) discount factor . in order to be able to work on liquidity for cds we need to introduce the cds contract and its mechanics . to this endwe follow brigo and mercurio ( 2006 ) , chapter 21 .the running cds contract is defined as follows .a cds contract ensures protection against default .two parties a " ( protection buyer ) and b " ( protection seller ) agree on the following .if a third party c " ( reference credit ) defaults at time , with , b " pays to a " a certain amount ( loss given default of the reference credit c " ) . in turn , a `` pays to ' ' b " a premium rate at times or until default .set and .we can summarize the above structure as ( protection leg and premium leg respectively ) .the amount is a _ protection _ for a " in case c " defaults .typically notional , or notional - recovery " .formally , we may write the cds discounted value at time as seen from b " as where , i.e. is the first date among the s that follows and is the risk free discount factor at time for maturity . a note on terminology : in the market is usually called cds spread ". however , we will use spread " both to denote the difference between the ask and bid quotes of a security and to indicate instantaneous rates on top of the default - free instantaneous rate . to avoid confusionwe will refer to as to the cds premium rate rather than cds spread . usually , at inception time ( say ) the amount is set at a value that makes the contract fair , i.e. such that the present value of the two exchanged flows is zero .this is how the market quotes running cds s : cds are quoted via their fair s ( bid and ask ) .recently , there has been some interest in upfront cds " contracts with a fixed premium rate in the premium leg . in these contracts the premium rate is fixed to some pre - assigned canonical value , typically 100 or 500 basis points ( bps , 1bp ) , and the remaining part of the protection is paid upfront by the party that is buying protection . in other terms , instead of exchanging a possible protection payment for some coupons that put the contract in equilibrium at inception , one exchanges it with a fixed coupon and compensates for the difference with an upfront payment .we denote by ,t_a , t_b , s , { \mbox{l{\tiny gd}}}) ] , and to shorten notation further we may write . 
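As a numerical illustration of the fair premium just defined (the rate that equates the present values of the premium and protection legs at inception), here is a hedged sketch assuming deterministic interest rates, independence of default and rates, a quarterly premium schedule and a given survival curve; the accrual-on-default term is dropped and the discretization is an assumption made only to keep the example short.

```python
# Illustrative CDS fair-premium pricer under the independence / deterministic-rates
# simplification discussed in the text.  All inputs below are toy numbers.
import numpy as np

def cds_fair_premium(times, discount, survival, lgd=0.6, premium_dates=None):
    """times: fine grid in years; discount, survival: curves on that grid."""
    if premium_dates is None:
        premium_dates = np.arange(0.25, times[-1] + 1e-9, 0.25)     # quarterly payments
    # protection leg: -LGD * integral of P(0,t) dQ(tau >= t)
    dq = -np.diff(survival)                                         # default probability per interval
    mid_disc = 0.5 * (discount[:-1] + discount[1:])
    protection = lgd * np.sum(mid_disc * dq)
    # premium leg per unit rate: sum of alpha_i * P(0,T_i) * Q(tau >= T_i)  (accrual ignored)
    annuity, prev = 0.0, 0.0
    for ti in premium_dates:
        annuity += (ti - prev) * np.interp(ti, times, discount) * np.interp(ti, times, survival)
        prev = ti
    return protection / annuity

# toy inputs: flat 3% short rate, flat 2% hazard rate, 5y maturity
t = np.linspace(0.0, 5.0, 501)
fair_s = cds_fair_premium(t, np.exp(-0.03 * t), np.exp(-0.02 * t))
print(f"fair CDS premium ~ {1e4 * fair_s:.1f} bps")                 # roughly hazard * LGD = 120 bps
```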
the pricing formulas for these payoffs depend on the assumptions on interest - rate dynamics and on the default time .if is assumed to be independent of interest rates , then model independent valuation formulas for cds s involving directly default ( or survival ) probabilities and default free zero coupon bonds are available . proper valuation of cds should also take into account counterparty risk , see for example brigo and chourdakis ( 2009 , unilateral case ) and brigo and capponi ( 2008 , bilateral case ) .here however we focus on works that did not consider counterparty risk in the cds valuation .in general , whichever the model , we can compute the cds price according to risk - neutral valuation : where is the risk neutral expectation conditional on the market information at time . if we define the fair premium of the cds at a given time as the value of the premium rate such that , i.e. such that the two legs of the cds have the same value , we can write , on , } { { \mathbb{e } } _ t[d(t,\tau ) ( \tau - t_{\beta(\tau)-1 } ) \mathbf{1}_{\{t_a < \tau < t_b \ } } + \sum_{i = a+1}^b d(t , t_i ) \alpha_i \mathbf{1}_{\{\tau \ge t_i\ } } ] } \ ] ] if we assume independence between rates and the default time , or more in particular deterministic interest rates , then the default time and interest rate quantities are independent .it follows that the ( receiver ) cds valuation , for a cds selling protection at time for defaults between times and in exchange of a periodic premium rate becomes ( pl = premium leg , dl = default leg or protection leg ) \\ \label{eqn : credit : modindcdsdl } \mbox{dl}_{a , b}(0,{\mbox{l{\tiny gd } } } ) = - { \mbox{l{\tiny gd}}}\left [ \int_{t_a}^{t_b } p(0,t)\d_t { \mathbb{q } } ( \tau \ge t)\right],\end{aligned}\ ] ] where ] are jointly gaussian .this is not a viable assumption for us , however , since needs to be positive , being a time - scaled probability . in order to avoid numerical methods ,most authors assume independence between credit risk and liquidity risk in order to be able to apply formula ( [ eq : defzcbter ] ) and the likes .it is worth highlighting at this point that this assumption is unrealistic , as we will discuss further in section [ sec : fitchliqscores ] . in particular , see figure [ fig : liqsmile ] below . in this paper , the authors study liquidity and its impact on single name cds prices for corporations . from a first exam of the data they notice that the bid - ask spreads are very wide , especially for the high - yield corporate names in their study .while this is pre - crisis data , they noticed that the liquidity in the cds market has improved in time , while still maintaining a larger bid - ask spread than typical bid - ask spreads in the equity market .after the preliminary analysis , the authors employ a two - factor cox - ingersoll - ross model for the liquidity and hazard rates and estimate their dynamics using maximum likelihood estimation ( mle ) . in the above formalism , this means they made two particular choices for the processes ( [ handlambda ] ) to be of the type dt + \nu^h \sqrt{h_t } dw^h_t , \ \ d { \ell}_t = [ k^{\ell}\theta^{\ell}- ( k^{\ell}+ m^{\ell } ) { \ell}_t ] dt + \nu^{\ell}\sqrt{{\ell}_t } dw^{{\ell}}_t,\ ] ] for positive constants and . 
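The risk-neutral CIR-type dynamics written above can be illustrated with a short Monte Carlo sketch. The parameter values are invented, and a full-truncation Euler scheme is used, which is only one of several possible discretizations; the point is simply to show how the drift form [k*theta - (k+m)*x] enters the simulation and how the survival probability is recovered as an expectation of the exponential of the integrated hazard.

```python
# Monte Carlo sketch of the CIR-type hazard / liquidity dynamics under the pricing measure.
import numpy as np

def simulate_cir_integral(x0, k, theta, nu, m=0.0, T=5.0, n_steps=500, n_paths=20000, seed=0):
    """Full-truncation Euler paths of dx = [k*theta - (k+m)*x]dt + nu*sqrt(x)dW;
    returns int_0^T x_s ds for every path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        xp = np.maximum(x, 0.0)                    # keeps the square-root diffusion well defined
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + (k * theta - (k + m) * xp) * dt + nu * np.sqrt(xp) * dw
        integral += xp * dt
    return integral

# Monte Carlo estimate of the survival probability Q(tau >= 5y) = E[exp(-int_0^5 h_s ds)]
surv = np.mean(np.exp(-simulate_cir_integral(x0=0.02, k=0.3, theta=0.025, nu=0.10)))
print(f"Q(tau >= 5y) ~ {surv:.4f}")
```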
the two processes are assumed , somewhat unrealistically ( see also the discussion in section [ sec : fitchliqscores ] ) , to be independent .the above is the dynamics under the risk neutral or pricing probability measure .this is the dynamics that is used in valuation , to compute risk neutral expectations and prices .the dynamics under the physical measure related to historical estimation and mle is where now are brownian motions under the physical measure . in ( [ handlambdacirphys ] )the s are mean reversion levels , the are speed of mean reversion parameters , the are instantaneous volatilities parameters. the parameters are market prices of risk , parameterizing the change of measure from the physical probability measure to the risk neutral one .see for example brigo and mercurio ( 2006 ) .brigo and hanzon ( 1998 ) hint at the possible application of filtering algorithms and quasi - maximum likelihood to a similar context for interest rate models .the advantages of the cir model are that the processes are non - negative , in that and ( and one has strict positivity with some restrictions on the parameters ) , and there is a closed form formula in terms of and for } = { \mathbb{e } } _ 0 \bigg { [ } e^{-\int_0^t h_u \ du } \bigg { ] } { \mathbb{e } } _ 0 \bigg { [ } e^{-\int_0^t { \ell}_u \ du } \bigg{]}\ ] ] and related quantities that are needed to compute the cds prices .indeed , through formula ( [ eq : cdsfairspread0 ] ) combined with formula ( [ eq : survivalhrm ] ) that is known in closed form for cir , we have the cds premium rate in closed form for our model . adding the liquidity discount term , , into to the cds premium rate formula ( [ eq : cdsfairspread ] ), we obtain } { { \mathbb{e } } _ t[d(t,\tau ) ( \tau - t_{\beta(\tau)-1 } ) \mathbf{1}_{\{t_a < \tau < t_b \ } } + \sum_{i = a+1}^b d(t , t_i ) \alpha_i \mathbf{1}_{\{\tau \ge t_i\ } } ] } \ ] ] notice that we added the additional ( il)liquidity discount term only in the numerator .this is the strategy followed by cfs , who argue that the annuity should not be adjusted by liquidity . as observed above thisis debatable , since even the premium leg of the cds is part of a traded product , and indeed for example bt[06,08 ] follow a different strategy . formula ( [ eq : cdsfairspreadwithliqcfs ] ) can be further explicited in terms of the processes via iterated expectations with respect to the sigma field .for the special case of one obtains } { \mbox{accrual}_{0,b } + \sum_{i = a+1}^b \alpha_i { \mathbb{e } } _ 0 [ \exp(-\int_0^{t_i}(r_s+h_s)ds ) ] } \\\nonumber \mbox{accrual}_{0,b } = \int_{0}^{t_b } { \mathbb{e } } _ 0\left [ h_u \exp\left(-\int_0^u(r_s+h_s)ds\right ) ( u - t_{\beta(u)-1 } ) \right]du.\end{aligned}\ ] ] this holds for general dynamics for , not necessarily of square root type , and does not require independence assumptions . in any case ,if one sticks to ( [ eq : cdsfairspreadwithliqcfs ] ) , the cds fair premium rate formula with deterministic interest rates and hazard rates independent of liquidity spreads reads = p^{\mbox \tiny cir}(0,t ; { \ell}_0 , k^{\ell},\theta^{\ell},\nu^{\ell } , m^{\ell})\label{eq : bondlambdacir } , \\ { \mathbb{q } } ( \tau \ge t ) = { \mathbb{e } } _ 0 [ e^{-\int_0^t h_s ds } ] = p^{\mbox \tiny cir}(0,t ; h_0 , k^h,\theta^h,\nu^h , m^h ) , \label{eq : bondhcir}\end{aligned}\ ] ] where is the bond price formula in a cir model having or respectively as short rate . 
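For completeness, a sketch of the closed-form CIR bond price in this parameterization is given here; the explicit A(t) and psi(t) functions are spelled out in the text immediately below, and the code follows the standard CIR formula with risk-neutral mean-reversion speed k+m. Evaluated with the hazard-rate parameters it returns the survival probability, and with the liquidity parameters it returns the liquidity discount; their product gives the joint discount only under the independence assumption discussed above.

```python
# Standard closed-form CIR zero-coupon "bond" price, written with this paper's parameters.
import numpy as np

def cir_bond(t, x0, k, theta, nu, m=0.0):
    """Closed-form E[exp(-int_0^t x_s ds)] for dx = [k*theta - (k+m)*x]dt + nu*sqrt(x)dW."""
    b = k + m                                       # risk-neutral mean-reversion speed
    gamma = np.sqrt(b**2 + 2.0 * nu**2)
    e = np.exp(gamma * t) - 1.0
    denom = 2.0 * gamma + (b + gamma) * e
    a_t = (2.0 * gamma * np.exp(0.5 * (b + gamma) * t) / denom) ** (2.0 * k * theta / nu**2)
    psi_t = 2.0 * e / denom
    return a_t * np.exp(-psi_t * x0)

# toy numbers: survival probability from the hazard-rate factor and the liquidity
# discount from the liquidity factor (all parameters invented)
q_surv = cir_bond(5.0, 0.02, 0.3, 0.025, 0.10)
liq_disc = cir_bond(5.0, 0.005, 0.5, 0.005, 0.05)
print(q_surv, liq_disc, q_surv * liq_disc)          # product is valid only under independence
```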
for example , ^{2 { k^h } \theta^h / ( \nu^h)^2 } \ , \\\psi(t ) & = & \frac{2 ( \exp\{t \sqrt{z}\ } - 1 ) } { 2 \sqrt{z } + ( { k^h } + { m^h } + \sqrt{z } ) ( \exp\{t \sqrt{z}\ } - 1 ) } \ , \\ z & = & ( { k^h } + { m^h})^2 + 2 ( \nu^h)^2 \ .\end{aligned}\ ] ] notice that we have all the terms in closed form to compute formula ( [ eq : cdsfairspread0liq ] ) thanks to the cir bond price formula and the independence assumption . then ( [ eq : cdsfairspread0liq ] ) combined with ( [ eq : bondlambdacir ] ) and ( [ eq : bondhcir ] ) provides a formula for cds premium rates with liquidity as a function of , and the model parameters and .similarly , formula ( [ eq : cdsfairspread ] ) coupled with ( [ eq : bondhcir ] ) provided a formula for cds premium rates without liquidity as a function of and the model parameters .these formulas can also be applied at a time later than . indeed , while taking care of adjusting year fractions and intervals of integration , one applies the same formula at time .let us denote the at - the - money liquidity adjusted cds rate , and the at - the - money cds rate , at time by respectively . a maximum likelihood estimationwould then be used ideally , trying to obtain the transition density for given from the non - central ( independent ) chi - squared transition densities for given and given .one would then maximize the likelihood over the sample period by using such transition densities as key tools , to obtain the estimated model parameters .notice that this would be possible only when and are independent , since the joint distribution of two correlated cir processes is not known in closed form .similarly , the transition density for given would be obtained from the chi - squared transition density for given .cfs adopt a maximum likelihood estimation method based on the earlier work by chen and scott ( 1993 ) .this maximum likelihood method allows cfs to compute : * the credit parameters from a time series of ask premium rates * the liquidity parameters from a time series of mid cds premium rates .cfs find that the parameters of the hazard rate factor are more sensitive to credit ratings and those for the liquidity component are more sensitive to market capitalization and number of quotes , two proxies for liquidity .cfs also refer to earlier studies where cds premiums had been used as a pure measure of the price of credit risk .cfs argue , through a simulation analysis , that small errors in the cds credit premium rate can lead to substantially larger errors in the corporate bond credit spread for the same reference entity .empirically , they use the cds estimated hazard rate model above to reprice bonds , with ( and ) and without ( just ) taking cds liquidity into account .when using these hazard rates to calculate bond spreads , cfs find that incorporating the cds liquidity factor results in improved estimates of the liquidity spreads for the bonds in their sample .cfs thus argue that while cds premiums can be used in the analysis of corporate bond spreads , one must be careful to take into account the presence of a liquidity effect in the cds market .results reported in the earlier literature before 2006 stated that bond credit spreads were substantially wider than cds premiums .this has been contradicted by many observations during the crisis , but already cfs show that , since a small cds liquidity premium can translate into a large liquidity discount in a bond s price , mostly due to the principal repayment at final maturity , they can successfully 
reconcile cds premiums and bond credit spreads by incorporating liquidity into their model .however , the relevance of this analysis for data in - crisis remains to be proven .finally , it is worth noticing that in cfs work the ( il)liquidity premium is earned by the cds protection buyer . indeed , adding a positive ( il)liquidity discount rate to the model ( and to the default leg only ) lowers the fair cds premium rate with respect to the case with no illiquidity .this means that the protection buyer will pay a lower premium for the same protection in a universe where illiquidity is taken into account , i.e. the liquidity premium is earned by the protection buyer .bt06 make a different choice , and assume that the cds liquidity discount appears in the premium leg of the cds and , furthermore , that there is a bond liquidity discount for the recovery part of the default leg .bt06 see liquidity as manifesting itself in the bond component of the default leg , that is involved in the recovery , as one gets a recovery of the bond value ideally when the bond with face value equal to the cds notional is meant to be delivered upon default .their approach may look debatable as well , although it is further motivated as follows : `` a common solution to this problem both in empirical studies and theoretical models [ ... ] , is to assume that the cds mid premium is perfectly liquid and thus identical to the transaction premium .we believe that this assumption , however , is not appropriate . from a theoretical point - of - view , the assumption suggests that transaction costs , here the bid - ask - spread , are equally divided between the protection buyer and the protection seller .this fiction neglects the possibility of asymmetric market frictions which lead to asymmetric transaction costs .the empirical evidence that cds transaction premia tend to fluctuate around mid premia , see buhler and trapp ( 2005 ) , adds weight to these theoretical concerns . in order to reconcile the theoretical arbitrage considerations to a model of cds illiquidity , we assume that the mid cds premium contains an illiquidity component . ''bt06 assume that all the bonds and the cds for the same issuer have identical default intensity ( ) but different liquidity intensities : for all bonds of one issuer and for that issuer cds .they also assume independence between default free rates , default intensity and liquidity intensities . 
using a similar notation as in the previous sections for simplicity ,although in their actual work bt06 use discrete time payments in the default leg , the model implied cds premium rate is equal to where , \l^{b}(0,t ) = e^{-\int_0^t { \ell}^{b}_s ds } , \ \ \ a^{c}(0,t ) = { \mathbb{e } } _ { 0 } [ l^{c}(0,t ) ] , \ \l^{c}(0,t ) = e^{-\int_0^t { \ell}^{c}_s ds}.\ ] ] are the liquidity discount factors for the bond and the cds payment leg respectively .the default intensity is assumed to follow a mean reverting square root process as in ( [ handlambdacir ] ) .liquidity intensities are assumed to follow arithmetic brownian motions with constant drift and diffusion coefficients : notation is self - evident .notice that the ( il)liquidity premium will be negative in some scenarios , due to the left tail of the gaussian distribution for .this is a major difference with cfs , where the illiquidity premium is always positive .these assumptions allow for analytical solutions for bonds and cds premium rates and for calibration to observed bond spreads and cds premium rates .bt06 perform an empirical calibration of the model using bonds and cds for 10 telecommunications companies over the time period 2001 - 2005 .they find that while the credit risk components in cds and bond credit spreads are almost identical , the liquidity premia differ significantly .the illiquidity premium component of bond credit spreads is always positive and is positively correlated with the default risk premium . in times of increased default risk bondsbecome less liquid .the cds illiquidity premium can take positive or negative values , but is generally very small in absolute value .contrary to the bonds case , cds liquidity improves when default risk increases .thus their framework can explain both positive and negative values for the cds bond basis through variations in the cds and bond liquidity .given the very small sample size in their study , it is not clear whether these results are representative of the whole market . also , they too use only pre - crisis data .finally , this approach suffers again from the independence assumption , that is rather unrealistic ( see once more the discussion in section [ sec : fitchliqscores ] ) .the correlation issue is addressed in bt08 , which extends the previous model in bt06 to a reduced form model incorporating now correlation between bond liquidity and cds liquidity and between default and bond / cds liquidity .additionally , they assume different liquidity intensities associated with the ask ( ) and bid cds ( ) . 
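Before moving to the correlated BT08 setting, a quick sketch of the liquidity discount implied by a BT06-style arithmetic Brownian liquidity intensity may help: since the time integral of the intensity is then Gaussian with mean l0*t + mu*t^2/2 and variance sigma^2*t^3/3, the discount E[exp(-int l)] has the closed form below and can exceed one, which is exactly the possibly negative (il)liquidity premium noted above. The numbers used are invented.

```python
# Closed-form liquidity discount for an arithmetic-Brownian liquidity intensity
# l_t = l0 + mu*t + sigma*W_t (toy parameters, for illustration only).
import numpy as np

def abm_liquidity_discount(t, l0, mu, sigma):
    mean_int = l0 * t + 0.5 * mu * t**2            # E[int_0^t l_s ds]
    var_int = (sigma**2) * t**3 / 3.0              # Var[int_0^t l_s ds]
    return np.exp(-mean_int + 0.5 * var_int)       # can be larger than 1

print(abm_liquidity_discount(5.0, 0.002, 0.0005, 0.01))
```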
in the bt08 model the stochastic default intensity ( ) and the illiquidity intensities ( , , )are all driven by four independent latent factors as follows where is modeled as a mean reverting square root process as in ( [ handlambdacir ] ) and as arithmetic brownian motions as in ( [ eq : lintbt ] ) .again , notation is self - evident .note that in this model and shape the correlations between the default intensity and the liquidity intensities , while shape the liquidity spillover effects between bonds and cds , which are assumed to be symmetric .notice that the system of equations in ( [ eq : modelbt08 ] ) does not guarantee , thus not excluding the case .however , in their empirical study , they find that this never occurs .it is further assumed that risk free interest rates are independent of the default and liquidity intensities .valuing bonds and cds in the bt08 framework mainly involves the computation of the expectation of the risk free discount factor and the expectation of the product of the default and liquidity discount factors .the latter expectation is not a product of expectations as before given the assumed dependence between default and liquidity , so that the analogous of formula ( [ eq : defzcbfour ] ) can not be applied .the bid cds premium rate formula becomes : dt } { \sum_{i=1}^{b } p(0,t_i ) \alpha_i { \mathbb{e } } _ { 0 } [ e^{-\int_0^{t_i } { \ell}^{c , bid}_s ds } e^{-\int_0^{t_i } h_s ds } ] + \mbox{accrual } } , \ ] ] dt\ ] ] the ask cds premium rate formula is similar with instead of . note that the ask illiquidity discount rate appears in the payment leg and captures the fact that part of the ask cds premium rate may not be due to default risk but reflects an additional premium for illiquidity demanded by the protection seller . on the other hand would capture the illiquidity premium demanded by the protection buyers .different illiquidity ask and bid spreads reflect asymmetric transaction costs which are driven by the general observed asymmetric market imbalances .the assumed factor structure of the model and the independence between the latent factors imply an affine term structure model with analytical formulae for both bonds and cds . for example ,expectations in ( [ eq : cdsfairspreadbidwithliqbt ] ) can be computed in closed form .data on bonds yields and cds premium rates on 155 european firms for the time period covering 2001 to 2007 is then used to estimate the model parameters .the estimation procedure generates firm - level estimates for the parameters of the latent variables processes , sensitivities of the different intensities to the latent factors ( ) and the values for the credit and liquidity intensities at each point in time ( ) .the empirical estimation in bt08 implies several interesting findings .first , their results suggest that credit risk has an impact on both bond and cds liquidity . 
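To see numerically why the joint expectation in the bid premium formula above no longer factors once the default and liquidity intensities share a common latent factor, here is a deliberately crude sketch. Gaussian toy intensities are used purely to keep it short; they are not the BT08 affine dynamics, and all numbers are invented.

```python
# Toy illustration: with a common credit factor, E[exp(-int(h+l))] differs from
# E[exp(-int h)] * E[exp(-int l)].  Not the BT08 model, just the correlation effect.
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 5.0, 250, 50000
dt = T / n_steps
h = np.full(n_paths, 0.02)        # toy default intensity
l = np.full(n_paths, 0.005)       # toy CDS liquidity intensity
h_int = np.zeros(n_paths)
l_int = np.zeros(n_paths)
beta = 0.8                        # loading of the liquidity intensity on the credit factor
for _ in range(n_steps):
    dz_credit = rng.normal(0.0, np.sqrt(dt), n_paths)
    dz_liq = rng.normal(0.0, np.sqrt(dt), n_paths)
    h = h + 0.020 * dz_credit
    l = l + 0.008 * (beta * dz_credit + dz_liq)
    h_int += h * dt
    l_int += l * dt

joint = np.mean(np.exp(-(h_int + l_int)))                      # E[exp(-int(h+l))]
product = np.mean(np.exp(-h_int)) * np.mean(np.exp(-l_int))    # product of marginal discounts
print(f"joint discount {joint:.4f} vs product of marginals {product:.4f}")
```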
as credit risk increases, liquidity dries up for bonds and for the cds ask premium rates ( , are positive and significant ) .however the impact of increased credit risk on cds bid liquidity spreads is mixed across different companies , but on average higher credit risk results in lower cds bid liquidity intensity ( is on average negative and significant ) .second , their results suggest that while the impact of bond or cds liquidity on credit risk is negligible ( , , are not statistically significant ) , the spill - over effects between bond and cds market liquidities are significant ( , are negative and significant , is positive and significant ) .they explain the signs of , as a substitution effect between bonds and cds : as bond liquidity dries up ( bond illiquidity intensity goes up ) , bond prices go down and thus taking on credit risk using bonds becomes more attractive .if a trader intends to be long credit risk by selling protection through cds , she will need to drop the ask price ( cds ask liquidity intensity goes down ) compared to the case of high bond liquidity . at the same time lower bond prices in case of lower bond liquidity ( higher ) makes shorting credit risk via bonds more costly which then drives bid quotes in the cds market higher ( higher ) .additionally bt use the empirical parameter and intensity estimates to decompose the bond spreads and cds premium rates into three separate components : the pure credit risk premium , the pure liquidity risk premium and the rest , the credit - liquidity correlation premium .in particular they estimate the pure cds credit risk premium ( ) as the theoretical cds premium rate implied by the model when the liquidity intensities are switched off to zero . the pure cds liquidity premium ( ) is subsequently computed as the difference between the average theoretical mid cds premium rate for uncorrelated credit and liquidity intensities ( and are zero ) and the pure cds credit risk premium . finally the correlation premium is calculated as the difference between the observed market cds premium rate and the sum of the pure credit and pure liquidity premiums . on average bt08 find that , for cds , the credit risk component accounts for 95% of the observed mid premium , liquidity accounts for 4% and correlation accounts for 1% .they proceed in similar fashion for the bond spread decomposition and find that overall 60% of the bond spread is due to credit risk , 35% is due to liquidity and 5% to correlation between credit risk and liquidity .cross - sectionally all credit , illiquidity and correlation premia for bonds and cds increase monotonically as the credit rating worsens from aaa to b and then drop for the ccc category .these findings are in contrast to the ptglkr findings discussed in section [ sec : fitchliqscores ] and in figure [ fig : liqsmile ] in particular .bt08 also examine the time series dynamics of the different components .they find that , while generally similar behavior can be observed for the credit risk premium for both investment grade ( ig ) and high yield ( hy ) firms , the same is not true for the liquidity premium . during a period with high credit spreads ( 2001 - 2002 , around enron and worldcom defaults ) the bond liquidity premium for ig is very volatile and then flattens out at a higher level about mid 2003 . on the other hand bond liquidity premium for hy firms reaches the highest level after worldcom default and decreases to a lower level for the rest of the time period . 
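The three-way decomposition described a few paragraphs above (pure credit, pure liquidity, credit-liquidity correlation premium) can be written in a few lines. The pricer `model_cds_premium` below is a placeholder for any calibrated model that maps credit parameters, liquidity parameters and a credit-liquidity correlation to a mid CDS premium; it is an assumption of this illustration, not a function from the BT08 paper, and a trivial stand-in is included only so the snippet runs.

```python
# Sketch of the pure-credit / pure-liquidity / correlation decomposition of a mid CDS premium.
def decompose_cds_premium(observed_mid, model_cds_premium, credit_params, liquidity_params):
    pure_credit = model_cds_premium(credit_params, liquidity_params=None, correlation=0.0)
    uncorrelated_mid = model_cds_premium(credit_params, liquidity_params, correlation=0.0)
    pure_liquidity = uncorrelated_mid - pure_credit
    correlation_premium = observed_mid - pure_credit - pure_liquidity
    return pure_credit, pure_liquidity, correlation_premium

# toy stand-in pricer, completely made up, only so the example runs end to end
def toy_pricer(credit_params, liquidity_params, correlation):
    s = credit_params["hazard"] * credit_params["lgd"]
    if liquidity_params is not None:
        s += liquidity_params["level"] + correlation * 1e-4
    return s

print(decompose_cds_premium(0.0128, toy_pricer,
                            {"hazard": 0.02, "lgd": 0.6},
                            {"level": 0.0005}))
```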
in the cds marketthe cds liquidity premium for the ig firms is close to 0 for most of time , while for hy it is very volatile and becomes negative when credit risk is high .a negative cds liquidity premium is consistent with more bid - initiated transactions in the market .the bond premium dynamics tend to comove over time with the credit risk premium dynamics .interestingly the correlation premium is larger / smaller than the liquidity premium when credit spreads are high / low .bt interpret this finding as being consistent with the flight to quality / liquidity hypothesis .in other words , in times of stress , investors will try to move away from assets whose liquidity would decrease as credit risk increases and instead acquire liquid assets that can be easily traded .high correlation between illiquidity and credit will thus command a high spread premium component .all the empirical results with respect to the difference between ig and hy should , in our view , be considered carefully since their sample is highly biased towards investment grade firms .also , as before , no data in - crisis has been used .there is a fourth work by bongaerts , de jong and driessen ( 2009 ) [ bdd ] who use capm like arguments to quantify the impact of liquidity on cds returns .they construct an asset - pricing model for cds contracts inspired by the work of acharya and pedersen ( 2005 ) ( ap ) that allows for expected liquidity and liquidity risk .since their approach is heavily based on ap , it is worth recalling ap s general result .ap start from the fundamental question : how does liquidity risk affect asset prices in equilibrium ? " .this question is answered by proposing an equilibrium asset pricing model with liquidity risk .their model assumes a dynamic overlapping generations economy where risk averse agents trade securities ( equities ) whose liquidity changes randomly over time .agents are assumed to have constant absolute risk aversion utility functions and live for just one period .they trade securities at times and and derive utility from consumption at time .they can buy a security at a price but must sell at thus incurring a liquidity cost .liquidity risk in this model is born from the uncertainty about illiquidity costs .under further assumptions such as no short selling , ar(1 ) processes with i.i.d .normal innovations for the dividends and illiquidity costs , ap derive the liquidity - adjusted conditional capm : }_{expected \ asset \ gross \ return } = \underbrace{r^{f}}_{risk \ free \rate}+ \underbrace{\mathbb{e}^{p}_{t}\left[c_{t+1}\right]}_{expected \ illiquidity \ cost } \\\nonumber + \underbrace{\pi_{t}}_{risk \ premium } \underbrace{\frac{cov_{t } \left ( r_{t+1 } , r^{m}_{t+1}\right ) } { var_{t } \left ( r^{m}_{t+1}-c^{m}_{t+1 } \right ) } } _ { \beta_{mkt , t } } + \pi_{t}\underbrace{\frac{cov_{t } \left ( c_{t+1 } , c^{m}_{t+1 } \right ) } { var_{t } \left ( r^{m}_{t+1}-c^{m}_{t+1 } \right ) } } _ { \beta_{2,t } } \\ \nonumber -\pi_{t}\underbrace{\frac{cov_{t } \left ( r_{t+1 } , c^{m}_{t+1 } \right ) } { var_{t } \left ( r^{m}_{t+1}-c^{m}_{t+1 } \right ) } } _ { \beta_{3,t}}-\pi_{t}\underbrace{\frac{cov_{t } \left ( c_{t+1 } , r^{m}_{t+1 } \right ) } { var_{t } \left ( r^{m}_{t+1}-c^{m}_{t+1 } \right ) } } _ { \beta_{4,t}}\end{aligned}\ ] ] where is the conditional market risk premium , with the expectation taken under the physical measure .the remaining notation is self - evident .the liquidity - adjusted conditional capm thus implies that the asset s required conditional 
excess return depends on its conditional expected illiquidity cost and on the conditional covariance of the asset return and asset illiquidity cost with the market return and the market illiquidity cost . the systematic market and liquidity risksare captured by four conditional betas . the first beta ( )is the traditional capm that measures the co - variation of individual security s return with the market return .the second beta ( ) measures the covariance between asset s illiquidity and the market illiquidity .the third beta ( ) measures the covariance between asset s return and the market illiquidity .this term affects negatively the required return .investors will accept a lower return on securities that have high return in times of high market illiquidity .the fourth beta ( ) measures the covariance between asset s illiquidity and the market return .the effect of this is also negative .investors will accept a lower return on securities that are liquid in times of market downturns . in order to estimate the model empirically ,the unconditional version of the model is derived under the assumption of constant conditional covariances between illiquidity and returns innovations . the unconditional liquidity adjusted capm can be written as : =\mathbb{e}^{p}\left[c_{t}\right]+\pi \beta_{mkt}+ \pi \beta_{2}-\pi \beta_{3}- \pi \beta_{4}\end{aligned}\ ] ] where $ ] is the unconditional market risk premium . ap perform the empirical estimation of the model using daily return and volume data on nyse and amex stocks over the period 1962 - 1999 .the illiquidity measure for a stock is the monthly average of the daily absolute return to volume ratio proposed by amihud ( 2002 ) .illiquid stocks will have higher ratios as a small volume will have a high impact on price .the amihud illiquidity measure ratio addresses only one component of liquidity costs , namely the market impact of traded volume .other components include broker fees , bid - ask spreads and search costs . using portfolios sorted along different dimensions , ap find that the liquidity adjusted capm performs better than the traditional capm in explaining cross - sectional variations in returns , especially for the liquidity sorted portfolios .liquidity risk and expected liquidity premiums are found to be economically significant . on average the premium for expected liquidity ,i.e. the empirical estimate for the unconditional expected illiquidity cost , is equal to 3.5% .the liquidity risk premium , calculated as , is estimated to be 1.1% .about 80% of the liquidity risk premium is due to the third component which is driven by the covariation of individual illiquidity cost with the market return .bdd extend the model proposed by ap to an asset pricing model for both assets in positive net supply ( like equities ) and derivatives in zero net supply . differently from the ap framework where short selling is not allowed , in the bdd modelsome of the agents are exposed to non - traded risk factors and in equilibrium they hold short positions in some assets to hedge these risk factors .specifically there are two types of assets in the model : basic or non - hedge assets ( e.g. 
equities ) which agents hold long positions on in equilibrium and hedge assets which can be held long or short by different agents in equilibrium .hedge assets are sold short by some agents to hedge their exposures to non - traded risks .examples of such risks are non - traded bank loans or illiquid corporate bonds held by some financial institutions such as commercial banks .these institutions can hedge the risks with cds contracts .other agents such as hedge funds or insurance companies may not have such exposures and may sell cds to commercial banks to earn the spread .the bdd model implies that the equilibrium expected returns on the hedge assets can be decomposed in several components : priced exposure to the non - hedge asset returns , hedging demand effects , an expected illiquidity component , liquidity risk premia and hedge transaction costs . unlike the ap model where higher illiquidity leads to lower prices and higher expected returns ,the impact of the liquidity on expected returns in bdd model is more complex .the liquidity risk impact depends on several factors such as heterogeneity in investors non - traded risk exposure , risk aversion , horizon and agents wealth .additionally bdd model implies that , for assets in zero net supply like cdss , sensitivity of individual liquidity to market liquidity ( ) is not priced .bdd perform an empirical test of the model on cds portfolio returns over the 2004 - 2008 period .the cds sample captures 46% of the corporate bond market in terms of amount issued .the estimation procedure is a two - step procedure . in the first step ,expected cds returns , liquidity measures ( proxied by the bid - ask spread ) , non traded risk factor returns , non - hedge asset returns and different betas with respect to market returns and market liquidity are estimated .the non - hedge asset returns are proxied by the s&p 500 equity index returns .such estimates represent the explanatory variables of the asset pricing model , where the response variable is the expected excess return on the hedge asset . in the second step , the generalized method of momentsis used to estimate the coefficients of the different explanatory variables in the model .their results imply a statistically and economically significant expected liquidity premium priced in the expected cds returns . 
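Going back to the four betas of the liquidity-adjusted CAPM recalled earlier, a minimal sketch of their unconditional estimation from time series is given below. The inputs are simulated only to exercise the function; a careful application would first extract innovations from AR(1) filters for the illiquidity series, a step omitted here.

```python
# Sketch: the four betas of the unconditional liquidity-adjusted CAPM from raw series.
import numpy as np

def liquidity_capm_betas(r, c, r_m, c_m):
    """r, c: asset return and illiquidity-cost series; r_m, c_m: market counterparts."""
    net_m = r_m - c_m
    denom = np.var(net_m, ddof=1)
    beta_mkt = np.cov(r, r_m)[0, 1] / denom        # classic market beta
    beta_2 = np.cov(c, c_m)[0, 1] / denom          # commonality in liquidity
    beta_3 = np.cov(r, c_m)[0, 1] / denom          # return sensitivity to market illiquidity
    beta_4 = np.cov(c, r_m)[0, 1] / denom          # liquidity sensitivity to market return
    return beta_mkt, beta_2, beta_3, beta_4

# simulated monthly data, purely illustrative
rng = np.random.default_rng(2)
r_m = rng.normal(0.005, 0.04, 480)
c_m = np.abs(rng.normal(0.002, 0.001, 480))
r = 0.8 * r_m + rng.normal(0.0, 0.02, 480)
c = 0.5 * c_m + np.abs(rng.normal(0.001, 0.0005, 480))
print(liquidity_capm_betas(r, c, r_m, c_m))
```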
on average thisis 0.175% per quarter and it is earned by the protection seller , contrary to ccw and cfs above .they also find that the liquidity risk premium is statistically significant , but economically very small , -0.005% .somewhat questionably , the equity and credit risk premia together account for only 0.060% per quarter .predescu et al ( 2009 ) have built a statistical model that associates an ordinal liquidity score with each cds reference entity .this provides a comparison of relative liquidity of over 2,400 reference entities in the cds market globally , mainly concentrated in north america , europe , and asia .the model estimation and the model generated liquidity scores are based upon the fitch cds pricing service database , which includes single - name cds quotes on over 3000 entities , corporates and sovereigns , across about two dozen broker - dealers back to 2000 .the liquidity score is built using well - known liquidity indicators like the bid - ask spread as well as other less accessible predictors of market liquidity such as number of active dealers quoting a reference entity , staleness of quotes of individual dealers and dispersion in mid - quotes across market dealers .the bid - ask spread is essentially an indicator of market breadth ; the existence of orders on both sides of the trading book typically corresponds to tighter bid - asks .the other measures are novel measures of liquidity which appear to be significant model predictors for the otc cds market .dispersion of mid quotes across dealers is a measure of price uncertainty about the actual cds price .less liquid names are generally associated with more price uncertainty and thus large dispersion .the third liquidity measure aggregates the number of active dealers and the individual dealers quote staleness into an ( in)activity measure , which is meant to be a proxy for cds market depth .illiquidity increases if any of the liquidity predictors increases , keeping everything else constant. therefore liquid ( less liquid ) names are associated with smaller(larger ) liquidity scores .the liquidity scores add insight into the liquidity risk of the credit default swap ( cds ) market including : understanding the difference between liquidity and credit risk , how market liquidity changes over time , what happens to liquidity in times of stress , and what impact certain credit events have on the liquidity of individual assets . for example it reveals a u - shape relation between liquidity and credit with names in bb and b categories being the most liquid and names at the two ends of credit quality ( aaa and ccc / c ) being less liquid .( see figure [ fig : liqsmile ] . )the u - shape relation between liquidity and credit is quite intuitive as one would expect a larger order imbalance between buy and sell orders for names with a very high or a very low credit rating than for names with ratings in the middle range . in particular, it is reasonable to expect more buying pressure for ccc names and more selling pressure for aaa names .most of the trading will take place in the middle categories ( bbb , bb , and b ) .the extent of the illiquidity at the two extremes also changes over time .this is particularly more pronounced for c - rated entities , which were relatively less liquid in 2007 .additionally predescu et al ( 2009 ) find that the liquidity score distribution shifts significantly over the credit / liquidity crisis , having fatter tails ( i.e. 
more names in the very liquid and very illiquid groups ) than before the crisis .the score allows for the construction of liquidity indices at the aggregate market , region or sector levels and , therefore , is very useful for studying market trends in liquidity .this study produces an operational measure of liquidity for each cds reference entity name on a daily basis , and has been extensively validated against external indicators of liquidity . after the ordinal model by predescu et al ( 2009 ) , we go back to works that decompose cds premium rates levels or changes into liquidity and credit components , and briefly hint at one such study : tang and yan ( 2007 ) .they also include bid ask information among other variables chosen as liquidity measures : volatility to volume , number of contracts outstanding , trade to quote ratio and volume .they separate liquidity from credit by including other credit control variables in the regression .the liquidity variables are generally statistically significant , however their impact on premium rates differs .a more detailed review of tan and yang is available in brigo , predescu and capponi ( 2010 ) .this paper reviews different theoretical and empirical approaches for measuring the impact of liquidity on cds prices .we start by investigating a reduced form approach that incorporates liquidity as an additional discount yield . the different studies ( chen , fabozzi and sverdlove ( 2008 ) , buhler and trapp ( 2006 , 2008 ) ) that use a reduced form model make different assumptions about how liquidity intensity enters the cds premium rate formula , about the dynamics of liquidity intensity process and about the credit - liquidity correlation . among these studies bt08provides the most general and a more realistic reduced form framework by incorporating correlation between liquidity and credit , liquidity spillover effects between bonds and cds contracts and asymmetric liquidity effects on the bid and ask cds premium rates .however the empirical testing of their model can be significantly improved by using a larger , more representative sample over a longer time period including the crisis .we then discuss the bongaerts , de jong and driessen ( 2009 ) study which derives an equilibrium asset pricing model with liquidity effects .they test the model using cds data and find that both expected liquidity and liquidity risk have a statistically significant impact on expected cds returns .however only compensation for expected liquidity is economically significant with higher expected liquidity being associated with higher expected returns for the protection sellers .this finding is contrary to chen , cheng and wu ( 2005 , reviewed in brigo , predescu and capponi , 2010 ) and chen , fabozzi and sverdlove ( 2008 ) that found protection buyers to earn the liquidity premium .we approach the end of our review with a discussion of predescu et al ( 2009 ) which provides the only operational measure of cds liquidity that is currently available in the market .they propose a statistical model that associates an ordinal liquidity score with each cds reference entity and allows one to compare liquidity of over 2400 reference entities .after the ordinal model by predescu et al ( 2009 ) , we briefly hint at the work by tang and yan ( 2007 ) , that decomposes again cds premiums into liquidity and credit components , also including bid ask information among other variables chosen as liquidity measures .this is reviewed more in detail in brigo , predescu and capponi ( 
2010 ) . despite their methodological differences , all these studies point to one common conclusion andthat is that cds premium rates should not be assumed to be only pure measures of credit risk .cds liquidity varies cross - sectionally and over time .more importantly , cds expected liquidity and liquidity risk premia are priced in cds expected returns and premium rates . nevertheless further research is needed to test and validate the cds liquidity premiums and the separation between credit and liquidity premia at cds contract level .bangia , a. , diebold , f.x . , schuermann , t. and stroughair , j.d .modeling liquidity risk with implications for traditional market risk measurement and management . _working paper _ , financial institutions center at the wharton school .brigo , d. , and capponi , a. ( 2008 ) .bilateral counterparty risk valuation with stochastic dynamical models and application to credit default swaps .available at papers.ssrn.com/sol3/papers.cfm?abstract_id=1318024 .brigo , d. , and chourdakis , k. ( 2009 ) .counterparty risk for credit default swaps : impact of spread volatility and default correlation ._ international journal of theoretical and applied finance _12(7 ) , pp . 10071026 .brigo , d. , presescu , m. , and capponi , a. ( 2010 ) .liquidity modeling for credit default swaps : an overview . to appear in : bielecki , t. , brigo , d. , and patras , f. ( editors ) , credit risk frontiers : the subprime crisis , pricing and hedging , cva , mbs , ratings and liquidity , bloomberg press .cetin , u. , jarrow , r. , protter , p. , and warachka , m. ( 2005 ) .pricing options in an extended black - scholes economy with illiquidity : theory and empirical evidence ._ review of financial studies _19(2 ) , pp . 439529 .chen , r.r . ,cheng , x. , and wu , l. ( 2005 ) .dynamic interactions between interest rate , credit , and liquidity risks : theory and evidence from the term structure of credit default swaps . _ working paper _ , available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=779445 .chen , r.r , fabozzi , f. , and sverdlove , r. ( 2008 ) .corporate cds liquidity and its implications for corporate bond spreads ._ working paper _ , available at http://business.rutgers.edu/files/cfs20080907.pdf .predescu , m. , thanawalla , r. , gupton , g. , liu , w. , kocagil , a. , and reyngold , a. ( 2009 ) .measuring cds liquidity .fitch solutions presentation at the bowles symposium , georgia state university , february 12 2009 , available at http://www.pfp.gsu.edu/bowles/bowles2009/liu-liquiditymeasure_methodlogyresults.pdf .
we review different theoretical and empirical approaches for measuring the impact of liquidity on cds prices . we start with reduced form models incorporating liquidity as an additional discount rate . we review chen , fabozzi and sverdlove ( 2008 ) and buhler and trapp ( 2006 , 2008 ) , which adopt different assumptions on how liquidity rates enter the cds premium rate formula , on the dynamics of liquidity rate processes and on the credit - liquidity correlation . buhler and trapp ( 2008 ) provides the most general and realistic framework , incorporating correlation between liquidity and credit , liquidity spillover effects between bonds and cds contracts and asymmetric liquidity effects on the bid and ask cds premium rates . we then discuss the bongaerts , de jong and driessen ( 2009 ) study , which derives an equilibrium asset pricing model incorporating liquidity effects . its findings include that both expected illiquidity and liquidity risk have a statistically significant impact on expected cds returns , but only compensation for expected illiquidity is economically significant , with higher expected liquidity being associated with higher expected returns for the protection sellers . this finding is contrary to chen , fabozzi and sverdlove ( 2008 ) , which found that protection buyers earn the liquidity premium instead . we conclude our review with a discussion of predescu et al ( 2009 ) , which also analyzes in - crisis data . this is a statistical model that associates an ordinal liquidity score with each cds reference entity and allows one to compare the liquidity of over 2400 reference entities . this study points out that credit and illiquidity are correlated , with a smile pattern . all these studies highlight that cds premium rates are not pure measures of credit risk . cds liquidity varies cross - sectionally and over time . cds expected liquidity and liquidity risk premia are priced in cds expected returns and spreads . further research is needed to measure the liquidity premium at the cds contract level and to disentangle liquidity from credit effectively . * ams classification codes * : 60h10 , 60j60 , 91b70 ; * jel classification codes * : c51 , g12 , g13 * keywords : * credit default swaps , liquidity spread , liquidity premium , credit liquidity correlation , liquidity pricing , intensity models , reduced form models , capital asset pricing model , credit crisis , liquidity crisis .
due to numerous perturbations and errors , one can not expect a spacecraft steered by the precomputed optimal control to exactly move on the correspondingly precomputed optimal trajectory .the precomputed optimal trajectory and control are generally referred to as the nominal trajectory and control , respectively .once a deviation from the nominal trajectory is measured by navigational systems , a guidance strategy is usually required to calculate a new ( or corrected ) control in each guidance cycle such that the spacecraft can be steered by the new control to track the nominal trajectory or to move on a new optimal trajectory . since the 1960s , various guidance schemes have been developed , among of which there are two main categories : implicit one and explicit one . while the implicit guidance strategy generally compares the measured state with the nominal one to generate control corrections ; the explicit guidance strategy recomputes a flight trajectory by onboard computers during its motion . to implement an explicit guidance strategy , numerical integrations and iterations are usually required to solve a highly nonlinear two - point boundary - value problem ( tpbvp ) andthe time required for convergence heavily depends on the merits of initial guesses as well as on the integration time of each iteration . in recent years , through employing a multiple shooting method and the analytical property arizing from the assumption that the gravity field is linear , an explicit closed - loop guidance is well developed by lu _et al . _ for exo - atmospheric ascent flights and for deorbit problems .this explicit type of guidance for endo - atmospheric ascent flights were studied as well in refs . . whereas , the duration of a low - thrust orbtial transfer is so exponentially long that the onboard computer can merely afford the large amount of computational time for integrations and iterations once a shooting method is employed , which makes the explicit guidance strategy unattractive to low - thrust orbital transfer problems .the nog is an implicit and less demanding guidance scheme , which not only allows the onboard computer to realize an online computation once the gain matrices associated with the nominal extremal are computed offline and stored in the onboard computer but also handles disturbances well . assuming the optimal control function is totally continuous , the linear feedback of control was proposed independently by breakwell _ , kelley , lee , speyer _ et al . _ , bryson _et al_. , and hull through minimizing the second variation of the cost functional subject to the variational state and adjoint equations . based on this method , an increasing number of literatures , including refs . and the references therein , on the topic of the nog for orbital transfer problems have been published .more recently , a variable - time - domain nog was proposed by pontani _ to avoid the numerical difficulties arising from the singularity of the gain matrices while approaching the final time and it was then applied to a continuous thrust space trajectories .however , difficulties arize when we consider to minimize the fuel consumption for a low - thrust orbital transfer because the corresponding optimal control function exhibits a bang - bang behavior if the prescribed transfer time is bigger than the minimum transfer time for the same boundary conditions . 
considering the control function as a discontinuous scalar, the corresponding neighboring optimal feedback control law was studied by mcintyre and mcneal .then , foerster __ extended the work of mcintyre and mcneal to problems with discontinuous vector control functions . using a multiple shooting technique ,the algorithm for computing the nog of general optimal control problems with discontinuous control and state constraints was developed in ref . , which was then applied to a space shuttle guidance in ref .as far as the author knows , a few scholars , including chuang _ et al . _ and kornhauser _ et al . _ , have made efforts on developing the nog for low - thrust multi - burn orbital transfer problems . in the work by chuang _et al . _ , without taking into account the feedback on thrust - on times , the second variation on each burn arc was minimized such that the neighboring optimal feedbacks on thrust direction and thrust off - times were obtained . considering both endpointsare fixed , kornhauser and lion developed an amp for bounded - thrust optimal orbital transfer problems .then , through minimizing this amp , the linear feedback forms of thrust direction and switching times were derived . as is well known, it is impossible to construct the nog unless the jc holds along the nominal extremal since the gain matrices are unbounded if the jc is violated .this result was actually obtained by kelley , kornhauser _ , chuang _ , pontani _ et al . _ , and many others who minimize the amp to construct the nog . as a matter of fact ,given every infinitesimal deviation from the nominal state , the jc , once satisfied , guarantees that there exists a neighboring extremal trajectory passing through the deviated state .therefore , the existence of neighboring extremals is a prerequisite to establish the nog .once the optimal control function exhibits a bang - bang behavior , it is however not clear what conditions have to be satisfied in order to guarantee the existence of neighboring extremals .to construct the conditions that , once satisfied , guarantee that for every state in an infinitesimal neighborhood of the nominal one there exists a neighboring extremal passing through it , this paper presents a novel neighboring extremal approach to establish the nog .the crucial idea is to construct a parameterized family of neighboring extremals around the nominal one .then , as a result of a geometric study on the projection of the parameterized family from tangent bundle onto state space , it is presented in this paper that the conditions sufficient for the existence of neighboring extremals around a bang - bang extremal consist of not only the jc between switching times but also a tc at each switching time .according to recent advances in geometric optimal control , the jc and the tc , once satisfied , are also sufficient to guarantee the nominal extremal to be locally optimal provided some regularity assumptions are satisfied .given these two existence conditions , the neighboring optimal feedbacks on thrust direction and switching times are established in this paper through deriving the first - order taylor expansion of the parameterized neighboring extremals . the present paper is organized as follows : in sect .[ se : problem_formulation ] , the fixed - time low - thrust fuel - optimal orbital transfer problem is formulated and the first - order necessary conditions are presented by applying the pontryagin maximum principle ( pmp ) . 
in sect .[ se : sufficient ] , a parameterized family of neighbouring extremals around a nominal one is first constructed . through analyzing the projection behavior of the parameterized family from tangent bundle onto state space ,two conditions sufficient for the existence of neighboring extremals are constructed .then , the neighboring optimal feedbacks on thrust direction and switching times are derived . in sect .[ se : implementation ] , the numerical implementation for the nog scheme is presented . in sect .[ se : numerical ] , a fixed - time low - thrust fuel - optimal orbital transfer problem is computed to verify the development of this paper . finally , a conclusion is given in sect .[ se : conclusion ] .throughout the paper , we denote the space of -dimensional column vectors by and the space of -dimensional row vectors by .consider the spacecraft is controlled by a low - thrust propulsion system , the state ^t\in\mathbb{r}^n ] the normalized mass flow rate of the engine , i.e. , , and let be the unit vector of the thrust direction , one immediately gets .accordingly , and can be considered as control variables .set \times\mathbb{s}^2 ] from the fixed initial point to a final point such that the fuel consumption is minimized , i.e. , for every , if is small enough , the controllability of the system holds in the admissible set .let be the minimum transfer time of the system for the same boundary conditions as the _ fop _ , if , there exists at least one fuel - optimal solution in .thanks to the controllability and the existence results , the pmp is applicable to formulate the following necessary conditions .hereafter , we define by the column vector the costate of . then , according to the pmp in ref . , if an admissible controlled trajectory \rightarrow \mathcal{x} ] is an optimal one of the _ fop _ , there exists a nonpositive real number and an absolutely continuous mapping on ] , such that the 5-tuple on ] , if satisfying eqs .( [ eq : canonical][eq : hamiltonian_h ] ) , is called an extremal .furthermore , an extremal is called a normal one if and it is called an abnormal one if .the abnormal extremals were readily ruled out by gergaud and haberkorn .thus , only normal extremals are considered and is normalized in such a way that in this paper . according to the maximum condition in eq .( [ eq : maximum_condition ] ) , the extremal control is a function of on ] in tangent bundle the extremal and by on ] .then , the maximum condition in eq .( [ eq : maximum_condition ] ) implies and thus , the optimal direction of the thrust vector is collinear to , well known as the primer vector .an extremal on ] with , and the singular value of can be obtained by repeatedly differentiating the identity until explicitly appears .it is called a nonsingular one if the switching function on ] and \rightarrow \mathcal{t} ] , we say every time solution trajectory \rightarrow t\mathcal{x} ] lies in .[ de : reference ] in next paragraph , the neighboring extremals will be parameterized .let us define a submanifold as then , according to definition [ de : reference ] , for every neighbouring extremal on ] . given the nominal extremal on ] .[ de : parameterized ] for the sake of notational clarity , let us define a mapping that projects a submanifold from the tangent bundle onto the state space . given the nominal extremal on ] is a prerequisite to construct the nog . 
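the requirement that the projection of the parameterized family onto the state space be a diffeomorphism can also be probed numerically: perturb the parameter q (here, the free initial costate), propagate the neighbouring extremals, and monitor det(dx/dq) along the nominal extremal; a vanishing determinant flags a fold (conjugate) point. the sketch below uses a toy hamiltonian system (a harmonic oscillator with quadratic control cost) and finite differences, so it is only a numerical stand-in for the analytic conditions derived in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):
    # toy state-costate dynamics for x'' = -x + u with cost 0.5*u^2, so u = -p2
    x1, x2, p1, p2 = z
    return [x2, -x1 - p2, p2, -p1]

def det_dx_dq(t_grid, x0, q0, eps=1e-6):
    flow = lambda q: solve_ivp(rhs, (t_grid[0], t_grid[-1]),
                               np.concatenate([x0, q]),
                               t_eval=t_grid, rtol=1e-10, atol=1e-10).y[:2]
    base = flow(q0)
    cols = [(flow(q0 + eps * e) - base) / eps for e in np.eye(2)]  # columns of dx/dq
    jac = np.stack(cols, axis=-1)                                  # shape (2, n_times, 2)
    return np.array([np.linalg.det(jac[:, k, :]) for k in range(len(t_grid))])

dets = det_dx_dq(np.linspace(0.0, 8.0, 400),
                 x0=np.array([1.0, 0.0]), q0=np.array([0.3, -0.2]))
print("sign changes of det(dx/dq):", int(np.sum(np.diff(np.sign(dets[1:])) != 0)))
```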
in next subsection , through analyzing the projection behavior of the family at each time from onto , the conditions for the existence of neighboring extremals around a nominal one with a bang - bang control will be established .hereafter , we denote by the current time and let be the measured ( or actual ) state of the spacecraft at . generally speaking , there holds let be the minimum time to steer the system by measurable controls \rightarrow\mathcal{t} ] connecting and . there exists at least one point such that .[ as : tm ] according to the controllability results in ref . , this assumption implies that there exists at least one fuel - optimal trajectory on ] , let assumption [ as : tm ] be satisfied for every and denote by an infinitesimal open neighborhood of the point .then , if the point lies on the boundary of the domain for a subset , no matter how small the neighborhood is , there are some such that , i.e , no neighboring extremals in the family restricted to the subset can pass through the point at .[ pr : diffeomorphism1 ] if the point lies on the boundary of the domain , for every open neighborhood of , the set is not empty .thus , for every , there holds , which proves the proposition .if the projection of at is a fold singularity , the trajectories around intersect with each other as is shown by the typical picture in figure [ fig : smooth_fold ] ..,width=288 ] as is illustrated by the right plot in figure [ fig : smooth_fold1 ] , the point lies on the boundary of the domian for every sufficiently small subset if the projection of at is a fold singularity .consequently , proposition [ pr : diffeomorphism1 ] indicates that for some sufficiently small deviation there holds for every once the projection of at is a fold singularity .given the nominal extremal on ] , it is enough to establish the conditions that guarantee the projection of the family at each time is a diffeomorphism . in next paragraph, the conditions related to the projection properties of at each time will be established . without loss of generality , we assume that , from the current time on , there exist switching times ( ) such that along the nominal extremal on ] , each switching point is assumed to be a regular one , i.e. , and for .[ as : regular_switching ] as a result of this assumption , if the subset is small enough , the -th switching time of the extremals in is a smooth function of .thus , we are able to define as the -th switching time of the extremal on ] on .[ con : disconjugacy_bang ] this condition is equivalent with the jc .if the subset is small enough , this condition guarantees that projection of the family on each subinterval for with is a diffeomorphism , see refs . . however , this condition is not sufficient to guarantee the projection of the family on the whole semi - open interval is a diffeomorphism because there exists another type of fold singularity near each switching time , as is illustrated by figure [ fig : trans ] that the trajectories around the switching time may intersect with each other .let and denote the instants a priori to and after the switching time , respectively . if condition [ con : disconjugacy_bang ] is satisfied , according to refs . 
, there exists an inverse function such that then , the set is the switching surface in -space .obviously , the projection of at the switching time is a diffeomorphism if the flows on the sufficiently short interval ] , as is presented by the following remark .assume the subset is small enough and that assumption [ as : regular_switching ] is satisfied .then , for every , eq .( [ eq : tc ] ) is satisfied if and only if \text{det}\left[\frac{\partial \x}{\partial \q}(t_{i}^-({\q}),{\q})\right ] > 0.\nonumber\end{aligned}\ ] ] and , eq .( [ eq : fc ] ) is satisfied if and only if \text{det}\left[\frac{\partial \x}{\partial \q}(t_{i}^-(\q),{\q})\right ] < 0. \label{eq : fc1}\end{aligned}\ ] ] let the strict inequality \text{det}\left[\frac{\partial \x}{\partial \q}(t_{i}^-(\bar{\q}),\bar{\q})\right ] > 0 ] . [ as : transversal ] according to previous analysis , if assumption [ as : regular_switching ] is satisfied and the subset is small enough , conditions [ con : disconjugacy_bang ] and [ as : transversal ] are sufficient to guarantee the projection of at each time is a diffeomorphism .then , according to proposition [ pr : diffeomorphism ] , one obtains the following result . given the nominal extremal on ] such that each switching point is regular ( cf .assumption [ as : regular_switching ] ) , if conditions [ con : disconjugacy_bang ] and [ as : transversal ] are satisfied , the nominal trajectory on ] in associated with the measurable control \rightarrow \mathcal{t} ] .[ th : optimality ] consequentely , _ conditions _ [ con : disconjugacy_bang ] and [ as : transversal ] are also sufficient to guarantee the local optimizer or the absence of conjugate points on the nominal extremal on ] can keep bounded even though eq .( [ eq : fc1 ] ) is satisfied .hence , the classical variational method , which detects conjugate points through testing the unbounded time of the matrix \left[{\partial \x(t,\bar{\q})}/{\partial \q}\right]^{-1} ] .according to corollary [ co : existence ] , if conditions [ con : disconjugacy_bang ] and [ as : transversal ] are satisfied and the deviation is small enough , there then exists a such that .obviously , once the new extremal on the interval ] to fly to .though various numerical methods , e.g. , direct ones , indirect ones , and hybrid ones , are available in the literature to compute on ] , will be derived such that the spacecraft can be controlled to move closely enough along the extremal trajectory on ] .set , the first - order taylor expansion of is where is the sum of second and higher order terms .note that there holds for every .differentiating the identity with respect to yields note that by assumption [ as : regular_switching ] , one obtains /\dot{h}_1(\bar{\x}(t_i),\bar{\p}(t_i ) ) , \label{eq : variation_switching_time}\end{aligned}\ ] ] where two vectors can be directly computed once the nominal extremal on ] as ^{-1}. \label{eq : s}\end{aligned}\ ] ] set ^{-1}\delta \x,\end{aligned}\ ] ] it is clear that is the first - order term of the deviation .substituting eq .( [ eq : delta_q ] ) and eq .( [ eq : variation_switching_time ] ) into eq .( [ eq : delta_ti ] ) , one gets \delta \x_i /\dot{h}_1(\bar{\x}(t_i),\bar{\p}(t_i))\nonumber\\ & + & \frac{d t_i(\bar{\q})}{d\q } o_{\q}({\left \| \delta\x\right \|}^2 ) + o_{t_i}({\left \|\delta \q\right \|}^2 ) .\nonumber \ ] ] let \delta \x_i /\dot{h}_1(\bar{\x}(t_i),\bar{\p}(t_i ) ) , \label{eq : delta_ti}\end{aligned}\ ] ] be the first - order term of . 
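in matrix form the first-order correction just derived is a two-line computation once the sensitivities along the nominal extremal are available. in the sketch below the matrices are small numerical placeholders standing in for the offline-computed quantities (the sensitivity of the state with respect to the parameter q at the current time, and the gradient of the i-th switching time with respect to q).

```python
import numpy as np

dx_dq_t0 = np.array([[1.2, 0.1],          # placeholder: dx/dq at the current time
                     [0.0, 0.9]])
dti_dq   = np.array([0.4, -0.3])          # placeholder: gradient of the i-th switching time w.r.t. q
delta_x  = np.array([0.01, -0.02])        # measured state deviation at the current time

delta_q  = np.linalg.solve(dx_dq_t0, delta_x)   # delta_q = [dx/dq]^{-1} * delta_x
delta_ti = dti_dq @ delta_q                     # first-order switching-time correction
print("first-order shift of the i-th switching time:", delta_ti)
```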
then , if is infinitesimal , it suffices to use as the neighboring optimal feedback on switching times .note that there may exist some profiles of the switching function as is shown by the solid line in figure [ fig : h1_time ] .then , a small perturbation may result in the change on the number of switching times , as is illustrated by the two dashed lines in figure [ fig : h1_time ] . however , eq . ( [ eq : delta_ti ] ) is unable to provide the feedback on switching times if the number of switching times on the neighboring extremal is different from that of the nominal one. set ^t:=\delta \p = \p(t_0,\q _ * ) - \p(t_0,\bar{\q}) ] . according to eq .( [ eq : max_condition1 ] ) , the switching function gives a natural feedback on the optimal thrust magnitude of the new extremal at , i.e. , /2 \nonumber\\ & = & \big[1 + \mathrm{sgn}\big(\frac{\|\bar{\p}_v(t_0 ) + \delta \p_v\|}{\bar{m}(t_0 ) + \delta m } u_{max}- \beta ( \bar{p}_m(t_0 ) + \delta p_m ) u_{max } - 1\big)\big]/2,\nonumber\end{aligned}\ ] ] where is the typical sign function .thus , instead of using the first order term in eq .( [ eq : delta_ti ] ) to approximate switching times , one can directly check the sign of the switching function to generate the optimal thrust magnitude once and are computed .the first - order taylor expansion of around is where is the sum of second and higher order terms . substituting eq .( [ eq : delta_q ] ) into eq .( [ eq : delta_p0 ] ) leads to denote by the first three rows , the forth to sixth rows , and the last row of the gain matrix such that it is clear that and are the first order terms of and , respectively .thus , if is small enough , it is sufficient to use /2 , \label{eq : thrust_magnitude}\end{aligned}\ ] ] as the neighboring optimal feedback on thrust magnitude . according to eq .( [ eq : max_condition2 ] ) , if , the optimal thrust direction on the new extremal at is analogously , assume is infinitesimal , if , we can use as the neighboring optimal feedback on the thrust direction . one advantage of using eq .( [ eq : thrust_magnitude ] ) and eq .( [ eq : thrust_direction ] ) to generate neighboring optimal feedbacks is that only instead of the whole block of the time - varying gain matrix on is required to store in the onboard computer .[ re : remark2 ]once the perturbation is measured at , it amounts to compute the two matrices and in order to compute the neighbouring optimal feedbacks in eq .( [ eq : thrust_magnitude ] ) and eq .( [ eq : thrust_direction ] ) .it follows from the classical results about solutions to ordinary differential equations that the trajectory and its time derivative on ] can be computed by integrating the differential equations in eq .( [ eq : homogeneous_matrix ] ) between switching times and by using the updating formulas in eq .( [ eq : update_formula ] ) at each switching time .typically , the sweep variables are used to compute the initial values and .note that the matrix = \frac{d\f^{-1}(\bar{\q})}{d\q}\nonumber\end{aligned}\ ] ] is a set of basis vectors of the tangent space at .thus , to compute the initial values and , it amounts to compute a basis of the tangent space at .if , the final state is fixed since the submanifold reduces to a singleton .thus , in the case of , one can simply set , which indicates note that and .thus , the subset is diffeomorphic to if the subset is small enough . 
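putting the pieces together, the per-cycle feedback only needs the stored gain matrix at the current time and the measured state deviation. the sketch below assumes a seven-dimensional state ordered as (position, velocity, mass), uses a random placeholder for the gain matrix, and adopts the switching-function normalization quoted above; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
S_t0 = 0.1 * rng.normal(size=(7, 7))        # placeholder for the stored gain matrix S(t0)
delta_x = np.zeros(7); delta_x[0] = 0.02    # measured state deviation at t0

pv_nom = np.array([0.30, -0.10, 0.05])      # nominal velocity costate (primer vector) at t0
pm_nom, m_meas = 0.20, 1500.0               # nominal mass costate, measured mass [kg]
u_max, beta = 0.35, 1.0 / (3000.0 * 9.81)   # thrust bound and mass-flow parameter (illustrative)

delta_p = S_t0 @ delta_x                    # first-order costate correction
pv = pv_nom + delta_p[3:6]                  # corrected velocity costate
pm = pm_nom + delta_p[6]                    # corrected mass costate

switching = np.linalg.norm(pv) * u_max / m_meas - beta * pm * u_max - 1.0
rho = 0.5 * (1.0 + np.sign(switching))      # normalized thrust magnitude in {0, 1}
alpha = pv / np.linalg.norm(pv)             # corrected unit thrust direction
print(rho, alpha)
```

as noted in the text, reading the thrust magnitude off the sign of the switching function is robust to small changes in the number of switching times, which a purely switching-time-based correction cannot capture.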
in analogy with parameterizing neighbouring extremals ,if the subset is small enough and , there exists an invertible function such that both the function and its inverse are smooth .then , for every , there exists one and only one such that . according to the transversality condition in eq .( [ eq : transversality ] ) , for every , there exists a such that let us define a function as ^{-1},\nonumber\end{aligned}\ ] ] such that . by assumption[ as : full_rank ] , if the subset is small enough , the function is a diffeomorphism from the domain onto its image .thus , it is enough to set ] .let , we have where denotes the vector of the lagrangian multipliers for the nominal extremal on ] . differentiating this equation with respect to time yields on ] , which is exactly the riccati - type differential equation in refs . . according to eq .( [ eq : update_formula ] ) , the gain matrix is discontinuous at each switching time .assume the matrix is nonsingular , multiplying by ^{-1} ] with to get the matrices and . then , substituting the matrices and into the matrix , one can use eq .( [ eq : riccati_equation ] ) and eq .( [ eq : update_riccati ] ) to get on ] is plotted in figure [ fig : transferring_orbit3 ] , on ] .it is seen from this figure that the number of burn arcs along the low - thrust fuel - optimal trajectory is 13 with 24 switching points and that each switching point is regular , i.e. , assumption [ as : regular_switching ] holds along the computed extremal .the profiles of semi - major axis , eccentricity , and inclination along the low - thrust fuel - optimal trajectory are plotted in figure [ fig : a_e_i ] .note that , except the final mass , all other final states are fixed if we use the _ meoe _ as states such that and thus , applying eqs .( [ eq : initial_matrix2][eq : initial_matrix3 ] ) , we get the initial condition as then , starting from this initial condition , we propogate eq .( [ eq : homogeneous_matrix ] ) backward from the final time and use the updating formulas in eq .( [ eq : update_formula ] ) at each switching time to compute the matrices and on ] on ] is ploted in the top subplot of figure [ fig : det_4xtf ] .note that on ] .we can clearly see from this figure that conditions [ con : disconjugacy_bang ] and [ as : transversal ] are satisfied on . according to theorem [ th : optimality ] ,the low - thrust multi - burn fuel - optimal trajectory on ] in such that .thus , the nog can be constructed along the computed extremal trajectory . in order to see the occurrence of focal points or to see the sign changes of , the profile of on the extended time interval ] is not optimal any more if .in addition , as is shown by corollary [ co : existence ] , no matter how small the absolute value is , there exist some unit vectors such that for every . hence , though the jc of refs . is satisfied , it is impossible to construct the nog along the computed extremal on ] .a series of deviations for ] , the trajectories starting from the point associated with the neighbouring optimal feedback control in eq .( [ eq : thrust_magnitude ] ) and eq . 
( [ eq : thrust_direction ] ) as well as the nominal control are computed .hereafter , we say the trajectories associated with the neighboring optimal feedback control as the neighboring optimal ones , and we say the trajectories associated with the nominal control as the perturbed ones .the final values of , , and with respect to ] for the neighboring optimal trajectories and the perturbed trajectory are plotted in figure [ fig : error_hx_hy_l ] . as is seen from figure [ fig : error_a_ex_ey ] , when increases up to , while the error of the final semi - major axis for the neighboring optimal trajectory remains small , that for the perturbed trajectory increases up to approximately km .we can also see from figures [ fig : error_a_ex_ey ] and [ fig : error_hx_hy_l ] that the final values of , , , , and for the neighboring optimal trajectories keep almost unchanged for ] . therefore , the neighboring optimal feedback control in eq . ( [ eq : thrust_magnitude ] ) and eq .( [ eq : thrust_direction ] ) greatly reduce the errors of final conditions . to see the advantage of using eq .( [ eq : thrust_magnitude ] ) rather than eq .( [ eq : delta_ti ] ) to provide the neighboring optimal feedback on thrust magnitude , the profiles of switching function on the time interval ] are plotted in figure [ fig : variaion_h1 ] .we can clearly see from this figure that some switching times disappear around and with the increase of . in this case, while eq .( [ eq : delta_ti ] ) can not capture the variations of switching times , one can still compute the thrust magnitude of neighboring optimal trajectories by using eq .( [ eq : thrust_magnitude ] ) .though the disturbances are relativly big as takes values up to , figure [ fig : variaion_h1 ] shows the potential failure of using eq .( [ eq : delta_ti ] ) .the neighbouring optimal feedback control strategy for fixed - time low - thrust multi - burn orbital transfer problems is established in this paper through constructing a parameterized family of neighboring extremals around the nominal one .two conditions , including the jc and the tc , sufficient for the existence of neighbouring extremals in an infinitesimal neighborhood of a bang - bang extremal are formulated . as a byproduct ,the sufficient conditions for the local optimality of bang - bang extremals are obtained .then , through deriving the first - order taylor expansion of the paramterised neighboring extremals , the neighboring optimal feedbacks on thrust direction as well as on thrust magnitude are presented .the formulas of the neighboring optimal feedbacks show that to store only rather than the whole block of a gain matrix of in the onboard computer is sufficient to realize the online computation .finally , a fixed - time low - thrust orbital transfer from an inclined elliptic orbit to the earth geostationary orbit is computed , and various initial perturbations are tested to show that the nog developed in this paper significantly reduces the errors of final conditions .the nog for open - time multi - burn orbital transfers will be studied in the subsequent research .chuang , c .- h . , goodson , t. d. , ledsinger , l. a. , and hanson , j. , `` optimality and guidance for plannar multiple - burn orbital transfers , '' _ journal of guidance , control , and dynamics _23 , no . 2 , 1996 , pp .241 - 250 .calise , a. j. , melamed , n. , and lee , s. 
, `` design and evaluation of a three - dimensional optimal ascent guidance algorithm , '' _ journal of guidance , control , and dynamics _ , vol .6 , 1998 , pp . 867875 . noble , j. and schttler , h. , `` sufficient conditions for relative minima of broken extremals in optimal control theory , '' _ journal of mathematical analysis and applications _, vol . 269 , 2002 , pp.98 - 128 .agrachev , a. a. and sachkov , y. l. , `` control theory from the geometric viewpoint , '' _ encyclopedia of mathematical sciences _ ,87 , control theory and optimization , ii .springer - verlag , berlin , 2004 .kugelmann , b. , and pesch , h. j. , `` new general guidance method in constrained optimal control , part 1 : numerical method , '' journal of optimization theory and applications , vol .67 , no . 3 , 1990 , pp. 421436 .kugelmann , b. , and pesch , h. j. , `` new general guidance method in constrained optimal control , part 2 : application to space shuttle guidance , '' journal of optimization theory and applications , vol .67 , no . 3 , 1990 , pp .437446 .pontani , m. , cecchetti , g. , and teofilatto , p. , `` variable - time - domain neighboring optimal guidance , part 1 : algorithm structure , '' _ journal of optimization theory and applications _ ,vol . 166 , 2015 , pp .pontani , m. , cecchetti , g. , and teofilatto , p. , `` variable - time - domain neighboring optimal guidance , part 2 : application to lunar descent and soft landing , '' _ journal of optimization theory and applications _ , vol . 166 , 2015 , pp . 93114 .afshari , h. h. , novinzadeh , a. b. , and roshanian , j. , `` determination of nonlinear optimal feedback law for satellite injection problem using neighboring optimal control , '' _ american journal of applied sciences _ , vol . 6 , no . 3 , 2009 , pp .430438 .shafieenejad , i. , novinzade , a. b. , and shisheie , r. , `` analytical mathematical feedback guidance scheme for low - thrust orbital plane change manoeuvres , '' _ mathematical and computer modelling _ , vol .58 , no . 1112 , 2013 , pp .17141726 .naidu , d. s. , hibey , j. l. , and charalambous , c. d. , `` neighboring optimal guidance for aeroassisted orbital transfers , '' _ in aerospace and electronic systems , ieee transaction on _ , vol , 29 , no . 3 , 1993 , pp .656665 .foerster , r. e. , and flgge - lotz , i. , `` a neighboring optimal feedback control scheme for systems using discontinuous control , '' _ journal of optimization theory and applications _ , vol .8 , no . 5 , 1971 , pp . 367395 .caillau , j .- b . ,gergaud , j. , and noailles , j. , `` 3d geosynchronous transfer of a satellite : continuation on the thrust , '' _ journal of optimization theory and applications _ , vol .118 , no . 3 , 2003 , pp .541 - 565 .
this paper presents a novel neighboring extremal approach to establish the neighboring optimal guidance ( nog ) strategy for fixed - time low - thrust multi - burn orbital transfer problems . unlike the classical variational methods which define and solve an accessory minimum problem ( amp ) to design the nog , the core of the proposed method is to construct a parameterized family of neighboring extremals around a nominal one . a geometric analysis on the projection behavior of the parameterized neighboring extremals shows that it is impossible to establish the nog unless not only the typical jacobi condition ( jc ) between switching times but also a transversal condition ( tc ) at each switching time is satisfied . according to the theory of field of extremals , the jc and the tc , once satisfied , are also sufficient to ensure a multi - burn extremal trajectory to be locally optimal . then , through deriving the first - order taylor expansion of the parameterized neighboring extremals , the neighboring optimal feedbacks on thrust direction and switching times are obtained . finally , to verify the development of this paper , a fixed - time low - thrust fuel - optimal orbital transfer problem is calculated .
these days a large variety of complex statistical models can be fitted routinely to complex data sets as a result of widely accessible high - level statistical software , such as ` r ` [ ] or ` winbugs ` [ ] .for instance , the nonspecialist user can estimate parameters in generalized linear mixed models or run a gibbs sampler to fit a model in a bayesian setting , and expert programming skills are no longer required .researchers from many different disciplines are now able to analyze their data with sufficiently complex methods rather than resorting to simpler yet nonappropriate methods .in addition , methods for the assessment of a model s fit as well as for the comparison of different models are widely used in practical applications .the routine fitting of spatial point process models to complex data sets , however , is still in its infancy .this is despite a rapidly improving technology that facilitates data collection , and a growing awareness of the importance and relevance of small - scale spatial information .spatially explicit data sets have become increasingly available in many areas of science , including plant ecology [ ; ] , animal ecology [ ( ) ] , geosciences [ ; ] , molecular genetics [ ] , evolution [ ] and game theory [ ] , with the aim of answering a similarly broad range of scientific questions .currently , these data sets are often analyzed with methods that do not make full use of the available spatially explicit information .hence , there is a need for making existing point process methodology available to applied scientists by facilitating the fitting of suitable models .in addition , real data sets are often more complex than the classical data sets that have been analyzed with point process methodology in the past .they often consist of the exact spatial locations of the objects or events of interest , and of further information on these objects , that is , potentially dependent qualitative as well as quantitative marks or spatial covariates [ ; ] .there is an interest in fitting complex joint models to the marks ( or the covariates ) as well as to the point pattern .so far , the statistical literature has discussed few examples of complex point process models of this type. there have been previous advances in facilitating routine model fitting for spatial point processes , in particular , for gibbs processes .most markedly , the work by baddeley and turner ( ) has facilitated the routine fitting of gibbs point processes based on an approximation of the pseudolikelihood to avoid the issue of intractable normalizing constants [ ; ] as well as the approximate likelihood approach by .work by and has provided methods for model assessment for some gibbs processes .many of these have been made readily available through the library ` spatstat ` for ` r ` [ ] .however , most gibbs process models considered in the literature are relatively simple in comparison to models that are commonly used in the context of other types of data . 
in an attempt to generalize the approach in , include random effects in gibbs point processes but more complex models , such as hierarchical models or models including quantitative marks , currently can not be fitted in this framework .similarly , methods for model comparison or assessment considered in and are restricted to relatively simple models .furthermore , both estimation based on maximum likelihood and that based on pseudolikelihood are approximate so that inference is not straightforward .the approximations become less reliable with increasing interaction strength [ ] .cox processes are another , flexible , class of spatial point process models [ ] , assuming a stochastic spatial trend makes them particularly realistic and relevant in applications .even though many theoretical results have been discussed in the literature for these [ ] , the practical fitting of cox point process models to point pattern data remains difficult due to intractable likelihoods .fitting a cox process to data is often based on markov chain monte carlo ( mcmc ) methods .these require expert programming skills and can be very time - consuming both to tune and to run [ ] so that fitting complex models can easily become computationally prohibitive . for simple models , fastminimum contrast approaches to parameter estimation have been discussed [ ] .however , approaches to routinely fitting cox process models have been discussed very little in the literature ; similarly , methods for model comparison or assessment for cox processes have rarely been discussed in the literature [ ; ] . to the authors knowledge ,cox processes have not been used outside the statistical literature to answer concrete scientific questions . within the statistical literature cox process modelshave focused on the analysis of relatively small spatial patterns in terms of the locations of individual species .very few attempts have been made at fitting models to both the pattern and the marks [ ; ] , in particular , not to patterns with multiple dependent continuous marks , and joint models of covariates and patterns have not been considered .this paper addresses two issues .it develops complex joint models and , at the same time , provides methods facilitating the routine fitting of these models .this provides a toolbox that allows applied researchers to appropriately analyze realistic point pattern data sets .we consider joint models of both the spatial pattern and associated marks as well as of the spatial pattern and covariates .using a bayesian approach , we provide modern model fitting methodology for complex spatial point pattern data similar to what is common in other areas of statistics and has become a standard in many areas of application , including methods for model comparison and validation .the approach is based on integrated nested laplace approximation ( inla ) [ ] , which speeds up parameter estimation substantially so that cox processes can be fitted within feasible time . in order to make the methods accessible to nonspecialists , an ` r ` package that may be used to run inla is available and contains generic functions for fitting spatial point process models ; see http://www.r - inla.org/. 
applied researchers are aware that spatial behavior tends to vary at a number of spatial scales as a result of different underlying mechanisms that drive the pattern [ ; ] .local spatial behavior is often of specific interest but the spatial structure also varies on a larger spatial scale due to the influence of observed or unobserved spatial covariates .cox processes model spatial patterns relative to observed or unobserved spatial trends and would be ideal models for these data sets. however , cox processes typically do not consider spatial structures at different spatial scales within the same model .more specifically , a specific strength of spatial point process models is their ability to take into account detailed information at very small spatial scales contained in spatial point pattern data , in terms of the local structure formed by an individual and its neighbors .so far , cox processes have often been used to relate the locations of individuals to environmental variation , phenomena that typically operate on larger spatial scales . however , different mechanisms operate at a smaller spatial scale .spatial point data sets are often collected with a specific interest in the local behavior of individuals , such as spatial interaction or local clustering [ ; ] .we consider an approach to fitting cox process models that reflects both the local spatial structure and spatial behavior at a larger spatial scale by using a constructed covariate together with spatial effects that account for spatial behavior at different spatial scales .this approach is assessed in a simulation study and we also discuss issues specific to this approach that arise when several spatial scales are accounted for in a model .this paper is structured as follows .the general methodology is introduced in section [ sec2 ] . in section [ constructcov ]we investigate the idea of mimicking local spatial behavior by using constructed covariates in a simulation study in the context of ( artificial ) data with known spatial structures and inspect patterns resulting from the fitted models .section [ rainforest ] discusses a joint model of a large point pattern and two empirical covariates along with a constructed covariate and fits this to a rainforest data set .a hierarchical approach is considered in section [ koalas ] , where both ( multiple ) marks and the underlying pattern are included in a joint model and fitted to a data set of eucalyptus trees and koalas foraging on these trees .[ sec2 ] spatial point processes have been discussed in detail in the literature ; see , , , ( ) and . herewe aim at modeling a spatial point pattern , regarding it as a realization from a spatial point process .for simplicity we consider only point processes in , but the approaches can be generalized to point patterns in higher dimensions .we refer the reader to the literature for information on different ( classes of ) spatial point process models such as the simple poisson process , the standard null model of complete spatial randomness , as well as the rich class of gibbs ( or markov ) processes [ ] . here, we discuss the class of cox processes , in particular , log - gaussian cox processes .cox processes lend themselves well to modeling spatial point pattern data with spatially varying environmental conditions [ ] , as they model spatial patterns based on an underlying ( or latent ) random field that describes the random intensity , assuming independence given this field . 
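a log-gaussian cox process of this kind is straightforward to simulate on a grid, which also makes the two layers of the model explicit: a latent gaussian field on the cell centres, then conditionally independent poisson counts. the covariance function, its parameters and the baseline intensity below are arbitrary illustrative choices (the analyses in this paper are carried out in r with inla; the sketch is plain python).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                           # n x n grid on the unit square
centres = (np.arange(n) + 0.5) / n
xx, yy = np.meshgrid(centres, centres)
pts = np.column_stack([xx.ravel(), yy.ravel()])

# latent gaussian field with an exponential covariance (variance 1, range 0.1)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = np.exp(-d / 0.1)
z = rng.multivariate_normal(np.zeros(n * n), cov + 1e-8 * np.eye(n * n))

# conditionally independent poisson counts per cell; the -1/2 variance term
# keeps the expected total number of points close to exp(beta0)
beta0, cell_area = np.log(200.0), 1.0 / n**2
counts = rng.poisson(cell_area * np.exp(beta0 + z - 0.5)).reshape(n, n)
print("total number of points:", counts.sum())
```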
in other words ,given the random field , the point pattern forms a poisson process .log - gaussian cox processes as considered , for example , in and ( ) , are a particularly flexible class , where has the form and is a gaussian random field , .other examples of cox processes include shot - noise cox processes [ ] . here , we consider a general class of complex spatial point process models based on log - gaussian cox processes that allows the joint modeling of spatial patterns along with marks and covariates .we include both small and larger scale spatial behavior , using a constructed covariate and additional spatial effects .the resulting models can be regarded as latent gaussian models and , hence , inla can be used for parameter estimation and model fitting .cox processes are a special case of the very general class of _ latent gaussian models _ ,models of an outcome variable that assume independence conditional on some underlying latent field and hyperparameters . show that if has a sparse precision matrix and the number of hyperparameters is small ( i.e. , ) , inference based on inla is fast.=-1 the main aim of the inla approach is to approximate the posteriors of interest , that is , the marginal posteriors for the latent field and the marginal posteriors for the hyperparameters , and use these to calculate posterior means , variances , etc .these posteriors can be written as the nested formulation is used to compute by approximating and , and then to use numerical integration to integrate out .this is feasible , since the dimension of is small .similarly , is calculated by approximating and integrating out .the marginal posterior in equations ( [ posterior1 ] ) and ( [ posterior2 ] ) can be calculated using the laplace approximation where is the gaussian approximation to the full conditional of , and is the mode of the full conditional for , for a given .this makes sense , since the full conditional of a zero mean gauss markov random field can often be well approximated by a gaussian distribution by matching the mode and the curvature at the mode [ ] .further details are given in who show that the nested approach yields a very accurate approximation if applied to latent gaussian models .as a result , the time required for fitting these models is substantially reduced .[ lgc ] the class of latent gaussian models comprises log - gaussian cox processes and , hence , the inla approach may be applied to fit these .specifically , the observation window is discretized into grid cells , each with area , .the points in the pattern can then be described by with , where denotes the observed number of points in grid cell .we condition on the point pattern and , conditionally on , we have see .we model as where the functions are spatially structured effects that reflect large scale spatial variation in the pattern . these effectsare modeled using a second - order random walk on a lattice , using vague gamma priors for the hyperparameter and constrained to sum to zero [ ] . in the models that we discuss below, the spatially structured effects relate to observed and unobserved spatial covariates as discussed in the examples in sections [ rainforest ] and [ sec5 ] .including spatial covariates directly in the model as fixed effects in addition to the random effects is straightforward . for simplicity, we omit these in equation ( [ general ] ) since this is not relevant in the specific data sets and models discussed below . 
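the discretization just described amounts to binning the observed locations into grid cells and treating the resulting counts as conditionally poisson with mean equal to the cell area times exp(eta). a minimal sketch (with a uniform example pattern and a constant linear predictor, both purely illustrative) is:

```python
import numpy as np

def grid_counts(x, y, n):
    """observed number of points in each cell of an n x n grid on the unit square."""
    counts, _, _ = np.histogram2d(x, y, bins=n, range=[[0, 1], [0, 1]])
    return counts

def poisson_loglik(counts, eta, cell_area):
    """conditional log-likelihood of the grid-cell counts (up to a constant)."""
    mu = cell_area * np.exp(eta)
    return np.sum(counts * np.log(mu) - mu)

rng = np.random.default_rng(2)
x, y = rng.uniform(size=300), rng.uniform(size=300)
counts = grid_counts(x, y, n=20)
eta = np.full_like(counts, np.log(300.0))        # constant linear predictor, for illustration
print(poisson_loglik(counts, eta, cell_area=1.0 / 400))
```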
denotes a spatially unstructured zero - mean gaussian i.i.d.error term , using a gamma prior for the precision .further , denotes a constructed covariate .constructed covariates are summary characteristics defined for any location in the observation window reflecting inter - individual spatial behavior such as local interaction or competition .we assume that this behavior operates at a smaller spatial scale than spatial aggregation due to ( observed or unobserved ) spatial covariates , and hence the spatially structured effects .the use of constructed covariates yields models with local spatial interaction within the flexible class of log - gaussian cox process models .it avoids issues with intractable normalizing constants that are common in the context of gibbs processes [ ] , since the covariates operate directly on the intensity of the pattern rather than on the density or the conditional intensity [ ] .the functional relationship between the outcome variable and the constructed covariate is typically not obvious and might often not be linear .we thus estimate this relationship explicitly by a smooth function and inspect this estimate to gain further information on the form of the spatial dependence .this function will be modeled as a first - order random walk , also constrained to sum to zero . the constructed covariate considered in this paperis based on the nearest point distance , which is simple and fast to compute .specifically , for each center point of the grid cells we find the distance to the nearest point in the pattern outside this grid cell as where denotes the center point of cell and denotes the euclidean distance . defined this way , the constructed covariate can be used both to model local repulsion and local clustering . during the modeling process, methods for model comparison based on the deviance information criterion ( dic ) [ ] may be used to compare different models with different levels of complexity .furthermore , both the ( estimated ) spatially structured field and the error field in ( [ general ] ) may be used to assess the model fit .the spatially structured effect may be used to reveal remaining spatial structure that is unexplained by the current model and the unstructured effects may be interpreted as a spatial residual .this provides a method for model assessment akin to residuals in , for example , linear models .this approach yields a toolbox for fitting , comparing and assessing realistically complex models of spatial point pattern data .we show that different types of flexible models can be fitted to point pattern data with complex structures using the inla approach within reasonable computation time .this includes joint models of large point patterns and covariates operating on a large spatial scale and local clustering ( section [ rainforest ] ) as well as of a pattern with several dependent marks which also depend on the pattern ( section [ koalas ] ) . in the natural world, different mechanisms operate at different spatial scales [ ] , and hence are reflected in a spatial pattern at these scales .it is crucial to bear this in mind during the analysis of spatial data derived from nature , including spatial point pattern data. 
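the constructed covariate in ( [ nearestn ] ) can be computed by brute force: for each cell centre, take the distance to the nearest point of the pattern that falls outside that cell. the grid size and the example pattern below are arbitrary, and for large patterns a spatial index (e.g. a kd-tree) would be preferable.

```python
import numpy as np

def constructed_covariate(px, py, n):
    """nearest-point-outside-the-cell distance, evaluated at every cell centre."""
    cell = lambda u: np.minimum((u * n).astype(int), n - 1)   # cell index of each point
    ix, iy = cell(px), cell(py)
    centres = (np.arange(n) + 0.5) / n
    cov = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            outside = ~((ix == i) & (iy == j))                # points outside cell (i, j)
            d = np.hypot(px[outside] - centres[i], py[outside] - centres[j])
            cov[i, j] = d.min() if d.size else np.nan
    return cov

rng = np.random.default_rng(3)
px, py = rng.uniform(size=100), rng.uniform(size=100)
print(constructed_covariate(px, py, n=10).round(2))
```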
some mechanisms , such as seed dispersal in plants or territorial behavior in animals , may operate at a local spatial scale , while others , such as aggregation resulting from an association with certain environmental covariates , operate on the scale of the variation in these covariates , and hence often on a larger spatial scale .in addition , a spatial scale that is relevant in one application may not be relevant for a different data set .hence , the analysis of a spatial point pattern always involves a consideration of the appropriate spatial scales at which mechanisms of interest may operate , regardless of the concrete analysis methods . even as early as at the outset of a study ,when an appropriately sized observation window has to be chosen , relevant spatial scales operating in the system of interest have to be taken into consideration . during the analysis the researcher has to carefully decide if variation at a specific scale constitutes noise or whether it reflects a true signal .it is hence crucial to be aware of which mechanisms operate at which spatial scales prior to any spatial data analysis .this may be done based on either background knowledge ( such as existing data on dispersal distances in plants or the sizes of home ranges in territorial animals ) or common sense . in the models we discuss here , we explicitly take mechanisms operating at several different scales into account and have to choose these sensibly , based on knowledge of the systems .the spatially structured effect reflects spatial autocorrelation at a large spatial scale , whereas the constructed covariate is used to describe small scale inter - individual behavior .in addition , since we grid the data in this approach , the number of grid cells clearly determines the spatial resolution , especially at a small scale , and is clearly linked to computational costs and the extent to which information is lost through gridding the data . in the following ,we discuss issues related to each of these three parts of the models where spatial scale is relevant . a spatially structured effect is typically included in a spatial model as a spatially structured error term , that is , in order to account for any spatial autocorrelation unexplained by covariates in the model .inla currently supports the 2nd order random walk on a lattice as a model for this , with a gamma prior for the variance of the spatially structured effect .the choice of this prior determines the smoothness of the spatial effect and through this , the spatial scale at which it operates .this prior has to be chosen carefully to avoid overfitting .this is particularly crucial in the context of spatial point patterns with relatively small numbers of points , where the gridded data are typically rather sparse [ ] .if the spatial effect is chosen to be too coarse , it explains the spatial variation at too small a scale , resulting in a coarse estimate of the spatially structured effect .this estimate would perfectly explain every single data point , resulting in overfitting rather than in a model of a generally interpretable trend .given the role of the spatially structured effect , it appears plausible to choose the prior so that the spatial effect operates at a similar spatial scale as the covariate .problems can occur when the spatially structured effect operates at a smaller scale than the covariate , as it is then likely to explain the data better than the covariates , rendering the model rather useless . 
in the absence of covariate data ,background knowledge on spatial scales may aid in choosing the prior .small scale inter - individual spatial behavior is modeled by the constructed covariate . as mentioned ,this is done to account for local spatial behavior if this is of specific interest in the application .again , there is a danger of overfitting , especially since the constructed covariate is estimated directly from the data .we discuss the practicality of using a spatial constructed covariate in detail in section [ sec3 ] and only point out here that it has to be carefully chosen , if possible with appropriate knowledge of the specific system the data have been derived from .the choice of prior for the spatially structured effect is strongly related to the choice of grid size .however , in our experience the overall results often do not change substantially when the grid size was varied within reason . in applications ,the locations of the modeled objects as well as spatial covariates are sometimes given on a grid with a fixed resolution .we recommend using a grid that is not finer than that given by the data in the analysis .[ constructcov ] in section [ rainforest ] we use a constructed covariate primarily to incorporate local spatial structure into a model , while accounting for spatial variation at a larger spatial scale . to illustrate the use of the given constructed covariate and to assess the performance of the resulting models , we simulate point patterns from various classical point - process models .note , however , that we do not aim at explicitly estimating the parameters of these models but at assessing ( i ) whether known spatial structures may be detected through the use of the constructed covariate , as suggested here , and ( ii ) whether simulations from the fitted models generate patterns with similar characteristics . in the applications we have in mind , such as those discussed in the example in section [ rainforest ] ,the data structure is typically more complicated . for the purpose of this simulation study we consider three different situations : patterns with local repulsion ( section [ repul ] ) , patterns with local clustering ( section [ clust ] ) and patterns with local clustering in the presence of a larger - scale spatial trend ( section [ clustinhom ] ) .we generate example patterns from different point process models with these properties on the unit square . for all simulation resultsthis observation window has been discretized into a grid . in sections [ repul ] and [ clust ]we initially assume that there is no large - scale spatial variation , with the aim of inspecting only the constructed covariate , and we consider using the notation in section [ lgc ] . in section [ clustinhom ]we consider both small- and large - scale spatial structures by including a spatially structured effect in addition to the constructed covariate and to evaluate a fitted model , we apply the metropolis algorithm [ ] to simulate patterns from these models and then compare characteristics of the simulated patterns with the generated example patterns . more specifically , for and , denote the joint distribution of given the latent field , by where the mean .for a given example pattern , we first apply inla to find the estimate of the latent field for all grid cells .to evaluate the estimated function of the constructed covariate for all arguments , we apply the ` splinefun ` command in r to perform cubic spline interpolation of the original data points . 
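the simulation step mentioned above (patterns drawn from a fitted model by a metropolis algorithm, with the estimated covariate effect interpolated by a spline) can be sketched end to end as follows. scipy's CubicSpline plays the role of r's splinefun; the intercept, the spline knots and all tuning values are illustrative stand-ins for the inla estimates, and the nearest-point covariate is recomputed after every proposed move.

```python
import numpy as np
from scipy.interpolate import CubicSpline       # analogue of r's splinefun

rng = np.random.default_rng(4)
n, cell_area = 20, 1.0 / 400
beta0 = np.log(150.0)                                        # illustrative intercept
f_hat = CubicSpline([0.0, 0.05, 0.10, 0.30],                 # fitted covariate effect
                    [0.8, 0.3, 0.0, -0.1])                   # (illustrative knots/values)

def counts_and_covariate(px, py):
    ix = np.minimum((px * n).astype(int), n - 1)
    iy = np.minimum((py * n).astype(int), n - 1)
    counts = np.zeros((n, n)); np.add.at(counts, (ix, iy), 1.0)
    centres = (np.arange(n) + 0.5) / n
    cov = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            outside = ~((ix == i) & (iy == j))
            d = np.hypot(px[outside] - centres[i], py[outside] - centres[j])
            cov[i, j] = d.min() if d.size else 1.0
    return counts, cov

def loglik(px, py):
    counts, cov = counts_and_covariate(px, py)
    mu = cell_area * np.exp(beta0 + f_hat(np.clip(cov, 0.0, 0.3)))
    return np.sum(counts * np.log(mu) - mu)

px, py = rng.uniform(size=150), rng.uniform(size=150)        # random initial pattern
ll = loglik(px, py)
for _ in range(2000):                                        # metropolis iterations
    k = rng.integers(px.size)
    old = px[k], py[k]
    px[k], py[k] = rng.uniform(), rng.uniform()              # propose moving one point
    ll_new = loglik(px, py)
    if np.log(rng.uniform()) < ll_new - ll:
        ll = ll_new                                          # accept the move
    else:
        px[k], py[k] = old                                   # reject: restore the point
```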
using the metropolis algorithm, we assume an initial pattern, which is randomly scattered in the unit square, having the same number of points as the original pattern. the step of the algorithm is performed by randomly selecting one point of the pattern and proposing to move this point to a new position drawn uniformly in the unit square. the proposal is accepted with probability \min\{1, \pi(\mathbf{y}^{\ast}\mid\hat{\boldsymbol{\eta}})/\pi(\mathbf{y}\mid\hat{\boldsymbol{\eta}})\}, where \mathbf{y}^{\ast} denotes the resulting grid cell counts for the proposed pattern. the simulated patterns in sections [ repul][clustinhom ] each result from 100,000 iterations of the algorithm.

[ fig1 ] figure 1: ( a ) a pattern generated from a homogeneous strauss process, ( b ) the associated constructed covariate for this pattern, ( c ) the estimated functional relationship between the outcome and the constructed covariate, ( d ) a pattern simulated from the fitted model after 100,000 iterations, ( e ) the estimated l - function for the original pattern ( solid line ) and for the simulated pattern ( dashed line ), and ( f ) simulation envelopes for the l - function for 50 simulated patterns.

[ repul ] to inspect the performance of the constructed covariate for repulsion, we generate patterns from a homogeneous strauss process [ ] on the unit square, with medium repulsion ( intensity parameter ), ( interaction parameter ) and interaction radius [ see figure [ straussplot](a ) for an example ]. we then fit a model to the pattern as in equation ( [ simplemodel ] ) using the constructed covariate in ( [ nearestn ] )
[ figure [ straussplot](b ) ] .the shape of the estimated functional relationship between the constructed covariate and the outcome variable is shown in figure [ straussplot](c ) .this function illustrates that the intensity in a grid cell is influenced by the calculated distance in ( [ nearestn ] ) , as higher distances will give higher intensities .thus , the intensity is positively related to the value of the constructed covariate , clearly reflecting repulsion . at larger distances ( .05 ) the function levels outdistinctly , indicating that beyond these distances the covariate and the intensity are unrelated , that is , the spatial pattern shows random behavior .in other words , the functional relationship not only characterizes the pattern as regular but also correctly identifies the interaction distance as 0.05 .the pattern resulting from the metropolis hastings algorithm [ figure [ straussplot](d ) ] shows very similar characteristics to those in the original pattern .this indicates that the model based on the nearest point constructed covariate in equation ( [ eq2.5 ] ) captures adequately the spatial information contained in the repulsive pattern .the estimated -function [ ] for the simulated pattern and the original pattern confirm this impression , as they look very similar [ figure [ straussplot](e ) ] .additionally , we have calculated simulation envelopes for the -function of strauss processes with the given parameter values , using 50 simulated patterns and 100000 iterations of the metropolis algorithm for each pattern [ figure [ straussplot](f ) ] .we notice that the estimated -functions of the original patterns are well within the simulation envelopes for all distances .[ clust ] in order to assess the performance of the model in ( [ simplemodel ] ) in the context of clustered patterns , we generate patterns from a homogeneous thomas process [ ] in the unit square , with parameters ( the intensity of the poisson process of cluster centers ) , ( the standard deviation of the distance of a process point from the cluster center ) and ( the expected number of points per cluster ) [ see figure [ thomasplot](a ) for an example ] .we fit the model in equation ( [ simplemodel ] ) using the constructed covariate in ( [ nearestn ] ) [ figure [ thomasplot](b ) ] .the shape of the estimated functional relationship between the constructed covariate and the outcome variable [ figure [ thomasplot](c ) ] now indicates that the intensity is negatively related to the value of the constructed covariate as the intensities increase for smaller distances , reflecting local clustering . 
At larger distances (beyond 0.1) the function levels out, indicating that at these distances the covariate and the intensity are unrelated.

[Figure [thomasplot] ([fig2]), panels (a)-(f): a pattern generated from a homogeneous Thomas process, the associated constructed covariate for this pattern, the estimated functional relationship between the outcome and the constructed covariate, a pattern simulated from the fitted model after 100,000 iterations, the estimated L-function for the original pattern (solid line) and the simulated pattern (dashed line), and simulation envelopes for the L-function for 50 simulated patterns.]

The pattern simulated from the fitted model [Figure [thomasplot](d)] shows that the constructed covariate introduces some clustering into the model. However, the resulting pattern shows fewer and less distinct clusters than the original pattern. Similarly, the estimated L-function for the pattern simulated from the fitted model shows a weaker local clustering effect than the original pattern [Figure [thomasplot](e)]. This is also illustrated by the simulation envelopes for 50 patterns of the fitted model, which do not include the true L-function [Figure [thomasplot](f)].
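The fits above are of the form of equation ([simplemodel]): a gridded Poisson count model with a smooth effect of the constructed covariate and a spatially structured effect. A schematic R-INLA specification could look as follows; all object names (counts, cov_values, nrow2, ncol2, cell_area) are placeholders, and the latent model choices are assumptions rather than the authors' exact code.

    # Schematic R-INLA specification in the spirit of equation ([simplemodel]);
    # all object names are placeholders, not the authors' code.
    library(INLA)

    dat <- data.frame(
      y    = as.vector(counts),                     # grid cell counts of the pattern
      E    = cell_area,                             # cell area as Poisson exposure
      covg = inla.group(as.vector(cov_values)),     # binned constructed covariate
      cell = seq_len(nrow2 * ncol2)                 # cell index for the spatial effect
    )

    form <- y ~ 1 +
      f(covg, model = "rw1") +                                # smooth effect of the covariate
      f(cell, model = "rw2d", nrow = nrow2, ncol = ncol2)     # spatially structured effect

    fit <- inla(form, family = "poisson", data = dat, E = dat$E)
    summary(fit)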
[clustinhom] So far, we have considered constructed covariates only for patterns with local interaction, to illustrate their use. In applications, however, different mechanisms operate at different spatial scales. Patterns may be locally clustered, for example, due to dispersal mechanisms, but may also show aggregation at a larger spatial scale, for example, due to dependence on underlying observed or unobserved covariates. Hence, the main reason for using constructed covariates in the data example in Section [rainforest] is to distinguish behavior at different spatial resolutions, in order to provide information on mechanisms operating at different spatial scales. We illustrate the use of constructed covariates in this context by generating an inhomogeneous, locally clustered pattern mimicking a situation where different mechanisms have caused local clustering and large-scale inhomogeneity. In applications, the inhomogeneity may be modeled using suitable spatially varying covariates, or assuming an unobserved spatial variation, or both. We generate patterns from an inhomogeneous Thomas process with parameters and and a simple trend function for the intensity of parent points given by . Each pattern is then superimposed with a pattern generated from an inhomogeneous Poisson process with trend function [Figure [inhomthom](a)].

[Figure [inhomthom] ([fig3]), panels (a)-(f): a pattern generated from an inhomogeneous Thomas process superimposed on an inhomogeneous Poisson process, the associated constructed covariate for this pattern, the estimated functional relationship between the outcome and the constructed covariate, the estimated spatially structured effect, a pattern simulated from the fitted model after 100,000 iterations, and the inhomogeneous L-function for the original pattern (solid line) and the simulated pattern (dashed line).]
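One way to generate such a test pattern is sketched below (again assuming spatstat): cluster parents follow a spatial trend, offspring are scattered around them with Gaussian displacements, and an inhomogeneous Poisson component is superimposed. The trend functions and all numerical values are placeholders, since the paper's exact forms are not reproduced above.

    # Sketch (assumed construction; trend functions and parameters are placeholders).
    library(spatstat)

    trend_parents <- function(x, y) 40 * exp(-2 * x)    # parent intensity trend
    trend_noise   <- function(x, y) 100 * x             # inhomogeneous Poisson trend

    parents <- rpoispp(trend_parents, lmax = 40, win = square(1))
    ox <- oy <- numeric(0)
    for (i in seq_len(npoints(parents))) {              # Gaussian offspring around each parent
      n  <- rpois(1, 6)
      ox <- c(ox, parents$x[i] + rnorm(n, sd = 0.02))
      oy <- c(oy, parents$y[i] + rnorm(n, sd = 0.02))
    }
    keep     <- ox >= 0 & ox <= 1 & oy >= 0 & oy <= 1    # clip to the unit square
    clusters <- ppp(ox[keep], oy[keep], window = square(1))
    Z <- superimpose(clusters, rpoispp(trend_noise, lmax = 100, win = square(1)))
    plot(Linhom(Z))                                      # inhomogeneous L-function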
We again use the constructed covariate in ([nearestn]) [see Figure [inhomthom](b)] and fit the model in ([trendmodel]). Inspection of the functional relationship between the constructed covariate and the outcome [Figure [inhomthom](c)] shows that at small values of the covariate the intensity is negatively related to the constructed covariate, reflecting clustering at smaller distances. The estimated spatially structured effect picks up the larger-scale spatial behavior [Figure [inhomthom](d)]. Patterns simulated from the fitted model look quite similar to the original pattern [Figure [inhomthom](e)]. However, local clustering is slightly stronger in the original pattern than in the simulated pattern [Figure [inhomthom](f)]. This is again confirmed by the simulation envelopes for the simulated patterns from the fitted model, as shown in Figure [inhomthomse]. The mean estimated L-function for the generated patterns is very close to the upper edge of the simulation envelopes and partly outside, indicating that the fitted model does not reflect the strength of clustering sufficiently well.

[Figure [inhomthomse] ([fig4]): the L-function for a Poisson process (bold solid line) and the mean of the inhomogeneous L-function for the generated (solid) and simulated (dashed) patterns.]

With the aim of assessing the performance of models with constructed covariates reflecting small-scale inter-individual spatial behavior, we consider a number of simulated point patterns for three different scenarios: repulsion, clustering, and small-scale clustering in the presence of large-scale inhomogeneity. In all cases, the local spatial structure can be clearly identified. The constructed covariate not only takes account of local spatial structures but also characterizes the spatial behavior. The functional form of the dependence of the intensity on the constructed covariate clearly reflects the character of the local behavior. This section presents only a small part of an extensive simulation study; the results shown here are typical examples. We have run simulations from the same models as above with different sets of parameters and have obtained essentially the same results. Further, fitting the model in equation ([simplemodel]) to patterns simulated from a homogeneous Poisson process resulted in a nonsignificant functional relationship, that is, the modeling approach does not pick up spurious clustering or regularity. The approach allows us to fit models that take into account small-scale spatial behavior, regularity as well as clustering, in the context of log-Gaussian Cox processes, that is, as latent Gaussian models.
since these can be fitted using the inla approach , fitting is fast and exact .in addition , we avoid some of the typical problems that arise with gibbs process models , that is , we do not face issues of intractable normalizing constants , and regular as well as clustered patterns may be modeled .however , the simulations also show that the approach of using constructed covariates works clearly better with repulsive patterns than with clustered patterns .this is akin to similar issues with gibbs processes , where repulsive patterns are less problematic to model than clustered patterns .certainly , this is related to the fact that it is difficult to tell apart clustering from inhomogeneity [ ] .when working with constructed covariates the issues highlighted , that is , that local clustering may have been underestimated , have to be taken into account , especially in the interpretation of results .certainly , the constructed covariate in equation ( [ nearestn ] ) that we consider here is not the only possible choice .a covariate based on distance to the nearest point is likely to be rather `` short - sighted , '' so that other constructed covariates might be more suitable for detecting specific spatial structures . in particular , taking into account these limitations , it is not surprising that patterns simulated from models show less clustering than the original data .more general covariates such as the distance to the nearest point may be considered .other covariates , such as the local intensity or the number of points within a fixed interaction radius from a location , are certainly also suitable .a nice property of the given constructed covariate based on nearest - point distance is that it is parameter - free .for this reason , it is not necessary to choose explicitly the resolution of the local spatial behavior , for example , as an interaction radius .also , note that since the distance to the nearest point in point pattern for a location may be interpreted as a graph associated with , other constructed covariates based on different types of graphs [ ] may also be used as constructed covariates .similarly , an approach based on morphological functions may be used for this purpose .note that one could also consider constructed marks based on first or second order summary characteristics [ ] that are defined only for the points in the pattern and include these in the model .distinguishing spatial behavior at different spatial scales is clearly an ill - posed problem , since the behavior at one spatial scale is not independent of that at different spatial scales [ ] .the approach we take here will not always be able to distinguish clustering at different scales . however , different mechanisms that operate at very similar spatial scales are likely to be nonidentifiable by any method , irrespective of the choice of model or the constructed covariate. constructed covariates hence only provide useful results when the processes they are meant to describe operate at a spatial scale that is distinctly smaller than the larger scale processes in the same model . 
Admittedly, the use of constructed covariates is of a rather subjective and ad hoc nature. Clearly, in applications the covariates have to be constructed carefully, depending on the questions of interest; different types of constructed covariates may be suitable in different contexts. However, similarly subjective decisions are usually made when a model is fitted that is purely based on empirical covariates, as these have been specifically chosen as potentially influencing the outcome variable, based on background knowledge. In addition, due to the apparent danger of overfitting, constructed covariates should only be used if there is an interest in the local spatial behavior in a specific data set and if there is reason to believe that small- and large-scale spatial behavior operate at scales that are different enough to make them identifiable.

[rainforest] In this example we consider a point pattern where the number of points is potentially very large and several spatial covariates have been measured. The point pattern is assumed to depend on one or several (observed or unobserved) environmental covariates for which data exist. In the application that we have in mind, the values of these have been observed in a few locations that are typically different from the locations of the objects that form the pattern. In previous modeling attempts, the values of the covariates at the locations of the objects are then either interpolated or modeled separately, so that (estimated) values are used for locations where the covariates have not been observed. However, these covariates are likely to have been collected with both sampling and measurement error. In the specific case we consider here (see Section [rainforestex]) they concern soil properties, which are measured much less reliably than the topography covariates in models such as those in , . In addition, it is less clear for soil variables than for topography covariates whether these influence the presence of trees, or whether the presence of trees impacts on the soil variables. Whereas in those models the soil variables are considered fixed and are not modeled alongside the pattern, the model we deal with here does not make any assumption on the direction of this influence. As a result, we suggest a joint model of the covariates along with the pattern that uses the original (noninterpolated) data on the covariates and accounts for measurement error. That is, we fit the model in equation ([general]) to and jointly fit a model to the covariates. The pattern and the covariates are linked by joint spatial fields. An additional spatially structured effect is used to detect any remaining spatial structures in the pattern that cannot be explained by the joint fields with the covariates. In the case of we fit the following model, where the pattern is modeled as and the covariates as and , where and are the observed covariates in grid cells where the covariates have been measured and missing where they have not been measured. represents the function of the constructed covariate ([nearestn]).
and are spatially structured effects, that is, reflect a random field for each of the covariates and reflects spatial autocorrelation in the pattern unexplained by the covariates; and are spatially unstructured fields used to account for measurement or sampling error. In addition to the spatial effect reflecting the empirical covariates, which are likely to have an impact on the larger-scale spatial behavior, we use the constructed covariate to account for local clustering. In the application we have in mind (see Section [rainforestex]) this clustering is a result of seed-dispersal mechanisms operating on a much smaller spatial scale than that of the aggregation of individuals due to an association with environmental covariates.

[rainforestex] Some extraordinarily detailed multi-species maps are being collected in tropical forests as part of an international effort to gain greater understanding of these ecosystems [ ; ; ; ]. These data comprise the locations of all trees with diameters at breast height (dbh) of 1 cm or greater, a measure of the size of the trees (dbh), and the species identity of the trees. The data usually amount to several hundred thousand trees in large (25 ha or 50 ha) plots that have not been subject to any sustained disturbance such as logging. The spatial distribution of these trees is likely to be determined by both spatially varying environmental conditions and local dispersal. Recently, spatial point process methodology has been applied to analyze some of these data sets [ ; ] using nonparametric descriptive methods as well as explicit models [ ; ; ; ]. [ ] model the spatial pattern formed by a tropical rain forest tree species on the underlying environmental conditions and use the INLA approach to fit the model. We analyze a data set that is similar to those discussed in the above references. Since the spatial structure in a forest reflects dispersal mechanisms as well as association with environmental conditions, we include a constructed covariate to account for local clustering. The model is fitted to a data set from a 50 ha forest dynamics plot at Pasoh Forest Reserve, Peninsular Malaysia. This study focuses on the species _Aporusa microstachya_, consisting of 7416 individuals [Figure [patternrain](a)]. The environmental covariates have been observed at 83 locations that are distinct from the locations of the trees [Figure [patternrain](b)]. The plot lies in a forest that has never been logged, with very narrow streams on almost flat land. The data used here were collected in 1995, when the plot contained 320,903 stems from 817 species. The species is the most common small tree on the plot. It is of interest whether this species, as an aluminium accumulator, covaries with magnesium availability, as aluminium uptake might constrain its capacity to take up nutrient cations such as magnesium. In addition, its covariation with phosphorus is considered here, as phosphorus is thought to be the nutrient primarily limiting forest productivity and individual tree growth in tropical forests [Burslem, personal communication (February 2011)].
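A skeleton of how such a joint specification of the pattern and an error-prone covariate can be written with R-INLA is sketched below. It is schematic and not the authors' code: the response is stacked so that the first block of rows carries the Poisson grid-cell counts and the second block the (partly missing) covariate observations, and the covariate's spatial field is copied into the point-process predictor. The actual model above additionally has a second covariate, the constructed covariate and unstructured error terms.

    # Schematic joint model of pattern and one covariate (placeholder names).
    library(INLA)
    N <- nrow2 * ncol2                            # number of grid cells

    Y <- matrix(NA, 2 * N, 2)
    Y[1:N, 1]       <- pattern_counts             # Poisson likelihood: grid cell counts
    Y[N + (1:N), 2] <- covariate_obs              # Gaussian likelihood: covariate, NA where unmeasured

    dat <- list(Y  = Y,
                mu = factor(rep(1:2, each = N)),  # separate intercepts for the two likelihoods
                i.cov  = c(rep(NA, N), 1:N),      # spatial field of the covariate
                i.copy = c(1:N, rep(NA, N)),      # the same field, entering the pattern predictor
                i.pat  = c(1:N, rep(NA, N)))      # extra spatial effect for the pattern

    form <- Y ~ 0 + mu +
      f(i.cov,  model = "rw2d", nrow = nrow2, ncol = ncol2) +
      f(i.copy, copy = "i.cov", fixed = FALSE) +  # estimated scaling of the shared field
      f(i.pat,  model = "rw2d", nrow = nrow2, ncol = ncol2)

    fit <- inla(form, data = dat, family = c("poisson", "gaussian"))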
[Table [tab2]: estimated parameter values.] The estimated common spatial effect [Figure [koalaresnew](a)] represents spatial autocorrelation present in the pattern and the marks, which might be the result of related environmental processes such as nutrient levels in the soil. The estimated parameter values for and have opposite signs (Table [tab2]). The negative sign for indicates that palatability is low where the trees are aggregated, which might have been caused by competition for soil nutrients in these areas. The positive sign for reflects that the koalas are more likely to be present in areas with higher intensity. Recalling that the data have been accumulated over time, this might be due to the koalas being more likely to change from one tree to a neighboring tree where the trees are aggregated. The mean of the posterior density for the parameter in the final model is 1.38, indicating a significant positive influence of palatability on the frequency of koala visits to the trees. The three unstructured terms are given in Figures [koalaresnew](b)-(d). A slight trend in the residuals for the leaf marks may be observed in Figure [koalaresnew](c), with lower values toward the bottom left, probably reflecting an inhomogeneity that cannot be accounted for by the joint spatial effect. The example considered in this section is a marked Cox process model, that is, a model of both the spatial pattern and two types of dependent marks, providing information on the spatial pattern at the same time as on the marks and their dependence. In cases where the marks are of primary scientific interest, one could view this approach as a model of the marks which implicitly takes the spatial dependence into account by modeling it alongside the marks. The model we use here is similar to approaches taken in , , . Since our approach is very flexible, it can easily be generalized to allow for separate spatially structured effects for the pattern and the marks and to include additional empirical covariates; these have not been available here. Hence, using the approach considered here, we are able to fit easily a complex spatial point process model to a marked point pattern and to assess its suitability for a specific data set.
marked point pattern data sets where data on marks are likely to depend on an underlying spatial pattern are not uncommon .within ecology , for instance , metapopulation data [ ] typically consist of the locations of subpopulations and their properties , and have a similar structure to the data set considered here .these data sets may be modeled using a similar approach and it is straightforward to fit related but more complex models , including empirical covariates or temporal replicates .similarly , marks are available for the rainforest data discussed in section [ sec4 ] .as mentioned there , a model that includes the marks of the trees may also be fitted using the approach discussed here .researchers outside the statistical community have become familiar with fitting a large range of different models to complex data sets using software available in ` r ` .this paper provides a very flexible framework for routinely fitting models to complex spatial point pattern data with little computational effort using models that account for both local and global spatial behavior .we consider complex data examples and demonstrate how marks as well as covariates can be included in a joint model .that is , we consider a situation where the marks and the covariates can be modeled along with the pattern and show that it is computationally feasible to do so. we can take account of local spatial structure by using a constructed covariate , which we discuss in detail in section [ constructcov ] .the two models discussed here indicate that our approach can be applied in a wide range of situations and is flexible enough to facilitate the fitting of other even more complex models .it is feasible to fit several related models to realistically complex data sets if necessary , and to use the dic to aid the choice of covariates .the posterior distributions of the estimated parameters can be used to assess the significance of the influence of different covariates in the models . through the use of a structured spatial effect and an unstructured spatial effect it is possible to assess the quality of the model fit .specifically , the structured spatial effect can be used to reveal spatial correlations in the data that have not been explained with the covariates and may help researchers identify suitable covariates to incorporate into the model .spatially unstructured effects may be used to account for and identify extreme observations such as locations where covariate values have been collected with a particularly strong measurement error . there is an extensive literature on descriptive and nonparametric approaches to the analysis of spatial point patterns , specifically on ( functional ) summary characteristics describing first and second order spatial behavior , in particular , on ripley s -function [ ] and the pair correlation function [ ] . in both the statistical and the applied literature thesehave been discussed far more frequently than likelihood based modeling approaches , and provide an elegant means for characterizing the properties of spatial patterns [ illian et al .( ) ] .a thorough analysis of a spatial point pattern typically includes an extensive exploratory analysis and in many cases it may even seem unnecessary to continue the analysis and fit a spatial point process model to a pattern . 
An exploratory analysis based on functional summary characteristics, such as Ripley's K-function or the pair-correlation function, considers spatial behavior at a multitude of spatial scales, making this approach particularly appealing. However, with increasing complexity of the data, it becomes less obvious how suitable summary characteristics should be defined for these, and a point process model may be a suitable alternative. For example, it is not obvious how one would jointly analyze the two different marks together with the pattern in the koala data set based on summary characteristics. However, as discussed in Section [koalas], it is straightforward to do this with a hierarchical model. In addition, most exploratory analysis tools assume the process to be first-order stationary or at least second-order reweighted stationary [Baddeley et al. ( )], a situation that is both rare and difficult to assess in applications, in particular in the context of realistic and complex data sets. The approach discussed here does not make any assumptions about stationarity but explicitly includes spatial trends in the model. In the literature, local spatial behavior has often been modelled by a Gibbs process. Large-scale spatial behavior may be incorporated into a Gibbs process model as a parametric or nonparametric, yet deterministic, trend, while it is treated as a stochastic process in itself here. Modeling the spatial trend in a Gibbs process hence often assumes that an explicit and deterministic model of the trend as a function of location (and spatial covariates) is known [ ]. Even in the nonparametric situation, the estimated values of the underlying spatial trend are considered fixed values, which are subject neither to stochastic variation nor to measurement error. Since it is based on a latent random field, the approach discussed here differs substantially from the Gibbs process approach and assumes a hierarchical, doubly stochastic structure. This very flexible class of point processes provides models of local spatial behavior relative to an underlying large-scale spatial trend. In realistic applications this spatial trend is not known. Values of the covariates that are continuous in space are typically not known everywhere and have been interpolated. It is likely that spatial trends exist in the data that cannot be accounted for by the covariates. The spatial trend is hence not regarded as deterministic but assumed to be a random field. This approach allows us to jointly model the covariate and the spatial pattern, as in the model used for the rainforest example data set. Clearly, unlike Gibbs processes, log-Gaussian Cox processes do not allow second-order inter-individual interactions to be included in a model. In a situation where these are of primary interest, Cox processes are certainly not suitable. In order to make model fitting feasible, the continuous Gaussian random field is approximated here by a discrete Gauss-Markov random field. While this is computationally elegant, one might argue that this approximation is not justified and is too coarse, resulting in an unnecessary loss of information. Clearly, since any model only has a finite representation in a computer, model fitting approaches often work with some degree of discretization.
however , and more importantly , that there is an explicit link between a large class of covariance functions ( and hence the gaussian random field based on these ) and gauss markov random fields , clearly pointing out that the approximation is indeed justified .in addition , based on the results discussed in , the approaches taken in this paper may be extended to avoid the computationally wasteful need of having to use a regular grid [ ] . also mention the issue of complex boundaries structures that are particularly relevant for point process data sets where the observation window has been chosen to align with natural boundaries that may impact on pattern . while this is clearly not an issue for the rainforest data set since the boundaries have been chosen arbitrarily , the koala data set , however , has been observed in an observation window surrounded by a koala proof fence .this fence does probably not impact on the locations of the trees nor the leaf chemistry but might increase the frequency of koala visits near the fence .the approach in may be used to define varying boundary conditions for different parts of the data set , and hence allow for more realistic modeling for data sets with complicated boundary structures . in summary ,the methodology discussed here , together with the ` r ` library ` r - inla ` ( http://www.r-inla.org/ ) , makes complex spatial point process models accessible to scientists outside the statistical sciences and provides them with a toolbox for routinely fitting and assessing the fit of suitable and realistic point process models to complex spatial point pattern data .some of the ideas relevant to the rainforest data were developed during a working group on `` spatial analysis of tropical forest biodiversity '' funded by the natural environment research council and english nature through the nerc centre for population biology and uk population biology network .the data were collected by the center for tropical forest science and the forest institute of malaysia funded by the u.s .national science foundation , the smithsonian tropical research institution and the national institute of environmental studies ( japan ) .we would like to thank david burslem , university of aberdeen , and richard law , university of york , for introducing the rainforest data into the statistical community and for many in - depth discussions over the last few years .we also thank colin beale , university of york , and ben moore , james hutton institute , aberdeen , for extended discussions on the koala data .
this paper develops methodology that provides a toolbox for routinely fitting complex models to realistic spatial point pattern data . we consider models that are based on log - gaussian cox processes and include local interaction in these by considering constructed covariates . this enables us to use integrated nested laplace approximation and to considerably speed up the inferential task . in addition , methods for model comparison and model assessment facilitate the modelling process . the performance of the approach is assessed in a simulation study . to demonstrate the versatility of the approach , models are fitted to two rather different examples , a large rainforest data set with covariates and a point pattern with multiple marks . , .
at present , imaging of hard x - rays ( hxr ) and gamma rays ( gr ) is only feasible by selective absorption using different kinds of masks .an elegant and economic variant uses rotation collimators ( schnopper 1968 , willmore 1970 , skinner and ponman 1995 ) where a pair of absorbing grids rotates between the true scene and a spatially non - resolving detector ( fig .[ rmc_fig ] top ) . depending on whether a source is behind or between the grid bars , the observed flux is low or large . as rotation progresses , the true scene becomes thus encoded in a temporal _ modulation _ of the observed hxr / gr flux ( fig .[ rmc_fig ] bottom ) . in this process, the modulation frequency is not constant but varies with time : glancing passages of the grid bars produce slow modulation , and rippling passages yield fast modulation .the absolute value of the modulation frequency also depends on the offset of the source from the rotation axis , and the modulation amplitude depends on the source size compared to the grid period , in such a way that the amplitude is largest for a point source , and tends to zero if the source size exceeds the grid period .altogether , this results in a characteristic time series , from which the true scene can be reconstructed by suitable inverse methods ( skinner 1979 , prince et al 1988 , skinner and ponman 1995 , hurford et al 2002a ) . while such methods usually assume that the true scene does not depend on time , the present article deals with the complementary problem of estimating the true _ time _dependence of the ( spatially integrated ) scene , and distinguishing it from rotation modulation .this is called here the ` _ demodulation _ ' problem .the data which we envisage are obtained by the reuven ramaty high - energy solar spectroscopic imager ( rhessi , lin et al 2002 ; zehnder et al 2003 ) .this solar - dedicated space mission uses rotation modulation for hxr / gr imaging of solar eruptions .the rhessi instrument has 9 pairs of aligned hxr / gr - absorbing grids ( 9 ` subcollimators ' ) which are fixed on the rotating spacecraft .different subcollimators have different grid spacings , which produce sinusoidal spatial transmission patterns ( see section [ rhessi_sect ] for details ) . behind each subcollimatorthere is a germanium detector which records the arrival time and energy ( 3 kev - 17 mev ) of the incoming hxr / gr photons . from a statistical point of view, the photon arrival times form a poisson process with non - constant intensity .the change in intensity is due to rotation modulation , but may also have contributions from the time dependence of the true scene .the latter is highly interesting from a solar physics point of view , because it is related to particle acceleration in impulsive solar eruptions .the existence of temporal fluctuations of the true hxr scene down to time scales of hundred milliseconds has been confirmed by earlier observations ( dennis 1985 , machado et al 1993 , aschwanden et al 1993 ) . however , these time scales interfere with the rhessi rotation modulation which occurs on time scales of milliseconds to seconds . 
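To illustrate the kind of time series involved, the following toy R sketch generates rotation-modulated Poisson counts from a single point source seen through one rotating grid pair. The two-term cosine transmission and all numerical values are illustrative assumptions, not RHESSI calibration values; the chirped modulation (glancing versus rippling passages) and its dependence on the source offset are nevertheless clearly visible.

    # Toy forward model (illustrative assumptions only, not RHESSI calibration code):
    # counts from a point source behind a rotating sinusoidal transmission pattern.
    rot_period <- 4                                   # rotation period [s], approximate
    pitch      <- 35                                  # angular pitch of one subcollimator [arcsec]
    src        <- c(600, 200)                         # source offset from the imaging axis [arcsec]
    a0 <- 0.25; a1 <- 0.22                            # mean transmission and modulation amplitude

    dt  <- 1e-3                                       # time bins [s]
    t   <- seq(0, 2, by = dt)
    phi <- 2 * pi * t / rot_period                    # grid orientation angle
    k   <- (2 * pi / pitch) * cbind(cos(phi), sin(phi))       # rotating grid vector
    M   <- a0 + a1 * cos(k[, 1] * src[1] + k[, 2] * src[2])   # transmission seen by the source

    flux   <- 5e3                                     # true (unmodulated) count rate [ct/s]
    lambda <- flux * M * dt                           # expected counts per bin
    counts <- rpois(length(lambda), lambda)           # observed, Poisson-distributed counts
    plot(t, counts, type = "s")                       # chirped, modulated light curve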
in order to disentangle time dependence of the true scene from rotation modulation the rhessi data must be demodulated .the most general outcome of this process would be the true scene as a function of both space and time .but the information contained in the photon counts is rather limited , and must be shared between spatial and temporal degrees of freedom of the true scene .we therefore restrict ourselves here to the simpler problem of estimating the spatially integrated true scene as a function of time .the motivation for this stems also from the wish to compare rhessi hxr data with spatially integrated broadband radio observations .both types of radiation are emitted by electrons of comparable energy , and there is some controversy in the field whether or not the two populations actually agree .the actual radio observations have a time resolution of 100 milliseconds , and collect radiation from the whole sun .the demodulation of rhessi data is , in general , an inverse problem .temporal and spatial variations of the true scene are entangled by the observation method .an exception only arises if the time scale of interest exceeds the rhessi rotation period 4s , so that demodulation can be replaced by a running average over .otherwise , a good estimator for the spatially integrated scene requires some a priori information on its spatial structure in order to outweigh or damp the rotational modulation .such information may either come from independent observations , or from rhessi itself . in the latter case , one may use standard rhessi imaging techniques ( hurford et al 2002a ) to obtain an estimate of the time - averaged true scene .these techniques assume that the true scene is independent of time . under this assumption, the true scene can be estimated after half a spacecraft rotation , when all possible grid orientations are cycled , and higher - quality results arise from multiples of .the aim of the present paper is to resolve time scales .we do not attempt here a fully general solution where the spatially true integrated scene is found at each point of time .instead , we will assume that the true scene can be considered as piecewise constant during time intervals of order of 100 ms .such intervals are shorter than most previously reported time scales , and they also agree with the time resolution of the comparative radio observations mentioned above .the present article describes an efficient linear estimator for the spatially- and -integrated scene .the construction of the estimator follows the lines of classical wiener filtering ( rybicki and press 1992 ) , but operates in time ( not frequency ) space .the paper is organized as follows .section [ rhessi_sect ] summarizes some relevant technicalities of the rhessi instrument .section [ method_sect ] presents our inverse theory approach including counting statistics ( sect .[ principle_sect ] ) and a priori information ( sect . 
[prior_sect]). The resulting estimator is then discussed in Sections [uniform_sect]-[mechanism_sect], and its performance (Sect. [performance_sect]) and robustness against violation of the prior assumptions (Sect. [robustness_sect]) are explored by Monte Carlo simulations. Section [flare_sect] shows the algorithm at work on real data from a solar eruption. See Table [notation] for an overview of notation. Angular position = ( ) is measured in locally Cartesian heliocentric coordinates, and the dependence on is therefore also termed 'spatial'. One arc second (1 ) corresponds to 700 km on the solar disc.

[Table [notation]: overview of notation.]

We start with a brief description of the RHESSI response to HXRs and GRs, which defines the 'forward' problem of converting the true scene into the observed counts. We shall only consider photons out of a fixed energy band, average all energy-dependent quantities over that energy band, and omit the energy dependence in the notation. The instantaneous transmission probabilities of RHESSI's subcollimators, as a function of photon incidence direction, are called the modulation patterns. They may be visualized as the grids' shadow on the sky plane if the detectors were operated in a transmitting mode. When expressed in terms of heliocentric Cartesian coordinates (over the limited RHESSI field of view, Cartesian and angular coordinates are equivalent), the modulation patterns can be approximated by . Above, the , i = 1 ... 9, are the grid vectors, which rotate clockwise with the spacecraft rotation period . All grid periods ( asec) are small compared to the solar diameter (1920 ) and to the fields of view of the individual subcollimators (3600 ... 27.000 ). The coefficients ( ) describe the internal shadowing of the grids; they depend on photon energy and only weakly on time as long as the source is in the central part of the field of view, which is the case for solar sources. The vector is the imaging (optical) axis. It generally varies with time, since it is not aligned with the rotation axis, which, in turn, may differ from the inertial axis. As a consequence, traces out a relatively complicated orbit on the solar disc, which is continuously monitored by the onboard aspect systems (Fivian et al 2002, Hurford et al 2002b), and the details of which need not concern us here. All vectors , and lie in the solar plane. In order to avoid detector and telemetry saturation, mechanical attenuators can be inserted into the optical path, and photon counts can be decimated by a clocked veto: during one binary microsecond (1b = 2 ), all events are accepted, during the next ( )b, they are all rejected.
Both reduction methods preserve the statistical independence of the photon arrival times, and can be absorbed in the definition of the modulation patterns. While equation ([modpat]) represents the ideal instrumental response, the real observations suffer from several non-idealities, such as detector saturation at high count rates and sporadic breakdown of the whole detection chain. The latter is presumably caused by cosmic rays (Smith et al 2002); the resulting data gaps occur at random about once per second, and have durations of milliseconds to seconds (see Fig. [rmc_fig] bottom for an example). All non-idealities are combined on-ground in livetime measures 0 1, which estimate the operational fraction of each detector in a given time bin. The effect of livetime is taken into account by multiplying the modulation patterns by . Times with are discarded, which prevents detector saturation and redundant or undefined ( ) numerical operations.

Let denote the solar brightness distribution at position and time , and let the RHESSI counts be grouped in time bins , where = ( , ) labels subcollimator and time. The set may comprise all or only some of the 9 subcollimators, and different subcollimators may have different time bins. The observed counts in different bins are supposed to be statistically independent Poisson variates with expectation values , where we have written to extract the subcollimator index out of . Non-solar background is neglected. The goal is to estimate the spatially integrated true scene in the time interval , as it would be observed in the absence of the subcollimators ( ). The estimator for is sought in the linear form . In the sequel we shall choose to make efficient, i.e., unbiased and of minimum variance among all unbiased estimators. To this end, and for the sake of a concise notation, we first introduce the expectation operator of a general function of the true and of the observed counts. This is done within a Bayesian framework where each true scene has assigned a prior probability , so that . One thus has , where and depend on according to equations ([lambda]) and ([b]). The expectation operator ([e]) involves an average over the counting statistics, followed by an average over the prior scenes, and we write to stress this sequence. If a quantity is independent of the observed counts, then reduces to . So far, is still a formal object. Let us accept this for the moment, and proceed with the design of the weights. A first natural request is that the estimator ([hat(b)]) is unbiased, , requiring that . This condition locates along the direction of . In order to find a unique solution we additionally require the variance to be minimal, and verify a posteriori that this condition is sufficient to ensure uniqueness of the solution. Unbiasedness and minimum-variance constraints are combined by minimizing with respect to , where the Lagrange multiplier must be adjusted to fulfill equation ([unbiased]). The resulting minimum condition is , or, after performing the average over counting statistics, . Here it was used that for independent counts. One may now see that equation ([min_cond2]) has indeed a unique solution if only .
Under this condition, the matrix is strictly positive definite: for all . Since the prior average represents a sum of positively weighted s, the matrix is also positive definite and therefore invertible; indeed, the condition number of cannot exceed the condition number of . Let us add at this point an interpretation of the matrix . For any fixed realization of the true scene, the fluctuations of the observed counts have two causes: instrumental modulation and counting noise. These two causes give rise to the two contributions and in the matrix . At large count rates ( ), is dominated by , and at small count rates ( ) it is dominated by . The condition number of increases with increasing count rates and is bounded from above by . Thus, higher count rates allow weaker-conditioned . We may interpret this by saying that the counting noise regularizes the problem of inverting modulation, and that the amount of regularization is given by the level of Poisson fluctuations compared to the modulation amplitude.

We turn now to the definition of (eq. [e]). Formally, the true brightness distribution is an element of a function space, and its a priori probability measure is a functional. In order to avoid technical complications while retaining the basic probabilistic features, we make the following simplifying assumptions: the true brightness distribution is concentrated in a (known) spatial prior, where it may have (unknown) substructures which do not vary significantly during the time interval for which the unmodulated counts are to be estimated. Thus for . Now we recall that enters the problem only upon weighting with the modulation patterns (eq. [lambda]). Therefore the detailed structure of on scales which are small compared to the period of the modulation patterns does not matter, and we may represent by a (finite) collection of point sources , with the agreement that and that the spacing between the is not less than the finest resolvable scale. Since we do not wish to introduce any bias into the brightness distribution, except for its localization in the prior region, we set = , where is a pdf concentrated in the prior region. This definition of is insensitive to the amplitude of but sensitive to its support. Applying now reduces to an elementary calculation, and equation ([min_cond2]) becomes

\[ \sum_\nu \big[\,\cdots\,\big]_{\mu\nu}\, w_\nu \;=\; \left(\tau - \zeta\,\beta^{-1}\right) \langle m_\mu \rangle \qquad \textrm{([result])} \]

with . Equation ([result]) is our main result. The parameter 0 1 is a 'filling factor': = 0 corresponds to a single unresolved source, whereas 1 stands for a prior region densely covered with sources. If the diameter of the prior region exceeds the angular pitch , then 1 indicates the loss of modulation, while = 0 retains full modulation, yet at an unknown phase. Note that is the quantity which actually is to be estimated. Its occurrence in equation ([result]) is, however, unproblematic. On the right-hand side, is absorbed in the Lagrange multiplier. On the left-hand side, occurs in the regularization only, where it may be replaced by a simpler estimate (Sect. [uniform_sect]) without qualitatively changing the solution. Since only affects the scaling of the right-hand-side vector in equation ([result]), its adjustment for unbiasedness (eq. [unbiased]) is equivalent to re-scaling, which is easily implemented numerically.
It remains to select the pdf . Our choice is largely ad hoc, and must be justified by demonstrating that the resulting estimator remains efficient and well-behaved even if the prior assumptions are violated. This will be done in Sections [performance_sect] and [robustness_sect]. Here we qualitatively motivate our choice. First of all, should be simple and depend on not more than a few parameters in order to facilitate operational data processing. One set of parameters which is provided by the RHESSI data products is the centroid of the HXR emission derived from time-integrated ( ) RHESSI imaging. It is natural to center on . In addition to the centroid, we would like to specify the rough size of the prior region, but not any of its details, since these are not known on time scales as small as . The choice of reflects the uncertainty of the true scene, and may be based on RHESSI imaging or on independent observations. In particular, we may choose such as to cover a solar 'active region' (Howard 1996) derived from magnetic field observations. In any case, should not be unrealistically small; simulations (Sect. [robustness_sect]) suggest that is a reasonable, conservative, choice. Owing to the inherent rotation symmetry of the rotation modulation observing principle, it is suggestive to choose an isotropic form for , although this is by no means a rigorous request. From a practical point of view it is also important that the integrals in equations ([m2]) and ([mm2]) can be performed analytically as far as possible, to save numerical operations. A choice which fulfills the above criteria is . Since the coefficients and in equation ([modpat]) are slowly varying with time, they may be replaced by discrete-time versions and . Equations ([m2], [mm2]) then become

\[ \langle m_\mu m_\nu \rangle \;=\; \sum_{s=\pm1}\;\sum_{m,n=0}^{1}\;\frac{a_{m\mu}\,a_{n\nu}}{2}\; e^{-\frac{l^2}{2}\left(|m\,\mathbf{k}_{i(\mu)}|^{2}+|n\,\mathbf{k}_{i(\nu)}|^{2}\right)} \int_{\delta_\mu}\! dt \int_{\delta_\nu}\! dt'\; e^{-l^{2} s\,m\,n\,\mathbf{k}_{i(\mu)}(t)\cdot\mathbf{k}_{i(\nu)}(t')}\, \cos\!\big(m\,\phi_\mu(t)+s\,n\,\phi_\nu(t')\big) \qquad \textrm{([mmc])} \]

with . On time scales , mostly terms with = contribute to equation ([mmc]). The time integrals in equations ([mc]) and ([mmc]), which depend on the actual spacecraft motion, are evaluated numerically.

RHESSI detects individual photons, and their assignment to time bins is an important first step of the demodulation procedure. There are two, potentially conflicting, requests on the time bins. On the one hand, should resolve (say, by a factor 10) the instantaneous modulation period ([tau_mod]) of a source at position . On the other hand, demodulation is only beneficial if there are sufficient counts to observe modulation against Poisson noise. This requires, empirically, some 5 counts per time bin. In addition, the time bins should be integer fractions of in order to minimize roundoff error. We therefore adopt the following rule: , where is the average count rate [ct/s] in subcollimator . The factors 5 and 10 in equation ([choice_of_delta_t]) are empirical.
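A possible reading of this rule, in R, is sketched below. The functional form is an assumption reconstructed from the verbal description (keep roughly five counts per bin, resolve the modulation period by a factor of ten where possible, round to an integer fraction of the estimation interval); the exact formula of equation ([choice_of_delta_t]) is not reproduced above and may differ.

    # Assumed form of the time-bin rule (illustrative, not the exact RHESSI rule).
    choose_bin <- function(tau_mod, rate, Dtau, resolve = 10, min_counts = 5) {
      dt_raw <- max(tau_mod / resolve, min_counts / rate)   # the coarser of the two requests
      Dtau / ceiling(Dtau / dt_raw)                         # integer fraction of the interval
    }
    choose_bin(tau_mod = 0.05, rate = 4000, Dtau = 0.1)     # returns the bin length in seconds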
equations ( [ mc ] ) and ( [ mmc ] ) are evaluated by an extended trapezoidal rule with intermediate step size adapted to the modulation frequency and amplitude .the inversion of the matrix on the left hand side of equation ( [ result ] ) is performed by singular value decomposition ( golub and van loan 1989 ) with explicit control of condition number .let us start our discussion by verifying that the estimator ( [ hat(b ) ] ) remains well - behaved and meaningful in extreme cases where the prior is pointlike or flat , and where the observed flux tends to zero .the limit of infinite flux is discussed in sect .[ geom_sect ] . at very low count rates ( 1 ct/ ) , modulation is no longer observable , and one may thus expect that should reduce to a simple average , which is an efficient estimator for the pure counting noise problem . indeed , in the limit 0 ,equation ( [ result ] ) yields ( after adjusting to satisfy eq .[ unbiased ] ) equation ( [ uniform ] ) will be referred to as ` uniform average ' since it involves uniform weights ( =const ) .the last approximation in equation ( [ uniform ] ) holds if the modulation is fast ( ) or weak ( ) , so that the term with =1 in equation ( [ mc ] ) can be neglected .the uniform average provides a simple guess of , which is - hopefully improved by the more sophisticated estimator ( [ hat(b ) ] ) .we shall see in sect .[ performance_sect ] - [ robustness_sect ] that this is in fact the case .the uniform average ( [ uniform ] ) is also attained if = 0 or = 1 , both representing deterministic limits with completely localized or densely filled prior regions . in either case, the term reduces to , so that equation ( [ result ] ) has the solution = const , and reduces to .another situation of interest is where the prior becomes globally flat . for can derive from equations ( [ mc]-[mmc ] ) that and .thus equation ( [ result ] ) can be inverted by the sherman - morrison formula ( press et al 1998 ) , and one finds ( after adjusting ) from equation ( [ flat ] ) the uniform average ( [ uniform ] ) is recovered if .otherwise a correction arises which weakly varies with time ( sect .[ rhessi_sect ] ) .bins with large modulation amplitude are given less weight in equation ( [ flat ] ) , which is reasonable since their uncertainty is larger .in summary , we have found that the estimator ( [ hat(b ) ] ) has well - defined and meaningful analytic limits when the flux tends to zero ( ) , and when the prior is completely restrictive ( ) or completely nonrestrictive ( ) . in the limit of infinite count rates ,the counting noise becomes unimportant and equation ( [ result ] ) admits a purely geometrical interpretation .recalling the assumption for , one has that with and .therefore is a good estimator for for _ arbitrary _ if is independent of .the modulation patterns would then be canceled . however , the functions are neither orthogonal nor complete , so that can not be made constant by whatever choice of weights .instead , we may try to minimize the fluctuations of within the spatial prior , and therefore minimize .this yields . comparing this expression to equation ( [ result ] )we see that equation ( [ result ] ) minimizes the fluctuations of within the spatial prior under the maximum - modulating assumption that there is a single point source ( ) observed at infinite count rate ( ) .the matrix is generally ill - conditioned , and the weights therefore oscillate . 
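Continuing the toy light-curve simulation from the introduction, the numerical core of the estimator can be sketched as follows. The prior-averaged quantities are obtained here by Monte Carlo sampling of a Gaussian prior instead of the analytic expressions ([mc]) and ([mmc]), the counting-noise term is added on the diagonal, and the system is solved by a truncated SVD before rescaling for unbiasedness. All numbers are illustrative, and this is a sketch of the structure of the method rather than the operational RHESSI code.

    # Schematic demodulation of one 0.2 s interval of the toy data generated above.
    idx <- which(t < 0.2)                       # bins belonging to the estimation interval
    L   <- 40                                   # assumed width of the spatial prior [arcsec]
    ns  <- 2000                                 # Monte Carlo samples of the prior
    xs  <- cbind(rnorm(ns, src[1], L), rnorm(ns, src[2], L))

    Mp <- a0 + a1 * cos(k[idx, 1] %o% xs[, 1] + k[idx, 2] %o% xs[, 2])  # bins x samples
    m1 <- rowMeans(Mp)                          # prior-averaged <m>
    m2 <- (Mp %*% t(Mp)) / ns                   # prior-averaged <m m'>

    A <- m2 + diag(m1 / (flux * dt))            # counting-noise regularization on the diagonal
                                                # (in practice a simple rate estimate replaces flux)

    s    <- svd(A)                              # truncated SVD with a condition-number cutoff
    keep <- s$d > max(s$d) / 1e6
    w    <- s$v[, keep] %*% ((t(s$u[, keep]) %*% m1) / s$d[keep])

    w     <- w / sum(w * m1 * dt)               # unbiasedness: recover the rate for unmodulated data
    B_hat <- sum(w * counts[idx])               # estimated unmodulated count rate [ct/s]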
at finite count rate , the excursions to large amplify the poisson noise , and thereby impair the estimator ( eq . [ hat(b ) ] ) .an optimum estimator must thus limit the oscillations of to a level which commensurates with the poisson noise .equation ( [ result ] ) provides a possible trade - off .[ phi_fig ] illustrates the resulting function for = 0 and different , using aspect data of february 26 2004 , subcollimators ( 3,4,5,6 ) , and = 0.2s .the time bins are ( 2.8 , 4.8 , 8.5 , 15)ms , and the spatial prior ( size = 40 ) is centered at the source position ( 245 , 340 ) estimated from time - integrated rhessi imaging . the black mask in fig . [ phi_fig ] indicates the 10% level of the spatial prior ( eq . [ prior ] ) . in order to assess the constancy of we consider the standard deviation and mean inside the black mask . in a noiseless world( ) , the modulation patterns would admit 0.0058 ( fig .[ phi_fig]a ) . taking the actual counting noise ( 6200 ct / s ) into account , 0.024 is still achievable ( fig .[ phi_fig]b ) . for comparison , the uniformly weighted ( =const ) case is also shown ( fig .[ phi_fig]b , 0.14 ) , which corresponds to the limit .as can be seen , the function becomes less efficient in canceling the modulation patterns when the count rate decreases .but , at the same time , the sensitivity of to poisson noise increases . the trade - off chosen by the present method is shown in fig .[ phi_fig]b . in a generic situation ( , =0 , )one may identify three mechanisms by which the weights operate .these are most easily discussed in terms of fourier modes of the modulation .first , if the modulation phase is resolved ( ) then modulation can ` actively ' be countersteered .secondly , if the modulation phase is not resolved but the instantaneous modulation frequency ( eq . [ tau_mod ] ) is known , then counts with can be suppressed because they do not allow a ` passive ' averaging .this suppression works for each subcollimator individually .( although does not explicitly show up in equation ( [ result ] ) , it is implicitly contained in the diagonal blocks of correlation function , see fig .[ mm_fig ] below ) .thirdly , the matrix couples different subcollimators , which regulates the relative weighting of different subcollimators .the different mechanisms are illustrated in figures [ cw_fig ] and [ mm_fig ] , showing simulated data of subcollimators ( 4,5,6,7 ) with = ( 885,161 ) , = 20 , = 0.2 , and a single intense ( 3 / s / subcollimator ) point source located at .the chosen geometry represents a solar limb source .the simulated counts are divided into disjoint intervals of duration = 0.25s ( fig .[ cw_fig ] ) , and time bins are equal ( = 0.0025s ) for better comparability .the outweighing of modulation is most clearly seen in the coarsest subcollimator ( fig .[ cw_fig ] , # 7 ) with = 0.16 , where = 122 is the period of subcollimator # 7 .the modulation is also though less efficiently outweighed in subcollimator # 6 ( = 0.28 ) , while the finest subcollimators # 5 ( = 0.49 ) and # 4 ( = 0.85 ) mostly operate in an averaging mode , with priods of low being suppressed .the different regimes manifest in different forms of the correlation ( fig .[ mm_fig ] ) . for 1the matrix approximately factorizes into ( fig .[ mm_fig ] , # 7 ) . with increasing ,the autocorrelation of a single subcollimator takes the form with decay time ( eq . [ mmc ] ) . 
here ,both and depend to first order on , which results in the ` chirping ' behaviour of fig .[ mm_fig ] , subcollimator # 4 .the chirp towards low modulation frequencies is associated with a drop of the corresponding weights ( fig .[ cw_fig ] top , time interval in dashed lines ) .the quality of the demodulation and its robustness against violation of prior assumptions have been explored by monte carlo simulations .figure [ performance_fig ] shows , as an example , the relative errors of simulated time intervals , together with uniform values and the relative poisson error with the total number of counts in . dots illustrate a subsample of the simulation ; graphs represent the full ensemble averages .the spatial prior is centered at = ( 420 , -630 ) and has size = 40 . the simulated brightness distribution is a superposition of 10 random sources with uniform relative amplitudes , gaussian positions ( mean , variance ) , uniform sizes ( 1 ) , and uniform intrinsic time scales ( 0.2s ) . the assumed filling factor is = 0.2 , whereas the true value scatters from 0.8 to 0.9 .data gaps are neglected . at low count rates ,the error is dominated by the poisson noise , and all three types of error coincide . at higher count rates ,the error becomes dominated by modulation , and the uniform average no longer improves with improving counting statistics .the estimator performs better , and reaches the poisson limit for integration times 8ms and count rates up to ct / s / subcollimator .the latter is , in fact , the practically relevant range higher count rates do not occur because the rhessi detectors need 8 to recover after each photon impact .one may thus be confident that at least with regard to the ( tolerant ) error measure the estimator works optimally for practical purposes .the degradation of at highest count rates and largest is attributed to the increased susceptibility to prior assumptions , and possibly also to numerical errors as the involved matrices become large and weak - conditioned .( condition numbers up to occur at ct / s / subcollimator . )extended simulations including different brightness distributions , data gaps , and values yielded similar results . an important practical feature of bayesian inverse methods is their robustness against violation of the prior assumptions .if a wrong prior is used , then the result will be degraded but it should not become ( much ) worse than if no prior was used at all ( i.e. , if the weights were uniform or the prior was flat , ) . we shall now demonstrate that this goal is met by our demodulation method .figure [ robustness_fig ] shows simulations performed at fixed count rate of ct / s / subcollimator and fixed integration time = 0.12s , but with varying size of the prior region , and with different offsets between the true and prior centroids . 
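a single trial of such a monte carlo experiment might be organised as in the following sketch. the modulation pattern values m_k along the imaging axis, the bin durations dt_k, and normalized weights phi are assumed to be given; the poisson limit is the relative error quoted in the text, one over the square root of the total number of counts.

import numpy as np

rng = np.random.default_rng(1)

def mc_trial(b_true, m, dt, phi):
    # counts are poisson with expectation b_true * m_k * dt_k
    counts = rng.poisson(b_true * m * dt)
    b_uniform = counts.sum() / (m * dt).sum()     # uniform average
    b_weighted = float(np.dot(phi, counts))       # weighted estimator
    poisson_err = 1.0 / np.sqrt(counts.sum())     # relative poisson limit
    return b_uniform, b_weighted, poisson_err

averaging the relative errors of many such trials over random source configurations yields curves of the kind shown in figure [ performance_fig ].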
in each simulation , a prior centroid is chosen at random across the solar disc , and the true centroid is placed arc seconds away in random direction .the true source has several gaussian components , which are all concentrated in a narrow ( 3 ) region around the true centroid .the simulated imaging axis moves within the central ( ) region of the solar disc .while the cases reflect realistic uncertainties of rhessi observations , 50 being a conservative value , the case is overly pessimistic and is included here for demonstration purposes .the relative errors ( vertical axis ) are defined as in figure [ performance_fig ] ; the ` flat prior ' estimator is given by equation ( [ flat ] ) .dots again represent a subsample of the simulation , while curves represent averages over the full sample ( ) .if the prior centroid coincides with the true one ( =0 , black solid line ) , then the demodulation reaches the poisson limit for 5 ,i.e. , as long as the prior width does not exceed the true width by more than a factor .if the true and assumed centroids differ ( ) , then the demodulation does no longer reach the poisson limit , and degradation depends on the ratio . as can be seen ,all error curves collapse to the optimum ( ) one when , indicating that discrepancies between true and prior centroids are tolerated up to the size of the prior region . except for very large and unrecognized prior errors ( =500 , 20 ) , the demodulation performs better than the uniform average , and approaches the flat prior estimate as .the flat prior , in turn , performs better than the uniform average .the solar diameter is 1920 , so that effectively represents a trivial prior , which only indicates that the photons came from the sun . as a practical implication , we learn from figure [ robustness_fig ] ( and from similar simulations ) that , for realistic uncertainties ( say , ) , demodulation is beneficial compared to the uniform average if , and it is beneficial compared to the flat prior if .a recent solar eruption , to which also figures [ rmc_fig][phi_fig ] refer , occurred on february 26 , 2004 .rhessi has observed the whole eruption , and figure [ flare_fig ] shows a part of the impulsive rise phase , where most temporal fine structures are expected .the observed counts of subcollimators ( 1,3,4,5,6,7,8,9 ) are divided into disjoint intervals of duration = 0.12s , while subcollimator 2 is rejected due to increased background ( smith et al .the centroid = and size = 30 of the spatial prior are taken from a long - exposure ( 10s ) rhessi imaging and in agreement with the simulations of sect . [ robustness_sect ] .the filling factor is assumed to be = 0.4 .the average count rate is about 4000 ct / s / subcollimator , and energies from 5 to 15 kev are used . in order to remove some arbitrariness of the time binning and to assess the quality of the demodulation we proceed as follows . by assumption ,the true scene is approximated as piecewise constant in time .if this assumption was true then a second demodulation with intervals of equal duration but shifted by should yield a similar result . by comparing with we may thus gain an estimate on the accuracy of the demodulation , and by considering may remove some arbitrariness of the time binning .figure [ flare_fig ] ( top panel ) shows the estimator , together with the uniform average as the simplest possible guess .panel b ) shows the discrepancy between and its time - shifted version . 
as can be seen , the relative discrepancy is small throughout ( 6% ) . the pure poisson error is also shown for comparison ( gray line ) . the residuals exceed the pure poisson error , and this excess is due to uncertainties of the weights . the latter are caused by the principal reasons discussed in sect . [ geom_sect ] , but also have contributions from the uncertainty of the prior centroid and size , as well as from the approximate form of equation ( [ modpat ] ) and its energy - averaged coefficients , possible errors of the aspect data , and from violation of the piecewise constant - in - time approximation of the true scene . a complete disentangling of the different sources of error is difficult and somewhat speculative . however , we may empirically compare the residuals to the residuals ( fig . [ flare_fig]c ) and ( fig . [ flare_fig]d ) . this shows a clear ordering of residual amplitudes b ) c ) d ) , in agreement with the simulations . we may thus be confident that the flat prior estimate improves on the uniform average , and that the demodulation improves on the flat prior estimate . all residuals are centered about zero , in agreement with unbiasedness . by comparing simulation results with the residuals of fig . [ flare_fig ] b - d ) , and with families of similar real - data demodulations with varying ( not shown ) , we conclude that the demodulation error is of the order of the residuals ( fig . [ flare_fig]b ) . as may be seen from figure [ flare_fig]a ) , many of the excursions of , especially towards low count rates ( data gaps ) , are absent in . they are therefore most likely to be instrumental . not all of the data gaps can , however , be removed by the demodulation : see , e.g. , before 01:53:50 . here , the collective dropout of several detectors during inhibits successful compensation . we have developed an unbiased linear bayes estimator for the photons arriving in front of the rhessi optics , which applies in situations where imaging information is less in demand than the ( spatially integrated ) temporal evolution . the prior assumptions involve time - independence of the true brightness distribution during , and a gaussian a priori pdf for the source density on the solar disc . the estimator minimizes the expected quadratic deviation of the true and retrieved unmodulated counts , while enforcing agreement of their expectation values . non - overlapping time intervals are independent .
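the time-shift consistency check just described can be summarised compactly; here 'demodulate' stands for the full estimator of equation ( [ hat(b ) ] ) applied on a given interval grid and is not spelled out, and the names are illustrative.

import numpy as np

def consistency_check(counts, times, demodulate, dT):
    # demodulate twice with interval grids offset by half an interval;
    # the relative discrepancy serves as an empirical accuracy estimate, and
    # averaging the two runs removes some arbitrariness of the time binning.
    b0 = demodulate(counts, times, t0=0.0, dT=dT)
    b1 = demodulate(counts, times, t0=0.5 * dT, dT=dT)
    avg = 0.5 * (b0 + b1)
    resid = (b0 - b1) / np.maximum(avg, 1e-12)
    return avg, resid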
geometrically , the algorithm tries to cancel the spatial transmission patterns ( modulation patterns ) of the rhessi optics by a suitable linear combination of patterns belonging to the time interval . the degree to which canceling is beneficial depends on the counting noise , and the algorithm constructs a trade - off between the poisson and modulational uncertainties of the estimator . monte carlo simulations show that the mean relative error of the demodulation reaches the poisson limit , and demonstrate robustness against violation of the prior assumptions . an application to a solar eruption is also discussed . the present method is limited in several ways . first , any non - solar background is neglected . secondly , the use of sharp time intervals brings the computational advantage that the data can be split and the results merged at the end , but at the cost of possible artifacts at interval boundaries . the use of larger and smoothly tapered time intervals would help , but is computationally demanding . the condition number of a positive definite matrix is defined as the ratio of its largest eigenvalue to its smallest eigenvalue . we first ask for simple bounds on the condition number of the matrix with positive . since the matrix is symmetric , all its eigenvalues lie between the minimum and maximum of the rayleigh quotient with ( euclidean norm ) . the minimum of the rayleigh quotient is bounded by , while the maximum is bounded by . therefore , holds . equation ( [ cond_eb ] ) then follows by induction over with . ( we assume that the true scenes may be labeled by discrete labels . we shall not prove this assertion , but call it plausible in view of the finite resolution of the modulation patterns . ) equation ( [ ineq ] ) is then easily verified by direct calculation :
the rhessi experiment uses rotational modulation for x- and gamma ray imaging of solar eruptions . in order to disentangle rotational modulation from intrinsic time variation , an unbiased linear estimator for the spatially integrated photon flux is proposed . the estimator mimics a flat instrumental response under a gaussian prior , with achievable flatness depending on the counting noise . the amount of regularization is primarily given by the modulation - to - poisson levels of fluctuations , and is only weakly affected by the bayesian prior . monte carlo simulations demonstrate that the mean relative error of the estimator reaches the poisson limit , and real - data applications are shown .
fundamental discoveries in science have always resulted in accompanying technological developments , not only through the possibilities provided by new phenomena , materials and processes but , perhaps most important of all , through the change in mindset driven by these discoveries . herei explore new technological possibilities related to the development of , essentially , a quantum theory of gravity ; a new theory that unifies the phenomena of space and time with quantum phenomena .this synthesis has arisen not through yet another extension of our current mindset but by the development of a profoundly different way of comprehending and modelling reality at its deepest levels. the follow - on new technology relates to the possibility of synthetic quantum systems and their use in a new class of quantum computers .the characterisation of this class suggests , at this very early stage , that these will be unlike both conventional classical and currently envisaged quantum computers , but will have many characteristics in common with biological neural networks , and may be best suited for artificial intelligence applications . indeed the realisation that the phenomena of synthetic quantum systems is possible may amount to a discovery also relevant to our understanding of biological neural networks themselves .but first we must review the nature of and the need for fundamental changes of the mindset prevailing in the physical sciences .these changes relate to the discovery that we need to comprehend reality as a complex semantic information system which only in part can be approximately modelled by syntactical information systems .physics , until recently , has always used syntactical information systems to model reality .these are a development of the euclideanisation of geometry long ago .such systems begin with undefined ` objects ' , represented by symbols , together with _ a priori _ rules of manipulation of these symbols ; these rules being a combination of mathematical manipulations together with ` the laws of physics ' expressed in mathematical notation .this is the game of logic .but logic has only a limited connection to the phenomena of time , for logic is essentially non - process : it is merely symbol manipulation . in physics timehas always been modelled by geometry .this non - process model matches the notion of order , but fails to match the notion of past , present and future . for this reason physicshas always invoked a metarule to better match this model with the experienced phenomena of time .this metarule involves us imagining a point moving at uniform rate along the geometrical time line . despite the success of this model ,its limitations have led to enormous confusion in physics and elsewhere , particularly when reality and models of reality are not clearly distinguished .for example it is often asserted that time _ is _ geometrical ; it _ is _ the 4th dimension .the more successful a model is the more likely is this confusion to arise ; and also the stronger is the urge to resist an examination of failures of this model .one consequence of the refusal to examine other models , particularly of time , has been the failure to find a model that unifies the phenomena of space , time and the quantum . 
as well limitations of self - referential syntactical information systems were discovered by gdel .the famous incompleteness theorem asserts that there exists truths which are unprovable within such an information system .chaitin has demonstrated that the unprovable truths arise from a structural randomness in that there is insufficient structure for such truths to be compressed into the axioms of the system .this structural randomness is related to but is outside a non - process syntactical system .this is different from the randomness observed in quantum measurement processes , which is randomness of events in time ; nevertheless there are suggestive similarities .the quantum randomness is also beyond the syntactical formalism of the quantum theory ; quantum theory is described by entirely deterministic mathematics ( which is by its nature is non - process ) and the randomness is invoked via the born metarule .as for the geometrical model of time , these metarules are outside of the syntax , but must not be inconsistent with it .the analogies between the quantum measurement randomness and the structural randomness associated with self - referential syntax systems has suggested that reality may be a self - referential system subject to intrinsic informational limitations .this has led to the development of the _ process physics _modelling of reality , see also .this model uses a non - geometric process model for time , but also argues for the importance of relational or semantic information in modelling reality .semantic information refers to the notion that reality is a purely informational system where the information is internally meaningful .the study of a pure semantic information system is by means of a subtle bootstrap process .the mathematical model for this has the form of a stochastic neural network .non - stochastic neural networks are well known for their pattern recognition abilities .the stochastic behaviour is related to the limitations of syntactical systems , but also results in the neural network being innovative in that it creates its own patterns .the neural network is self - referential , and the stochastic input , known as self - referential noise ( srn ) , acts both to limit the depth of the self - referencing and also to generate potential order .such information has the form of self - organising patterns which also generate their own ` rules of interaction ' .hence the information is ` content addressable ' , rather than is the case in the usual syntactical information modelling where the information is represented by symbols . inthe process physics space and quantum physics are emergent and unified , and time is a distinct non - geometric process .quantum phenomena are caused by fractal topological defects embedded in and forming a growing three - dimensional fractal process - space .this amounts to the discovery of a quantum gravity model .as discussed in refs. the emergent physics includes limited causality , quantum field theory , the born quantum measurement metarule , inertia , time - dilation effects , gravity and the equivalence principle , and black holes , leading in part to an induced einstein spacetime phenomenology . in particular this new physics predicts that michelson dielectric - mode interferometers can reveal absolute motion relative to the quantum foam which is space. 
analysis of existing experimental dielectric - mode interferometer data confirms that absolute motion is meaningful and measurable . these are examples of an effective syntactical system being induced by a semantic system . here we explore one technological development which essentially follows as an application of the quantum theory of gravity . the key discovery has been that self - referentially limited neural networks display quantum behaviour . it had already been noted by peru that the classical theory of non - stochastic neural networks had similarities with deterministic quantum theory , and so this development is perhaps not unexpected , at least outside physics . this suggests that artificial or synthetic quantum systems ( sqs ) may be produced by a stochastic self - referentially limited neural network , and later we explore how this might be achieved technologically . this possibility could lead to the development of robust quantum computers . however an even more intriguing insight becomes apparent , namely that the work that inspired the classical theory of biological neural networks may have overlooked the possibility that sufficiently complex biological neural networks may in fact be essentially quantum computers . this possibility has been considered by kak and others , see works in pribram . as well , we draw attention to the work of freeman _ et al _ , where it is argued that biocomplexity requires the development of new mathematical models having the form of complex stochastic dynamical systems driven and stabilised by noise of internal origin through self - organising dynamics . these conclusions are based on extensive work on the biodynamics of higher brain function . analogous equations here , see eqn.(1 ) , also arise , but by using entirely different arguments dealing with and addressing deep problems in the traditional modelling of reality within physics . of course the common theme and emerging understanding is that both reality and biocomplexity entail the concept of internal or semantic information , and presumably there are generic aspects to this which , it now seems , have been independently discovered within physics and neurobiology . of course dynamical systems like ( 1 ) dealing with reality as a whole presumably entail and subsume the phenomena of biocomplexity . [ figure 1 : ( a ) nodes and their link variables ; ( b ) the antisymmetric link variables , which avoid explicit self - connections ; ( c ) the minimal spanning tree of a gebit , with nodes arranged by their link distance from a starting node . ] here we briefly describe a model for a self - referentially limited neural network , and in the following section we describe how such a network results in emergent quantum behaviour , which , increasingly , appears to be a unification of space and quantum phenomena .
process physics is a semantic information system and is devoid of _ a priori _ objects and their laws and so it requires a subtle bootstrap mechanism to set it up .we use a stochastic neural network , fig.1a , having the structure of real - number valued connections or relational information strengths ( considered as forming a square matrix ) between pairs of nodes or pseudo - objects and . in standard neural networks the network information resides in both link and node variables , with the semantic information residing in attractors of the iterative network .such systems are also not pure in that there is an assumed underlying and manifest _ a priori _ structure .the nodes and their link variables will be revealed to be themselves sub - networks of informational relations . to avoid explicit self - connections , which are a part of the sub - network content of , we use antisymmetry to conveniently ensure that , see fig.1b . at this stage we are using a syntactical system with symbols and , later , rules for the changes in the values of these variables .this system is the syntactical seed for the pure semantic system .then to ensure that the nodes and links are not remnant _ a priori _ objects the system must generate strongly linked nodes ( in the sense that the for these nodes are much larger than the values for non- or weakly - linked nodes ) forming a fractal network ; then self - consistently the start - up nodes and links may themselves be considered as mere names for sub - networks of relations . for a successful suppressionthe scheme must display self - organised criticality ( soc) which acts as a filter for the start - up syntax .the designation ` pure ' refers to the notion that all seeding syntax has been removed .soc is the process where the emergent behaviour displays universal criticality in that the behaviour is independent of the individual start - up syntax ; such a start - up syntax then has no ontological significance . to generate a fractal structurewe must use a non - linear iterative system for the values .these iterations amount to the necessity to introduce a time - like process .any system possessing _ a priori _ ` objects ' can never be fundamental as the explanation of such objects must be outside the system .hence in process physics the absence of intrinsic undefined objects is linked with the phenomena of time , involving as it does an ordering of ` states ' , the present moment effect , and the distinction between past and present .conversely in non - process physics the presence of _ a priori _ objects is related to the use of the non - process geometrical model of time , with this modelling and its geometrical - time metarule being an approximate emergent description from process - time . in this wayprocess physics arrives at a new modelling of time , _ process time _ , which is much more complex than that introduced by galileo , developed by newton , and reaching its high point with einstein s spacetime geometrical model .the stochastic neural network so far has been realised with one particular scheme involving a stochastic non - linear matrix iteration . 
the matrix inversion then models self - referencing in that it requires all elements of to compute any one element of . as well , there is the additive srn , which limits the self - referential information but , significantly , also acts in such a way that the network is innovative in the sense of generating semantic information , that is , information which is internally meaningful . the emergent behaviour is believed to be completely generic in that it is not suggested that reality is a computation ; rather it appears that reality is essentially very minimal , having the form of an order - disorder information system . to be a successful contender for the theory of everything ( toe ) process physics must ultimately prove the uniqueness conjecture : that the characteristics ( but not the contingent details ) of the pure semantic information system are unique . this would involve demonstrating both the effectiveness of the soc filter and the robustness of the emergent phenomenology , and the complete agreement of the latter with observation . the stochastic neural network is modelled by the iterative process where are independent random variables for each pair and for each iteration , chosen from some probability distribution . here is a parameter whose precise value should not be critical but which influences the self - organisational process . we start the iterator at , representing the absence of information . with the noise absent the iterator would converge in a deterministic and reversible manner to a constant matrix . however , in the presence of the noise the iterator process is non - reversible and non - deterministic . it is also manifestly non - geometric and non - quantum , and so does not assume any of the standard features of syntax - based physics models . the dominant mode is the formation of a randomised and structureless background ( in ) . however , this noisy iterator also manifests a self - organising process which results in a growing three - dimensional fractal process - space that competes with this random background : the formation of a ` bootstrapped universe ' . the emergence of order in this system might appear to violate expectations regarding the 2nd law of thermodynamics ; however , because of the srn the system behaves as an open system , and the growth of order arises from selecting implicit order in the srn . hence the srn acts as a source of negentropy , and the need for this can be traced back to gödel's incompleteness theorem . this growing three - dimensional fractal process - space is an example of a prigogine far - from - equilibrium dissipative structure driven by the srn . from each iteration the noise term will additively introduce rare large values . these , which define sets of linked nodes , will persist through more iterations than smaller valued ones and , as well , they become further linked by the iterator to form a three - dimensional process - space with embedded topological defects .
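purely as an illustration of the ingredients named above ( an antisymmetric start-up matrix near zero , additive self-referential noise , and a nonlinear self-referencing term built from the matrix inverse ) , a stochastic matrix iteration of this kind could be sketched as follows. the exact update rule and parameter values of the published scheme are not reproduced here; the form below is an assumption made only for illustration.

import numpy as np

def antisym(M):
    # antisymmetrize so that the link variables satisfy b_ij = -b_ji
    return 0.5 * (M - M.T)

def process_iterator(n=200, alpha=0.01, noise=0.1, steps=2000, seed=0):
    # assumed illustrative update: B -> B - alpha * (B + inv(B)) + w, with w the
    # self-referential noise drawn independently at each iteration
    rng = np.random.default_rng(seed)
    B = 1e-3 * antisym(rng.standard_normal((n, n)))   # start near B = 0 ("no information")
    for _ in range(steps):
        w = noise * antisym(rng.standard_normal((n, n)))
        B = B - alpha * (B + np.linalg.inv(B)) + w
    return B

in such a run the rare large noise entries seed strongly linked sets of nodes, which is the behaviour exploited in the gebit analysis below.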
to see this , consider a node involved in one such large ; it will be connected via other large to a number of other nodes , and so on , and this whole set of connected nodes forms a connected random graph unit which we call a gebit , as it acts as a small piece of geometry formed from random information links and from which the process - space is self - assembled . the gebits compete for new links and undergo mutations . indeed , as will become clear , process physics is remarkably analogous in its operation to biological systems . the reason for this is becoming clear : both reality and subsystems of reality must use semantic information processing to maintain existence , and symbol manipulating systems are totally unsuited to this need , and in fact totally contrived . to analyse the connectivity of such gebits , assume for simplicity that the large arise with fixed but very small probability ; then the geometry of the gebits is revealed by studying the probability distribution for the structure of the random graph units or gebits , namely minimal spanning trees with $D_i$ nodes at $i$ links from the starting node ( see fig.1c ) , which is given by
\[
\mathcal{P}[D_1,\dots,D_L]\;\propto\;\frac{p^{D_1}}{D_1!\,D_2!\cdots D_L!}\,\prod_{i=1}^{L-1}\left(q^{\sum_{j=0}^{i-1}D_j}\right)^{D_{i+1}}\left(1-q^{D_i}\right)^{D_{i+1}},
\]
where $q=1-p$ , $N$ is the total number of nodes in the gebit and $L$ is the maximum depth from the starting node . to find the most likely connection pattern we numerically maximise $\ln\mathcal{P}[D_1,\dots,D_L]$ for fixed $N$ with respect to $L$ and the $D_i$ . the resulting $L$ and $D_i$ fit very closely to the form ; see fig.2a for and . the resultant values for a range of and are shown in fig.2b . this shows , for below a critical value , that , indicating that the connected nodes have a natural embedding in a 3d hypersphere ; call this a base gebit . above that value of , the increasing value indicates the presence of extra links that , while some conform with the embeddability , are in the main defects with respect to the geometry of the . these extra links act as topological defects . by themselves these extra links will have the connectivity and embedding geometry of numbers of gebits , but these gebits have a ` fuzzy ' embedding in the base gebit . this is an indication of fuzzy homotopies ( a homotopy is , put simply , an embedding of one space into another ) . the base gebits arising from the srn , together with their embedded topological defects , have another remarkable property : they are ` sticky ' with respect to the iterator . consider the larger valued within a given gebit : they form tree graphs , and most tree - graph adjacency matrices are singular ( det $= 0$ ) . however , the presence of other smaller valued and the general background noise ensures that the determinant is small but not exactly zero . then the matrix has an inverse with large components that act to cross - link the new and existing gebits . if this cross - linking were entirely random then the above analysis could again be used , and we would conclude that the base gebits themselves are formed into a 3d hypersphere with embedded topological defects . the nature of the resulting 3d process - space is suggestively indicated in fig.2c .
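the numerical maximisation described above can be illustrated by relaxing the occupation numbers D_i to real values and maximising ln P at fixed total node number; a scan over the maximum depth L, as in the text, is then a straightforward outer loop. the parameter values below are arbitrary and only demonstrate the procedure.

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_P(D, p):
    # negative log of the spanning-tree distribution above, with q = 1 - p and D_0 = 1;
    # D is treated as a real-valued vector so that a continuous optimiser can be used
    q = 1.0 - p
    D = np.asarray(D, dtype=float)
    logP = D[0] * np.log(p) - gammaln(D + 1.0).sum()
    for i in range(1, len(D)):
        logP += D[i] * ((1.0 + D[:i - 1].sum()) * np.log(q) + np.log1p(-q ** D[i - 1]))
    return -logP

N, L, p = 500, 12, 0.05                                   # illustrative values
res = minimize(neg_log_P, x0=np.full(L, (N - 1.0) / L), args=(p,),
               bounds=[(1e-6, None)] * L,
               constraints={'type': 'eq', 'fun': lambda D: D.sum() - (N - 1.0)})
D_star = res.x   # profile of the most likely tree; its shape reveals the effective embedding dimension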
over ongoing iterations the existing gebits become cross - linked and eventually lose their ability to undergo further linking ; they lose their ` stickiness ' and decay . hence the emergent space is 3d but is continually undergoing replacement of its component gebits ; it is an informational process - space , in sharp distinction to the non - process continuum geometrical spaces that have played a dominant role in modelling physical space . if the noise is ` turned off ' then this emergent dissipative space will decay and cease to exist . we thus see that the nature of space is deeply related to the logic of the limitations of logic , as implemented here as a self - referentially limited neural network . relative to the iterator the dominant resource is the large valued from the srn , because they form the ` sticky ' gebits which are self - assembled into the non - flat compact 3d process - space . the accompanying topological defects within these gebits and also the topological defects within the process space require a more subtle description . the key behavioural mode for those defects which are sufficiently large ( with respect to the number of component gebits ) is that their existence , as identified by their topological properties , will survive the ongoing process of mutation , decay and regeneration ; they are topologically self - replicating . consider the analogy of a closed loop of string containing a knot : if , as the string ages , we replace small sections of the string by new pieces , then eventually all of the string will be replaced ; however , the relational information represented by the knot will remain unaffected , as only the topology of the knot is preserved . in the process - space there will be gebits embedded in gebits , and so forth , in topologically non - trivial ways ; the topology of these embeddings is all that will be self - replicated in the processing of the dissipative structure . to analyse and model the life of these topological defects we need to characterise their general behaviour : if sufficiently large , ( i ) they will self - replicate if topologically non - trivial , ( ii ) we may apply continuum homotopy theory to tell us which embeddings are topologically non - trivial , ( iii ) defects will only dissipate if embeddings of ` opposite winding number ' ( these classify the topology of the embedding ) engage one another , ( iv ) the embeddings will in general be fractal , and ( v ) the embeddings need not be ` classical ' , i.e. the embeddings will be fuzzy . tracking the coarse - grained behaviour of such a system has led to the development of a new form of quantum field theory : quantum homotopic field theory ( qhft ) . this models both the process - space and the topological defects . qhft has the form of an iterative functional schrödinger equation for the discrete time - evolution of a wave - functional , where the configuration space is that of all possible homotopic mappings ; maps from to with the set of all possible gebits ( the topological defects need not be s ) . the time step in eqn.(3 ) is relative to the scale of the fractal processes being explicitly described , as we are using a configuration space of prescribed gebits . at smaller scales we would need a smaller value of . clearly this invokes a ( finite ) renormalisation scheme . eqn.(3 ) , without the qsd term , would be called a ` third quantised ' system in conventional terminology .
depending on the ` peaks ' of and the connectivity of the resultant dominant mappings such mappings are to be interpreted as either embeddings or links ; fig.2c then suggests the dominant process - space form within showing both links and embeddings .the emergent process - space then has the characteristics of a quantum foam .note that , as indicated in fig.2c , the original start - up links and nodes are now absent .contrary to the suggestion in fig.2c , this process space can not be embedded in a _finite _ dimensional geometric space with the emergent metric preserved , as it is composed of infinitely nested or fractal finite - dimensional closed spaces .the form of the hamiltonian can be derived from noting that the emergent network of larger valued behaves analogous to a non - linear elastic system , and that such systems have a skymionic description ; see ref. for discussion .there are additional quantum state diffusion ( qsd ) terms which are non - linear and stochastic ; these qsd terms are ultimately responsible for the emergence of classicality via an objectification process , but in particular they produce wave - function(al ) collapses during quantum measurements .the iterative functional schrdinger system can be given a more familiar functional integral representation for , if we ignore the qsd terms .keeping the qsd leads to a functional integral representation for a density matrix formalism , and this amounts to a derivation of the decoherence process which is usually arrived at by invoking the born measurement metarule .here we see that ` decoherence ' arises from the limitations on self - referencing . in the above we have a deterministic and unitary evolution , tracking and preserving topologically encoded information , together with the stochastic qsd terms, whose form protects that information during localisation events , and which also ensures the full matching in qhft of process - time to real time : an ordering of events , an intrinsic direction or ` arrow ' of time and a modelling of the contingent present moment effect .so we see that process physics generates a complete theory of quantum measurements involving the non - local , non - linear and stochastic qsd terms .it does this because it generates both the ` objectification ' process associated with the classical apparatus and the actual process of ( partial ) wavefunctional collapse as the quantum modes interact with the measuring apparatus .indeed many of the mysteries of quantum measurement are resolved when it is realised that it is the measuring apparatus itself that actively provokes the collapse , and it does so because the qsd process is most active when the system deviates strongly from its dominant mode , namely the ongoing relaxation of the system to a 3d process - space .this is essentially the process that penrose suggested ; namely that the quantum measurement process is essentially a manifestation of quantum gravity .the demonstration of the validity of the penrose argument of course could only come about when quantum gravity was _ derived _ from deeper considerations , and not by some _ad hoc _ argument such as the _ quantisation _ of einstein s classical spacetime model .again we see that there is a direct link between gdel s theorem on the limitations of self - referencing syntactical systems and the quantum measurement process .the mappings are related to group manifold parameter spaces with the group determined by the dynamical stability of the mappings .this gauge symmetry leads to the 
flavour symmetry of the standard model . quantum homotopic mappings or skyrmions behave as fermionic or bosonic modes for appropriate winding numbers ; so process physics predicts both fermionic and bosonic quantum modes , but with these associated with topologically encoded information and not with objects or ` particles ' . the previous sections gave a description of a fundamental theory of reality that uses a self - referentially limited neural network scenario inspired by the need to implement subsymbolic semantic information processing , resulting , in particular , in a quantum theory of gravity . the neural network manifests complex connectionist patterns that behave as linking and embedded gebits , with some forming topological defects . the latter are essentially the quantum modes that at a higher level have been studied as quantum field theory . rather than interfering with this emergent quantum mode behaviour , the intrinsic noise ( the srn ) is a _ sine qua non _ for its emergence ; the srn is a source of negentropy that refreshes the quantum system . here i discuss the possible application of this effect to the construction of quantum computers ( qc ) which use synthetic quantum systems ( sqs ) . quantum computers provide the means for an exponential speed - up compared to classical computers , and so offer technological advantages for certain , albeit so far restricted , problems . this speed - up results from the uniquely quantum characteristics of entanglement and the quantum measurement process , both of which now acquire an explanation from process physics . this entanglement allows the parallel unitary time evolution , determined by a time - dependent programmed hamiltonian , of superpositions of states , but results in the ` output ' being encoded in the complexity of the final wavefunction ; the encoding is that of the amplitudes of this wavefunction when expanded in some basis set . these individual amplitudes are not all accessible , but via a judicious choice of quantum measurement , determined by the problem being studied , the required information is accessed . in these early days the simple quantum computers being constructed all use naturally occurring quantum systems , such as interacting individual atoms embedded in a silicon lattice . these are both difficult to construct and sensitive to environmental noise . they will play a key role in testing the concepts of quantum computation . the decoherence caused by this environmental noise can be partly compensated by error - correction codes . however , as first noted by kitaev , quantum computers that use topological quantum field systems would achieve fault - tolerance by virtue of the topological excitations being protected from local errors . from process physics we now see that this process is also the key to the very origin of quantum behaviour . this all suggests that a robust and large scale quantum computer might ultimately be best constructed using a stochastic neural network architecture which exhibits topological quantum field behaviour ; a synthetic quantum system ( sqs ) . such a sqs would essentially manifest synthetic entanglement and synthetic collapse to achieve the apparent advantages of quantum computation , and the inherent robustness would follow from the non - local character of topological modes .
indeed the actual ` information ' processing would be achieved by the interaction of the topological modes .there are a number of key questions such as ( i ) is such a sqs indeed possible ?( ii ) how could such a system could be ` constructed ' ? ( iii ) how could it be programmed ?( iv ) how is information to be inserted and extracted ?i shall not tackle here the fundamental question of whether the phenomenon of a sqs is possible , for to answer that question will require a long and difficult analysis .but assuming the final answer is yes , we shall try here to at least characterise such a system .we first note that such a synthetic quantum system based quantum computer may be very different in its area of application compared to the quantum computers being currently considered . this new class of qcwould best be considered as being non - symbolic and non - algorithmic ; indeed it might best be characterised as a semantic information computer .the property of being non - symbolic follows from the discussions in the previous sections in relation to reality itself , and that information in a sqs qc has an internal or semantic meaning .semantic information or knowledge , as it might best be described , would be stored in such a qc in the form of non - local topological states which are preserved by the topological character of such states , although more static and primitive information could be preserved in the ` classical ' structure of the neural network .such a sqs would be driven by essentially the negentropy effect of thermal noise , with the effective ` iterator ' of the system selecting special patterns from this noise ; this noise essentially acting as a pseudo - srn .so a sqs qc is essentially a ` room temperature ' qc which , combined with its topological features , would make such a system inherently robust .the noise also manifests a synthetic wave - functional collapse mechanism , essentially a synthetic - qsd term in the time evolution equation analogous to eqn.(3 ) .the effect of these collapses is that above a certain threshold the superposition property , namely the hamiltonian term in eqn.(3 ) , would be over - ridden by a non - local and non - linear collapse process ; it is from this process that the sqs qc would be inherently non - algorithmic , since the collapse is inherently random in character .of course as in the more ` conventional ' qc this non - algorithmic ` measurement ' process may be exploited to extract useful and desired outcomes of the ` computation ' .this directed outcome can only be achieved if the exact character of the ` measurement ' can be programmed , otherwise the collapse will result in novel and creative outcomes ; essentially the generalisation and linking of semantic information already in the qc . 
for this reason this class of qc may represent a form of artificial intelligence , and may be best suited to generalisation and creativity , rather than just achieving the exponential speed - up for certain analytical problems such as number factoring . it is unlikely that such a synthetic quantum system computer would be fabricated by conventional ` directed ' construction technologies , either old or new . first , such a computer would necessarily have an enormous number of components in order to achieve sufficient complexity and connectedness that topological quantum modes could be emergent . there also needs to be plasticity , so that the collapse process can result in permanent changes to the neural network connections , for it must be remembered that the sqs does not operate by means of symbol manipulation , but by the interaction and self - interaction of internal states that arise from actual connectivity patterns within the network . it is also not clear in what manner the sqs would be manifested . whereas for reality itself the previous sections suggested that connection was sufficient , in the sqs we could also envisage the possibility that the connectivity is manifested by other modes such as pulse timing or signal phasing . for these reasons such a sqs qc would probably be constructed by means of the self - assembly of enormous numbers of active components , with node and linkage elements , whose individual survival depends on their involvement or activity level . that is , the system could be overconnected initially , and sqs status then achieved by a thinning - out process . to have such an enormous number of components stipulates that we need very small components , and this suggests nanoscale chemical or molecular electronic self - assembly procedures . indeed , the close connection with biological neural networks suggests that we are looking at a nanobiology approach , and that the self - organisation process will be biomimetic . the programming of such a sqs qc would also have to be achieved by subtle and indirect means ; it is very unlikely that such a system could be programmed by the preparation of the initial connection patterns or indeed by an attempt to predetermine the effective hamiltonian for any specified task . rather , like the construction of the sqs , the programming would be achieved by means of plasticity and the biasing of internal interactions ; that is , via essentially biased self - programming . this is obvious from the role of semantic information in such a sqs ; these systems decide how information is represented and manipulated , as the memory process is essentially content addressing . that is , the information is accessed by describing aspects of the required information until the appropriate topological patterns are sufficiently excited and entangled that a collapse process is activated . to bias such self - programming means that the sqs must have essentially complex sensors that import and excite generic pattern excitations , rather than attempting to describe the actual connectivity . indeed , the very operation of such a sqs appears to involve a level of ignorance of its internal operation .
if we attempt to directly probe its operation we either observe confused and unintelligible signalling or we cause collapse events that are unrelated to the semantic information embedded in the system .rather the best we can probably do is exchange information by providing further input and monitoring the consequent output , and proceed in an iterative manner .that a self - referentially limited neural network approach is perhaps capable of providing a deep modelling of reality should not come as a surprise , nor is it in itself mystical or perplexing .with hindsight we can now say how `` else could it have been ? ''physics for some time has been moving in the direction that reality is somehow related to information , and various ` informational interpretations ' of quantum theory were advanced ; but the information here , as encoded in the wavefunction , was always thought to be about the observer s knowledge of the state of the system , and in itself may have had no ontological significance . for this reason the early discoveries of quantum theory were quickly interpreted as amounting to limits on the observer s knowledge of ` particles ' such as where they are and what momentum they have , but rarely whether such point - like entities actually existed . in this way physicists hung to some of the oldest western science ideas of matter being ` objects ' in a ` geometrical space ' .quantum phenomena were telling us a different message , but one which has been ignored for some 75 years .of course reality can not be about objects and their laws ; that methodology is only suitable for higher level phenomenological descriptions , and its success at these levels has misled us about its suitability at deeper levels .the nature of reality must be internally meaningful and always self - processing , the stuff of reality continues to exists not because the ` production process ' is finished , but because these systems are self - perpetuated by this self - processing ; at this deepest level there are no prescribed laws or entities .the closest analogy to this idea of ` internally meaningful ' information is that of the semantic information of the human mind as experienced through our consciousness .the processing of this semantic information is massively parallel and by all accounts non - local .consciousness involves as well memory , self - modelling and self - referencing and a shifting focus of attention .the recognition of consciousness as a major scientific problem has finally occurred , and the subject is attracting intense debate and speculation .it is suggested here that the difficulty science has had in dealing with this phenomena is that western science has been entrenched in a _ non - process _ modelling of reality , and trapped in the inherent limitations of syntax and logic .however process physics is indicating that reality is non - syntax and experiential ; all aspects of reality including space itself are ` occasions of actual experience ' , to use whitehead s phrasing .only information and processes that are internally meaningful can play any role in reality ; reality is not about symbols and their syntax , although they have pragmatic use for observers .so process physics reveals reality to be , what is called , panexperiential .to the extent that these processes result in characteristic and describable outcomes we have emergence of non - process syntactical language .but in general the processes correspond to our experiences of time , and match in particular the experience of the 
` present moment ' or the ` now ' ; although because of the subtleties of communication in such a system it is not known yet whether the historical records of separated individuals can be uniquely correlated or labelled .this is the problem of establishing simultaneity that einstein first drew to our attention . in process physicsit has not yet been determined whether or not the non - local processes , say those associated with epr connections , enable the determination or not of an absolute frame of reference and so absolute simultaneity .the panexperientialism in process physics suggests that at some level of complexity of emergent systems that such systems may be self - aware , not of the individual sub - processes , but of generalised processes at the higher level ; because the processes at this level amount to self - modelling and to other characteristics of consciousness .such experiences can not occur in a symbol manipulating system . in process physicswe see that one of the key emergent and characterising modes in a semantic information system is quantum behaviour .it is suggested that such a behaviour may also arise in a sufficiently complex synthetic neural network subject to pseudo - srn , resulting in synthetic quantum systems .of course one system that may be manifesting such behaviour is that of our brains .simple neural networks evolved to deal with the processing and identification of various external signals , particularly those essential to the survival of the system .these advantages would result in species of systems with ever larger neural networks , all behaving in the classical mode .but with increasing complexity a new phenomena may have emerged , namely that of the synthetic quantum system , particularly if its ` semantic information processing ' was considerably enhanced , as now quantum computation theory is suggesting . hence we are led to the speculative suggestion that ` mind ' may be emergent synthetic quantum system behaviour .that mind my be connected to quantum behaviour has been considered by many , see refs. . in particular the enormous number of synapses in the dendritic network has attracted much speculation .of particular relevance is that the synapses are noisy components . from the viewpoint of synthetic quantum systemsthis synaptic noise would behave as a pseudo - srn and so a source of negentropy .we have explored here some novel technological spin - offs that might conceivably arise from the development of the quantum theory of gravity in the new process physics .this process physics views reality as a self - referentially limited neural network system that entails semantic information growth and processing , and it provides an explanation for much if not ultimately all of the fundamental problems in physics . in particular process physics provides an explanation for the necessity of quantum phenomena , and suggests that such phenomena may emerge synthetically whenever the conditions are appropriate .these conditions may include those of the noisy neural networks that form in part our brains , and the emergent synthetic quantum system behaviour may in fact be what we term our ` mind ' .but at the technological level emergent synthetic quantum system behaviour may ultimately lead to the development of semantic information or knowledge processing quantum computers which exploit the enhanced processing possible in synthetic quantum systems with synthetic entanglement and a synthetic quantum measurement process . 
indeed , because of the similarity of these quantum computers with our biological neural networks and their possible manifestation of synthetic quantum behaviour , this new class of quantum computers may display strong artificial intelligence , if not even consciousness . these synthetic quantum system computers may need to be essentially ` grown ' rather than constructed by laying down fixed structures . to achieve this we would expect to mimic biological neural networks , since by arising naturally they probably represent the simplest manifestation of the required effect . such a strong ai quantum computer would thus represent a major technological target for the emerging field of nanotechnology , and indeed it would constitute a ` smart ' nanostructure . while reality demands a fractal structure for the deep reasons discussed in the text , synthetic quantum computers need not be fractal , at least as regards their manifest structure and operation . of course , being a part of reality , they share in the deep underlying fractal system . cahill and c.m. klinger , _ self - referential noise as a fundamental aspect of reality _ , proc. 2nd int. conf. on unsolved problems of noise and fluctuations ( upon'99 , 1999 ) , eds. d. abbott and l. kish , vol. 511 , p. 43 , american institute of physics , new york , 2000 , gr - qc/9905082
so far proposed quantum computers use fragile and environmentally sensitive natural quantum systems . here we explore the new notion that synthetic quantum systems suitable for quantum computation may be fabricated from smart nanostructures using topological excitations of a stochastic neural - type network that can mimic natural quantum systems . these developments are a technological application of process physics which is an information theory of reality in which space and quantum phenomena are emergent , and so indicates the deep origins of quantum phenomena . analogous complex stochastic dynamical systems have recently been proposed within neurobiology to deal with the emergent complexity of biosystems , particularly the biodynamics of higher brain function . the reasons for analogous discoveries in fundamental physics and neurobiology are discussed . keywords : process physics , self - referential noise , neural network , synthetic quantum system pacs : 05.40.-a , 05.45.df , 05.65.tb , 12.60.-i
data is usually generated by mixing several components of different structures .these structures are often compressible , and are able to provide semantic interpretations of the data content . in addition , they can reveal the difference and similarity among data samples , and thus produce robust features playing vital roles in supervised or unsupervised learning tasks .two types of structures have drawn lots of research attentions in recent years : 1 ) in compressed sensing , a sparse signal can be exactly recovered from its linear measurements at a rate significant below the nyquist rate , in sparse coding , an over - complete dictionary leads to sparse representations for dense signals of the same type ; 2 ) in matrix completion , a low - rank matrix can be precisely rebuilt from a small portion of its entries by restricting the rows ( samples ) to lie in a subspace . in dimension reduction ,low - rank structure has been broadly leveraged for exploring the geometry of point cloud .although sparse and low - rank structures have been studied separately by a great number of researchers for years , the linear combination of them or their extensions is rarely explored until recently .intuitively , fitting data with either sparse or low - rank structure is mature technique but is inevitably restricted by the limited data types they can model , while recent study shows that the linear mixture of them is more expressive in modeling complex data from different applications .a motivating example is robust pca ( rpca ) , which decomposes the data matrix as .the low - rank part summarizes a subspace that is shared by all the samples and thus reveals the global smoothness , while the sparse part captures the individual differences or abrupt changes among samples . a direct application of robust pca is separating the sparse moving objects from the low - rank background in video sequence .another interesting example is morphological component analysis ( mca ) , which decompose the data into two parts that have sparse representations on two incoherent over - complete dictionaries , i.e. , the first part has a very non - sparse representation on the dictionary of the second part , and vise versa .this requirement suggests that the two parts are separable on their sparse representations .note that both rpca and mca can only work on data whose two building parts are incoherent , i.e. , the content of one part can not be moved to the other part without changing either of their structures ( low - rank , sparse , dictionary , etc . ) .this incoherence condition could be viewed as a general extension of the statistical independence supporting independent component analysis ( ica ) blindly separating non - gaussian source signals .it leads to the identifiability of the structures in theory , and is demonstrated to be fulfilled on a wide class of real data .however , new challenges arises when many recent studies tend to focus on big data with complex structures .firstly , existing algorithms are computationally prohibitive to processing these data .for instance , the update of low - rank part in rpca and in its extensions invoke a full singular value decomposition ( svd ) per iterate , while mca requires challenging or minimization per sample / feature and previously achieved incoherent dictionaries / transform operators encouraging sparse representations . thus they suffer from a dramatic growth in time complexity when either feature dimensions or data samples increase . 
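for reference, the convex formulation commonly used for the rpca decomposition ( principal component pursuit ) is

\[
\min_{L,\,S}\;\|L\|_{*}+\lambda\|S\|_{1}\quad\text{subject to}\quad L+S=X ,
\]

where $\|\cdot\|_{*}$ denotes the nuclear norm ( sum of singular values ) , $\|\cdot\|_{1}$ the entrywise $\ell_{1}$ norm , and $\lambda>0$ a trade - off parameter ; this standard form is quoted from the rpca literature rather than from the present text .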
In previous methods, structured information such as low-rank and sparse properties is always obtained at the price of time-consuming optimization, but it is rarely leveraged to improve scalability. Recent progress in randomized approximation and rank-revealing algorithms sheds some light on speeding up robust-PCA-type algorithms: the subspace of the low-rank part can be estimated, with bounded precision, from random samples of its columns/rows or from projections of its columns/rows onto a random ensemble. However, straightforwardly invoking this technique in the RPCA problem requires applying it to the updated residual matrix in every iteration and thus may lead to costly computation. Besides, determining the rank of the low-rank part is not a trivial problem in practice. Secondly, the simple low-rank, sparse and sparse-representation assumptions cannot fully capture the sophisticated relations, individuality and sparsity of data samples with complex structures. While low-rank structure summarizes a global linear relationship between data points, nonlinear relationships, local geometry and correlated functions are more common in big data and more expressive for a much wider class of structures. Moreover, in the past the sparse matrix was simply explained as random noise at random positions, but current studies reveal that it may carry rich structured information that can be of central interest in various applications. For instance, the sparse motions captured by RPCA on video sequence data contain a wealth of unexplored information useful for object tracking and behavior analysis. Furthermore, although sparse representation is more general than sparse features, its quality largely relies on whether the given dictionary or transform operator fits the nature of the data well, and this is difficult to evaluate when the data is of large volume and of general type. Thirdly, two building parts are not sufficient to cover all the mixtures of incoherent structures in big data. On the one hand, dense noise is an extra component that has to be separated from the low-rank and sparse parts in the many cases where an exact decomposition does not hold. This noisy assumption has been considered in stable PCP, DRMF and other theoretical studies, and its robustness and adaptiveness to a broad class of data has also been verified, but efficient algorithms for the noisy model are lacking. On the other hand, further decomposing the low-rank or sparse part into multiple distinguishable sub-components has the potential to reveal locally spatial or temporal relations within each identifiable structure, as well as differences between them, which usually play pivotal roles in supervised and unsupervised learning tasks. Although this appears to be a natural extension of the two-part model in RPCA, how to formulate a proper decomposition model for learning problems and develop a practical algorithm is challenging. We start this paper by studying a novel low-rank and sparse matrix decomposition model, "Go Decomposition (GoDec)", which takes an extra dense noise part into account and casts the decomposition as an alternating optimization of the low-rank and sparse parts.
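As a concrete illustration of the decomposition model discussed above, the following short NumPy sketch builds a synthetic data matrix as the sum of a low-rank part, an entry-wise sparse part and dense Gaussian noise. The sizes, rank, cardinality and noise level are arbitrary illustrative choices rather than values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, card, sigma = 200, 150, 5, 300, 0.01  # illustrative sizes only

# Low-rank part: product of two thin Gaussian factors.
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Sparse part: a few large-magnitude entries at random positions.
S = np.zeros((m, n))
idx = rng.choice(m * n, size=card, replace=False)
S.flat[idx] = rng.uniform(-10.0, 10.0, size=card)

# Dense noise part.
G = sigma * rng.standard_normal((m, n))

X = L + S + G  # observed data: low-rank + sparse + noise
```

Separating the three components back out of X is exactly the task that GoDec and its accelerated variants below address.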
In order to overcome the computational burden caused by the large volume of data, we propose two acceleration strategies for designing the decomposition algorithms. The first is a "bilateral random projection (BRP)" based fast low-rank approximation that yields a randomized update of the low-rank part or its nonlinear variant; this technique builds on recently developed random matrix theory showing that a few random projections of a matrix are able to reveal its associated principal subspace. The other is a Frank-Wolfe-type optimization scheme called the "greedy bilateral (GreB)" paradigm, which updates the left and right factors of the low-rank matrix variable in a mutually adaptive and greedily incremental manner. We show that the two strategies yield considerably more scalable algorithms for low-rank and sparse matrix decomposition. Moreover, both strategies have provable performance guarantees established by rigorous theoretical analysis (Appendices I and II). In order to deal with the complicated structures that cannot be captured by a sum mixture of low-rank and sparse matrices, we propose three variants of GoDec that are more expressive and general for learning from big data. The first variant, "shifted subspace tracking (SST)", is developed for motion segmentation from the raw pixels of a video sequence. SST further analyzes the unexplored rich structure of the sparse part of GoDec, which can be seen as a sum mixture of several motions with distinct appearances and trajectories. SST unifies the detection, tracking and segmentation of multiple motions from complex scenes in a simple matrix factorization model. The second variant, "multi-label subspace ensemble (MSE)", extends the low-rank part of GoDec to a sum of multiple low-rank matrices defined by distinguishable but correlated subspaces. MSE provides a novel insight into the multi-label learning (ML) problem. It addresses this problem by jointly learning inverse mappings that map each label to the feature space as a subspace, and by formulating the prediction as finding the group sparse representation of a given sample on the ensemble of subspaces. Only one subspace per label needs to be learned, and the label correlations are fully exploited by considering the correlations among subspaces. The third variant, "linear functional GoDec (LinGoDec)", learns the scoring functions of users from their ratings matrix and the features of the scored items. It extends the low-rank part of GoDec to a product of an item-feature matrix and a matrix of linear functions, where the latter is constrained to be low-rank and the rows of the former contain the features of the items in the training set. In addition, the sparse part is able to detect advertising effects or anomalies in users' ratings of specific items. LinGoDec formulates the collaborative filtering problem as supervised learning, and thus avoids time-consuming completion of the whole matrix when only a new item's scores (a new row) need to be predicted. The rest of this paper is organized as follows: Section 2 introduces GoDec; Section 3 proposes the two acceleration strategies for processing large-scale data; Section 4 proposes the three variants of GoDec and their practical algorithms; Section 5 presents the experimental results of all the proposed algorithms on different application problems and demonstrates both their effectiveness and efficiency. In all data matrices mentioned in this paper, the rows represent the samples and the columns denote the features. In RPCA, PCP recovers the low-rank and sparse parts from the observed data matrix by minimizing the sum of
the trace norm of the low-rank part and the $\ell_1$ norm of the sparse part. It can be proved that the solution to this convex relaxation is the exact recovery, provided such a decomposition indeed exists and the two parts are sufficiently incoherent; that is, the low-rank part obeys the incoherence property and thus is not sparse, while the sparse part has nonzero entries selected uniformly at random and thus is not low-rank. Popular optimization algorithms such as the augmented Lagrangian multiplier method, the accelerated proximal gradient method and the accelerated projected gradient method have been applied, but a full SVD is a costly subroutine that has to be invoked repeatedly in any of them. Despite the strong theoretical guarantees of robust PCA, the exact decomposition does not always hold for real data matrices, due to extra noise and to complicated structure of the sparse part that does not follow a Bernoulli-Gaussian distribution. Thus a more adaptive model is preferred, in which the low-rank plus sparse model only approximates the data and the residual is dense noise. We then study the approximate "low-rank + sparse" decomposition of a matrix. In this section, we develop "Go Decomposition" (GoDec) to estimate the low-rank part and the sparse part from the data by solving the following optimization problem, which aims at minimizing the decomposition error. We propose the naive GoDec algorithm first and study how to obtain a highly accelerated version in the next section. The optimization problem of GoDec ([e:ls_app]) can be solved by alternately solving the following two subproblems until convergence. Although both subproblems ([e:raw_ls]) have nonconvex constraints, their global solutions exist. In what follows we use the SVD of a matrix and its leading singular values, together with the projection of a matrix onto an entry set, which keeps the entries in that set and sets all others to zero. In particular, the two subproblems in ([e:raw_ls]) can be solved by updating the low-rank part via singular value hard thresholding and updating the sparse part via entry-wise hard thresholding, respectively. The main computation in the naive GoDec algorithm ([e:raw_lss]) is the SVD in the updating sequence; a full SVD is costly, so the algorithm is impractical for large matrices, and a more efficient algorithm is developed later. GoDec alternately assigns to the low-rank part the fixed-rank approximation of the current residual, and to the sparse part the bounded-cardinality sparse approximation of the corresponding residual. The update of the low-rank part is obtained via singular value hard thresholding, while the update of the sparse part is obtained via entry-wise hard thresholding. The term "Go" refers to the similarity between the interplay of the two parts over the GoDec iteration rounds and the two players in the game of Go. Apart from the additional noise part and the faster speed, the direct constraints on the rank and the cardinality also make GoDec different from RPCA, which minimizes their convex surrogates. This makes the rank and cardinality controllable, which is preferred in practice, because prior information about these two parameters can be exploited and a lot of computation can be saved. In addition, GoDec yields an efficient matrix completion algorithm, in which the cardinality constraint is replaced by a fixed support set.
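The alternating scheme just described translates directly into code. The sketch below is a minimal NumPy rendering of the naive GoDec iteration, assuming the objective is to minimize the Frobenius-norm decomposition error subject to a rank bound r on the low-rank part and a cardinality bound k on the sparse part, as in ([e:ls_app]); the stopping rule and default parameters are illustrative choices, not the paper's.

```python
import numpy as np

def naive_godec(X, r, k, iters=100, tol=1e-7):
    """Alternate rank-r hard thresholding of X - S (via SVD) and
    cardinality-k entry-wise hard thresholding of X - L."""
    L, S = np.zeros_like(X), np.zeros_like(X)
    err_prev = np.inf
    for _ in range(iters):
        # Update L: best rank-r approximation of X - S (singular value hard thresholding).
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # Update S: keep the k largest-magnitude entries of X - L, zero out the rest.
        R = X - L
        S = np.zeros_like(X)
        idx = np.argpartition(np.abs(R), -k, axis=None)[-k:]
        S.flat[idx] = R.flat[idx]
        err = np.linalg.norm(X - L - S) ** 2
        if abs(err_prev - err) <= tol * max(err, 1.0):
            break
        err_prev = err
    return L, S
```

On the synthetic matrix generated earlier, naive_godec(X, r=5, k=300) typically recovers the two components up to the noise level, but the full SVD in every iteration is precisely the cost that the acceleration strategies of the next section avoid.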
convergence and robustness analysis of godecis given in appendix i based on theory of alternating projection on two manifolds .we firstly introduce the bilateral random projections ( brp ) based low - rank approximation and its power scheme modification .brp reduces the time consuming svd in nave godec to a closed - form approximation merely requiring small matrix multiplications .however , we need to invoke more expensive power scheme of brp when the matrix spectrum does not have dramatic decreasing .moreover , the rank needs to be estimated for saving unnecessary computations .thus we propose greedy bilateral sketch ( grebske ) , which augments the matrix factors column / rows - wisely by selecting the best rank - one directions for approximation .it can adaptively determines the rank by stopping the augmenting when error is sufficiently small , and has accuracy closer to svd .given bilateral random projections ( brp ) of an dense matrix ( w.l.o.g , ) , i.e. , and , wherein and are random matrices , is a fast rank- approximation of .the computation of includes an inverse of an matrix and three matrix multiplications .thus , for a dense , floating - point operations ( flops ) are required to obtain brp , flops are required to obtain .the computational cost is much less than svd based approximation . in order to improve the approximation precision of in ( [ e : lr_app ] )when and are standard gaussian matrices , we use the obtained right random projection to build a better left projection matrix , and use to build a better . in particular , after , we update and calculate the left random projection , then we update and calculate the right random projection . a better low - rank approximation will be obtained if the new and are applied to ( [ e : lr_app ] ) .this improvement requires additional flops of in brp calculation .when singular values of decay slowly , ( [ e : lr_app ] ) may perform poorly .we design a modification for this situation based on the power scheme . in the power scheme modification, we instead calculate the brp of a matrix , whose singular values decay faster than .in particular , . both and the same singular vectors .the brp of is : according to ( [ e : lr_app ] ) , the brp based rank approximation of is : in order to obtain the approximation of with rank , we calculate the qr decomposition of and , i.e. , the low - rank approximation of is then given by : ^{\frac{1}{2q+1}}q_2^t.\ ] ] the power scheme modification ( [ e : mlr_app ] ) requires an inverse of an matrix , an svd of an matrix and five matrix multiplications .therefore , for dense , flops are required to obtain brp , flops are required to obtain the qr decompositions , flops are required to obtain . the power scheme modification reduces the error of ( [ e : lr_app ] ) by increasing .when the random matrices and are built from and , additional flops are required in the brp calculation .thorough error bound analysis of brp and its power scheme is given in appendix ii .since brp based low - rank approximation is near optimal and efficient , we replace svd with brp in nave godec in order to significantly reduce the time cost .we summarize godec using brp based low - rank approximation ( [ e : lr_app ] ) and power scheme modification ( [ e : mlr_app ] ) in algorithm 1 . when , for dense , ( [ e : lr_app ] ) is applied .thus the qr decomposition of and in algorithm 1 are not performed , and is updated as . 
in this case ,algorithm [ alg : godec ] requires flops per iteration .when integer , ( [ e : mlr_app ] ) is applied and algorithm 1 requires flops per iteration .[ alg : godec ] * initialize * , , the major computation in nave godec is the update of the low - rank part , which requires at least a truncated svd . although the proposed randomized strategy provides a faster and svd - free algorithm for godec , how to determine the rank of and the cardinality of is still an unsolved problem in real applications .in fact , these two parameters are not easy to determine and could lead to unstable solutions when estimated incorrectly .noisy robust pca methods such as stable pcp , godec and drmf usually suffer from this problem .another shortcoming of the randomized strategy is that the time complexity is dominated by matrix multiplications , which could be computationally slow on high - dimensional data .in this part , we describe and analyze a general scheme called `` greedy bilateral ( greb ) '' paradigm for solving optimizing low - rank matrix in mainstream problems . in greb, the low - rank variable is modeled in a bilateral factorization form , where is a tall matrix and is a fat matrix .it starts from and respectively containing a very few ( e.g. , one ) columns and rows , and optimizes them alternately .their updates are based on observation that the object value is determined by the product rather than individual or .thus we can choose a different pair producing the same but computed faster than the one derived by alternating least squares like in irls - m and als .in greb , the updates of and can be viewed as mutually adaptive update of the left and right sketches of the low - rank matrix .such updates are repeated until the object convergence , then a few more columns ( or rows ) are concatenated to the obtained ( or ) , and the alternating updates are restarted on a higher rank . here , the added columns ( or rows ) are selected in a greedy manner .specifically , they are composed of the rank- column ( or row ) directions on which the object decreases fastest .greb incrementally increases the rank until when is adequately consistent with the observations .[ alg : greb ] * initialize * ( and ) greb s greedy strategy avoids the failures brought by possible biased rank estimation .moreover , greedy selecting optimization directions from to is faster than updating directions in all iterates like in lmafit and .in addition , the lower rank solution before each rank increment is invoked as the `` warm start '' of the next higher rank optimization and thus speed up convergence .furthermore , its mutually adaptive updates of and yields a simple yet efficient svd - free implementation . under greb paradigm ,the overall time complexity of matrix completion is ( -sampling set , -matrix size , -rank ) , while the overall complexities of low - rank approximation and noisy robust pca are .an improvement on sample complexity can also be justified .an theoretical analysis of greb solution convergence based on the result of geco is given in appendix iii . 
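Before turning to the GreB-based algorithm in detail, here is a small NumPy sketch of the BRP low-rank approximation used in Algorithm 1. It follows the construction described above: with a Gaussian test matrix, the right projection Y1 is reused to build a better left projection, and the rank-r approximation is Y1 (A2^T Y1)^{-1} Y2^T; for q >= 1 the same construction is applied to (X X^T)^q X. The recovery step used here for the power scheme (projecting X onto the column space of Y1) is a common practical shortcut rather than the exact QR-based formula ([e:mlr_app]), and the pseudo-inverse is used purely for numerical safety.

```python
import numpy as np

def brp_lowrank(X, r, q=0, rng=None):
    """Rank-r approximation of X from bilateral random projections.
    q = 0 is the basic scheme; q >= 1 applies it to (X X^T)^q X, whose
    singular values decay faster (power scheme)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = X.shape
    Z = X.copy()
    for _ in range(q):                    # power scheme: sharpen the spectrum
        Z = X @ (X.T @ Z)
    A1 = rng.standard_normal((n, r))
    Y1 = Z @ A1                           # right random projection
    A2 = Y1                               # reuse Y1 as an improved left test matrix
    Y2 = Z.T @ A2                         # left random projection
    core = np.linalg.pinv(A2.T @ Y1)      # (A2^T Y1)^{-1}
    if q == 0:
        return Y1 @ core @ Y2.T
    # Power-scheme recovery: project X onto the range found for (X X^T)^q X.
    Q, _ = np.linalg.qr(Y1)
    return Q @ (Q.T @ X)
```

Each call costs a handful of thin matrix multiplications and one r x r inverse instead of a full SVD, which is where the per-iteration savings of Algorithm 1 over the naive algorithm come from.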
in the following , we present greb by using it to derive a practical algorithm `` greedy bilateral smoothing ( grebsmo ) '' for godec .it can also be directly applied to low - rank approximation and matrix completion [ ] .we summarize general greb paradigm in algorithm [ alg : greb ] , and then present the detailed grebsmo algorithm .in particular , we formulate godec by replacing with its bilateral factorization and regularizing the norm of s entries : note the regularization is a minor modification to the cardinality constraint in ( [ e : ls_app ] ) .it induces soft - thresholding in updating , which is faster than sorting caused by cardinality constraint in godec and drmf . alternately optimizing , and in ( [ equ : grebsmo ] )immediately yields the following updating rules : where is an element - wise soft thresholding operator with threshold such that \times [ n]\right\}.\ ] ] the same trick of replacing the pair with a faster computed one is applied and produce the above procedure can be performed in flops for and . in grebsmo , ( [ equ : grebsmoa ] )is iterated as a subroutine of greb s greedy incremental paradigm . in particular , the updates in ( [ equ : grebsmoa ] ) are iterated for times or until the object converging , then rows are added into as the new directions for decreasing the object value . in order to achieve the fastest decreasing directions , we greedily select the added rows as the top right singular vectors of the partial derivative we also allow to approximate row space of the singular vectors via random projections .the selected rows maximize the magnitude of the above partial derivative and thus lead to the most rapid decreasing of the object value , a.k.a ., the decomposition error .grebsmo repeatedly increases the rank until a sufficiently small decomposition error is achieved .so the rank of the low - rank component is adaptively estimated in grebsmo and does not relies on initial estimation .although the two strategies successfully generate efficient low - rank and sparse decomposition capable to tackle large volume problem of big data , the complicated structures widely existing in big data can not be always expressed by the sum of low - rank and sparse matrices and thus may still lead to the failure of rpca typed models .therefore , we address this problem by developing several godec s variants that unravel different combination of incoherent structures beyond low - rank and sparse matrices , where the two strategies can be still used to achieve scalable algorithms .sst decomposes of godec into the sum of several matrices , each of whose rows are generated by imposing a smooth geometric transformation sequence to the rows of a low - rank matrix .these rows store moving object in the same motion after aligning them across different frames , while the geometric transformation sequence defines the shared trajectories and deformations of those moving objects across frames . in the following ,we develop an efficient randomized algorithm extracting the motions in sequel , where the low - rank matrix for each motion is updated by brp , and the geometric transformation sequence is updated in a piece - wise linear approximation manner .we consider the problem of motion segmentation from the raw video data . 
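Before setting up the motion-segmentation problem, the GreB/GreBsmo scheme described above can be made concrete with the following simplified sketch: it alternates least-squares updates of the bilateral factors with soft thresholding of the sparse part, and increases the rank by one whenever the inner loop stalls. Several ingredients of the actual algorithm are deliberately omitted here (the greedy choice of new directions from the partial derivative, random projections, and warm-start details), so this is a schematic of the paradigm rather than a faithful re-implementation of GreBsmo.

```python
import numpy as np

def soft_threshold(A, lam):
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def greb_smoothing_sketch(X, lam, max_rank=20, inner=20, tol=1e-6, rng=None):
    """Schematic greedy-bilateral smoothing: X ~ U V + S with soft-thresholded S."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = X.shape
    U = rng.standard_normal((m, 1))       # start from rank one
    V = rng.standard_normal((1, n))
    S = np.zeros_like(X)
    for _ in range(max_rank):
        for _ in range(inner):
            R = X - S
            # Alternating least squares on the two factors of the low-rank part.
            V = np.linalg.lstsq(U, R, rcond=None)[0]
            U = np.linalg.lstsq(V.T, R.T, rcond=None)[0].T
            S = soft_threshold(X - U @ V, lam)
        if np.linalg.norm(X - U @ V - S) <= tol * np.linalg.norm(X):
            break
        # Rank increment: append one new direction and re-optimize from the warm start.
        U = np.hstack([U, rng.standard_normal((m, 1))])
        V = np.vstack([V, rng.standard_normal((1, n))])
    return U, V, S
```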
given a data matrix that stores a video sequence of frames , each of which has pixels and reshaped as a row vector in , the goal of sst framework is to separate the motions of different object flows , recover both their low - rank patterns and geometric transformation sequences .this task is decomposed as two steps , background modeling that separates all the moving objects from the static background , and flow tracking that recovers the information of each motion . in this problem, stands for the entry of a vector or the row of a matrix , while signifies the entry at the row and the column of a matrix. the first step can be accomplished by either godec or grebsmo .after obtaining the sparse outliers storing multiple motions , sst treats the sparse matrix as the new data matrix , and decomposes it as , wherein denotes the motion , stands for the sparse outliers and stands for the gaussian noise .the motion segmentation in sst is based on an observation to the implicit structures of the sparse matrix . if the trajectory of the object flow is known and each frame ( row ) in is shifted to the position of a reference frame , due to the limited number of poses for the same object flow in different frames , it is reasonable to assume that the rows of the shifted exist in a subspace . in other words , after inverse geometric transformation is low - rank .hence the sparse motion matrix has the following structured representation =l(i)\circ\tau(i).\ ] ] the invertible transformation denotes the 2-d geometric transformation ( to the reference frame ) associated with the motion in the frame , which is represented by . to be specific , the row in is after certain permutation of its entries .the permutation results from applying the nonlinear transformation to each nonzero pixel in such that , where could be one of the five geometric transformations , i.e. , translation , euclidean , similarity , affine and homography , which are able to be represented by , , , and free parameters , respectively .for example , affine transformation is defined as = \left [ \begin{array}{cc } \rho\cos\theta & \rho\sin\theta \\-\rho\sin\theta & \rho\cos\theta \\ \end{array } \right ] \left [ \begin{array}{c } x \\y \\\end{array } \right]+ \left [ \begin{array}{c } t_x \\ t_y \\\end{array } \right],\ ] ] wherein is the rotation angle , and are the two translations and is the scaling ratio .it is worth to point out that can be any other transformation beyond the geometric group .so sst can be applied to sparse structure in other applications if parametric form of is known .we define the nonlinear operator as therefore , the flow tracking in sst aims at decomposing the sparse matrix ( obtained in the background modeling ) as in sst , we iteratively invoke times of the following matrix decomposition to greedily construct the decomposition in ( [ e : sstmodel ] ) : in each time of the matrix decomposition above , the data matrix is obtained by former decomposition . 
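The parametric transformations used by SST are straightforward to evaluate. The snippet below applies an affine transformation of the form given above (rotation angle theta, isotropic scaling rho, translations t_x and t_y) to the nonzero pixels of a single sparse frame, exploiting the sparsity in the way noted later in this section, i.e. only nonzero pixels are transformed; the frame size and parameter values are made up for illustration.

```python
import numpy as np

def affine_transform_sparse(frame, theta, rho, t):
    """Apply x' = rho * R(theta) x + t to the nonzero pixels of a 2-D frame.
    Pixels mapped outside the frame are simply dropped."""
    h, w = frame.shape
    ys, xs = np.nonzero(frame)                    # transform nonzero pixels only
    c, s = np.cos(theta), np.sin(theta)
    xn = rho * (c * xs + s * ys) + t[0]
    yn = rho * (-s * xs + c * ys) + t[1]
    xn, yn = np.rint(xn).astype(int), np.rint(yn).astype(int)
    keep = (xn >= 0) & (xn < w) & (yn >= 0) & (yn < h)
    out = np.zeros_like(frame)
    out[yn[keep], xn[keep]] = frame[ys[keep], xs[keep]]
    return out

# Hypothetical usage: translate a small blob by (3, 5) pixels with a slight rotation.
f = np.zeros((64, 64)); f[20:24, 30:34] = 1.0
g = affine_transform_sparse(f, theta=0.05, rho=1.0, t=(3.0, 5.0))
```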
in order to save the computation and facilitate the parameter tuning, we cast the decomposition ( [ e : submodel ] ) into an optimization similar to ( [ e : ls_app ] ) , flow tracking in sst solves a sequence of optimization problem of type ( [ e : sstopt ] ) .thus we firstly apply alternating minimization to ( [ e : sstopt ] ) .this results in iterative update of the solutions to the following three subproblems , the first subproblem aims at solving the following series of nonlinear equations of , albeit directly solving the above equation is difficult due to its strong nonlinearity , we can approximate the geometric transformation by using piece - wise linear transformations , where each piece corresponds to a small change of defined by . thus the solution of ( [ e : tau ] ) can be approximated by accumulating a series of .this can be viewed as an inner loop included in the update of .thus we have linear approximation where is the jacobian of with respect to the transformation parameters in .therefore , by substituting ( [ e : tauapp ] ) into ( [ e : tau ] ) , in each linear piece can be solved as the update of starts from some initial , and iteratively solves the overdetermined linear equation ( [ e : tauequ ] ) with update until the difference between the left hand side and the right hand side of ( [ e : tau ] ) is sufficiently small .it is critical to emphasize that a well selected initial value of can significantly save computational time .based on the between - frame affinity , we initialize by the transformation of its adjacent frame that is closer to the template frame , another important support set constraint , , needs to be considered in calculating during the update of .this constraint ensures that the object flows or segmented motions obtained by sst always belong to the sparse part achieved from the background modeling , and thus rules out the noise in background . 
hence , suppose the complement set of to be , each calculation of follows a screening such that , the second subproblem has the following global solution that can be updated by brp based low - rank approximation ( [ e : lr_app ] ) and its power scheme modification , wherein denotes the inverse transformation towards .the svds can be accelerated by brp based low - rank approximation ( [ e : ls_app ] ) .another acceleration trick is based on the fact that most columns of are nearly all - zeros .this is because the object flow or motion after transformation occupies a very small area of the whole frame .therefore , the update of can be reduced to low - rank approximation of a submatrix of that only includes dense columns .since the number of dense columns is far less than , the update of can become much faster .the third subproblem has a global solution that can be obtained via soft - thresholding similar to the update of in grebsmo , [ a : sst ] a support set constraint should be considered in the update of as well .hence the above update follows a postprocessing , note the transformation computation in the update can be accelerated by leveraging the sparsity of the motions .specifically , the sparsity allows sst to only compute the transformed positions of the nonzero pixels .we summarize the sst algorithm in algorithm [ a : sst ] .mse provides a novel insight into the multi - label learning ( ml ) problem , which aims at predicting multiple labels of a data sample .most previous ml methods focus on training effective classifiers that establishes a mapping from feature space to label space , and take the label correlation into account in the training process . because it has been longly believedthat label correlation is useful for improving prediction performance .however , in these methods , both the label space and the model complexity will grow rapidly when increasing the number of labels and simultaneously modeling their joint correlations .this usually makes the available training samples insufficient for learning a joint prediction model .mse eliminates this problem by jointly learning inverse mappings that map each label to the feature space as a subspace , and formulating the prediction as finding the group sparse representation of a given sample on the ensemble of subspaces . in the training stage ,the training data matrix is decomposed as the sum of several low - rank matrices and a sparse residual via a randomized optimization .each low - rank part defines a subspace mapped by a label , and its rows are nonzero only when the corresponding samples are annotated by the label . the sparse part captures the rest contents in the features that can not be explained by the labels .the training stage of mse approximately decomposes the training data matrix into .for the matrix , the rows corresponding to the samples with label are nonzero , while the other rows are all - zero vectors .the nonzero rows denote the components explained by label in the feature space .we use to denote the index set of samples with label in the matrix and , and then the matrix composed of the nonzero rows in is represented by . in the decomposition , the rank of is upper bounded , which indicates that all the components explained by label nearly lies in a linear subspace .the matrix is the residual of the samples that can not be explained by the given labels . 
in the decomposition ,the cardinality of is upper bounded , which makes sparse .if the label matrix of is , the rank of is upper bounded by and the cardinality of is upper bounded by , the decomposition can be written as solving the following constrained minimization problem : therefore , each training sample in is decomposed as the sum of several components , which respectively correspond to multiple labels that the sample belongs to .mse separates these components from the original sample by building the mapping from the labels to the feature space . for label , we obtain its mapping in the feature space as the row space of .although the rank constraint to and cardinality constraint to are not convex , the optimization in ( [ e : ms ] ) can be solved by alternating minimization that decomposes it as the following subproblems , each of which has the global solution : the solutions of and in the above subproblems can be obtained via hard thresholding of singular values and the matrix entries , respectively .note that both svd and matrix entry - wise hard thresholding have global solutions .in particular , is built from the first largest singular values and the corresponding singular vectors of , while is built from the entries with the largest absolute value in , i.e. , =u\lambda v^t ; \\s=\mathcal { p}_{\phi}\left(x-\sum\limits_{j=1}^kl^j\right ) , \phi:\left|\left(x-\sum\limits_{j=1}^kl^j\right)_{{r , s}\in{\phi}}\right|\neq0 \\ { \rm~and~ } \geq \left|\left(x-\sum\limits_{j=1}^kl^j\right)_{{r , s}\in{\overline{\phi}}}\right| , |\phi|\leq k. \end{array } \right.\ ] ] the projection represents that the matrix has the same entries as on the index set , while the other entries are all zeros .the decomposition is then obtained by iteratively solving these subproblems in ( [ e : mssub ] ) according to ( [ e : mssolution ] ) . in this problem , we initialize and as in each subproblem , only one variable is optimized with the other variables fixed .similar to godec , brp based acceleration strategy can be applied to the above model and produces the practical training algorithm in algorithm [ a : msetraining ] . in the training ,the label correlations is naturally preserved in the subspace ensemble , because all the subspaces are jointly learned . since only subspacesare learned in the training stage , mse explores label correlations without increasing the model complexity .[ a : msetraining ] initialize and according to ( [ e : msinitial ] ) , qr decomposition for , in the prediction stage of mse , we use group _ lasso _ to estimate the group sparse representation of a test sample on the subspace ensemble ] , ] and ] and $ ] .assume , wherein is the noise corresponding to , we have =\overline l+\delta.\end{aligned}\ ] ] thus the normal space of manifold is since the tangent space is the complement space of the normal space , by using the normal space of in ( [ e : normalmn ] ) and the normal space of given in ( [ e : normaln ] ) , we can verify by substituting the above results into ( [ e : cosmn ] ) , we obtain hence we have the last equivalence is due to in ( [ e : normalmn ] ) .thus where the diagonal entries of and are composed by eigenvalues of and , respectively .the last inequality is obtained by considering the case when and have identical left and right singular vectors . because infers , we have since in theorem [ t : linearcov ] can be selected as any constant that is strictly larger than , we can choose . 
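Returning to the MSE training stage described above, and before the convergence analysis resumes, the following bare-bones sketch alternates the per-label low-rank updates and the sparse-residual update of ([e:mssub])-([e:mssolution]). The label-indexed row supports, the rank bound per label and the cardinality bound are passed in explicitly; this is a plain alternating-minimization illustration, not the BRP-accelerated Algorithm [a:msetraining].

```python
import numpy as np

def mse_train_sketch(X, label_rows, rank, card, iters=50):
    """X: n x d training matrix; label_rows: one array of row indices per label.
    Returns per-label low-rank matrices L[j] (nonzero only on that label's rows)
    and a sparse residual S with at most `card` nonzero entries."""
    n, d = X.shape
    k = len(label_rows)
    L = [np.zeros((n, d)) for _ in range(k)]
    S = np.zeros((n, d))
    for _ in range(iters):
        for j in range(k):
            rows = label_rows[j]
            # Residual seen by label j, restricted to the rows annotated with it.
            R = (X - S - sum(L[i] for i in range(k) if i != j))[rows]
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            L[j][:] = 0.0
            L[j][rows] = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse residual: keep the `card` largest-magnitude entries.
        R = X - sum(L)
        S[:] = 0.0
        idx = np.argpartition(np.abs(R), -card, axis=None)[-card:]
        S.flat[idx] = R.flat[idx]
    return L, S
```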
in theorem [ t : asymptpro ], the cosine is directly used .therefore , the asymptotic and convergence speeds of will be slowed by augmenting , and vice versa .however , the asymptotical improvement and the linear convergence will not be jeopardized unless . for general that is not normalized onto the sphere , should be replaced by . for the variable , we can obtain an analogous result via an analysis in a similar style as above . for general without normalization, the asymptotic / convergence speed of will be slowed by augmenting , and vice versa , wherein the asymptotical improvement and the linear convergence will not be jeopardized unless .this completes the proof .theorem [ t : acspeed ] reveals the influence of the low - rank part , the sparse part and the noise part to the asymptotic / convergence speeds of and in godec .both and are the element - wise hard thresholding error of and the singular value hard thresholding error of , respectively .large errors will slow the asymptotic and convergence speeds of godec . since and , the noise part in and can be interpreted as the perturbations to and and deviates the two errors from .thus noise with large magnitude will decelerate the asymptotical improvement and the linear convergence , but it will not ruin the convergence unless or . therefore , godec is robust to the additive noise in and is able to find the approximated decomposition when noise is not overwhelming .we analyze the error bounds of the brp based low - rank approximation ( [ e : lr_app ] ) and its power scheme modification ( [ e : mlr_app ] ) . in low - rank approximation , the left random projection matrix is built from the left random projection , and then the right random projection matrix is built from the left random projection . thus and .hence the approximation error given in ( [ e : unblock ] ) has the following form : \right\|.\ ] ] the following theorem [ t : deterministicbound ] gives the bound for the spectral norm of the deterministic error .[ t : deterministicbound ] * ( deterministic error bound ) * given an real matrix with singular value decomposition , and chosen a target rank and an ( ) standard gaussian matrix , the brp based low - rank approximation ( [ e : lr_app ] ) approximates with the error upper bounded by see section [ s : proof ] for the proof of theorem [ t : deterministicbound ] .if the singular values of decay fast , the first term in the deterministic error bound will be very small .the last term is the rank- svd approximation error .therefore , the brp based low - rank approximation ( [ e : lr_app ] ) is nearly optimal .[ t : deterministicboundpower ] * ( deterministic error bound , power scheme ) * frame the hypotheses of theorem [ t : deterministicbound ] , the power scheme modification ( [ e : mlr_app ] ) approximates with the error upper bounded by see section [ s : proof ] for the proof of theorem [ t : deterministicboundpower ] . if the singular values of decay slowly , the error produced by the power scheme modification ( [ e : mlr_app ] ) is less than the brp based low - rank approximation ( [ e : lr_app ] ) and decreasing with the increasing of . 
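The bounds above are easy to probe numerically. The snippet below compares, on a synthetic matrix with slowly decaying singular values, the approximation errors of plain BRP, the power-scheme variant and the rank-r truncated SVD that serves as the baseline in the bounds; the matrix size, spectrum and choice of q are arbitrary, and brp_lowrank refers to the sketch given earlier in this section.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 500, 400, 10

# Test matrix with slowly decaying spectrum: singular values ~ 1/sqrt(i).
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.sqrt(np.arange(1, n + 1))
X = (U * s) @ V.T

Us, ss, Vts = np.linalg.svd(X, full_matrices=False)
svd_err = np.linalg.norm(X - (Us[:, :r] * ss[:r]) @ Vts[:r])

for q in (0, 2):
    L = brp_lowrank(X, r, q=q, rng=rng)   # sketch from earlier in this section
    print(f"q = {q}: BRP error {np.linalg.norm(X - L):.4f}, "
          f"rank-{r} SVD error {svd_err:.4f}")
```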
the average error bound of brp based low - rank approximation is obtained by analyzing the statistical properties of the random matrices that appear in the deterministic error bound in theorem [ t : deterministicbound ] .[ t : averageerrorbound ] * ( average error bound ) * frame the hypotheses of theorem [ t : deterministicbound ] , see section [ s : proof ] for the proof of theorem [ t : averageerrorbound ]. the average error bound will approach to the svd approximation error if and .the average error bound for the power scheme modification is then obtained from the result of theorem [ t : averageerrorbound ] .[ t : averageerrorboundpower ] ( * average error bound , power scheme * ) frame the hypotheses of theorem [ t : deterministicbound ] , the power scheme modification ( [ e : mlr_app ] ) approximates with the expected error upper bounded by ^{1/(2q+1)}.\end{aligned}\ ] ] see section [ s : proof ] for the proof of theorem [ t : averageerrorboundpower ] . compared the average error bounds of the brp based low - rank approximation with its power scheme modification , the latter produces less error than the former , and the error can be further decreased by increasing .the deviation bound for the spectral norm of the approximation error can be obtained by analyzing the deviation bound of in the deterministic error bound and by applying the concentration inequality for lipschitz functions of a gaussian matrix .[ t : deviationerrorbound ] ( * deviation bound * ) frame the hypotheses of theorem [ t : deterministicbound ] .assume that . for all ,it holds that except with probability .see section [ s : proof ] for the proof of theorem [ t : deviationerrorbound ] .the following lemma and propositions from will be used in the proof .[ l : conjugate ] suppose that .for every , the matrix .in particular , [ p : range ] suppose .then , for each matrix , it holds that and that .[ p : inversepurtubation ] suppose that .then [ p : blocknorm ] we have for each partitioned positive semidefinite matrix .\ ] ] the proof of theorem [ t : deterministicbound ] is given below .since an orthogonal projector projects a given matrix to the range ( column space ) of a matrix is defined as , the deterministic error ( [ e : newunblock ] ) can be written as by applying proposition [ p : range ] to the error ( [ e : errorproj ] ) , because , we have where ( v_1^ta_1)^\dagger\lambda_1^{-2}= \left [ \begin{array}{c } i \\ h \\\end{array } \right ] .\ ] ] thus can be written as \ ] ] for the top - left block in ( [ e : isubpnblock ] ) , proposition [ p : inversepurtubation ] leads to .for the bottom - right block in ( [ e : isubpnblock ] ) , lemma [ l : conjugate ] leads to .therefore , \ ] ] by applying lemma [ l : conjugate ] , we have \end{aligned}\ ] ] according to proposition [ p : blocknorm ] , the spectral norm of is bounded by by substituting ( [ e : rawbound ] ) into ( [ e : projnm ] ) , we obtain the deterministic error bound .this completes the proof . the following proposition from be used in the proof .[ p : powernorm ] let be an orthogonal projector , and let be a matrix . for each nonnegative , the proof of theorem [ t : deterministicboundpower ] is given below .the power scheme modification ( [ e : mlr_app ] ) applies the brp based low - rank approximation ( [ e : lr_app ] ) to rather than . 
in this case , the approximation error is according to theorem [ t : deterministicbound ] , the error is upper bounded by the deterministic error bound for the power scheme modification is obtained by applying proposition [ p : powernorm ] to ( [ e : unblockpower ] ) .this completes the proof .the following propositions from will be used in the proof .[ p : sgt ] fix matrices , , and draw a standard gaussian matrix . then it holds that [ p : pesudoinvgaussian ] draw an standard gaussian matrix with . then it holds that the proof of theorem [ t : averageerrorbound ] is given below .the distribution of a standard gaussian matrix is rotational invariant . since 1 ) is a standard gaussian matrix and 2 ) is an orthogonal matrix , is a standard gaussian matrix , and its disjoint submatrices and are standard gaussian matrices as well .theorem [ t : deterministicbound ] and the hlder s inequality imply that we condition on and apply proposition [ p : sgt ] to bound the expectation w.r.t . , i.e. , the frobenius norm of can be calculated as \\ \notag&={\rm trace}\left[\left(\left(\lambda_1v_1^ta_1\right)\left(\lambda_1v_1^ta_1\right)^t\right)^{-1}\right].\end{aligned}\ ] ] since 1 ) is a standard gaussian matrix and 2 ) is a diagonal matrix , each column of follows -variate gaussian distribution .thus the random matrix follows the inverted wishart distribution .according to the expectation of inverted wishart distribution , we have \\ \notag & = { \rm trace}~\mathbb e\left[\left(\left(\lambda_1v_1^ta_1\right)\left(\lambda_1v_1^ta_1\right)^t\right)^{-1}\right]\\ & = \frac{1}{p-1}\sum\limits_{i=1}^r\lambda_i^{-2}.\end{aligned}\ ] ] we apply proposition [ p : pesudoinvgaussian ] to the standard gaussian matrix and obtain therefore , ( [ e : exaa ] ) can be further derived as by substituting ( [ e : exaafinal ] ) into ( [ e : exsubl ] ) , we obtain the average error bound this completes the proof .the proof of theorem [ t : averageerrorboundpower ] is given below . by using hlder s inequality and theorem [ t : deterministicboundpower ] ,we have we apply theorem [ t : averageerrorbound ] to and and obtain the bound of , noting that . by substituting ( [ e : tildexsubl ] ) into ( [ e : powerxsubl ] ) , we obtain the average error bound of the power scheme modification shown in theorem [ t : averageerrorboundpower ] .this completes the proof .the following propositions from will be used in the proof .[ p : lipschitzconcentration ] suppose that is a lipschitz function on matrices : draw a standard gaussian matrix . then [ p : gaussiannormdeviation ] let be a standard gaussian matrix where . 
for all , the proof of theorem [ t : deviationerrorbound ] is given below .according to the deterministic error bound in theorem [ t : deterministicbound ] , we study the deviation of .consider the lipschitz function , its lipschitz constant can be estimated by using the triangle inequality : hence the lipschitz constant satisfies .we condition on and then proposition [ p : sgt ] implies that \leq&\left\|\lambda_2 ^ 2\right\|\left\|\left(v_1^ta_1\right)^\dagger\right\|_f\left\|\lambda_1^{-1}\right\|_f+\\ \notag&\left\|\lambda_2 ^ 2\right\|_f\left\|\left(v_1^ta_1\right)^\dagger\right\|\left\|\lambda_1^{-1}\right\|.\end{aligned}\ ] ] we define an event as according to proposition [ p : gaussiannormdeviation ] , the event happens except with probability applying proposition [ p : lipschitzconcentration ] to the function , given the event , we have according to the definition of the event and the probability of , we obtain therefore , since theorem [ t : deterministicbound ] implies , we obtain the deviation bound in theorem [ t : deviationerrorbound ] .this completes the proof .it is not direct to analyze the theoretical guarantee of greb due to its combination of alternating minimization and greedy forward selection .hence , we consider analyzing its convergence behavior by leveraging the results from geco analysis .this is reasonable because they share the same objective function yet different optimization variables . in particular, the risk function in geco is , where .it can be seen that the variable in geco is able to be written as without any loss of generality .therefore , for the same selection of , we can compare the objective value of geco and greb at arbitrary step of their algorithm .this results in the following theorem .assume is a -smooth function according to geco and , and is the objective function of greb . given a rank constraint to and a tolerance parameter .let is the solution of greb .then for all matrices with we have . according to lemma 3 in geco ,let , where is the value of at the beginning of iteration and fulfills , we have at the end of iteration , the objective value of greb equals , while geco optimizes over the support of ( i.e. , optimizes when fixing and ) .we use the same notation to denote the variable in iteration .this yields at the beginning of iteration , both geco and greb computes the direction along which the object declines fastest. however , geco adds both and to the ranges of and , while greb only adds to and then optimizes when fixing . because the range of in greb is optimized rather than previously fixed , we have )\leq\\ & \min\limits_\eta f(\lambda^{(i)}+\eta e^{u , v } ) .\end{array}\ ] ] plug ( [ equ : ith ] ) and ( [ equ : i1th ] ) into ( [ equ : gecoiequ ] ) , we gain a similar result : )\geq \frac{\epsilon_i^2(1-\tau)^2}{2\beta\|a\|_{tr}^2}.\ ] ] following the analysis after lemma 3 in geco , we can immediately obtain the results of the theorem .the theorem states that greb solution is at least close to optimum as geco .note when sparse is alternatively optimized with in greb scheme , such as grebcom , the theorem can still holds .this is because after optimizing in each iteration of grebcom , we have , which enforces the objective function degenerates to that of geco , which is .
Learning from big data by matrix decomposition always suffers from expensive computation, mixing of complicated structures, and noise. In this paper, we study more adaptive models and efficient algorithms that decompose a data matrix as the sum of semantic components with incoherent structures. We first introduce "Go Decomposition (GoDec)", an alternating projection method estimating the low-rank part and the sparse part from a data matrix corrupted by noise. Two acceleration strategies are proposed to obtain scalable unmixing algorithms on big data: 1) bilateral random projection (BRP) is developed to speed up the update of the low-rank part in GoDec by a closed form built from left and right random projections in lower dimensions; 2) the greedy bilateral (GreB) paradigm updates the left and right factors of the low-rank part in a mutually adaptive and greedily incremental manner, and achieves significant improvements in both time and sample complexities. We then propose three nontrivial variants of GoDec that generalize it to more general data types and whose fast algorithms can be derived from the two strategies: 1) for motion segmentation, we further decompose the sparse part (moving objects) as the sum of multiple row-sparse matrices, each of which becomes a low-rank matrix after a specific geometric transformation sequence and defines a motion shared by multiple objects; 2) for multi-label learning, we further decompose the low-rank part into subcomponents with separable subspaces, each of which corresponds to the mapping of a single label into feature space, so that prediction can be effectively conducted by group lasso on the subspace ensemble; 3) for estimating the scoring function of each user in a recommendation system, we further decompose the low-rank part into a product in which the rows of one factor are the linear scoring functions and the rows of the other are the items represented by their available features. Empirical studies show the efficiency, robustness and effectiveness of the proposed methods in real applications.

Keywords: low-rank and sparse matrix decomposition, bilateral random projection, greedy bilateral paradigm, multi-label learning, background modeling, motion segmentation, recommendation systems
cognitive radio ( cr ) technology is a key enabler for the opportunistic spectrum access ( osa ) model , a potentially revolutionary new paradigm for dynamic sharing of licenced spectrum with unlicensed devices . in this operational modea cognitive radio acts as a spectrum scavenger .it performs spectrum sensing over a range of frequency bands , dynamically identifies unused spectrum , and then operates in this spectrum at times and/or locations when / where it is not used by incumbent radio systems .opportunistic spectrum access can take place both on a temporal and a spatial basis .in temporal opportunistic access a cognitive radio monitors the activity of the licensee in a given location and uses the licensed frequency at times that it is idle .an example of this is the operation of cognitive radio in the radar and umts bands. in spatial opportunistic access cognitive devices identify geographical regions where certain licensed bands are unused and access these bands without causing harmful interference to the operation of the incumbent in nearby regions .currently cognitive radio is being intensively researched for opportunistic access to the so - called tv white spaces ( tvws ) : large portions of the vhf / uhf tv bands which become available on a geographical basis after the digital switchover . in the us the fcc ( federal communications commission ) proposed to allow opportunistic access to tv bands already in 2004 .prototype cognitive radios operating in this mode were put forward to fcc by adaptrum , i , microsoft , motorola and philips in 2008 .after extensive tests the fcc adopted in november 2008 a second report and order that establishes rules to allow the operation of cognitive devices in tvws on a secondary basis . furthermore , in what is potentially a radical shift in policy , in its recently released digital dividend review statement the uk regulator , ofcom , is proposing to `` allow licence exempt use of interleaved spectrum for cognitive devices . ''furthermore ofcom states that `` we see significant scope for cognitive equipments using interleaved spectrum to emerge and to benefit from international economics of scale '' .more recently , on february 16 2009 , ofcom published a new consultation providing further details of its proposed cognitive access to tv white spaces . with both the us and uk adapting the osa model , andthe emerging 802.22 standard for cognitive access to tv bands being at the final stage , we can expect that , if successful , this new paradigm will become mainstream among spectrum regulators worldwide .however , while a number of recent papers have examined various aspects of cognitive radio access to tvws in the united states , there is currently very little quantitative information on the _ global _ spectrum opportunities that may result if cr operation in tv bands becomes acceptable in other countries in the world . to bridge this gap ,we present in this paper a quantitative analysis of tv white spaces availability for cognitive radio access in the united kingdom . 
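A geolocation-style white-space query of the kind analysed in this paper can be illustrated with a toy model. The sketch below checks, for a given location, which TV channels have no co-channel or adjacent-channel protected transmitter within fixed protection radii. The channel list, transmitter database and protection distances are entirely hypothetical and do not reflect the actual UK coverage data, methodology or regulatory rules discussed here; only the 8 MHz UK channel width is a real figure.

```python
import math

# Hypothetical DTV transmitter database: (channel, easting_km, northing_km).
TRANSMITTERS = [(23, 10.0, 12.0), (26, 48.0, 3.0), (30, 5.0, 40.0)]
UHF_CHANNELS = range(21, 69)
CHANNEL_WIDTH_MHZ = 8  # UK DTV channels are 8 MHz wide

def free_channels(x_km, y_km, co_km=60.0, adj_km=15.0):
    """Channels with no co-channel transmitter within co_km and no
    adjacent-channel transmitter within adj_km of the query point."""
    available = []
    for ch in UHF_CHANNELS:
        ok = True
        for t_ch, tx, ty in TRANSMITTERS:
            d = math.hypot(x_km - tx, y_km - ty)
            if (t_ch == ch and d < co_km) or (abs(t_ch - ch) == 1 and d < adj_km):
                ok = False
                break
        if ok:
            available.append(ch)
    return available

chans = free_channels(12.0, 11.0)
print(len(chans) * CHANNEL_WIDTH_MHZ, "MHz of hypothetical white space")
```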
using accurate digital tv ( dtv ) coverage maps together with a database of dtv transmitters ,we develop a methodology for identifying tvws frequencies at any location in the uk .we use our methodology to investigate the variations in tvws as a function of the location and transmit power of cognitive radios , and examine how constraints on adjacent channel emissions of cognitive radios may affects the results .our analysis provides a realistic view on the potential spectrum opportunity associated with cognitive radio access to twvs in the uk , and presents the first quantitative study of the availability and frequency composition of twvs outside the united states .the rest of this paper is organised as follows . in sectionii we discuss in detail the operation of cognitive radio devices in the vhf / uhf tv bands .this is followed by a description of our methodology for estimating twvs frequencies .section iii presents results of our study of the availability of tvws in locations in the uk and analyses the implications of our findings .we conclude this paper in section v.broadcast television services operate in licensed channels in the vhf and uhf portions of the radio spectrum .the regulatory rules in most countries prohibit the use of unlicensed devices in tv bands , with the exception of remote control , medical telemetry devices and wireless microphones . in most developed countries regulators are currently in the process of requiring tv stations to convert from analog to digital transmission .this _ digital switchover _ is expected to be completed in the us in 2009 and in the uk in 2012 .a similar switchover process is also underway or being planned ( or is already completed ) in the rest of the eu and many other countries around the world .after digital switchover a portion of tv analogue channels become entirely vacant due to the higher spectrum efficiency of digital tv ( dtv ) .these cleared channels will then be reallocated by regulators to other services , for example through auctions .in addition , after the dtv transition there will be typically a number of tv channels in a given geographic area that are not being used by dtv stations , because such stations would not be able to operate without causing interference to co - channel or adjacent channel stations .these requirements are based on the assumption that stations operate at maximum power .however , a transmitter operating on a vacant tv channel at a much lower power level would not need a great separation from co - channel and adjacent channel tv stations to avoid causing interference .low power unlicensed devices can operate on vacant channels in locations that could not be used by tv stations due to interference concerns .these vacant tv channels are known as tv white spaces , or interleaved spectrum in the parlance of the uk regulator .opportunistic operation of cognitive radios in tv bands , however , is conditioned on the ability of these devices to avoid harmful interference to licensed users of these bands , which in addition to dtv include also wireless microphones . in november 2008, the fcc adopted a report setting out rules allowing licence - exempt cognitive devices to operate in tv white spaces . 
in summarythese rules require cognitive devices to use a combination of spectrum sensing and geolocation .the devices must be able to sense both tv signals and wireless microphones down to dbm , and must also locate their position to within meters accuracy and then consult a database that will inform them about available spectrum in that location .devices without geolocation capabilities are also allowed if they are transmitting to a device that has determined its location .cognitive devices that use sensing alone are allowed in principle .however , the fcc states that such devices will be `` subject to a much more rigorous approval process '' .the fundamental reason why tvws have attracted much interest is an exceptionally attractive combination of bandwidth and coverage .signals in tv bands , travel much further than both the wifi and 3 g signals and penetrate buildings more readily .this in turn means that these bands can be used for a very wide range of potential new services , including last mile wireless broadband in urban environments , broadband wireless access in rural areas , new types of mobile broadband and wireless networks for digital homes .furthermore , in the case of the uhf bands , the wavelength of signals in these bands is sufficiently short such that resonant antennas with sufficiently small footprint can be used which are acceptable for many portable use cases and handheld devices .[ cols="^,^ , < " , ]in this paper we presented a methodology for estimating the uk tv white spaces for opportunistic access by cognitive radios . using our methodology we examined the availability of this spectrum and its channel composition in uk population centres .our analysis shows on average mhz of tvws is available for access by low - power cognitive radios .we found , however , that in many locations this considerable bandwidth is fragmented into many non - adjacent channels .consequently , we conclude that the availability of novel pooling techniques , such as nc - ofdm is crucial for effective utilisation of tvws , in particular for future high bandwidth applications . finally , we examined the effect of constraints on adjacent channel interference imposed by regulators / standards on tvws , and showed that such constraints drastically reduce the availability of this spectrum .most future use scenarios of tvws will involve multiple cognitive devices operating within the same geographical region .some of our future work will focus on new methodologies for estimation and control of _ aggregated _ cognitive radio interference in such scenarios .we are also working on improved tvws estimation methods for high power use cases .the author is grateful to brian butterworh ( uk free.tv ) for providing access to his dtv coverage data , and to keith briggs ( bt ) , yang fang and anjum pervez ( south bank university ) for their valuable contributions .m. a. stuzra and f. ghazvinian , can cognitive radio technology operating in the tv white spaces completely protect licensed tv broadcasting ? ,working paper no 16 , wireless future program , new america foundation .
Cognitive radio is being intensively researched for opportunistic access to the so-called TV white spaces (TVWS): large portions of the VHF/UHF TV bands which become available on a geographical basis after the digital switchover. Using accurate digital TV (DTV) coverage maps together with a database of DTV transmitters, we develop a methodology for identifying TVWS frequencies at any given location in the United Kingdom. We use our methodology to investigate variations in TVWS as a function of the location and transmit power of cognitive radios, and examine how constraints on adjacent channel interference imposed by regulators may affect the results. Our analysis provides a realistic view of the spectrum opportunity associated with cognitive devices, and presents the first quantitative study of the availability and frequency composition of TVWS outside the United States.
in this paper , we present a two - dimensional model of a porous medium that consists of a squared container partially filled with a liquid and where the pore throats are formed by fixed circular grains that are uniformly distributed .we analyse in probabilistic terms the structure of the stationary interfaces that separate the liquid from air .we consider several parameter regimes where , essentially , either gravity or capillarity dominates . in this model the form of the interfaces , both at the pore scale and at the macro scale ,is determined by the balance between gravity and capillary forces . for a given volume of liquid ,these interfaces are the union of single interfaces ( see ) that meet the solid matrix at given contact angles following young s law ( see e.g. ) .the quasi - stationary system seems to evolve by means of a sequence of local minimizers of the energy .the energy for steady states is the sum of surface energy and potential gravitational energy .the contact angle conditions can be encoded in the different interfacial energies .one of the most relevant features in these flows is the existence of a large number of equilibria that are local minimizers of the total energy of the system . in quasi - stationary situations , if the volume of the liquid phase changes slowly , transitions occur between different minimizers .understanding the evolution of such a system would require the description of the global motion of the interface that explains the jumps between these multiple local energy minimizers .a first step towards this goal consists in understanding the structure of the minimizers .the aim of this paper is thus to describe the structure of equilibrium interfaces for different parameter regimes in the model described above .there are several characteristic lengths in the problem , namely , the distance between grains , their size , as well as the macroscopic size of the system . on the other hand , there are other length scales ( the capillary length and the inverse of the bond number ) that , as we shall see , give the distances where surface tension and gravity balance ( typically , surface tension dominates for small distances and gravity for large ones ) .this study is motivated by the experiments presented in furuberg - maloy - feder and .the authors reproduced haines jumps ( see ) in a slow drainage process in a container consisting of two plates separated by thin cylinders that act as obstacles or ` grains ' .we recall that haines jumps are abrupt changes of the pressure in the liquid phase due to sudden changes in the geometry of the interface . from the mathematical point of viewthese jumps seem to correspond to a fast transition from one equilibrium state to a different one in the capillarity equations .we mention that there are several mathematical studies related to the study of multiphase flow in porous media .however , much of the mathematical analysis has focused on the qualitative study of semi - empirical macroscopic descriptions of these flows ( see e.g. ) .there are also rigorous results concerning the derivation of such macroscopic laws from the flows taking place at the pore scale .not surprisingly , such attends are restricted to single - phase flow ( see the chapter of , ) or simplified models mimicking essential features of multi - phase flow ( e.g. 
, , , ) .although our model falls into the second category , we believe that one can analyse simple , but not phenomenological , models in probabilistic terms to get a better understanding of the effect that processes at the pore scale have in the whole system . in this regard , our work is also related to for example ( and the references therein ) , where the propagation of fronts through a random field of obstacles is analysed in relation to hysteresis . in the next section we set up the mathematical model and discern the parameter regimes under consideration .we outline the contents of the paper at the end of this introduction .we consider a two dimensional rectangular container of length that contains uniformly distributed circular grains . if the average distance between grains is ( understood as the average of the distances of every center to its nearest neighbour ) , then . we then let represent the density of grains , i.e. satisfies . we assume , for simplicity , that all grains are of equal size and we denote by their radius. the container is partially filled with a liquid so that an interface forms separating the liquid from the air .we assume that such an interface splits the domain in two regions , i.e. it does not intersect itself and there are no isolated bubbles of either of the fluid phases .let us denote by the curve that joins grains and forms the interface , that is the union of capillary(-pressure ) curves joining every two grains that meet the interface and that we call _ elemental components _ of .a sketch of this setting is shown in figure [ basic : setting ] .let us fix the canonical basis such that and , then points in the opposite direction to the gravitational field .we shall denote the coordinates of a point by in this basis .every elemental component of , , satisfies the capillary equation ( e.g. ) that must be complemented with young s condition at the grains and/or the walls of the container .we shall assume that the contact angle is the same at all solid surfaces .here denotes the signed curvature of each that forms , is the ( constant ) density of the liquid , is the gravity constant and denotes surface tension . here is the pressure associated to the volume fraction of the domain ( including grains ) , , occupied by the liquid . then we take as unit of length the average distance between the grains of the system , .up - scaling the length variables with , we obtain that satisfies the non - dimensional equation subject to a contact angle boundary condition that we specify below .we obtain the non - dimensional number , which is the bond number and measures the balance between gravitational forces and surface tension in this units of length . for simplicity, we use the same notation for the non - dimensional variables .also we let and be the non - dimensional size of the domain ( i.e. ) and grain radius ( i.e. ) , respectively .this , in particular , means that , with this notation , in ( [ s1e1 ] ) .we observe that depends on the configuration of grains , so if we let ] with , where grains are uniformly distributed with an average distance between them .we indicate the position of a center by the variable and write its boundary in the form .we take as the unit normal vector to pointing in the direction of air , and the tangent vector to such that has the same orientation as the canonical basis ( i.e. that ) .let us denote by the outer normal vector to at a point where and intersect ( i.e. is a contact point ) . 
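the displayed formulas in this part of the article did not survive extraction ; only fragments such as the right - hand side of ( [ a1e2 ] ) remain . as a reading aid , the following is a hedged reconstruction of the standard forms that the capillary equation ( [ s1e1 ] ) , its first integral ( [ a1e1 ] ) and the branch equations ( [ dx1:dx2 ] ) derived in the next paragraphs presumably take . the symbols are the ones named in the surrounding prose ( signed curvature , surface tension , liquid density , gravity , pressure , bond number , tangent angle ) , and the sign conventions may differ from the authors' original ones .

% dimensional balance along each elemental component , and its rescaling by the
% average grain distance d ( the constant pressure is absorbed into the choice
% of the reference height from which \bar{x}_2 is measured ) :
\sigma \kappa = \rho g x_{2} - p ,
\qquad
\kappa = b \, \bar{x}_{2} ,
\qquad
b = \frac{\rho g d^{2}}{\sigma} .

% first integral ( [ a1e1 ] ) , with \theta the angle between the tangent and e_1
% and c a constant of integration :
\cos\theta = c - b \, \frac{\bar{x}_{2}^{2}}{2} .

% hence ( [ a1e2 ] ) , whose left - hand bracket did not survive extraction :
\Big[ 1 - \big( c - b \tfrac{\bar{x}_{2}^{2}}{2} \big)^{2} \Big] \left( dx_{1} \right)^{2}
= \big( c - b \tfrac{\bar{x}_{2}^{2}}{2} \big)^{2} \left( d\bar{x}_{2} \right)^{2} ,

% and the two branch equations ( [ dx1:dx2 ] ) obtained by taking square roots :
\frac{dx_{1}}{d\bar{x}_{2}}
= \pm \frac{ c - b \, \bar{x}_{2}^{2}/2 }{ \sqrt{ 1 - \big( c - b \, \bar{x}_{2}^{2}/2 \big)^{2} } } .

this reconstruction is consistent with the surviving fragment of ( [ a1e2 ] ) and with lemma [ hori : vert : points ] below , which locates horizontal and vertical tangents at the heights where the factor c - b \bar{x}_{2}^{2}/2 vanishes or has modulus one , but it should be read as a plausible standard form rather than as the authors' exact notation .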
let us denote by the value of at .then , the young condition is defined as follows : if , where ] to ] from to the tangent vector where is the arc - length . then multiplying ( [ s1e1 ] ) by and integrating once we get where is a constant of integration that can have either sign or be zero , but clearly .then solving ( [ s1e1 ] ) , with given initial conditions , amounts to solving subject to the same initial conditions , together with ( [ a1e1 ] ) and a compatible constant .the are the obvious symmetries of equation ( [ s1e1 ] ) , ( for any ) and , that will be used to simplify the computations below . using ( [ t : beta : rela ] ) and the definition of arc - length , one can write ( [ a1e1 ] ) as \left ( dx_{1}\right ) ^{2}=\left ( c - b\frac{\bar{x}_{2}^{2}}{2}\right ) ^{2}\left ( d\bar{x}_{2}\right ) ^{2 } \label{a1e2}\ ] ] the fact that we are taking squares to go from ( [ s1e1 ] ) to ( [ a1e2 ] ) means that the set of solutions of the latter is larger than that of the former . in the construction onehas to impose that the resulting curves are indeed solutions of ( [ s1e1 ] ) , as we shall see .we also notice that all solutions must have .below , we find the solutions of ( [ s1e1 ] ) by gluing together pieces of curve solutions that are graphs of a function of the form and that solve ( [ a1e2 ] ) .we then integrate ( [ a1e2 ] ) after taking the positive or negative square root of ; we use the equations: observe that , for such graphs , is characterised by there are solutions , however , that can not be constructed the way explained above .these are the solutions with , that is the curves ( air is on top of the liquid ) and ( air is below the liquid ) .the former corresponds to a solution of ( [ a1e1 ] ) with , and the latter to a solution of ( [ a1e1 ] ) with .next , we identify the points of the solution curves where either or , i.e. the values of where either the left or the right - hand side of ( [ a1e2 ] ) vanishes . for simplicity, we use the following notation : and [ hori : vert : points ] the points at which either or are as follows : 1 . if , when then , and when then either or , except if , that only occurs at .2 . if , then there is no value of such that , and as before , occurs at .the next lemma describes the way solutions of ( [ s1e1 ] ) can be obtained by gluing solutions of ( the branch equations ) ( [ dx1:dx2 ] ) at points with either vertical or horizontal tangent vector .[ glue : rule ] for , given the initial condition with and for ( [ s1e1 ] ) , this specifies the branch equation of ( [ dx1:dx2 ] ) that the corresponding solution ( [ s1e1 ] ) satisfies .if this solution reaches a point with then it can only be continued further by solving the same branch of ( [ dx1:dx2 ] ) . on the other hand ,if the curve reaches a point with then it can only be continued further by solving the branch equation with opposite sign .a solution stops being a graph of a function of the form when , seen as a curve , .it is easy to see that , this never occurs at a point with . at those pointswe can further continue the solution curve by gluing smoothly with another solution curve that solves ( [ dx1:dx2 ] ) with the opposite sign ( notice that these curves , as graphs , have the opposite orientation , and hence the change of sign in ( [ dx1:dx2 ] ) implies that they have the same curvature at that point ) .on the other hand , at points with , the solution is still a graph , and in principle , one could glue it to a solution that solves ( [ dx1:dx2 ] ) with the opposite sign . 
however , if this happens at a point with , then would change sign , which contradicts ( [ s1e1 ] ) .if , this happens at a point with , then keeps its sign , but this also contradicts ( [ s1e1 ] ) .we now construct pieces of curve solutions from which one can recover complete solutions of ( [ s1e1 ] ) ( i.e. defined for all ) by the symmetries , for any , and .the next lemma gives these pieces for every by distinguishing the cases , and .[ funda : piece ] given the initial condition , with and , and for ( [ s1e1 ] ) , the elemental component of the solution curve is obtained by integrating ( [ dx1:dx2 ] ) , with the minus sign , up to a or up to .namely , 1 . for ,the curve starts with and reaches for some value of , for which and .the curve continues with to a point where for some other value of , for which and .2 . for ,the curve starts with and reaches , for which and .the curve continues with and has an asymptote at , that is with and as .3 . for ,the curve starts with and reaches at some value of , for which and .the curve continues with to a point where at some other value of , for which and .4 . for ,the curve starts with and reaches at some value of , for which and , if , and and , otherwise .we only prove _ ( i ) _ , thus , let us assume that .we first observe that due to the fact that the maximum value that can take is the initial one , the direction of the initial tangent vector implies that away from , but close to , the initial point .then the curve solution can be seen as a graph of a function of the form that solves ( [ dx1:dx2 ] ) with the minus sign .then the solution is convex as long as ( see ( [ s1e1 ] ) , that implies ) .in particular , this curve reaches a point with ( i.e. with for some value of ) . in order to prove that the curve also reaches a point with ( i.e. with for some value of ), we construct a curve ending up at such a point , and by a similar argument this curve also passes through a point with .then , adjusting the initial value of so that both curves intersect at such a point ( which they do tangentially ) , and using the uniqueness provided by lemma [ glue : rule ] imply that they are the same solution curve .all the other cases are straight forward by convexity of the solution curve .we show in figures [ cbigger1 ] ( for ) , [ cless1pos ] ( for ) , [ ce1 ] ( for ) , [ ce0 ] ( for ) and [ cng ] ( for ) the pieces described in lemma [ funda : piece ] .the next lemma describes how complete curves are obtained from these pieces .+ [ complete : curves ] for each , the pieces of solution curves obtained in lemma [ funda : piece ] give rise to complete curves , as follows : 1 .there are three additional pieces of curve solutions obtained by the symmetries and .2 . 
in all cases above the curves obtained can be glued together smoothly , translating as necessary in , at the points where , and at points with ( choosing pieces with the same tangent vector at the point ) .[ upside : down ] 1 .the complete curves that we obtain are open and periodic except for a particular value of for which the complete curves are closed : by continuity , there exists a value of , say ( actually , see ) , such that the corresponding elemental component satisfies that .in particular , if , the curve obtained in lemma [ complete : curves ] has , globally , increasing as , whereas the corresponding curve if has , globally , decreasing as .+ physically , one can interpret the solution curves with as describing a situation in which air lies below the liquid , and _vice versa _ for the ones with .it is also worth mentioning that the complete curve obtained for ] .if , takes all values in ] for some ] .this means that there are some restrictions on the contact surface for grains that lie near the horizontal if air lies below the interface .we give the solutions of ( [ a1e1 ] ) with explicitly in the next lemma , see also figure [ ce1 ] [ lemc1 ] the solutions of ( [ a1e1 ] ) with are explicitly given by the horizontal line oriented with , and by two one - parameter families of solution branches: where the parameter is a real number and is given by moreover , satisfies the following properties : and .observe also that at and that , and .regarding convexity ; with . [ c1:complete ] 1 .we notice that the functions and can be extended to , with and , but the derivative at those points is not finite .we also observe that gluing together the branch and with , and the same value of we obtain a complete planar curve .similarly , and with ] .its minimum there is achieved at , computing its value gives the result .we now give the estimate on the horizontal length from points at height to points at height .[ c : estimates:2 ] for any the following holds where and decrease for , , as and tends to a positive constant as .we now adapt these estimates to actual curves that solve ( [ s1e1 ] ) subject to a young condition on the grains connected . beforethat we distinguish two types of solution curves .[ def : lr : rl ] let and be the first and second , following orientation , contact points of a solution curve .we say that goes from left to write if and write .similarly , we say that goes from right to left if , and we write .we also denote by the angle that forms with the horizontal at the contact point . with thiswe define the parameter moreover , we shall indicate with a super - index that lies above ( since it then has positive curvature ) and with a super - index that lies below ( negative curvature ) .when crosses ( its curvature changes sign across ) we indicate it with the super - index . [single : distance ] let and be centers of two grains connected by a solution of ( [ s1e1 ] ) with a young condition at contact points and ( following orientation ) and that solves ( [ a1e1 ] ) with , then : 1 .if , 2 .if , we write ( euclidean norm ) , then clearly and we can write : and using ( [ a1e1 ] ) then which gives the right - hand side of ( [ centers : above ] ) and ( [ centers : above2 ] ) . 
on the other hand if then and if then and the rest of the proof follows from lemmas [ c : estimates ] and [ c : estimates:2 ] and the fact that as .the following lemmas are proved in a similar way than lemma [ single : distance ] .[ single : distance2 ] let and be centers of two grains connected by a solution of ( [ s1e1 ] ) with a young condition at contact points and ( following orientation ) and that solves ( [ a1e1 ] ) : 1 . if , then 2 .if , then we recall that for the complete curve does not intersect , so the estimates in lemma [ complete : curves ] have to be summed up depending on the number of pieces of the curve ( see remark [ upside : down]_(ii ) _ ) : [ single : distance3 ] let and be centers of two grains connected by a solution of ( [ s1e1 ] ) with a young condition and that solves ( [ a1e1 ] ) with , then it is necessarily of the form and satisfies where is the number of elemental components of .in this section we shall give the main results and prove some of them . before we do this, we complete the problem setting by describing its probabilistic features in detail .we observe first that the probability of having at least centers in ^{2} ] that are distributed homogeneously and independently . in particular , there are centers in ^{2} ] .we assume that the grains have the same size , where their radius .notice that we allow situations of overlapping grains .then , we assign the following probability measure to : the probability space is , where is the -algebra of borel sets of . for simplicity of notation , we introduce the parameter that relates the three ( non - dimensionalised ) lengths . as mentioned in the introduction, is relevant in the cases where . in the other cases we use to simplify notation; it will indicate that the solutions depend on , and . for every we denote by the interface solution of ( [ s1e1 ] ) and the young condition .we recall that such an interface connects the domain to the domain and that is the union of non - intersecting elemental components that are solutions of ( [ s1e1 ] ) and the young condition between a pair of grains .in particular , each component of solves ( [ a1e1 ] ) with a particular value of , and we denote by the maximum for each such .we index the elemental components of any following its orientation and let be this set of indexes if the interface connects grains .we shall write that a center with if it is connected ( by the young conditions ) to the elemental components and of .we also denote by .we shall need a number of results .the first one is a consequence of the lemmas [ single : distance ] , [ single : distance2 ] and [ single : distance3 ] , and will be used in sections [ regime:1 ] and [ regime:2 ] : [ corol1 ] for every , and , there exists a constant , such that if a satisfies that then there exists a constant such that every pair of centers of grains joined by , and , and contained in , satisfy .in fact , . in many of the probability results we will use stirling estimates ( see e.g. ) .namely , thus we can write for any , with : we have the following lemma : [ comb : stirling ] given , , , with then with and for some . moreover , at , reaches its maximum , attaining its minimum at either or ( depending on the value of and ) for large enough. the estimates follow from ( [ bino : stirling ] ) .first , let with where we use the estimates on ( [ fact : stirling ] ) .the estimate on the other factor of ( [ bino : stirling ] ) follows by a calculus exercise : it is easy to check that has a maximum at if with . 
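the stirling estimates referred to as ( [ fact : stirling ] ) and ( [ bino : stirling ] ) were also stripped from the text . the standard two - sided bounds presumably meant are recalled here , as a hedged reconstruction stated only for completeness :

\sqrt{2\pi n} \, \Big( \frac{n}{e} \Big)^{n}
\;\le\; n! \;\le\;
e \, \sqrt{n} \, \Big( \frac{n}{e} \Big)^{n} ,
\qquad n \ge 1 ,

% so that , for 0 < k < n , up to absolute multiplicative constants ,
\binom{n}{k}
\;\asymp\;
\sqrt{ \frac{n}{2\pi \, k \, ( n-k ) } } \;
\frac{ n^{n} }{ k^{k} \, ( n-k )^{\, n-k } } ,

which is the kind of estimate exploited in lemma [ comb : stirling ] and later in the proofs of theorems [ theo : case2 ] and [ theo : case3 ] .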
also since and there must be at least two relative minima , one in and one in .it is easy then to check that in and in , where and are the inflexion points .this means that there are two minima one is attained in and the other in . since for all and with , the statement follows .the subcases of this regime can be treated similarly .namely , the following theorem holds .[ theo : case1 ] for all there exists such that for all there exists with and such that for any , the solutions satisfy with where the positive constant is as in corollary [ corol1 ] . for simplicity of notationwe introduce the parameter that satisfies as .we first compute the probability of having a configuration without centers in the strip .we call the set of such configurations , then , clearly, on the other hand if there exist and a such that all its points satisfy then , by corollary [ corol1 ] , every two consecutive grains , and , joined by , satisfy where ( observe that as ) .the number of grains contained in is at least .this implies that the probability of such an event tends to as .we first observe that if we denote by the set of configurations for which the horizontal solutions exist ( no grains , therefore , are connected ) , then where is the set of configurations with at least one grain in the strip around the horizontal reference height of width , thus and . in the current limitthis means that as . if we know let be the set of configurations that have at least a solution with exactly one grain connected to the walls , then where is the set of configurations that have at least one center at a distance smaller than or equal than to the horizontal reference line ( recall that the supremum of the distance to the horizontal that can be reached is precisely for the solution with different from the horizontal line ) .then as and thus also as .in the next theorem we obtain that in general as for all and that an upper bound for such behaviour is given by a poisson distribution .first let us make precise the definition of the sets of configurations : let be the set of configurations for which there is at least one solution such that .[ theo : case2 ] if , there exists and sufficiently large such that moreover , for all , there exists a such that for all there exists such that , and that for all the solutions satisfy .first , we observe that the probability of having at least connections is the sum of the probabilities of having exactly connections with .then we distinguish between the configurations that have connections outside the strip around the horizontal reference line of width and those that do not .we first show that the probability of having connections outside the given strip tends to . on the other hand ,the probability of having an interface connecting exactly grains that lies inside a strip of width is less than that of having grains in the strip of that same width. 
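the counting arguments here revolve around how many of the uniformly placed centers fall inside a thin horizontal strip . the short numerical sketch below makes that distribution explicit ; it assumes , purely as an illustration , that there are n centers i.i.d. uniform in the unit square and that the strip has area fraction w , so the count is binomial with parameters ( n , w ) and is approximated by a poisson law of mean n w when w is small , which is the type of upper bound invoked in theorem [ theo : case2 ] . the variable names and sample values are placeholders , not the paper's scalings .

from math import comb, exp, factorial

def prob_exactly_k_in_strip(n_centers, w, k):
    # binomial probability that exactly k of the uniformly placed centers
    # land in a strip of area fraction w
    return comb(n_centers, k) * w**k * (1.0 - w) ** (n_centers - k)

def poisson_mass(n_centers, w, k):
    # poisson(n*w) mass at k , the approximation used when w -> 0 with n*w bounded
    lam = n_centers * w
    return exp(-lam) * lam**k / factorial(k)

if __name__ == "__main__":
    n, w = 10_000, 2e-4   # illustrative values only
    for k in range(6):
        print(k, prob_exactly_k_in_strip(n, w, k), poisson_mass(n, w, k))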
then we compute this probability .we recall the notation that we use throughout this proof .first , we treat the case .let be the set of configurations that have a solution , with , that intersects grains with .in that case , there are at least three centers , , with and , by corollary [ corol1 ] , for some of order .the probability of such events is: let be the complementary event of .then , as , and this together with remark [ c1:complete ] ( ii ) shows the second statement of the theorem .we let the set of configurations that have exactly grains in a strip of width .then , the probability measure of is where we have used that , in the current case , . in a strip of width andthe maximum number of centers expected is proportional to . to see this, we let be the set of configurations for which the number of particles in the strip of width is larger or equal to .then using lemma [ comb : stirling ] we can estimate each term ( except the last one which is ) as follows : where is as in lemma [ comb : stirling ] , and then , as , in particular , as if and as if . summing up all the termsimplies that as at an exponential rate .finally , we write where denotes the set of configurations for which the number of particles in the strip is less than . then , as we have seem the first two terms tend to as , and the third tends to as .the case is analogous but working on a strip of width . in this regimewe shall prove that the probability of having configurations with solutions that separate from the horizontal reference line a distance of order one tends to zero in a regime where is not too large compare to some increasing function of .we prove the result in the case , the other cases can be shown similarly , as we remark at the end of this section .let us give two auxiliary lemmas , that will be needed in the proof of the main result of this section .the first one is a technical lemma : [ sqrt : growth ] let and a sequence such that and .then , there exist positive constants and independent of , such that the proof follows by induction , multiplying the inequalities by and and summing up the results , the constants and satisfy and . the next lemma is a geometric result , which we prove in appendix [ sec : proofgeolemma ] .[ geometric : lemma ] if is a solution with components joining ( ) grains , , ordered following the orientation of , then for some positive constants and independent of , and .next , we define the configurations that may allow solution interfaces that separate a distance of order one from the reference horizontal line. the centers of such configurations that might be joined by a solution interface must satisfy certain conditions , according to corollary [ corol1 ] .we gather these in the following definition .[ bcs ] we let the union of the horizontal reference line to the left boundary and to right boundary be denoted by [ centers : gamma ] for given constants and of order one , and a given in the current regime , we let be the set of configurations of particles such that for every there exists and an injective map so that the centers satisfy the following conditions : 1 .2 . 3 .if then 4 .. 
moreover , they satisfy ( [ geometric ] ) .we observe that condition _( ii ) _ is suggested by the statement of corollary [ corol1 ] and the values of , in ( [ x2:mim ] ) and ( [ x2:med : min ] ) .condition _ ( iii ) _ corresponds is also suggested by corollary [ corol1 ] .finally , condition _( iv ) _ implies that the sequence starts either near the horizontal or near and ends either near the horizontal or near .we can now prove the following theorem : [ theo : case3 ] let denote the set of all configurations such that there exists a satisfying for some of order one .then , if as , as .definition [ centers : gamma ] implies that .let us then get an estimate on .we denote by and for every and every we let be the set configurations for which there exists a family of centers satisfying the conditions of definition [ centers : gamma ] .observe that these sets of configurations are not , in general , disjoint .then and we estimate its probability as follows where we have reordered the first indexes by setting with for every , and in the last inequality we have simply counted the number of elements in each and also estimated the measure of the elements with by the total measure .now , we consider two types of configurations in each , the ones for which all points satisfy that and the ones for which there exists at least one . in the first case , we have , due to the properties in definition [ centers : gamma ] , that where we abuse notation by referring to the vertical lines and by and , respectively , and denoting the distance of points to these lines with the norm symbol .observe that then .let us denote by and then with where and will be determined bellow .we first estimate . for large enough so that , we have that and this tends to zero provided that in the current limit , and , then for large enough it is clear that and so ( [ theta : cond1 ] ) holds and as . in order to estimate , for a define a subset of the points in as follows .first observe that , by the definition of , there is a , such that , on the other hand there exist a that .then we can define the first point of the subset by taking and ( the last point of the subset ) . for each , we choose the points as follows , and this definition , the subset satisfies: and also there exist a such that .we can apply lemma [ sqrt : growth ] with to obtain an estimate on , namely , so we then estimate the second term in ( [ ies ] ) as follows , here we have indexed the elements of by with . also , in the last inequality , we have used that the elements of satisfy and lemma [ sqrt : growth ] .we recall that .then , using the inequality of arithmetic and geometric means and lemma [ geometric : lemma ] , we obtain we rewrite then , for all and large enough we recall that and using ( [ fact : stirling ] ) we can conclude that tends to zero as provided that largest exponential growing factor is controlled by the fastest exponentially decreasing one , that is if as , and this implies the result. theorem [ theo : case3 ] can be formulated in the case by replacing by when .the proof is analogous , except for the proof of lemma [ geometric : lemma ] . 
in this case ,the conclusion is simpler to achieve , by realising that the longest part of an interface joining the least number of grains is one crossing , all other parts are away from this one a distance of order , applying corollary [ corol1 ] the lemma follows .we only consider here the case , and to simplify we assume that for some of order one .the reason for this is that we expect the maximum difference in height in a full solution to be of order . indeed ,if reaches distances from much larger than , the elemental components of on that region would have a very small radius of curvature , and could not join grains at an average distance of order one . in this sectionwe further restrict the height of the possible s to the interval ] there are configurations with probability close to one , such that the interface solution is close to such curve in some suitable sense . before we make precise this theorem ,let us give the main idea of the proof and establish some settings and definitions .we make a partition of the domain into small squares . for any given smooth curve without self - intersections and compatible with the prescribed volume condition, we show that the probability that a solution interface satisfies for every that has tends to one as .this means that all curves compatible with the fixed volume condition can be approximated by an interface .the proof of that the probability of if tends to zero , will be formulated as a percolation problem , see below for the details .we can now state the main theorem of this section , but first we define the notion of compatible curves . [comp : curve ] let and with we say that a curve without self - intersections \rightarrow [ 0,1]^2 ] , ] and the area under is equal to .we remark that we compare real interface solutions to compatible curves in the rescaled domain \times[v-\varepsilon_0,v+\varepsilon_0] ] be a curve compatible with .then , there exist and such that for all and there exists , such that as , and and a with , as such that and that for any , there exists a solution interface with contact angle ] into small squares of equal size . in each of such squares that intersect we formulate a percolation - like problem .[ q : def ] let \times[(v - \varepsilon_{0})l , ( v + \varepsilon_{0})l]\,.\ ] ] let , depending on , such that and such that , .we define to be a partition of into squares of size such that if , then if , unless and have one edge or one vertex in common .we let for simplicity .clearly , .we also observe that this definition requires .we now consider a subdivision of each in smaller sub - squares of size of order one .we assume that the size is exactly one , for that we either need to assume that or to rescale the domain .we henceforth adopt the first choice without loss of generality .[ sr : def ] assume that let and as in definition [ q : def ] , then we let where is a partition of into closed squares of size one that cover and such that their interior sets are disjoint .moreover , for a and for each , we let be a closed squared with centered in . observe that increases as , but , and this value equals the average distance between centers ( and is of the same order of the minimal radius of curvature in this domain ) .then we can guarantee that if there are grains in neighbouring s , then there is a connection by a solution of ( [ s1e1 ] ) . 
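the reduction just described , namely that grains lying in neighbouring unit sub - squares can always be joined by a solution of ( [ s1e1 ] ) , turns the existence question into a crossing problem for occupied sites . the following monte carlo sketch illustrates that crossing problem under simplifying assumptions : sub - squares have unit size , a site is " open " when it contains at least one center , and adjacency means sharing an edge . the constants and the exact partitions of definitions [ q : def ] and [ sr : def ] are not reproduced ; all names and sample values are illustrative .

import random
from collections import deque

def open_sites(n_grains, side, rng):
    # mark the unit cells of a side x side region that contain >= 1 center
    sites = [[False] * side for _ in range(side)]
    for _ in range(n_grains):
        x, y = rng.uniform(0, side), rng.uniform(0, side)
        sites[min(int(y), side - 1)][min(int(x), side - 1)] = True
    return sites

def crosses_left_to_right(sites):
    # breadth - first search over edge - adjacent open cells , starting from the
    # left column , asking whether the right column is reached
    side = len(sites)
    seen = [[False] * side for _ in range(side)]
    queue = deque((r, 0) for r in range(side) if sites[r][0])
    for r, c in queue:
        seen[r][c] = True
    while queue:
        r, c = queue.popleft()
        if c == side - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < side and 0 <= cc < side and sites[rr][cc] and not seen[rr][cc]:
                seen[rr][cc] = True
                queue.append((rr, cc))
    return False

if __name__ == "__main__":
    rng = random.Random(0)
    side, density = 40, 1.2   # centers per unit cell , illustrative
    trials = 50
    hits = sum(crosses_left_to_right(open_sites(int(density * side**2), side, rng))
               for _ in range(trials))
    print("estimated crossing probability:", hits / trials)

in the actual argument the relevant event is that open sites connect the prescribed portions of the target curve that meet the boundary , rather than the two lateral sides , and , as noted below , the site occupations are not mutually independent for a finite number of grains ; the combinatorial structure , however , is the same .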
in this way the proof reduces to estimating the probability of finding grains in the s that can form a connection between the given portions of that intersect .this problem is reminiscent of a site percolation problem in two dimensions ( see e.g. , ) , although the probabilities assigned to the sites , as we explain below , are weakly correlated .in fact , we shall restrict the connections to grains that fall in adjacent s .this is because we have to guarantee that elemental components do not intersect each other , this is possible by reducing , as we shall see . but first , we have the following result that guarantees the existence of s joining grains in adjacent sites .this result is independent of .[ q : construct ] 1 .let and such that with one edge in common .then if there exist and then , there exists a such that for all there exists a solution of ( [ s1e1 ] ) that joins and .there exists and with as , such that if , and have a vertex in common the arc that joins a grain with center in to a grain with center in do not intersect the arc thar joins the grain to a grain with center ._ proof of ( i ) : _ observe that the centers and satisfy that . on the other hand , ,in particular the radius of curvature of a solution joining these grains , say , has .if there is a solution that joins and , then the variation of the curvature of , ( where is the difference of the maximum curvature and the minimum curvature of ) .this means that such a is very close to a circle with large radius of curvature .on the other hand , there are straight lines that join and with any prescribed contact angle .indeed , there are two parallel lines that are tangential to and .one has contact angle and the other has contact angle .then for any given contact angle , there is a segment parallel to these ones that joins and with that contact angle .this is also true for circumferences of large enough radius , being the smallest such circumference the one that has the two grains inscribed at the largest possible distance ; i.e. one with diameter .we recall that where ] .we also denote by the union of sites that intersect the boundary of .finally , the sites of ^{2} ] .the following definition sets when open and closed sites are connected , see e.g. for similar definitions in two - dimensional site percolation .[ conn ] let , we say that two sites , are 1 .* adjacent * , and we write , if and .* connected * if there exists a finite set of distinct sites , , ... , , such that every two such consecutive sites are adjacent , and and . a set with this propertyis called a * chain*. 3 .* -adjacent * , and we write , if and .* -connected * if there exists a finite set of distinct sites , , ... , , such that every two such consecutive sites are -adjacent , and and . a set with this propertyis called a * - chain*. the next definition gives the concept of regions being _ connected _ and -_connected_. let and for and be topologically connected regions of . then , for a given , we say that 1 . is * connected * to if there exists a chain that intersects them . 
is * -connected * to if there exists a -chain that intersects them .we give a probability space to the configurations of sites induced by : we endow where with the structure of a probability space .we define the probability as follows , for any we set : for example , let then and let , then we observe , that in contrast to the classical site percolation scenario , the probabilities of two different sites of being open are not mutually independent , because for finite the number of grains is finite , and that influences the presence of finding more or less grains in different sites . on the other hand , since for all then ] .let be the set of site configurations such that and are -connected , then for sufficiently large then , taking large enough we obtain that , thus from ( [ est : mub ] ) we get we write for certain value $ ] , then this and the definition of ( definition [ sr : def ] ) concludes the proof .we can now show theorem [ percolation : theo ] .lemma [ q : construct ] implies that we can determine the probability of a intersecting through portions of that intersect by computing the probability of finding grain centers in the sub - boxes .this lemma also give the restriction in the contact angle .we let and and be as in theorem [ percol ] and assume that and .let such that if then satisfies that .for every , let also , such that for and with length in as in theorem [ percol ] .then , by theorem [ percol ] , i.e. for all compatible with , . on the other hand that , thus taking with we obtain the result with [ [ acknowledgements ] ] acknowledgements + + + + + + + + + + + + + + + + the authors acknowledge support of the hausdorff center for mathematics of the university of bonn as well as the project crc 1060 the mathematics of emergent effects , that is funded through the german science foundation ( dfg ) .this work was also partially supported by the spanish projects dges mtm2011 - 24 - 10 and mtm2011 - 22612 , the icmat severo ochoa project sev-2011 - 0087 and the basque government project it641 - 13 .we divide the proof in several steps .first we recall that we are assuming a rough estimate on the total length of a solution that joins centers is achieved by thinking of an interface having the longest possible length .we show that this worst - case scenario is given by an interface having as many as possible folds elongating nearly horizontally from one vertical boundary to the other .it is clear that the longest elemental components of such interface will occur near and that their union connects very few grains , since for such pieces is close to ( see lemma [ c : estimates ] ) .taking into account that the volume is fixed and that the grains are uniformly distributed ( and do not see gravity ) there must be folds on both sides of .we can distinguish two types of interfaces , those that zigzag starting above and those starting below it .as we did in definition [ def : lr : rl ] we also distinguish different types of elemental components and parts of the interface .first we distinguish those parts above , that we indicate with a subindex ( they have positive curvature ) , those below , that we indicate with a superindex since ( negative curvature ) , and those that cross , that have curvature changing sign across , and that we indicate with the superindex . 
then , we can define , skipping the dependency on for simplicity , and the set of indexes , and are defined in the obvious way , they are disjoint and .we distinguish , also in definition [ def : lr : rl ] , two types of elemental components , those that , following its orientation , go from left to right ( with subscript ) , and those that go from right to left ( subscript ) . 1 .there exists a set of consecutive levels that contains .one of these levels , that we denote by , must cross , in the sense that if is the first and the last contact point connected by , then and lie on opposite sides of .there exists and with , such that where with and with are horizontal levels of , indexed in decreasing order with proximity to . 1 .type i : the level has elemental components of the form and crosses from positive to negative curvature .type ii : the level has elemental components of the form and crosses from positive to negative curvature .type iii : the level has elemental components of the form and crosses from negative to positive curvature .type iv : the level has elemental components of the form and crosses from negative to positive curvature . observe that there exists at most three consecutive levels in : , one above it , and one below it , .the levels and do not cross in the sense that does , but individual elemental components of them can .this is clear from the orientation of the elemental components and definition [ def : zigzag ] ( specify in at least one case of the types above ) .we shall first derive lemma [ geometric : lemma ] for zigzag interfaces in which has three levels .then we prove that given the set of interfaces connecting grains , then these interfaces maximise the length .observe that such s connect at least centers , a pathological case given by a composed of just three elemental components , each being a level of : , oscillating around , , just above , and .1 . and are odd numbers 3 .the index sets are disjoint with and also are disjoint with .these sets might be empty .4 . if connects grains then and if is of type and and if is of type . .the dashed line represents , the solid lines represent the interface . in the examples shown ,the central horizontal level , , has three parts ( , that crosses , , above , and , below ) each represented by a straight line . in the examples of type i andiii there is just one horizontal level above and below , and in the examples of type ii and iv there are two above and two below.,scaledwidth=60.0% ] assume there exists an and a that is a zig - zag interface and connects grains where and has three levels in .it is clear any other interface that connects grains is necessarily of the same length or shorter , the length of the interface in a strip of the size of being at most of the order of , outside the strip the length between connections of grains is of the same order as for a zigzag interface , according to definition [ def : level ] _( ii ) _ and corollary [ corol1 ] . we can now prove the result for zigzag interfaces that join grains near with the maximal possible length .let us assume that is of type . then , it has three levels in with three elemental components of the form whose length is of the order of . 
let where , and are as in lemma [ basic : zigzag : iandiii ] .let us make abuse of notation by saying that , so that we index in this group only the pieces with length of order , and the other pieces of smaller order of length we index them in is they are above the of the former , and in if they are below .we then let .we recall that lemmas [ c : estimates ] and [ c : estimates:2 ] imply that the other elemental components or and in have horizontal length comparable to the horizontal of elemental components in the even horizontal layers .let us get an estimate on the horizontal distances . observe that on the other hand , since the absolute value of the curvature of the pieces increases with the distance to the horizontal , for each we have . then , we can now estimate the horizontal levels using ( [ par : impar ] ) and ( [ par ] ) , this gives where the horizontal size of and of is estimated by .it is clear that the maximal distance between connected centers in the levels is achieved at an elemental component with in an elemental component of the form . although we can only guarantee that , we observe that the horizontal distances between centers are uniformly bounded .namely , if , lemma [ single : distance ] _( ii ) _ implies if then lemma [ single : distance2 ] ( _ ( ii ) _ ) implies and if , then lemma [ single : distance3 ] implies
we consider a two - dimensional model of a porous medium where circular grains are uniformly distributed in a square container . we assume that such a medium is partially filled with water and that the stationary interface separating the water phase from the air phase is described by the balance of capillarity and gravity . taking the average distance between grains as the unit of length , we identify four asymptotic regimes that depend on the bond number and the size of the container . we analyse , in probabilistic terms , the possible global interfaces that can form in each of these regimes . in summary , we show that in the regimes where gravity dominates , the probability of configurations of grains allowing solutions close to the horizontal solution is close to one . moreover , in such regimes where the size of the container is sufficiently large , we can describe deviations from the horizontal in probabilistic terms . on the other hand , when capillarity dominates while the size of the container is sufficiently large , we find that the probability of finding interfaces close to the graph of a given smooth curve without self - intersections is close to one . 2010 mathematics subject classification : 35r35 , 76s05 ; 76t10 , 76m99 , 60c05 . keywords : -gravity interface ; two - dimensional porous medium ; probabilistic asymptotic analysis
recent years have witnessed a fast development of complex networks .a network is a set of items that are called vertices with connections between them , which are named as edges .many natural and man - made systems can be described as networks .such paragons can not be numbered that biological networks including protein - protein interaction networks and metabolic network ; social networks such as movie actor collaboration and scientific collaboration networks ; technological networks like power grids , www and the internet at the autonomous system ( as ) level .a major endeavor in academics is to discover the common properties shared by many real networks and the specific features owned by a certain type of networks .a great number of measurements to reveal the structural features of networks are applied .the degree distribution , as one of the most important global measurements , has attracted increasing attention since the awareness of the scale - freeness .clustering coefficient is a local measurement that characterizes the loop structure of order three .another significant measurement is the average distance .a network is considered to be small - world if it has large clustering coefficient but short average distance . except for the properties mentioned above ,there are many other measurements such as degree - degree correlation , betweenness centrality and so forth .moreover , some statistical measurements borrowed from physics such as entropy , and novel metrics such as modularity also play important roles in characterizing networks . not only the statistical features but also the dynamical evolution of networks the current research interest has focused on . a mess of models have been proposed to reveal the origins of the impressive statistical features of complex networks .there are also many evolving models developed for some certain type of networks such as the internet at the as level , the social networks and so forth .however the prosperous development of measurements sets a barrier for evaluating different evolving models .the traditional idea is that : if the network generated by a model resembles the target network in terms of some statistical features usually selected by the authors themselves , the model is claimed as a proper description of the real evolution . butthis methodology seems to be puzzling .first , unselected statistical properties are entirely ignored so no one knows whether the model is sufficient to describe them as well .secondly , the authors tend to select the metrics that support their models .therefore , it is impossible to give a fair remark that which model is better .thirdly , it is difficult to quantify the extent to which the models resemble the real evolving mechanisms . inspired by the link prediction approaches and likelihood analysis ,we propose a method that tries to fairly and objectively evaluate different models .link prediction aims at estimating the likelihood of non - existing edges in a network and try to dig out the missing edges .the evolution of networks involves two processes - one is the addition or deletion of nodes and another one is the changing of edges between nodes . in principle the rules of the additions of edges of a model can be considered as a kind of link prediction algorithm and here lies the bridge between link prediction and the mechanism of evolving models . the present paper is organized as follows. 
we will give a general description of our method in section ii .section iii introduces the data and explains how to use our method to evaluate evolving models in details with the as - level internet being an example network .the results obtained by our method are shown in section iv .we draw the conclusion and give some discussion in the last section .in this section , we will give a general description about our method to evaluate evolving models .it is believed that an evolving model is a description of the evolving process of a network in reality .an evolving model describes the evolving mechanism of a real network or a class of networks . given two snaps of one network at time and ( ) , as well as an evolving model , we can in principle calculate the likelihood that the network starting from the configuration at time will evolves to the configuration at under the rules of the given model .we say a model is _better _ than another one if the likelihood of the former model is greater than that of the latter one. however , how to calculate such likelihood is still a big challenge .inspired by the like prediction algorithms , we can calculate the likelihood of the addition of an edge according to a given evolving model . in a short duration of time, each edge s generation can be thought as independent to others and the sequence of generations can be ignored .thus the likelihood mentioned above is the product of the newly generated edges likelihoods .denote by the network and the set of edges at time step .the new edges generated at the current time step is .the probability that node is selected as one end of the newly generated edge is where is the set of parameters applied by the model .then the likelihood of a new monitored edge is eq .( [ eq : b ] ) is applicable only when and are both old nodes .if or is newly generated , we set or . in order to make comparison between different models, is normalized by , where is the set of nonexisting edges( ) . given different parameters , the values of may be different , resulting in different likelihoods of the target network .the parameters corresponding to the maximum likelihood are intuitively considered to be the optimal set of parameters for the evaluated model . in a word ,a network s likelihood can be calculated if the evolution data and the corresponding model are given . andif there are several candidate models , our method could judge them by comparing the corresponding likelihoods : the model giving higher likelihood according to the target network is more favored .captypefigure .[ fig : figure1 ]captypefigure in this paper we focus on the models of the as - level internet . two popular models - generalized linear preferential model ( glp ) and telaviv network generator ( tang ) - will be evaluated by our method .the well - known barabsi - albert ( ba ) and erds - rnyi ( er ) models are also analyzed as two benchmarks .the data sets we utilize here are collected by the _ routeviews project _we use the data of jun .2006 and dec . 2006 . some nodes and edges in jun .2006 disappear in the record of dec .2006 . although an autonomous system might be canceled , rarely does it happen during a short time span .therefore we assume that the nodes and edges in jun .2006 will not disappear in dec .2006 . that is to say that the network configuration in jun .2006 is a subgraph of that in dec . 2006 .we merge the network of jun .2006 into that of dec .2006 to make a set substraction between the two sets to obtain the newly generated edges and nodes . 
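before specialising to the four models , the generic likelihood computation of section ii can be summarised in a few lines of code . the sketch below is illustrative only : " score " stands for whichever attachment weight a model assigns to an existing node ( uniform , current degree , degree minus a parameter , ... ) and is assumed strictly positive , new nodes are given probability one as in the text , and the normalisation runs over pairs of pre - existing nodes that are not yet connected , which is a simplification of the set of non - existing edges used in the paper .

import math
from itertools import combinations

def edge_log_likelihood(old_graph, new_edges, score):
    # old_graph : dict node -> set of neighbours in the earlier snapshot
    # new_edges : iterable of (u, v) pairs observed only in the later snapshot
    # score(node, old_graph) : positive attachment weight of an existing node
    total = sum(score(n, old_graph) for n in old_graph)

    def endpoint_prob(n):
        # new nodes get probability 1 , old nodes the normalised model score
        return 1.0 if n not in old_graph else score(n, old_graph) / total

    # normalisation over pairs of old nodes not joined in the earlier snapshot
    # ( O(N^2) , acceptable for a sketch )
    norm = sum(endpoint_prob(u) * endpoint_prob(v)
               for u, v in combinations(old_graph, 2)
               if v not in old_graph[u])

    return sum(math.log(endpoint_prob(u) * endpoint_prob(v) / norm)
               for u, v in new_edges)

def uniform_score(node, graph):
    # er - like rule : every existing node equally likely
    return 1.0

def degree_score(node, graph):
    # ba - like rule : proportional to the current degree
    # ( isolated nodes would need a positive floor in practice )
    return float(len(graph[node]))

with uniform_score the routine gives an er - style likelihood and degree_score gives a ba - style one , so comparing the returned log - likelihoods for the same pair of snapshots is , up to the simplifications noted above , the model comparison proposed in this paper .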
the basic information of the processed data set of dec .2006 and two original data sets is shown in table [ tab : table1 ] .captypetable c | c | c time & # nodes & # edges + 2006.06 & 22960 & 49545 + 2006.12 & 24403 & 52826 + 2006.12 ( processed ) & 25103 & 59268 + [ tab : table1 ] now we will describe how to calculate the likelihood of each newly - generated edge in terms of the four models .( i ) * glp model * - this model starts from a few nodes . at each time step , with the probability , one new node is added and edges are generated between the new node and old ones and with the probability , edges are generated among the existing nodes .the ends of new edges are selected following the rule of generalized linear preferential attachment as in which . in our methodif the end of a new edge is selected among the existing nodes , then is calculated by the eq .( [ eq : c ] ) .otherwise , if the end itself is a new node , is .so the likelihood of a new edge connecting two existing nodes and is the likelihood of an edge generated between a new node and an existing node is when a new edge connects two new nodes and , its likelihood is ( ii ) * tang model * - this model applies a super linear preferential mechanism , say this model also starts with a few nodes and at each time step a new node is generated with one edge connecting to one of the existing nodes that is selected with the probability described in eq .( [ eq : g ] ) .the remaining edges are added between the existing nodes . for these nodes ,one end is selected according to eq .( [ eq : g ] ) , while the other one is selected randomly .hence the likelihood of a new edge between existing nodes is where is the current size of the monitored network .( [ eq : h ] ) takes a geometric mean due to the fact that either or could be the one selected randomly .the cases involving new nodes are managed in the same way as that for the glp model .( iii ) * ba model * - the ba model also starts from a small graph and at each time step a new node associated with edges is added .the probability that the existing node is selected is note that the original ba model can not deal with the situation where edges are generated between two existing nodes .we thus generalize the ba model as if one edge is generated between two existing nodes , one node is selected preferentially following the eq .( [ eq : i ] ) and another one is selected randomly .therefore the likelihood of an edge between two existing nodes and is calculated as the likelihood of an edge connecting a new node and an old one is the likelihood of a new edge generated between two new nodes is as discussed above .( iv ) * er model * - the mechanism of this model is that when one edge is generated , both its ends are selected in a random fashion .the likelihood of one edge between two old nodes is the calculation of other two types of edges is similar to that of glp .note that ba is a special case equivalent to the glp model when .it is also obvious that the er model is a special case of the tang model when .captypetable c|c|c model & maximum likelihood & optimum parameters + glp & & + tang & & + er & & n / a + ba & & n / a + [ tab : table2 ] the likelihoods of the four evolving models with different parameters are shown in figure [ fig : figure1 ] .the maximum likelihoods as well as the corresponding parameters are listed in table [ tab : table2 ] .the maximum likelihoods of both specific internet models ( glp and tang ) are greater than those of the ba model and the er model .notice that the 
ba and er model are parameter - free and thus represented by two straight lines in figure [ fig : figure1 ] .our results suggest that subject to the mimicking of the as - level internet evolution , the glp model is better than the tang model , and the tang model is better than the ba model , of course , the er model performs the worst .a puzzling point is that the optimal parameters corresponding to the maximum likelihoods are far from the ones suggested in the original literature .we next devise an experiment to demonstrate that the parameters obtained by our method are more advantageous than the original ones .traditionally , an evolving model starts from a small network with a few nodes . in this experiment ,we respectively use the glp and tang models to drive the network evolution starting from the configuration of jun .2006 , ending with the same size of the configuration of dec . 2006 .according to the refs . and the data , and .then we analyze some statistical features of the newly generated part including the average degree , the density of interaction and the fraction of leaves .we find that the performance of the glp model is better than the tang model with the same kind of parameters in the three cases , demonstrating that our evaluating method is reasonable .for both the two models , the statistical features obtained by the optimum parameters suggested by us resemble the real data better than those obtained by using the original parameters .the comparisons are shown in figure [ fig : figure2 ] .thousands of network models are put forward in recent ten years . some of them aim at uncovering mechanisms that underlie general topological properties like scale - free nature and small - world phenomenon , others are proposed to reproduce structural features of specific networks , such as the internet , the world wide web , co - authorship networks , food webs , protein - protein interacting networks , metabolic networks , and so on . besides the prosperity , we are worrying that there is no unified method to evaluate the performance of different models , even if the target network is given beforehand .instead of considering many structural metrics , this paper reports an evaluating method based on likelihood analysis , with an assumption that a better model will assign a higher likelihood to the observed structure .we have tested our method on the real internet at the as level , and the results suggest that the glp model outperforms the tang model , and both models are better than the ba and er models .this method can be further applied in determining the optimal parameters of network models , and the experiment indicates that the parameters obtained by our method can better capture the structural characters of newly - added nodes and links .the main contributions of this work are twofold . in the methodology aspect, we provide a starting point towards a unified way to evaluate network models . in the perspective aspect , we believe for majority of real evolutionary networks , the driven factors and the parameters will vary in time . for example , recent empirical analysis suggests that before and after the year 2004 , the internet at the as level grows with different mechanisms . 
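the optimum parameters reported in table [ tab : table2 ] are the maximisers of the likelihood over each model's parameter space . a minimal sketch of such a scan , reusing the hypothetical edge_log_likelihood routine sketched earlier and a glp - style weight of the form " degree minus beta " , is the following ; the positivity floor and the scanned range are illustrative choices , not the values used in the paper .

import numpy as np

def glp_score_factory(beta):
    def score(node, graph):
        # generalised linear preferential weight ; floored to stay positive
        return max(len(graph[node]) - beta, 1e-12)
    return score

def best_beta(old_graph, new_edges, betas=np.linspace(-1.0, 0.99, 40)):
    # grid search for the parameter value with maximal log - likelihood
    lls = [edge_log_likelihood(old_graph, new_edges, glp_score_factory(b))
           for b in betas]
    i = int(np.argmax(lls))
    return betas[i], lls[i]

a plain grid search is enough for the low - dimensional parameter spaces considered here ; richer models would call for a standard scalar or multivariate optimiser .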
to find out a single mechanism that drives a network from a little baby to a giant may be an infeasible task . in fact , in different stages , a network could grow in different ways , or in a hybrid manner with a changing weight distribution over several mechanisms . the research focus has already shifted from analyzing static models to evolutionary models ; in the near future , it may shift from the evolutionary models to the evolution of the evolutionary models themselves . in principle , the current method could capture the tracks of not only the network evolution , but also the mechanism evolution . hopefully this work could provide some insights into studies of network modeling . we acknowledge a. wool for the codes of the tang model . this work is supported by the national natural science foundation of china under grant no . 11075031 and the fundamental research funds for the central universities
many models are put forward to mimic the evolution of real networked systems . a well - accepted way to judge the validity is to compare the modeling results with real networks subject to several structural features . even for a specific real network , we can not fairly evaluate the goodness of different models since there are too many structural features while there is no criterion to select and assign weights on them . motivated by the studies on link prediction algorithms , we propose a unified method to evaluate the network models via the comparison of the likelihoods of the currently observed network driven by different models , with an assumption that the higher the likelihood is , the better the model is . we test our method on the real internet at the autonomous system ( as ) level , and the results suggest that the generalized linear preferential ( glp ) model outperforms the tel aviv network generator ( tang ) , while both two models are better than the barabsi - albert ( ba ) and erds - rnyi ( er ) models . our method can be further applied in determining the optimal values of parameters that correspond to the maximal likelihood . experiment indicates that the parameters obtained by our method can better capture the characters of newly - added nodes and links in the as - level internet than the original methods in the literature .
collective decisions among small groups of humans in a crisis are difficult to predict because of the inherent complexity of human behaviors and interactions . disasters and other threat situations often display noisy and fragile individual performance that can lead to non - optimal collective outcomes . groups can exhibit a tendency to make riskier collective decisions after group discussion than individuals would alone , and social interactions can lead to a `` mob mentality '' that hinders evacuation and may result in injury and violence . associated spatiotemporal clustering of departure times can lead to traffic congestion and delays . quantifying synergies between broadcast information involving the objective status of a threat and concurrent social factors that may inhibit or accelerate action is critical to devising effective strategies for ensuring the safety of populations in crisis situations . challenges to coordinating collective action when a population is at risk may be exacerbated by a variety of influences , including the desire to locate a missing family member or the panic of an individual group member , each of which may derail action for the entire group . alternatively , desirable outcomes occur when group action exceeds the sum of its parts . identifying and promoting effective teams has been investigated in the context of emergent leadership , wisdom of crowds , performance stability , and division of labor . central questions include the following : to what extent does individual behavior predict that individual s performance in a group ? how do the size , diversity , communication , and decision mechanism of a group impact its collective performance ? to what extent can individual performance differences within a group be deduced from other ( social network ) measurable behaviors ? this paper isolates and quantifies tensions in decision making that arise when individuals act cohesively , as in families , teams , or squads . we compare the behavior of individuals acting alone with their behavior in groups , whose actions are linked through a network of influence or communication in a crisis situation . in this paper , we employ a combination of experiments in behavioral network science , analysis of social network usage , and artificial neural network models to quantify the consequences of different protocols for decision making among groups that are part of a larger population subject to an impinging crisis . our results link features of individual human behavior with observable behaviors of groups and thereby address critical issues that arise in identifying individual differences that constrain or promote achievement of group objectives . the insights gained from investigating collective behavior in evacuation scenarios through behavioral experiments and data - driven modeling can inform the design of public policies for collective action in stressful and uncertain situations . * our work in context . * our experimental approach of quantifying specific behavioral drivers in a controlled setting constitutes an intermediate between surveys conducted in the aftermath of disasters , which can be difficult to execute on a large scale and are often qualitative , and controlled human - subject experiments of isolated cognitive tasks .
in this controlled setting , all information observed by participants prior to their decisions is recorded , facilitating data - driven modeling of decision making . our approach capitalizes on prior work developing mathematical models of information flow on social networks to isolate drivers of collective dynamics and performing behavioral experiments to quantify individual decision making in crisis situations . the focus of these previous studies has been on evacuate - or - not responses to simulated natural disasters studied through a model - experiment - model paradigm , where initial theoretical studies predict effects of decision making under various social and economic pressures , followed by experimental tests of such predictions to quantify those effects . the analysis of such experiments enables data - driven models , which in turn inform the design of subsequent experiments . a theoretical component of this previous work identified a decision model for evacuation behavior that described the heterogeneous and non - optimal behavior of human subjects observed empirically in behavioral experiments , which formed the basis for the experiment described in this paper . the model was used to quantify individual differences in decision dynamics and relate those differences to psychometric measurements of personality and risk attitudes . this model was derived from detailed empirical observations , and as such stands in sharp contrast to a set of models typically used in numerical simulations or large - scale , data - driven studies that treat decisions as random , optimal , or based on a threshold applied to a state variable representing opinion , which is updated by an assumed interaction rule ( e.g. , ) . in our current study , we probe mesoscale decision and behavior dynamics by dividing members of a community into cohesive groups . each group must act as a unit and make a forced evacuate - or - not decision in the face of a disaster , which will either hit or miss the community according to a time - evolving likelihood . building on the earlier work involving tradeoffs between broadcast and social information on individual decision making , this project focuses specifically on isolating novel features that arise when decisions impact a group . our experimental platform adapts the framework of kearns et al . , who have conducted a series of `` behavioral network science '' ( bns ) experiments that have focused on collective problem solving tasks , such as abstract graph coloring problems or economic investment games . in both competitive and cooperative tasks and scenarios , human subjects in these previous bns studies have been shown to `` perform remarkably well at the collective level '' . in contrast to the work of kearns et al . , our investigations involve decision making in a threat scenario where we expect both heterogeneous behaviors across individual subjects and non - optimality of behavior at both the individual and collective levels . * experimental overview . * the experiment described in this paper places participants in a sequence of simulated disaster scenarios , each of which involves an approaching threat ( e.g.
, wildfire or hurricane ) that may impact their community within an uncertain window of time . the status of the threat is described by a time - dependent likelihood based on an underlying stochastic process . resource limitations are represented by a finite number of available spaces in an evacuation shelter . participants are at risk of losing a portion of their initial wealth based on their action and the disaster outcome . in each scenario , individuals make a binding evacuate - or - not decision based on the information available . in some scenarios , participants act as individuals , in which case , once the decision to evacuate is made , the individual action ( evacuation to the shelter ) follows immediately , provided sufficient shelter space is available . in other scenarios , the same individuals are grouped into teams . in these cases , group action is determined by the collective decisions of the team subject to availability of shelter space and a forced action protocol . the action protocols used in this study include majority - vote ( the group evacuates when the majority has decided ) , first - to - go ( the group evacuates when the first member decides ) , and last - to - go ( the group evacuates when the entire group achieves consensus ) . this experimental design enables us to quantitatively compare and contrast individual and group decision making , as well as the effectiveness of different protocols for combining individual decisions into collective group action . for example , under the first - to - go protocol , in which a `` go '' decision made by one individual within a group forces the entire group to evacuate , behaviors observed in individual humans without a group influence ( such as tending to evacuate earlier than is optimal ) would result in poor population - level performance : a single individual s early evacuation commits the entire group to action , resulting in collective loss of temporal and financial resources . for optimal group performance in a first - to - go scenario , individuals must instead modify their behavior , either by reducing evacuation rates , increasing disaster likelihood thresholds for evacuating , or by deferring to other members of the group to make the decisions . finally , based on the results of the behavioral experiment , we develop individual data - driven artificial neural network models for decision making which incorporate individual and group disaster scenarios . heterogeneity in the decision models is linked to individual differences , identified using two alternative means . the first is based directly on the experimental data . the second is determined by social network ( specifically facebook ) use . the two methods produce quantitatively similar decision models and overall accuracy , suggesting that social network use may be predictive of individual behavior in a group setting . differences in personality traits and risk perception can influence decision making , specifically in scenarios involving evacuation from natural disasters . common assessments of personality include the big five inventory questionnaire , a questionnaire based on five characteristics ( extraversion , agreeableness , conscientiousness , neuroticism , and openness ) often used in psychological research , and common assessments of risk perception include the domain - specific risk - attitude scale .
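the three group action protocols described above can be sketched as a small decision - aggregation rule ; the sketch below assumes boolean per - member decisions and omits shelter - capacity checks and decision timing , and the function and variable names are illustrative rather than part of the experimental software .

def group_evacuates(decisions, protocol):
    # combine individual evacuate-or-not decisions (booleans) into one group action
    n, yes = len(decisions), sum(decisions)
    if protocol == "first-to-go":      # one 'go' decision commits the whole group
        return yes >= 1
    if protocol == "majority-vote":    # a strict majority triggers evacuation
        return yes > n / 2
    if protocol == "last-to-go":       # full consensus is required
        return yes == n
    raise ValueError("unknown protocol: %s" % protocol)

# hypothetical group of five in which two members have decided to evacuate
votes = [True, True, False, False, False]
for p in ("first-to-go", "majority-vote", "last-to-go"):
    print(p, group_evacuates(votes, p))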
furthermore , the rapid development and widespread adoption of social networks such as facebook and twitter has led to increasing interest in identifying personality - related factors from online profiles and social media activity . previous studies have commonly focused on developing predictive personality profiles from individual behavior on social media platforms , and it has been shown that computer models can outperform humans in assessing personality . the experiment detailed in this paper facilitates a rich investigation of the differences between group and individual behaviors and the sub - optimality of human behavior . a further exploration of collective decision dynamics is described in , which expands the analysis of the empirical data presented here and uses a complementary but methodologically different approach to modeling decision making . in , a few - parameter decision model with an explicit functional form motivated by empirical data is used to model individual decisions , simulate group behavior , and determine optimal decision strategies with bayesian inference . in this paper , our inherently multiscale results link characteristics of individual humans , observed behaviors of groups , and outcomes for the population as a whole , thereby addressing issues that arise in identifying individual differences that constrain or promote group objectives . we obtain information about the role of cohesive units in decision making , which is correlated with individual differences in the population , and which identifies emergent leaders as individuals willing to make decisions for the group , as well as followers who avoid initiating group action . the results have direct relevance to complex , real - world threat scenarios as well as coordination and control of civilian populations at risk . on march 13 , 2015 , we conducted a controlled behavioral experiment at the university of california , santa barbara ( ucsb ) in order to quantitatively characterize the decision - making behavior of both individuals and groups in stressful situations . in our experiment , 50 participants each decided if and when to evacuate from an impending virtual natural disaster ( fig [ fig : overview]a ) . all participants provided written informed consent , and the experimental protocol was approved by the institutional review board of ucsb . our experimental design is based on that of carlson et al . but differs primarily in the addition of groups . in both experiments , 50 individuals participated in several scenarios ( trials ) where the strike likelihood of an impending virtual disaster was broadcast at regular intervals , along with the availability of beds in an evacuation shelter . participants used this information to decide if and when to evacuate . they were staked a number of points before each trial , and points were deducted from the initial stake based on the success of their actions . while participants in the previous study made decisions as individuals , our experiment also includes scenarios where participants were randomly assigned to groups which act according to a specified protocol . this experiment involved 144 trials , each lasting 20 to 60 half - second time steps .
during each trial , participants were provided with _ disaster _ information about the progression and likelihood of the disaster hitting the community and the occupation status of an evacuation shelter via an individualized computer interface , an example of which is shown in fig [ fig : overview]b . participants began each trial in the _ at home _ state and , upon evacuating from the disaster , entered the _ in shelter _ state . at the beginning of each trial , participants were each staked 10 points . the number of points deducted from this stake was a function of whether or not a participant evacuated successfully and whether or not the disaster struck , as shown in the loss matrix in table [ table : lossmat ] . participants were ranked after each trial by their cumulative score and were awarded monetary compensation at the end of the experiment based on their final score over all trials .

table [ table : lossmat ] , change in points from the 10-point stake ( top ) and equivalent points remaining ( bottom ) :

              disaster hits   disaster misses
at home            -10               0
in shelter          -6              -2

              disaster hits   disaster misses
at home              0              10
in shelter           4               8

in this experiment , we focus on the effects of both broadcast information and _ social _ information on decision making . in some trials , participants could only evacuate as individuals ; in others , participants were randomly assigned to groups of five or 25 individuals each which collectively act according to one of three specified protocols : first - to - go ( `` ftg '' ) , majority - vote ( `` mv '' ) , and last - to - go ( `` ltg '' ) . the ultimate action of the group was dependent on the combined decisions of individuals within the group . participants could view the decisions and rankings of their group members , referred to as _ social _ information , to formulate their strategy based on their group - mates behavior . prior to the behavioral experiment , we collected each participant s facebook archive , which contains their profile , liked pages , and site activity . we later used this information to formulate a measure of their personality . as shown in table [ table : face ] , we collected data on the variables of _ gender _ , _ age _ , _ number of friends _ , and _ page likes _ of films , television programs , and books . given the diversity and quantity of facebook pages , facebook likes can constitute an individual s unique `` digital footprint '' and can also reflect other internet behavior such as web searches or online purchases . furthermore , facebook likes have been shown to correlate with personality assessments based upon the five - factor , or big five , model . we extract demographic information ( gender female , male , not specified ( ns ) , or other and age ) , number of friends , and `` liked '' pages from participants facebook archives , which are used to determine individualized personality profiles for each participant . liked pages are grouped into genres ; the top five genres are shown , along with the range of liked pages per genre per person .
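a minimal sketch of the per - trial scoring implied by the loss matrix above ; the dictionary keys and function name are our own choices , not part of the experimental software .

# change in points from the 10-point stake, keyed by (final state, disaster hit?)
POINT_CHANGE = {("home", True): -10, ("home", False): 0,
                ("shelter", True): -6, ("shelter", False): -2}

def trial_score(final_state, disaster_hit, stake=10):
    # points remaining after one trial
    return stake + POINT_CHANGE[(final_state, disaster_hit)]

print(trial_score("shelter", True))   # 4: evacuated and the disaster hit
print(trial_score("home", False))     # 10: stayed home and the disaster missed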
[ table : prinprobttest ] * individual heterogeneity . * we observe significant differences in behavior for different individuals . fig [ fig : phitvstime ] plots the time step and disaster likelihood of each evacuation decision for all participants ; each subplot displays the decisions , represented by circles , of an individual participant . subplots are ordered by performance , with the highest - scoring subject ( s1 ) in the upper left corner and the lowest - scoring subject ( s50 ) in the lower right corner . we observe high variability in the distribution of decisions across all participants . for instance , s1 s decisions in mv trials are largely clustered in likelihood and mostly occur at time steps less than 30 , whereas s50 s mv decisions are largely clustered near time step 30 . s1 s ltg decisions also occur relatively early in their respective trials compared to s50 s ltg decisions . some participants distributions of decisions have low variance in likelihood , such as s6 and s36 , while others have low variance in time , such as s8 and s34 . however , there is no correlation between variance in likelihood or time and the participants ranks . one participant with relatively anomalous behavior is s37 ; aside from a few instances , s37 typically makes decisions at relatively high values of the likelihood ( 0.6 or greater ) and at relatively late time steps ( 30 or later ) . for most participants , decisions in individual trials are on average made at later time steps than decisions made during group trials . ftg and mv decisions are also typically made earlier than ltg decisions . despite the wide range of individual variability illustrated in fig [ fig : phitvstime ] , coarse metrics that may be extracted from the individual patterns , such as an individual s average or variance in time step and/or likelihood value for the evacuation decision , are not correlated significantly with individual rank . in the next section , we develop an artificial neural network that predicts individual and group decision making , which includes an additional node to capture individual personality factors from the data . in this section , we develop a progressive series of artificial neural network models for individual and group decision making based on empirical observations and social media use . fig [ fig : nnflow]a is a schematic illustration of the steps in our model development . we begin with a baseline model isolating key parameters of the experiment . we next augment the model with additional features extracted from the data to improve the overall accuracy . these include reaction time delay as well as gradients in the disaster likelihood and available shelter space . the final stage incorporates individual differences in experimental behavior , thereby inferring significant distinguishing characteristics ( fig [ fig : phitvstime ] ) that could not be determined using simple metrics such as the average or variance of aspects of individual behavior . alternatively , we substitute representation of individual differences in the experiment with an additional network layer derived from social media use , which leads to quantitatively similar results . * data balancing . * from the experimental data , we construct a set of positive and negative data points in non - sequential order ; some data is then withheld for testing , and the artificial neural network is trained on the remaining data .
the training and test data are assembled from the experimental data for all participants in all trials at every time step where the individual either decided to evacuate ( a positive data point ) or did not decide to evacuate , but could have ( a negative data point ) . in group trials , positive data points occur when the individual is at home and decides to evacuate , whether or not their decision leads to a group evacuation at the particular time step . negative data points occur when the individual is at home and does not decide to evacuate , but could have ( i.e. , there is sufficient space available in the shelter ) . each individual can make at most one evacuation decision in each trial . in comparison , there are many time steps in which they remain at home and do not decide . as a result , uniform data sampling leads to a disproportionate number of negative points . to resolve this imbalance , we down - sample the negative points . to isolate the time leading up to a decision , we restrict our samples to lie within a window of fixed width containing at least one positive point . for example , if a participant decides to evacuate at a given time step , we include in our training set the negative data points from the window of time steps leading up to and including that decision . since this still results in excess negative data points , we up - sample the positive data points by doubling their weights . * baseline decision model . * we build our model incrementally as illustrated schematically in fig [ fig : nnflow]a . we begin with a baseline logistic regression model that depends on key experimental variables : the disaster likelihood , the available shelter capacity , the group size , and the evacuation protocol . these inputs constitute the starting point for both of the final models ( fig [ fig : nnflow]b and c ) , which differ only in how individual differences are incorporated in the final step . the baseline model takes vectors generated from the key variables as input and in turn outputs the probability of an evacuation decision as f applied to the weighted sum of the inputs plus a bias , where , for each trial and time step , the input * x * is generated by concatenating all input variables into a vector ; * w * is a vector containing the weights of each of the terms of * x * ; and b is the bias term . the values of * w * and b are determined by training on the data set . the function f is usually defined as a continuous nonlinear function ; we choose the sigmoidal function f ( z ) = 1 / ( 1 + exp ( - z ) ) . a 9-dimensional input vector * x * is generated for each time step in each trial in the data set . the first two dimensions of * x * correspond to the disaster likelihood and the number of available shelter spaces at the given time step in the given trial , respectively . the last 7 dimensions of * x * represent the type of trial , i.e. , the evacuation protocol and the group size , using `` 1-out - of-7 '' coding . that is , for each input vector , only one of the last 7 dimensions has a value of 1 , and all other dimensions have value 0 . the 7 dimensions correspond to 7 possible combinations of group size and protocol : individual , ftg group-5 , ftg group-25 , mv group-5 , mv group-25 , ltg group-5 , and ltg group-25 , respectively . for example , trial 19 ( shown in the first panel of fig [ fig:0game ] ) is an individual trial . at time step 34 , the disaster likelihood is 0.5 . there are 31 spaces available in the shelter and 31 active participants . on that time step , one person decided to evacuate while the other 30 did not decide to evacuate . the input vector at time step 34 is then * x * = [ 0.5 , 31 , 1 , 0 , 0 , 0 , 0 , 0 , 0 ] , with the third component indicating an individual trial ; that time step therefore contributes one positive and 30 negative data points .
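the baseline model just described can be sketched directly : a 9 - dimensional input vector and a logistic ( sigmoid ) output . the weights and bias below are made - up placeholders ; in the paper they are fit to the balanced training data .

import numpy as np

TRIAL_TYPES = ["individual", "ftg-5", "ftg-25", "mv-5", "mv-25", "ltg-5", "ltg-25"]

def encode(likelihood, shelter_spaces, trial_type):
    # 9-dimensional baseline input: likelihood, free shelter spaces,
    # and a 1-out-of-7 code for the protocol / group-size combination
    onehot = [1.0 if t == trial_type else 0.0 for t in TRIAL_TYPES]
    return np.array([likelihood, float(shelter_spaces)] + onehot)

def evacuation_probability(x, w, b):
    # baseline logistic model: sigmoid of a weighted sum plus a bias
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

w = np.array([4.0, -0.05, -1.0, 0.5, 0.6, 0.2, 0.3, -0.4, -0.5])  # placeholder weights
b = -1.5                                                          # placeholder bias
x = encode(0.5, 31, "individual")   # the example time step from trial 19
print("p(decide to evacuate) = %.2f" % evacuation_probability(x, w, b))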
adding the likelihood and shelter space gradients to the baseline model with time delay corrections , we obtain an accuracy of 80.2% , as illustrated in fig [ fig : measure ] in the column marked + g . * individual differences . * we observe significant heterogeneity in behavior across individuals , as illustrated in fig [ fig : phitvstime ] . some participants tend to decide early , even when the disaster likelihood is low , while other participants have greater spread in their decision times . incorporating individual differences is the final step in developing our model . we utilize two different approaches : one is based directly on the experimental data , and the other is extracted from the participant s facebook archive . ultimately , we find that these two methods lead to comparable results . the experiment - based method adds a personality node ( fig [ fig : nnflow]b ) and trains the model on the balanced data to obtain a unique personality factor for each participant . specifically , the personality input is incorporated as a 1-out - of - n set of nodes , where n is the number of participants , and the personality factor for each individual is the corresponding weight obtained by training the network . the accuracy obtained by adding the personality node to the baseline model with time delays and gradients is illustrated in fig [ fig : measure ] in the column marked + p . we obtain an accuracy of 85.0% , a substantial improvement over the 75.1% accuracy of the baseline model . this method is sufficient to capture individual personality differences and ultimately provides the most accurate prediction of collective behavior without overfitting . as a separate approach to capturing individual heterogeneity , we aggregated each participant s facebook archive information ( age , gender , friends , and likes ) to generate individual personality factors ( fig [ fig : nnflow]c ) by adding an extra layer to the neural network model . however , strong l2 regularization was required to prevent overfitting , even when we restricted our use of facebook likes to the most popular genres in training the model . an l2 regularization term was added to all weights corresponding to the facebook features , and the strength of the regularization was determined by validation . this method ultimately provides similar accuracy to that obtained using the personality node determined directly from the data . we obtain an accuracy of 84.2% , compared to 85.0% obtained from the experiment - based personality model , as illustrated in fig [ fig : measure ] in the column marked + fb . for the model with personality parameters included , the decision probability is unique to each participant in each trial at each time step . the input vector is generated by concatenating all of the feature representations described above . the disaster likelihood and likelihood gradient , current and total shelter capacity , shelter gradient , group size , and group protocol , and their corresponding weights , are identical for every participant at a given time in a trial , while the personality parameter is unique to each participant and is not time- or trial - dependent . ultimately , the probability of a participant deciding to evacuate at a given time in a trial is defined , as in the baseline model , by f applied to the weighted input vector plus a bias , where * w * represents the weighting of each dimension of the vector , and b is the bias term . the function f is the sigmoid function defined above .
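the ' + p ' variant can be sketched by appending a 1 - out - of - n participant code to the feature vector , so that training assigns one personality weight per subject . everything below ( the random weights , the participant index , and the omission of the gradient and time - delay features ) is illustrative , not the fitted model .

import numpy as np

N_PARTICIPANTS = 50   # number of subjects in the experiment

def encode_with_personality(base_features, participant_id):
    # append a 1-out-of-n participant code; its weight acts as that
    # subject's personality factor once the model is trained
    onehot = np.zeros(N_PARTICIPANTS)
    onehot[participant_id] = 1.0
    return np.concatenate([base_features, onehot])

def decision_probability(x, w, b):
    # same logistic form as the baseline model, on the extended input
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

rng = np.random.default_rng(0)                       # placeholder "fitted" weights
w = rng.normal(0.0, 0.5, size=9 + N_PARTICIPANTS)
base = np.array([0.5, 31, 1, 0, 0, 0, 0, 0, 0])      # baseline vector from above
print(decision_probability(encode_with_personality(base, participant_id=3), w, -1.0))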
additionally , as illustrated in fig [ fig : personality ] , we directly compare personality factors determined from training on the experimental data with those determined from facebook data and find a strong correlation ( spearman rank correlation coefficient = 0.96 ) , indicating that facebook demographic information and liked pages can be a good predictor of individual personality ; however , note that the strength of the correlation may result in part from the strong regularization used when incorporating facebook data into the model . ultimately , with each incremental addition of features , the accuracy of the enhanced model increases relative to the baseline model , indicating that the additional features result in better prediction of the test set . decisions to evacuate are dependent on current - time parameters as well as their immediate history and those that govern the situation on a more general level ( total shelter capacity , group constraints ) . since the test trials are withheld from the training data , and the test accuracy increases , the addition of features does not lead to overfitting of the training data . * comparing the model and experiment . * the final results of the artificial neural network model are illustrated in fig [ fig : compare ] , which compares the 17 test trials with results of the most accurate model ( + p ) . the + p model combines the baseline model with the time delay correction , nodes accounting for gradients in likelihood and shelter space , and the individual personality node based on experimental data . similar results are obtained for the + fb model , which replaces the experimentally determined personality node with information extracted from the participants facebook archives . the 17 test trials were selected from the middle portion of the experiment , and lie consecutively between trials 81 and 100 , with three trials ( 88 , 89 , 97 ) omitted due to technical difficulties in recording data . we chose to use test trials near the middle of the experiment to minimize possible bias associated with early or late stages of the experiment . no particular attention was given to the type of trial ; the sample includes at least one of every trial type except for ftg group-25 and ltg group-25 . model parameters were determined from the remaining 111 trials , with 80% of the data ( chosen randomly ) used to fit the parameters * w * and b , and the remaining 20% to optimize the threshold used to convert the continuous decision probability function defined above to a binary evacuate - or - not decision for each individual at each time step . individual decision probabilities , as described above , are computed at each time step for the test data and converted to binary decisions for each participant . the decisions are then combined according to the assigned groups and protocols of the participants from the experiment to produce results for the model - predicted evacuations that are illustrated as blue curves in fig [ fig : compare ] . these predictions are made sequentially at each time step of the full data set containing 17 test trials , i.e. , with no up- or down - sampling ; the blue curves here depict only predicted group or individual evacuations , without showing the predicted decisions made by each group member . we observe striking agreement between the model predictions and the experimental data , represented as gray shaded regions .
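the conversion from decision probabilities to binary decisions and the per - trial error metric described above can be sketched as follows ; the threshold value and the evacuation counts are hypothetical .

import numpy as np

def binary_decisions(probabilities, threshold=0.5):
    # convert continuous decision probabilities to evacuate-or-not decisions
    return (np.asarray(probabilities) >= threshold).astype(int)

def trial_rmse(observed, predicted):
    # root-mean-squared error between observed and predicted cumulative
    # evacuations, averaged over the time steps of a trial
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

probs = [0.1, 0.3, 0.62, 0.8]
print(binary_decisions(probs, threshold=0.55))        # -> [0 0 1 1]

observed  = [0, 0, 1, 1, 2, 4, 4, 5, 5, 5]            # cumulative evacuations per step
predicted = [0, 1, 1, 2, 3, 4, 5, 5, 5, 5]
print("rmse = %.2f" % trial_rmse(observed, predicted))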
in most cases , the model accurately predicts not only the total number of evacuations , but also the approximate timing of the evacuations . the root - mean - squared error ( rmse ) between observed and predicted evacuations is reported for each trial ; the error is calculated at each time step and averaged over all time steps in a trial . the trials exhibiting the greatest discrepancy between the model and observations are trials 81 and 93 , both of which are mv group-25 trials . in both cases , the final total number of evacuations is correctly predicted ; however , the model predicts that the evacuations will occur somewhat earlier than is observed ( 9 time steps earlier and 6 time steps earlier , respectively ) . additional discrepancies are observed at early stages in some individual trials , such as trial 82 and trial 95 , where a small number of participants evacuate in the first few time steps of the trial , even though the disaster likelihood remains stationary and there are 25 shelter spaces . this early evacuation behavior may reflect the outcome of the previous trial , such as a disaster hit or the shelter reaching capacity , which occurs in trials 81 and 94 , respectively . memory effects extending from one trial to the next are not captured by the model , which is based on single trial parameters . we report results of a behavioral experiment investigating human decision making in the face of an impending natural disaster . we characterize individual and group behavior by quantifying several key factors for decision making in a controlled virtual community , including the disaster likelihood , shelter capacity , and decisions of group members , and we formulate a neural network model that predicts decision making of both individuals and groups in different scenarios . including measures of individual personality significantly increases prediction accuracy compared to a baseline model dependent only on collective experimental variables . we find that individual behavior does not readily predict group behavior . we observe a significant difference in decision making between scenarios in which individuals decide solely for themselves and in which individual decisions are combined to determine collective action of a group . on average , participants decide to evacuate at lower values of the disaster likelihood when in groups than when acting as individuals . in many cases , we find that larger groups are less effective than smaller groups ; the average success rate for individual trials is comparable to the average success rate of group trials , and groups of five perform better than groups of 25 under all protocols except majority - vote . the notion of the `` wisdom of crowds '' contends that the aggregated consensus decision of a collection of individuals is on average more accurate than an individual s decision . condorcet s jury theorem states that , under the assumption that all individuals independently decide with an identical probability of correct decision greater than 0.5 , a collective decision following the majority - vote protocol will have a greater success rate than an individual decision .
in this idealized setting , the success of the majority - vote increases with group size and converges to 1 in the limit of infinite group size . an application of condorcet s theorem states that if the individual success rate is less than 0.5 , a pooled majority - vote decision will be less accurate than an individual decision , whereas if individual accuracy is greater than 0.5 , the collective majority - vote accuracy increases as a function of individual accuracy and furthermore increases more sharply for larger group sizes . while our experiment is significantly more complex than this idealized voting scenario , on average , our observations from majority - vote trials remain consistent with condorcet s theorem ; the individual success rate is 0.604 ( fig [ fig : prinprob ] ) , the group-5 mv success rate is 0.607 , and the group-25 mv success rate is much higher at 0.75 . including additional social factors such as the rankings of group members can give rise to a wide range of outcomes in group decision making . if some individuals have a higher success rate than others , assigning greater weight to the most successful individuals decisions when aggregating opinions into a consensus decision can result in greater collective accuracy . furthermore , in more complex situations where individuals must decide in response to multiple cues with variable reliability , the wisdom of crowds effect is not necessarily observed ; rather , there can be a finite optimal group size . ultimately , we do not find that larger groups necessarily perform better than smaller groups , and in many cases , groups of 25 are on average less successful than individuals . while there is no significant overall correlation between a participant s rank in individual trials and group trials , we observe a significant correlation between individual rank and the time order of decisions in the ltg and mv group-25 trials that differentiates the top - performing group of 25 from the bottom - performing group . in the top - performing group , the higher - scoring participants decide earlier on average than the lower - scoring participants within the group , suggesting possible emergent `` leader '' and `` follower '' behavior . this correlation is not observed in the bottom - performing group . given knowledge of the current state of the underlying stochastic process driving the disaster likelihood , it is possible to determine the optimal decision strategy . in many scenarios , we observe behavior that is clearly sub - optimal . for example , when there is sufficient shelter space for all participants and the trial has just begun , some participants nevertheless decide to evacuate in the first few time steps , despite the lack of urgency in space or time . in those situations , participants would not experience a tradeoff between the information gained with time and the urgency to evacuate due to resource competition .
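returning to the condorcet comparison above , the idealized majority - vote success rate can be reproduced with a few lines ; this assumes independent voters with identical accuracy and odd group sizes , which the real experiment does not satisfy , so it serves only as a reference point .

from math import comb

def majority_success(p, n):
    # probability that a strict majority of n independent voters, each correct
    # with probability p, reaches the correct decision (odd n for simplicity)
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

for n in (1, 5, 25):   # 0.604 is the observed individual success rate
    print("n = %2d  majority success = %.3f" % (n, majority_success(0.604, n)))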
in some cases , we also observe extreme trial - to - trial variations , such as early evacuations in the absence of time or space constraints , following trials in which the shelter reaches capacity or the disaster hits . our results emphasize the importance of properly characterizing sub - optimal decisions and non - bayesian adjustments in strategy over time in developing predictive models of human decision - making behavior . the sub - optimality of real human decision making is explored in , which compares observed behavior with the optimal strategy as determined under the assumption of a perfect `` bayesian learner '' who aggregates accumulated evidence from previous experience to inform decisions . the modeling procedure performed in this paper enables the identification and quantification of key factors , using machine learning techniques to identify correlation structure , while also taking into account individual differences in behavior which can be extracted from social network data . this work also motivates the development of a model for individual decision making , described in , that is based on a few key parameters and contains an explicit functional form derived from the statistics of experimental data . that model accurately predicts individual decision making and is used to simulate group behavior , forming a baseline of predictions that are contrasted with observed group behavior to examine the differences between individuals and groups . * relationships to past work . * our work builds on four prior studies of collective behavior in evacuation scenarios . the first is a spatiotemporal analysis of wildfire progression on a rural landscape and transportation network that determines optimal evacuation routes and clearing times in a realistic setting under the assumption that all individuals act optimally . while the decision model is idealized and hence unrealistic , it provides an upper bound on the success of collective behavior . the second study developed a stochastic model to simulate the progression of a threat by characterizing the evolution of the strike probability as a markov process , and determined optimal decision strategies in the face of stochastically - varying uncertainty . we applied this stochastic process to determine the progression of disaster likelihood for each disaster scenario in our experiment . the third study simulated the decision dynamics of individuals on a social network , where decisions were influenced both by uniform global information broadcast and pairwise neighbor interactions . this study found that information transmission on a network could either facilitate or hinder collective action , depending on the influence of social network interactions , and identified the mechanisms leading to cascading behavior and stagnation .
the experimental platform in this paper is largely derived from the fourth study , in which a similar controlled behavioral experiment involving an evacuation scenario was performed , and from which a data - driven model was developed to characterize the influence of disaster likelihood , time pressure , and shelter capacity on decision making .compared to the previous study , here we simplify the presentation of broadcast information ( time progression and disaster likelihood ) and social information ( decisions of others and shelter space ) so that it appears on a single screen , rather than being split into separate broadcast and social tabs which participants could switch between during each trial .the previous study also included the state of being `` in transit '' in addition to home or shelter ; if a participant decided to evacuate when the shelter was full , they would remain in the `` in transit '' state for the remainder of the trial .the losses incurred from being in the `` in transit '' state were an intermediate value between those of home and shelter . in the previous experiment ,the loss matrix varied from trial to trial and was displayed on the computer interface .however , values in the loss matrix were found to have minimal impact on decision making in the earlier study .therefore , for the current study we chose to fix the loss matrix and only displayed it briefly during the introductory training session .this allowed us greater flexibility to vary other parameters in the nested experimental design .our experiment differs most significantly from this previous study with the introduction of forced group trials and protocols for group action .the previous experiment allowed participants to probe the decisions of their neighbors over a structured social network . in this manner, individuals could gain information about the decisions of others and the remaining shelter space ; however , each participant acted alone in every trial .this allowed for the possibility of clustered evacuations , but did not force groups to act together . in the earlier study , participants spent most of their time on the broadcast information tab and relatively little time on the social information tab , aside from the start of the experiment where the available shelter space was displayed .this motivated our design of a group evacuation experiment to target the role of social information as a driver of decision making . individual personality , preferences , and risk perception have been shown to influence decision making in evacuation scenarios . 
in the current study , we quantified individual personality factors by training an artificial neural network containing a `` personality node '' on the data and correlated our model with personality factors determined from facebook activity . in contrast , the previous study employed written surveys to determine personality factors , specifically the big five inventory questionnaire , derived from the five - factor ( extraversion , agreeableness , conscientiousness , neuroticism , and openness ) model , and the domain - specific risk - attitude scale , a risk perception survey specific to six domains ( social , investment , gambling , health and safety , ethical , and recreational ) . the popularity of online social networks such as facebook and twitter has motivated the investigation of social network data to uncover predictive measures of individual traits . social network data provides a potentially more direct assessment of ongoing activity compared to self - reported preferences . furthermore , there is significantly more information in social network data ( which is often unavailable due to proprietary or privacy reasons ) which , when properly filtered , may further refine predictions of group relationships and individual personality differences . * methodological considerations . * we made multiple intentional choices in the presentation of broadcast information viewed by the participants . for instance , we displayed the threat level as a coarse - grained likelihood rather than a precise probability , since the presentation of probabilistic information has been shown to affect resultant decisions . we also represented risk information as losses rather than incentives to reflect the focus on loss of life or property rather than gains in real disaster situations . studies have shown that humans react differently to losses than to gains , assigning greater value to differences of equal magnitude in losses compared to gains . moreover , when performing incentivized tasks , individuals encode the potential loss due to failure rather than the potential gain arising from success , causing some individuals performance levels to decline as incentive levels increase . the behavioral distinction in loss - based versus gain - based experimental settings can be extended to a distinction between risk - based and profit - based scenarios . risk scenarios such as the natural disaster scenarios considered in this experiment also include other situations that directly endanger personal safety , such as battlefield or military scenarios , firefighting , and search and rescue missions . on the other hand , profit scenarios include group decision making in the workplace , community organizations and committees , or foraging ( in animal communities ) . we have chosen to ignore spatial variation in information arrival times , focusing instead on uniformly distributed information regarding a disaster that impacts or misses the entire community at once . this isolates key tensions between individual and group decision making in an otherwise homogeneous context . in the case of spatial search or monitoring , which is also observed in collective animal behavior ( e.g.
, fish schooling ) , increasing group size is beneficial as the group can cover more area , unless the area or another resource necessary to perform the action is limited . for instance , the collective navigation and environmental sensing performance of golden shiners increases monotonically with group size , since individual fish rely more strongly on observations of neighboring fish than on the environment itself . we also fixed the decision protocols ( ftg , ltg , and mv ) in order to isolate the effect of different protocols on behavior . a popular choice of decision protocol is majority - vote , which has been observed in both hunter - gatherer tribal societies and modern democracies . in laboratory settings , when groups are free to act according to any decision rule , groups tend to choose the majority - vote protocol if information broadcast is uniform across all group members . however , if information broadcast is not uniform , groups adjust their decision protocol , often following a `` majority with exceptions '' rule , where majority - vote is followed unless one or more group members has an opposing opinion with a high confidence level . in our experiment , participants are assigned to groups without their input . other types of `` forced '' groups that have been defined in advance include workplace teams and military units . however , individuals may still refuse to comply with the rest of the group or `` opt out '' of group action , e.g. , going awol from a military base . future experiments will explore the contrast between `` forced '' group behavior and the naturally - arising collective behavior of `` self - assembling '' groups of humans or animals ( e.g. , fish schools and bird flocks ) . the participants of this experiment were mostly undergraduate students , the majority of whom were male ( fig [ fig : facestats ] ) . evacuation decisions are highly influenced by age , socioeconomic status , geographic location , household size , and responsibility for children ( the presence of which tends to encourage evacuation ) or the elderly ( which discourages evacuation ) . risk perception also varies with age , cultural background , and experience . future studies will target participants from different demographic backgrounds in order to further probe the relationship between demographics and decision making . * concluding remarks . * in natural and man - made disasters , information on the progression and magnitude of an impending disaster is distributed through a combination of global broadcast networks ( e.g. , television , radio , and the internet ) and social networks ( e.g. , facebook , twitter , and text messaging ) . the former is generally associated with news organizations that present information in a formal , edited manner , while the latter is generally based upon informal direct observations made by individuals in the population .
increasingly , the speed of information flow over social networks outpaces traditional broadcast communications , creating new opportunities to inform the public at unprecedented rates .however , along with fast - paced updates , social media viewed as a disaster communication tool introduces new and potentially dangerous fragilities associated with asynchronous , non - uniform , and incorrect updates that are intrinsic to unfiltered , open - access communication , yet are currently not well characterized .future extensions of our experimental platform will address these issues by incorporating asynchronous information updates within the population , spread and detection of misinformation , self - assembly and disassembly of groups , and realistic topography and transportation routes in order to develop increasingly realistic scenarios for human decision modeling .although this paper details a relatively small - scale experiment in a controlled and simplified setting , our findings may be extended to guide the development of policies for collective action in more complex situations of stress and uncertainty .decisions to evacuate in the face of impending disaster may depend not only on the current weather forecast , for example , but also on the volatility of the weather conditions over the past several hours , as reflected by our model , which takes into account both current parameters and those within a recent time window , as well as other trial - dependent factors such as overall shelter capacity and group size .moreover , the prevalence of sub - optimal decisions and non - bayesian strategy adjustments over time observed in our experiments serves as a precautionary guide to the development of large - scale , agent - based simulations of evacuations .this paper also addresses the limitations of extending characteristics of individuals without consideration of social influence to predict the behavior of groups .our results indicate significant differences in behavior between group sizes and protocols , and our observation that individual decision making does not generally predict strategy or rank in group action mandates the incorporation of social factors in computational modeling and design of optimized infrastructure and action protocols in preparation for natural or man - made disasters .this work was supported by the david and lucile packard foundation , the national science foundation graduate research fellowship program under grant no .dge-1144085 , and the institute for collaborative biotechnologies through contract no .w911nf-09-d-0001 .bosse t , hoogendoom m , klein mca , treur j , van der wal cn , van wissen a. modelling collective decision making in groups and crowds : integrating social contagion and interacting emotions , beliefs , and intentions .auton agent multi - agent syst .2012;27 .schlesinger kj , nguyen c , ali i , carlson jm .collective decision dynamics in group evacuation : modeling tradeoffs and optimal behavior ; 2016 . preprint . available from : arxiv:1611.09767 .cited 30 november 2016 .gladwin h , peacock wg .warning and evacuation : a night for hard houses . in : peacock wg , gladwin h , morrow bh , editors .hurricane andrew : gender , ethnicity and the sociology of disasters .new york , n.y .: routledge ; 1997 .
identifying factors that affect human decision making and quantifying their influence remain essential and challenging tasks for the design and implementation of social and technological communication systems . we report results of a behavioral experiment involving decision making in the face of an impending natural disaster . in a controlled laboratory setting , we characterize individual and group evacuation decision making influenced by several key factors , including the likelihood of the disaster , available shelter capacity , group size , and group decision protocol . our results show that success in individual decision making is not a strong predictor of group performance . we use an artificial neural network trained on the collective behavior of subjects to predict individual and group outcomes . overall model accuracy increases with the inclusion of a subject - specific performance parameter based on laboratory trials that captures individual differences . in parallel , we demonstrate that the social media activity of individual subjects , specifically their facebook use , can be used to generate an alternative individual personality profile that leads to comparable model accuracy . quantitative characterization and prediction of collective decision making is crucial for the development of effective policies to guide the action of populations in the face of threat or uncertainty .
collective intelligences ( coins ) are large , sparsely connected recurrent neural networks , whose `` neurons '' are reinforcement learning ( rl ) algorithms .the distinguishing feature of coins is that their dynamics involves no centralized control , but only the collective effects of the individual neurons each modifying their behavior via their individual rl algorithms .this restriction holds even though the goal of the coin concerns the system s global behavior .one naturally - occurring coin is a human economy , where the `` neurons '' consist of individual humans trying to maximize their reward , and the `` goal '' , for example , can be viewed as having the overall system achieve high gross domestic product .this paper presents a preliminary investigation of designing and using artificial coins as controllers of distributed systems .the domain we consider is routing of internet traffic .the design of a coin starts with a global utility function specifying the desired global behavior .our task is to initialize and then update the neurons `` local '' utility functions , without centralized control , so that as the neurons improve their utilities , global utility also improves .( we may also wish to update the local topology of the coin . ) in particular , we need to ensure that the neurons do not `` frustrate '' each other as they attempt to increase their utilities .the rl algorithms at each neuron that aim to optimize that neuron s local utility are _microlearners_. the learning algorithms that update the neuron s utility functions are _macrolearners_. for robustness and breadth of applicability , we assume essentially no knowledge concerning the dynamics of the full system , i.e. , the macrolearning and/or microlearning must `` learn '' that dynamics , implicitly or otherwise .this rules out any approach that models the full system .it also means that rather than use domain knowledge to hand - craft the local utilities as is done in multi - agent systems , in coins the local utility functions must be automatically initialized and updated using only the provided global utility and ( locally ) observed dynamics .the problem of designing a coin has never previously been addressed in full hence the need for the new formalism described below .nonetheless , this problem is related to previous work in many fields : distributed artificial intelligence , multi - agent systems , computational ecologies , adaptive control , game theory , computational markets , markov decision theory , and ant - based optimization . for the particular problem of routing ,examples of relevant work include .most of that previous work uses microlearning to set the internal parameters of routers running conventional shortest path algorithms ( spas ) .however the microlearning occurs , they do not address the problem of ensuring that the associated local utilities do not cause the microlearners to work at cross purposes .this paper concentrates on coin - based setting of local utilities rather than macrolearning .we used simulations to compare three algorithms .the first two are an spa and a coin . 
both had `` full knowledge '' ( fk ) of the true reward - maximizing path , with reward being the routing time of the associated router s packets for the spas , but set by coin theory for the coins . the third algorithm was a coin using a memory - based ( mb ) microlearner whose knowledge was limited to local observations . the performance of the fk coin was the theoretical optimum . the performance of the fk spa was worse than optimum . despite limited knowledge , the mb coin outperformed the fk spa , achieving performance closer to optimum . note that the performance of the fk spa is an upper bound on the performance of any rl - based spa . accordingly , the performance of the mb coin is at least superior to that of any rl - based spa . section [ sec : math ] below presents a cursory overview of the mathematics behind coins . section [ sec : route ] discusses how the network routing problem is mapped into the coin formalism , and introduces our experiments . section [ sec : res ] presents results of those experiments , which establish the power of coins in the context of routing problems . finally , section [ sec : disc ] presents conclusions and summarizes future research directions . the mathematical framework for coins is quite extensive . this paper concentrates on four of the concepts from that framework : subworlds , factored systems , constraint - alignment , and the wonderful - life utility function . we consider the state of the system across a set of discrete , consecutive time steps . all characteristics of a neuron at a given time step - including its internal parameters at that time as well as its externally visible actions - are encapsulated in a real - valued vector , which we call the `` state '' of that neuron at that time ; the collection of such vectors over all neurons and all time steps is the state of the full system . world utility is a function of the state of all neurons across all time , potentially not expressible as a discounted sum . a subworld is a set of neurons . all neurons in the same subworld share the same _ subworld utility function _ , which is likewise a function of the state of all neurons across all time . so when each subworld is a set of neurons that have the most effect on each other , neurons are unlikely to work at cross - purposes : all neurons that affect each other substantially share the same local utility . associated with subworlds is the concept of a ( perfectly ) _ constraint - aligned _ system . in such systems any change to the neurons in a subworld at time 0 will have no effects on the neurons outside of that subworld at times later than 0 . intuitively , a system is constraint - aligned if the neurons in separate subworlds do not affect each other directly , so that the rationale behind the use of subworlds holds . a _ subworld - factored _ system is one where for each subworld considered by itself , a change at time 0 to the states of the neurons in that subworld results in an increased value for that subworld s utility if and only if it results in an increased value for the world utility . for a subworld - factored system , the side effects on the rest of the system of a subworld increasing its own utility ( which perhaps decrease other subworlds utilities ) do not end up decreasing world utility . for these systems , the separate subworlds successfully pursuing their separate goals do not frustrate each other as far as world utility is concerned . the desideratum of subworld - factoredness is carefully crafted . in particular , it does _ not _ concern changes in the value of the utility of subworlds other than the one changing its actions . nor does it concern changes to the states of neurons in more than one subworld at once .
indeed , consider the following alternative desideratum : any change to the state of the entire system that improves all subworld utilities simultaneously also improves world utility .reasonable as it may appear , one can construct examples of systems that obey this desideratum and yet quickly evolve to a _minimum _ of world utility .it can be proven that for a subworld - factored system , when each of the neurons reinforcement learning algorithms are performing as well as they can , given each others behavior , world utility is at a critical point .correct global behavior corresponds to learners reaching a ( nash ) equilibrium .there can be no tragedy of the commons for a subworld - factored system .let be defined as the vector modified by clamping the states of all neurons in subworld , across all time , to an arbitrary fixed value , here taken to be 0 .the _ wonderful life subworld utility _ ( wlu ) is : when the system is constraint - aligned , so that , loosely speaking , subworld s `` absence '' would not affect the rest of the system , we can view the wlu as analogous to the change in world utility that would have arisen if subworld `` had never existed '' .( hence the name of this utility - cf . the frank capra movie . )note however , that is a purely mathematical operation .indeed , no assumption is even being made that is consistent with the dynamics of the system .the sequence of states the neurons in are clamped to in the definition of the wlu need not be consistent with the dynamical laws of the system .this dynamics - independence is a crucial strength of the wlu .it means that to evaluate the wlu we do _ not _ try to infer how the system would have evolved if all neurons in were set to 0 at time 0 and the system evolved from there .so long as we know extending over all time , and so long as we know , we know the value of wlu .this is true even if we know nothing of the dynamics of the system .in addition to assuring the correct equilibrium behavior , there exist many other theoretical advantages to having a system be subworld - factored .in particular , the experiments in this paper revolve around the following fact : a constraint - aligned system with wonderful life subworld utilities is subworld - factored . combining this with our previous result that subworld - factored systems are at nash equilibrium at critical points of world utility , this result leads us to expect that a constraint - aligned system using wl utilities in the microlearning will approach near - optimal values of the world utility .no such assurances accrue to wl utilities if the system is not constraint - aligned however . accordinglyour experiments constitute an investigation of how well a particular system performs when wl utilities are used but little attention is paid to ensuring that the system is constraint - aligned .in our experiments we concentrated on the two networks in figure [ fig : net ] , both slightly larger than those in . to facilitate the analysis , traffic originated only at routers indicated with white boxes and had only the routers indicated by dark boxes as ultimate destinations .note that in both networks there is a bottleneck at router 2 . as is standard in much of traffic network analysis , at any timeall _ traffic _ at a router is a real - valued number together with an ultimate destination tag . at each timestep , each router sums all traffic received from upstream routers in this timestep , to get a _load_. 
the router then decides which downstream router to send its load to , and the cycle repeats .a running average is kept of the total value of each router s load over a window of the previous timesteps .this average is run through a _load - to - delay _ function , , to get the summed _ delay_ accrued at this timestep by all those packets traversing this router at this timestep .different routers had different , to reflect the fact that real networks have differences in router software and hardware ( response time , queue length , processing speed etc ) . in our experiments for routers 1 and 3 , and for router 2 , for both networks .the global goal is to minimize total delay encountered by all traffic . in terms of the coin formalism, we identified the neurons as individual pairs of routers and ultimate destinations .so was the vector of traffic sent along all links exiting s router , tagged for s ultimate destination , at time .each subworld consisted of the set all neurons that shared a particular ultimate destination . in the spa each node tries to set to minimize the sum of the delays to be accrued by that traffic on the way to its ultimate destination .in contrast , in a coin tries to set to optimize for the subworld containing . for both algorithms ,`` full knowledge '' means that at time all of the routers know the window - averaged loads for all routers for time , and assume that those values will be the same at . for large enough , this assumption will be arbitrarily good , and therefore will allow the routers to make arbitrarily accurate estimates of how best to route their traffic , according to their respective routing criteria .in contrast , having limited knowledge , the mb coin could only _ predict _ the wlu value resulting from each routing decision .more precisely , for each router - ultimate - destination pair , the associated microlearner estimates the map from traffic on all outgoing links ( the inputs ) to wlu - based reward ( the outputs see below ) .this was done with a single - nearest - neighbor algorithm .next , each router could send the packets along the path that results in outbound traffic with the best ( estimated ) reward .however to be conservative , in these experiments we instead had the router randomly select between that path and the path selected by the fk spa .the load at router at time is determined by .accordingly , we can encapsulate the load - to - delay functions at the nodes by writing the delay at node at time as . in our experimentsworld utility was the total delay , i.e. , .so using the wlu , , where $ ] . at each time ,the mb coin used as the `` wlu - based '' reward signal for trying optimize this full wlu . in the mb coin , evaluating this reward in a decentralized fashion was straight - forward .all packets have a header containing a running sum of the s encountered in all the routers it has traversed so far .each ultimate destination sums all such headers it received and echoes that sum back to all routers that had routed to it . in this wayeach neuron is apprised of the wlu - based reward of its subworld .the networks discussed above were tested under light , medium and heavy traffic loads .table [ tab : traffic ] shows the associated destinations ( cf .1 ) . 
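to make the preceding description concrete, the following sketch simulates this kind of traffic model and evaluates the wlu-based reward of a subworld by clamping its traffic to zero, in the spirit of the wonderful life utility defined earlier. it is only an illustration: the load-to-delay curves, the window size, the topology and all function names are placeholders of ours, not the ones used in the paper.

```python
import numpy as np

# hypothetical load-to-delay curves; the paper's actual functions for routers
# 1, 3 and the bottleneck router 2 are not reproduced in the text above.
LOAD_TO_DELAY = {1: lambda x: x ** 3, 2: lambda x: 10.0 * x, 3: lambda x: x ** 3}
NEURON_ROUTER = {0: 1, 1: 2, 2: 2, 3: 3}   # neuron = (router, destination) pair
WINDOW = 50                                # running-average window for the load

def total_delay(traffic):
    """summed delay over all routers and timesteps (world utility up to sign).
    traffic[t, i] = load that neuron i places on its router at timestep t."""
    n_steps, n_neurons = traffic.shape
    delay = 0.0
    for router, curve in LOAD_TO_DELAY.items():
        cols = [i for i in range(n_neurons) if NEURON_ROUTER[i] == router]
        load = traffic[:, cols].sum(axis=1)
        for t in range(n_steps):
            windowed = load[max(0, t - WINDOW + 1):t + 1].mean()
            delay += curve(windowed)
    return delay

def wlu_reward(traffic, subworld):
    """wonderful-life utility of `subworld`: G(zeta) - G(zeta with the subworld
    clamped to 0); the clamping is purely arithmetic, nothing is re-simulated."""
    clamped = traffic.copy()
    clamped[:, subworld] = 0.0
    return -total_delay(traffic) + total_delay(clamped)

rng = np.random.default_rng(0)
zeta = rng.random((200, 4))                # 200 timesteps, 4 neurons
print(total_delay(zeta), wlu_reward(zeta, subworld=[0, 1]))
```

in a memory-based coin of the kind described above, each microlearner would regress this reward signal on its own outgoing traffic (for instance with a single-nearest-neighbor rule) and use the estimate to pick its routing decision.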
[ table tab:traffic : source - destination pairings for the three traffic loads . ]

[ table tab:results . ]

table [ tab : results ] provides two important observations . first , the wlu - based coin outperformed the spa when both have full knowledge , thereby demonstrating the superiority of the new routing strategy . by not having its routers greedily strive for the shortest paths for their packets , the coin settles into a more desirable state that reduces the average total delay for _ all _ packets . second , even when the wlu is estimated through a memory - based learner ( using only information available to the local routers ) , the performance of the coin still surpasses that of the fk spa . this result not only establishes the feasibility of coin - based routers , but also demonstrates that for this task coins will outperform _ any _ algorithm that can only estimate the shortest path , since the performance of the fk spa is a ceiling on the performance of any such rl - based spa . figure [ fig : res ] shows how total delay varies with time for the medium traffic regime ( each plot is based on runs ) . the `` ringing '' is an artifact caused by the starting conditions and the window size . note that for both networks the fk coin not only provides the shortest delays , but also settles into that solution very rapidly .

[ figure fig:res : total delay as a function of time for the medium traffic regime . ]

many distributed computational tasks are naturally addressed as recurrent neural networks of reinforcement learning algorithms ( i.e. , coins ) . the difficulty in doing so is ensuring that , despite the absence of centralized communication and control , the reward functions of the separate neurons work in synchrony to foster good global performance , rather than cause their associated neurons to work at cross - purposes . the mathematical framework synopsized in this paper is a theoretical solution to this difficulty . to assess its real - world applicability , we employed it to design a full - knowledge ( fk ) coin as well as a memory - based ( rl - based ) coin , for the task of packet routing on a network . we compared the performance of those algorithms to that of a fk shortest - path algorithm ( spa ) . not only did the fk coin beat the fk spa , but the memory - based coin , despite having only limited knowledge , also beat the full - knowledge spa . this latter result is all the more remarkable in that the performance of the fk spa is an upper bound on the performance of previously investigated rl - based routing schemes , which use the rl to try to provide accurate knowledge to an spa . there are many directions for future work on coins , even restricting attention to the domain of packet routing . within that particular domain , we are currently extending our experiments to larger networks , using industrial event - driven network simulators . concurrently , we are investigating the use of macrolearning for coin - based packet routing , i.e. , the run - time modification of the neurons ' utility functions to improve the subworld - factoredness of the coin .

j. boyan and m. littman . packet routing in dynamically changing networks : a reinforcement learning approach . in _ advances in neural information processing systems 6 _ , pages 671 - 678 . morgan kaufmann , 1994 .
s. p. m. choi and d. y. yeung . predictive q - routing : a memory - based reinforcement learning approach to adaptive traffic control . in _ advances in neural information processing systems 8 _ , pages 945 - 951 . mit press , 1996 .
p. marbach , o. mihatsch , m. schulte , and j. tsitsiklis . reinforcement learning for call admission control and routing in integrated service networks . in _ advances in neural information processing systems 10 _ , pages 922 - 928 . mit press , 1998 .
d. subramanian , p. druschel , and j. chen . ants and reinforcement learning : a case study in routing in dynamic networks . in _ proceedings of the fifteenth international conference on artificial intelligence _ , pages 832 - 838 , 1997 .
a collective intelligence ( coin ) is a set of interacting reinforcement learning ( rl ) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function . we summarize the theory of coins , then present experiments using that theory to design coins to control internet traffic routing . these experiments indicate that coins outperform all previously investigated rl - based , shortest path routing algorithms .
the aim of this paper is to find numerical solutions of mountain pass type for the problem for and .this is the equation of standing waves of the quasi - linear schrdinger equation where stands for the imaginary unit and the unknown is a complex valued function .various physically relevant situations are described by this quasi - linear equation .for example , it is used in plasma physics and fluid mechanics , in the theory of heisenberg ferromagnets and magnons and in condensed matter theory , see e.g. the bibliography of and the references therein .the mountain pass solutions of the semi - linear equation , corresponding to ground states for the schrdinger equation have been extensively studied in the last decades , both analytically and numerically . on the numerical side ,the typical tool is the so - called mountain pass algorithm , originally implemented by choi and mckenna ( see also for a different approach ) .this works nicely under the assumption that the functional associated with the problem is of class on a hilbert space and it satisfies suitable geometrical assumptions .now , we observe that the functional naturally but only formally associated with is in fact , taking as with , then the functional is not even well defined , as it might be the case that it assumes the value in the range . in two dimensions , with , it is well defined from to but it is merely lower semi - continuous .if , instead , stands for the set of such that , then it follows that is well defined from to , for every choice . from the physical viewpoint defining in this way makes it a natural choice for the energy space . on the other hand , for initial data in this space , currently there is no well - posedness result for problem ( see the discussion in ) .furthermore , is not even a vector space , although can be regarded as a complete metric space endowed with distance the and it turns out that is continuous over suggesting that a possible approach to the study of problem could be to exploit the ( metric ) critical point theory developed in . nevertheless , this continuity property is not enough to establish the convergence of a traditional mountain pass algorithm . actually , it is not even clear what a satisfying gradient for is .let us emphasize that we are interested in the convergence of the algorithm in infinite dimensional spaces which ensures the convergence of the discretized problem at a rate which does not blow - up when the mesh used for approximation becomes finer .hence , in conclusion , neither the mountain pass algorithm can be directly applied for the numerical computation of some solution of nor , to our knowledge , the current literature ( see e.g. 
and the references therein ) contains suitable generalization of the mountain pass algorithm to the case of non - smooth functionals , except the case of locally lipschitz functional , which unfortunately are incompatible with the regularity available for our framework .on the other hand in , in order to find the existence and qualitative properties of the solutions to , a change of variable procedure was performed to relate the solutions to with the solutions to an associated semi - linear problem where the function is the unique solution to the cauchy problem more precisely , is a smooth solution ( say ) to if and only if is a smooth solution to , that is a critical point of the -functional defined by in addition , as we shall see , the mountain pass values and the least energy values of these functionals correspond through the function .now , by applying the mountain pass algorithm to , we can find a mountain pass solution of .then will be a mountain pass solution of the original problem with a reasonable control on numerical errors .we mention that , at least in the case of large constant potentials and for a general class of quasi - linear problems , there are some uniqueness results for the solutions , see e.g. .the article is organized as follows . in section [ sec2 ], we will establish the equivalence between ground state solutions for and those for .section [ sec : conv - mp ] will recall the mountain pass algorithm and discuss assumptions under which its convergence is guaranteed for our problem . in section [ num ] , we will present our numerical investigations .we finish by a conclusion giving some conjectures and outlining future work ._ notation & terminology ._ we will denote the image of a function i.e. , .the notation stands for the topological closure of the set .the norm in the lebesgue spaces will be written ( will be dropped if clear from the context ) .we will call a ( nonlinear ) _ projector _ any function that is idempotent i.e. , any function such that for all .we noticed in the introduction that , thanks to the change of unknown , solutions to correspond to solutions to where ( see e.g. ) . in this section , we want to show that , in addition , if is at the mountain pass level , then is at the mountain pass level too in a suitable functional setting , see formula .hence , numerically computing a mountain pass solution up to a certain error , yields a mountain pass solution to the original problem up to a certain error ( involving also the error due to the numerical calculation of the solution to the cauchy problem in ) . in order to prove this , notice that defined by reads where we have set with defined by .notice first that , if where , then .furthermore , it follows that where is the action defined by .indeed , we have thanks to the cauchy problem . [ correspondence mp ]let be a mountain pass solution to problem , that is }{\mathcal t}(\gamma(t)),\qquad \gamma_{\mathcal t}=\big\{\gamma\in c([0,1],h^1({{\mathbb r}}^n ) ) : \gamma(0)=0,\ { \mathcal t}(\gamma(1))<0\big\}.\ ] ] then is a mountain pass solution to problem , that is }{\mathcal e}(\eta(t)),\qquad \gamma_{\mathcal e}=\big\{\eta\in c([0,1],x ) : \gamma(0)=0,\ { \mathcal e}(\gamma(1))<0\big\}.\ ] ] furthermore , is also a least energy solution to for the energy ( i.e. 
, achieving the infimum of on non - trivial solutions to ) .it is readily seen that if is a solution to , then is a solution to .setting now , on account of , we have }{\mathcal t}(\gamma(t ) ) = \mathop{\inf\vphantom{\sup}}_{\tilde\gamma\in\tilde\gamma_{\mathcal t}\ , } \sup_{t\in [ 0,1]}{\mathcal e}(\tilde\gamma(t)).\ ] ] on the other hand , we will show that , yielding assertion .in fact , let and consider .then , with ,h^1({{\mathbb r}}^n)\bigr) ] , and . by theorem 0.2 of , is a least energy solution . in turn , by step ii in the proof of theorem 1.3 of the last assertion follows .as we will see in the next section , in our case ( where is defined by ) is increasing on and therefore mountain pass solutions can be characterized by instead of .the numerical algorithm described below finds a local minimum of but there is no absolute guarantee that is a mountain pass solution .nonetheless , is a `` saddle point of mountain pass type '' ( * ? ? ?* definition 1.2 ) i.e. , there exists an open neighborhood of such that lies in the closure of two path - connected components of . since the map is an homeomorphism and , the same characterization holds for . in conclusion, the above discussion can be thought as an extension of the correspondence of proposition [ correspondence mp ] to saddle point solutions .let be a hilbert space with norm , a closed subspace of , a -functional and a peak selection for relative to , i.e. , a function such that , for any , is a local maximum point of on and .the mountain pass algorithm ( mpa ) uses to perform a constrained steepest descent search in order to find critical points of .the version we use in this paper is a slightly modified version of the mpa introduced by y. li and j. zhou which in turn is based on the pioneer work of y. s. choi and p. j. mckenna .let us first recall the version of y. li and j. zhou .[ mpa ] 1 .choose an initial guess , a tolerance and let ; 2 .if then stop ; + otherwise , compute for some satisfying the armijo s type stepsize condition 3 .let and go to step .conditions under which the sequence generated by the mpa converges , up to a subsequence , to a critical point of on have been studied in the case . in ,n. tacheny and c. troestler proved that , if an additional metric projection on the cone of non - negative functions is performed at each step of the algorithm , the mpa still converges ( at least up to a subsequence ) and that the limit of the generated sequence is guaranteed to be a non - negative solution . in that paper , the authors studied a variant of the above algorithm were the update at each step is given by )\bigr ) \quad\text{where } u_n[s ] : = u_n - s \frac{\nabla \mathcal{t}(u_n ) } { { \mathopen\vert\nabla \mathcal{t}(u_n)\mathclose\vert}},\ ] ] for some with being the metric projector on the cone .notice that , with this formulation , the projector only needs to be defined on the cone .the set of admissible stepsizes is defined as follows : first let \ne 0 \hspace{0.5em}\text{and}\hspace{0.5em } \mathcal{t}\bigl(pp_k(u_n[s])\bigr ) - \mathcal{t}(u_n ) <-\tfrac{1}{2}s { \mathopen\vert\nabla \mathcal{t}(u_n)\mathclose\vert } \bigr\}\ ] ] and then define . note that the right hand side of the inequality to satisfy does not depend on . under the following assumptions on the action functional and the peak selection : 1 . is well defined and continuous ; 2 . ; 3 . ; 4 . as ; they prove that the algorithm generates a palais - smale sequence in . 
roughly , using a numerical deformation lemma, they show that a step exists and that it can be chosen in a `` locally uniform '' way .the trick is to avoid being arbitrarily close to without being `` mandated '' by the functional .let us remark that the definition of is natural ( but not the only possible one ) to force this stepsize not to be `` too small '' .then , under some additional compactness condition ( e.g. the palais - smale condition or a concentration compactness result ) , they establish the convergence up to a subsequence .if furthermore the solution is isolated in some sense , the convergence of the whole sequence is proved .let us also mention that , recently , inspired from the theoretical existence result of a. szulkin and t. weth , the convergence of the sequence generated by the mountain pass algorithm has been proved ( in some cases , up to a subsequence ) even when and ( see ) .let us come back to our problem and verify that assumptions ( p1)(p4 ) hold for the functional given by and the projection defined as follows : is the unique maximum point of on the half line .the autonomous case was studied by m. colin and l. jeanjean who proved the existence of a positive solution under the assumptions 1 . is locally hlder continuous on ; 2 . ; 3 . when or , + for any , there exists such that , for all , when ; 4 . such that where when ( or , and and when ) . in our case, this situation corresponds to constant and where assumptions ( g0)(g3 ) are verified when .let us now work with the equivalent problem where is defined as in the introduction and verifies ( g0)(g3 ) .let us call .recalling that when and when , ( g1 ) implies that verifies .moreover , as a consequence of ( g2 ) , is subcritical with respect to the critical sobolev exponent .therefore is well defined on and is a local minimum of .this establishes ( p2)(p3 ) . because of ( g3 ) there exists a such that but , in order to have ( p1 ) and ( p4 ) , we need to replace ( g3 ) with the following stronger assumptions ( see ) : 1 . when where ; 2 . is increasing on .let us remark that when the domain is bounded ( as in numerical experiments ) , it is enough to require instead of .so we can work with in this case . for our problem with constant , it is easy to see that ( g4 ) is satisfied .let us now show that property ( g5 ) holds for .first , let us remark that the derivative of is a positive function times for , this quantity becomes which is negative because ( see e.g. ( * ? ? ?* lemma 2.2 ) ) .thus the map is increasing . as a consequence ,it remains to prove that is increasing when .for this , it is enough to show that is positive for . using the inequality ( see e.g. ( * ? ? ?* lemma 2.2 ) ) , one readily proves the assertion .the above arguments imply that the mountain pass algorithm applied to generates a palais - smale sequence when .numerically , it is natural to `` approximate '' the entire space by large bounded domains that are symmetric around . 
in the numerical experiments, we will consider .on , the palais - smale condition holds and consequently the mpa converges up to a subsequence .this approximation is reasonable for two reasons .first , the solution on goes exponentially fast to when .second , if we consider a family of solutions on which is bounded and stays away from and we extend by on , we will now sketch an argument showing that converges up to a subsequence to a non - trivial solution on ( see also where authors prove that , for some semilinear elliptic equations , ground state solutions on large domains weakly converge to a solution on ) .the boundedness ensures that , taking if necessary a subsequence , . at this point, it may well be that .nevertheless , e. lieb proved that if a family of functions is bounded away from zero in then there exists at least one family such that up to a subsequence . to conclude that , it suffices for example to pick up so that . intuitively , the role of is to bring back ( some of ) the mass that may loose at infinity .it is then classical to prove that is a non - trivial solution on .if moreover is a family of ground states , no mass can be lost at infinity and so . in this case , is a ground state of the problem on .of course , if is bounded in then weakly converges to a translation of for any bounded family .in particular , for , weakly converges , up to a subsequence , to a non - trivial solution . by regularity , in up to a subsequence . as the moving plane method can be applied to the autonomous case ( see ) , the positive solutions are even w.r.t.each hyperplane and are decreasing in each direction from to .as is bounded away from , the mass of is located around .thus converges up to a subsequence to a non - zero positive solution of on .moreover , it is expected that the ground state solutions are unique , up to a translation , which would imply the convergence of the whole family .this uniqueness result has been proved for sufficiently large ( see ) .the above reasoning can be applied to the sequence generated by the mpa . in section [ num ], the numerical experiments will provide bounded families of positive ( approximate ) solutions on which are bounded away from zero ( when ) .thus they converge to a solution on as . for the non - autonomous case ( non - constant ) , m. colin and l. jeanjean proved the existence of a positive solution for the equation on when 1 .there exists such that on ; 2 . and on ; 3 . and is continuous ; 4 . ; 5 .there exists when or if and such that for any . such that for any where .these assumptions are clearly satisfied for our model nonlinearity with . asbefore , to prove that possesses the properties ( p1)(p4 ) , we need to replace ( g4 ) with a slightly stronger assumption , namely that is increasing on for almost every where is defined by . essentially repeating the arguments that we developed for the autonomous case, one can show that this assumption is statisfied for our model problem .as a consequence , the mpa generates a palais - smale sequence . since the palais - smale condition holds for the ground state level ( even on when is non - constant ) , the mpa sequence converges up to a subsequence to a solution of . concerning the approximation of by large domains, we can argue similarly to the autonomous case .numerically ( see section [ num : non - constant v ] ) , is bounded away from zero and the peaks of are located around local minimums of .so , the entire mass of is not going to infinity . 
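before turning to the numerical experiments, here is a minimal finite-dimensional sketch of the mountain pass algorithm described above: the peak selection p(v) maximizes the functional along the half line through v, and each iteration performs a normalized steepest-descent step followed by re-projection, accepted under the armijo-type condition recalled earlier. the toy functional, the parameters and all names are illustrative only; they are not the discretized energy used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def peak_selection(T, v, t_max=100.0):
    """p(v) = t_v * v, where t_v maximizes t -> T(t v) on the half line t > 0."""
    res = minimize_scalar(lambda t: -T(t * v), bounds=(1e-8, t_max), method="bounded")
    return res.x * v

def mountain_pass(T, grad_T, u0, tol=1e-8, s0=1.0, max_iter=1000):
    """constrained steepest descent on the peak-selection set (Li-Zhou type MPA)."""
    u = peak_selection(T, u0)
    for _ in range(max_iter):
        g = grad_T(u)
        ng = np.linalg.norm(g)
        if ng < tol:                              # stopping criterion on the gradient
            break
        s = s0
        while s > 1e-14:                          # Armijo-type backtracking
            trial = peak_selection(T, u - s * g / ng)
            if T(trial) - T(u) < -0.5 * s * ng:   # accept: sufficient decrease
                u = trial
                break
            s /= 2.0
        else:                                     # no admissible step found
            break
    return u

# toy functional with mountain-pass geometry; its nonzero critical point of
# lowest energy is (1, 0), at level 1/4.
T = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2) - 0.25 * (x[0] ** 4 + 0.5 * x[1] ** 4)
grad_T = lambda x: np.array([x[0] - x[0] ** 3, x[1] - 0.5 * x[1] ** 3])
print(mountain_pass(T, grad_T, np.array([0.3, 0.1])))
```

on the discretized problem, T and grad_T would be replaced by the finite element energy and its gradient, and the descent would be combined with the projection on the cone of non-negative functions, as in the variant discussed above.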
thus , as in the autonomous case , in with being a non - trivial solution on .in this section , we compute ground state solutions to problem using algorithm [ mpa ] with the update step ( instead of ) on problem .the algorithm relies at each step on the finite element method .let us remark that approximations are saddle points of the functional but , as the algorithm is a constrained steepest descent method , it is not guaranteed that they are gound state solutions . nevertheless , no non - trivial solutions with lower energy have been found numerically .let us now give more details on the computation of various objects intervening in the procedure . as we already motivated above , the numerical algorithm will seek solutions to on a `` large '' domain with zero dirichlet boundary conditions instead of the whole space .functions of will be approximated using -finite elements on a delaunay triangulation of generated by triangle .the matrix of the quadratic form is readily evaluated on the finite elements basis .a quadratic integration formula on each triangle is used to compute . the function is approximated using a standard adaptive ode solver .the gradient is computed in the usual way : the function is the solution of the equation , ] ( alternatively , one could seek such that = 0 ] .this choice guarantees that no arbitrary small steps are taken unless required by the functional geometry .the starting function for the mpa is always .the program stops when the gradient of the energy functional at the approximation has a norm less than .an simple adaptive mesh refinement is performed during the mpa iterations in order to increase accuracy while keeping the cost reasonable .the approximate solution is then further improved using a few iterations of newton s method . to start , we study the problem in the open bounded domain . running the mountain pass algorithm with and , we get the results presented in fig .[ fig : square , v=0,p=4 ] and table [ tablevalues , v=0,p=4 ] .these experiments suggest that the solutions go to as , that is when balls become larger .this is not surprising .indeed , for , the classical pohzaev identity for the semi - linear problem on reads { \,{\mathrm d}}x=0\ ] ] where is the primitive of with . in our case , is given by eq . with and thus .hence for instance for , the condition becomes thus or , equivalently , .we also repeat the experiments with the larger exponent i.e. , we consider the problem : the same conclusions can be drawn in this case . 
[ figure fig:square,v=0,p=4 : computed solutions on b(0,r ) for r = 1 , 5 , 10 , 30 ( p = 4 , v = 0 ) . ]

table tablevalues,v=0,p=4 ( p = 4 , v = 0 ) :

              r = 1     r = 5     r = 10    r = 30
  v           415.8     2.97      1.17      0.45
  v_l^2       875       5.89      2.19      0.80
  u           24.2      1.77      0.94      0.42
  t(v)=e(u)   78148     6.75      1.19      0.18
  (v)         8.2e-8    1.4e-11   1.6e-10   9.6e-10

[ figure : computed solutions on b(0,r ) for r = 1 , 5 , 10 , 30 for the larger exponent ( v = 0 ) . ]

table ( larger exponent , v = 0 ) :

              r = 1     r = 5     r = 10    r = 30
  v           10.10     2.34      1.46      0.80
  v_l^2       18.96     4.15      2.52      1.32
  u           3.59      1.52      1.11      0.70
  t(v)=e(u)   87.5      5.00      1.97      0.58
  (v)         2e-8      4.2e-11   3e-8      2e-9

then , for in the range , see fig . [ comparison , v=10 ] . this time , when , the solutions no longer vanish but seem to converge to a non - trivial positive solution on . this is what is expected in view of the argumentation in section [ sec : conv - mp ] . we conclude this section by examining the behavior of the solutions as the potential goes to . figure [ v->0,p=4 ] depicts the energy of the solution obtained by the mountain pass algorithm on with for various values of . it clearly suggests that as , indicating that the ground - state solutions bifurcate from as approaches the bottom of the spectrum .
[ figure : computed solutions on b(0,r ) for r = 1 , 5 , 10 , 30 ( p = 4 , v = 10 ) . ]

table tablevalues,v=10,p=4 ( p = 4 , v = 10 ) :

              r = 1     r = 5     r = 10    r = 30
  v           417       11.6      11.6      11.6
  v_l^2       877       20.9      20.8      20.8
  u           24.24     3.87      3.87      3.87
  t(v)=e(u)   79183     217       217       216.7
  (v)         2e-8      4e-10     2e-8      1.6e-8

[ figure : computed solutions on b(0,r ) for r = 1 , 5 , 10 , 30 ( p = 6 , v = 10 ) . ]

table tablevalues,v=10,p=6 ( p = 6 , v = 10 ) :

              r = 1     r = 5     r = 10    r = 30
  v           10.57     5.98      5.98      5.98
  v_l^2       19.56     9.73      9.73      9.72
  u           3.68      2.68      2.68      2.68
  t(v)=e(u)   105.1     47.3      47.3      47.3
  (v)         7e-12     8.8e-9    5.6e-8    1.2e-8

[ figure comparison,v=10 . ]

[ figure v->0,p=4 : energy of the computed solution for various values of the potential . ]

since our numerical experiments are performed in two dimensions , it is natural to wonder what shape the solutions are expected to have when the nonlinearity has exponential growth . to that aim , we consider in this section the equation . the outcome of the numerical computations is presented on figures [ fig : square , v=0,exp ] , [ fig : square , v=10,exp ] and tables [ tablevalues , v=0,exp ] , [ tablevalues , v=10,exp ] . unsurprisingly , one may see that the qualitative behavior of the solutions is similar to what could be observed before with power nonlinearities .
[ figure fig:square,v=0,exp : computed solutions on b(0,r ) for r = 1 , 5 , 10 , 30 ( exponential nonlinearity , v = 0 ) . ]

table tablevalues,v=0,exp ( exponential nonlinearity , v = 0 ) :

              r = 1     r = 5     r = 10    r = 30
  v           7.6       1.85      0.31      0.03
  v_l^2       11        3.76      0.63      0.065
  u           3.1       1.30      0.30      0.03
  t(v)=e(u)   44.4      2.02      0.06      0.0007
  (v)         3e-8      7e-11     1.7e-7    1.3e-9

[ figure fig:square,v=10,exp : computed solutions on b(0,r ) for r = 1 , 5 , 10 , 30 ( exponential nonlinearity , v = 10 ) . ]

table tablevalues,v=10,exp ( exponential nonlinearity , v = 10 ) :

              r = 1     r = 5     r = 10    r = 30
  v           8.04      6.63      6.63      6.62
  v_l^2       10.98     9.10      9.10      9.10
  u           3.17      2.84      2.84      2.84
  t(v)=e(u)   50.60     41.47     41.45     41.40
  (v)         3.6e-7    8.6e-8    8.5e-9    9e-3

to conclude this investigation , let us examine the case of a variable potential . e. gloss in dimension and j. m. do ó and u. severo for showed that the equation possesses solutions concentrating around local minima of the potential when . when the potential is constant , this corresponds to and , for a positive potential , the above graphs show a concentration around the origin on a bounded domain ( which suggests that any point will do on ) . in this section we have considered on with and the double well potential where and . it is pictured on fig . [ fig : square , vx , p=4 ] . note that the two wells have different depths . for , the mpa returns two different solutions ( see fig . [ fig : square , vx , p=4 ] ) depending on the initial function . ( we have chosen not to display the function as its shape is similar to the one of . ) the one with the lower value of ( hence ) is the solution located around the lower well of . it is obtained by using the mpa with the same initial guess as before . for the right one , the mpa is applied starting with a function localized at the other well . for larger , such as , the mpa returns only one solution with a maximum point not too far from the point at which achieves its global minimum , see the rightmost graph on fig . [ fig : square , vx , p=4 ] ( the graph depicted is the outcome of the mpa with the usual initial function , but even using the function as a starting point gives the same output ) . for even larger , such as , the solution is very similar to the one for displayed on fig . [ fig : square , v=10,p=4 ] .

[ figure fig:square,vx,p=4 : the double well potential and the corresponding solutions . ]
in this paper , we tackled the computation of ground state solutions for a class of quasi - linear schrödinger equations which are naturally associated with a non - smooth functional . a change of variable was used to overcome the lack of regularity , and a mountain pass algorithm was applied to the resulting functional to compute saddle point solutions . in the autonomous case , we outlined arguments and saw in the above numerical computations that the numerical solution on converges to a radially symmetric solution on the whole space as . the existence of was proved by colin and jeanjean . the fact that the same numerical solution is found with many different initial guesses suggests that the ground state solution on is unique , a fact that was proved for large . the numerical computations also suggest that the set of solutions bifurcates from as . for the case of a variable potential , the numerical computations exhibited the existence of several solutions of mountain pass type which are local minima of the functional on the peak selection set . the asymptotic profile of these solutions seems to be radial , as is expected from the theoretical results on for . interestingly , the multiplicity of positive solutions does not seem to persist when grows larger . whether the multiple solutions come through a bifurcation from the branch of ground state solutions as goes to may be the subject of future investigations .
we discuss the application of the mountain pass algorithm to the so - called quasi - linear schrdinger equation , which is naturally associated with a class of nonsmooth functionals so that the classical algorithm can not directly be used . a change of variable allows us to deal with the lack of regularity . we establish the convergence of a mountain pass algorithm in this setting . some numerical experiments are also performed and lead to some conjectures .
some of the hubble space telescope s most ground - braking discoveries have been about black holes , which are arguably the most extreme and mysterious objects in the universe . as a result, black holes appeal to a general audience in a way that almost no other scientific subject does .unfortunately , most people have little idea of what black holes actually are , and they are more likely to associate them with science fiction than with science .these circumstances recommend black holes as a natural topic for an education and public outreach ( e / po ) activity . to this end, we have created a website called `` black holes : gravity s relentless pull , '' which serves as the e / po component of several hubble observing projects .the new website is part of hubblesite , the internet home of all hubble news .the url is .many sites on the internet already explain black holes in one way or other .most are encyclopedic , offering detailed text and graphics .by contrast , our site emphasizes user participation , and it is rich in animations and audio features .this approach was facilitated by the availability of powerful software for authoring multimedia content .the result is a website where scientific knowledge , learning theory , advanced technology , and pure fun all converge .the opening animation at our website introduces the basic concept of a black hole .it shows how one could turn the earth into a black hole , if only one could shrink it to the size of a marble . by connecting enigma and commonplace ,this scenario lends black holes a seeming familiarity .the core of the website consists of three sequential , interactive modules , of which `` finding the invisible '' is the first .it shows the night sky with a viewfinder that can be dragged around to discover images of about a dozen objects , including some that should be familiar to the user ( the sun , moon , and saturn ) and others that only an astronomer might recognize ( betelgeuse , crab nebula , cygnus x-1 , andromeda galaxy , and quasar 3c273 ) . the user can select the wavelength range of the viewfinder : visible light , radio waves , or x - rays .the goal is to teach which types of objects contain black holes and which do not .interested users can move to pages that explain the various objects , the telescopes used to observe them , and the features in the images at different wavelengths that indicate the presence of a black hole . in this waythe user not only learns about black holes , but also about their relation to other objects in the universe and the methods that astronomers use to study them . when the user has found one or more black holes , he or she can choose to go to the second module , `` the voyage , '' which offers a multimedia trip in an animated starship to a nearby black hole , either a stellar - mass black hole ( cygnus x-1 ) or a supermassive black hole ( in the center of the andromeda galaxy ) .the viewer traverses the solar system as well as our own milky way galaxy .various intriguing objects are encountered along the way , including many of the objects previously encountered in the first module .a goal is to connect the two - dimensional , projected view of the night sky to the actual three - dimensional structure of the universe . in the process, the user learns about the distance scale and the layout of the local universe .after the spaceship has arrived at the black hole , the user can proceed to the third module , `` get up close . 
''orbiting around a black hole , the sky outside the spaceship window looks strangely distorted , because of the strong gravitational lensing of distant starlight .the module discusses this and many other fascinating phenomena to be encountered near black holes .five interactive experiments are offered to explore specific issues . ``create a black hole '' allows the user to study the evolution of stars of different masses by trying to create a black hole , rather than a white dwarf or neutron star .`` orbit around a black hole '' plots relativistically correct orbits around a black hole , showing how it is possible to orbit a black hole without being sucked in .`` weigh a black hole '' shows how to use observations of a black hole in a binary system to calculate its mass .this illustrates learning without seeing .`` drop a clock into a black hole '' explains time dilation and redshift .mysteriously , an outside observer never sees a falling clock disappear beyond the event horizon , which highlights the principle of relativity .the last experiment , `` fall into a black hole '' allows the viewer to take the final plunge and witness how one s body gets stretched by tidal forces . with this , the user s journey is complete , from the backyard view of the night sky to a personal encounter with a singularity .our motivations for adopting a strongly interactive website were grounded in specific learning theories , `` constructivism '' in particular .constructivism holds that learning is not merely an addition of items into a mental data bank , but rather requires a transformation of concepts in which the learner plays an active role making sense out of a range of phenomena .research has shown that people use various learning styles to perceive and process information .for these reasons , we structured the scientific content in multiple ways at our website , to engage a range of learners in a variety of meaningful ways .novices and children often prefer interactive learning experiences , which can motivate them to stay engaged until they achieve some payoff .the interactive modules and experiments of the black hole website are tailored for this audience .they offer goal - based scenarios , to motivate learning and a sense of accomplishment when completed .by contrast , experts and adults often prefer formal organization and ready access to information that they already know about and just want to locate . 
to reach this audience, the site also contains an encyclopedia of faqs frequently asked questions about the physics and astronomy of black holes , as well as a detailed glossary .we are currently working on a modified version of the web site that can be used as a kiosk exhibit in museums , science centers and planetaria .this will further broaden the audience of the project to include those people who do not have access to the internet , or who typically do not use it to broaden their knowledge .the entire site is based on the most recent scientific knowledge .recent results and ongoing projects are highlighted where possible .we will continue to update the site as new discoveries emerge .astronomical images from state - of - the - art telescopes are used in abundance , with a particular focus on hubble .extensive links are provided to other websites on the internet , which allows users to explore specific subjects in more detail .even with its sharp focus on black holes , our site strives to teach much more .we illustrate how humans can understand the universe by detailed observations of the night sky .we teach basic concepts , like light and gravity .we show how different perspectives can enrich our understanding of nature .we illustrate the many wonders of our universe , and demonstrate its scale by alerting the user to what is really near and what is really far away . and we highlight the many things about black holes and our universe that we still do not understand .in the broadest sense , our goal has been to show that even the most mysterious of things can be understood with the combined application of human thinking and powerful technology .we hope to convey the importance of scientific thought and to instill an appreciation for learning and an interest in science , especially in the younger generation of users .we are grateful to the many people who assisted with the creation of the website , or who provided graphics , animations , or software .a full list of credits is available at + http://hubblesite.org / discoveries / black_holes / credits.html .we would also like to thank bob brown for editorial assistance with this article .
we have created a website , called `` black holes : gravity s relentless pull '' , which explains the physics and astronomy of black holes for a general audience . the site emphasizes user participation and is rich in animations and astronomical imagery . it won the top prize of the 2005 pirelli internetional awards competition for the best communication of science and technology using the internet . this article provides a brief overview of the site . the site starts with an opening animation that introduces the basic concept of a black hole . the user is then invited to embark on a journey from a backyard view of the night sky to a personal encounter with a singularity . this journey proceeds through three modules , which allow the user to : find black holes in the night sky ; travel to a black hole in an animated starship ; and explore a black hole from up close . there are also five `` experiments '' that allow the user to : create a black hole ; orbit around a black hole ; weigh a black hole ; drop a clock into a black hole ; or fall into a black hole . the modules and experiments offer goal - based scenarios tailored for novices and children . the site also contains an encyclopedia of frequently asked questions and a detailed glossary that are targeted more at experts and adults . the overall result is a website where scientific knowledge , learning theory , and fun converge . despite its focus on black holes , the site also teaches many other concepts of physics , astronomy and scientific thought . the site aims to instill an appreciation for learning and an interest in science , especially in the younger users . it can be used as an aid in teaching introductory astronomy at the undergraduate level .
in this work , we present two distributional operations which identify relationships between seemingly different classes of random variables which are representable as linear functionals of a dirichlet process , otherwise known as _ dirichlet means_. specifically , the first operation consists of multiplication of a dirichlet mean by an independent beta random variable and the second operation involves an exponential change of measure to the density of a related infinitely divisible random variable having a generalized gamma convolution distribution ( ggc ) .this latter operation is often referred to in the statistical literature as _ exponential tilting _ or in mathematical finance as an _ esscher transform_. we believe our results add a significant component to the foundational work of cifarelli and regazzini . in particular ,our results allow one to use the often considerable analytic work on obtaining results for one dirichlet mean to obtain results for an entire family of otherwise seemingly unrelated mean functionals .it also allows one to obtain explicit densities for the related class of infinitely divisible random variables which are generalized gamma convolutions and an explicit description of the finite - dimensional distribution of their associated lvy processes ( see bertoin for the formalities of general lvy processes ) .the importance of this latter statement is that lvy processes now commonly appear in a variety of applications in probability and statistics , but there are relatively few cases where the relevant densities have been described explicitly .a detailed summary and outline of our results may be found in section [ sec : outline ] .some background information on , and notation for , dirichlet processes and dirichlet means , their connection with ggc random variables , recent references and some motivation for our work are given in the next section .let be a non - negative random variable with cumulative distribution function .note , furthermore , that for a measurable set we use the notation to mean the probability that is in one may define a dirichlet process random probability measure ( see freedman and ferguson ) , say on with total mass parameter and prior parameter via its finite - dimensional distribution as follows : for any disjoint partition on , say , the distribution of the random vector is a -variate dirichlet distribution with parameters hence , for each , has a beta distribution with parameters equivalently , setting for where are independent random variables with gamma distributions and has a gamma distribution .this means that one can define the dirichlet process via the normalization of an independent increment gamma process on , say as where , whose almost surely finite total random mass is a very important aspect of this construction is the fact that is independent of and hence of any functional of this is a natural generalization of lukacs characterization of beta and gamma random variables , which is fundamental to what is now referred to as the _ beta gamma algebra _ ( for more on this , see chaumont and yor ( , section 4.2 ) ; see also emery and yor for some interesting relationships between gamma processes , dirichlet processes and brownian bridges ) .hereafter , for a random probability measure on we write to indicate that is a dirichlet process with parameters these simple representations and other nice features of the dirichlet process have , since the important work of ferguson , contributed greatly to the relevance and practical utility 
of the field of bayesian non- and semi - parametric statistics .naturally , owing to the ubiquity of the gamma and beta random variables , the dirichlet process also arises in other areas .one of the more interesting and , we believe , quite important topics related to the dirichlet process is the study of the laws of random variables called _ dirichlet mean functionals _ , or simply dirichlet means , which we denote as as initiated in the works of cifarelli and regazzini . in , the authors obtained an important identity for the cauchy stieltjes transform of order this identity is often referred to as the _ markov krein identity _ , as can be seen in , for example , diaconis and kemperman , kerov and vershik , yor and tsilevich , where these authors highlight its importance to , for instance , the study of the markov moment problem , continued fraction theory and exponential representation of analytic functions .this identity is later called the _ cifarelli regazzini identity _ in .cifarelli and regazzini , owing to their primary interest , used this identity to then obtain explicit density and cdf formulae for the density formulae may be seen as abel - type transforms and hence do not always have simple forms , although we stress that they are still useful for some analytic calculations .the general exception is the case , which has a nice form .some examples of works that have proceeded along these lines are cifarelli and melilli , regazzini , guglielmi and di nunno , regazzini , lijoi and prnster , hjort and ongaro , lijoi and regazzini , and epifani , guglielmi and melilli .moreover , the recent works of bertoin _ et al . _ and james , lijoi and prnster ( see also , which is a preliminary version of this work ) show that the study of mean functionals is relevant to the analysis of phenomena related to bessel and brownian processes .in fact , the work of james , lijoi and prnster identifies many new explicit examples of dirichlet means which have interesting interpretations .related to these last points , lijoi and regazzini have highlighted a close connection to the theory of generalized gamma convolutions ( see ) . specifically , it is known that a rich subclass of random variables having generalized gamma convolutions ( ggc ) distributions may be represented as we call these random variables ggc in addition , we see from ( [ gammarep ] ) that is a random variable derived from a weighted gamma process and , hence , the calculus discussed in lo and lo and weng applies .in general , ggc random variables are an important class of infinitely divisible random variables whose properties have been extensively studied by and others .we note further that although we have written a ggc random variable as , this representation is not unique and , in fact , it is quite rare to see represented in this way .we will show that one can , in fact , exploit this non - uniqueness to obtain explicit densities for , even when it is not so easy to do so for while the representation is not unique , it helps one to understand the relationship between the laplace transform of and the cauchy stieltjes transform of order of which , indeed , characterize respectively the laws of and specifically , using the independence property of and leads to , for =\mathbb{e}\bigl[{\bigl(1+\lambda m_{\theta}(f_{x})\bigr)}^{-\theta}\bigr]=\mathrm{e}^{-\theta\psi_{f_{x}}(\lambda ) } , \label{cs}\ ] ] where \label{levy}\ ] ] is the _lvy exponent _ of we note that and exist if and only if for ( see , e.g. 
, and ) .the expressions in ( [ cs ] ) equate with the aforementioned identity obtained by cifarelli and regazzini . despite these interesting results ,there is very little work on the relationship between different mean functionals .suppose , for instance , that for each fixed value of denotes a dirichlet mean and denotes a collection of dirichlet mean random variables indexed by a family of distributions one may then ask the following question : for what choices of and are these mean functionals related , and in what sense ?in particular , one may wish to know how their densities are related .the rationale here is that if such a relationship is established , then the effort that one puts forth to obtain results such as the explicit density of can be applied to an entire family of dirichlet means furthermore , since dirichlet means are associated with ggc random variables , this would establish relationships between a ggc random variable and a family of ggc random variables .simple examples are , of course , the choices and which , due to the linearity properties of mean functionals , result easily in the identities in law naturally , we are going to discuss more complex relationships , but with the same goal .that is , we will identify non - trivial relationships so that the often considerable efforts that one makes in the study of one mean functional can then be used to obtain more easily results for other mean functionals , their corresponding ggc random variables and lvy processes . in this paper, we will describe two such operations which we elaborate on in the next subsection .section [ sec : prelim ] reviews some of the existing formulae for the densities and cdfs of dirichlet means . in section[ sec : beta ] , we will describe the operation of multiplying a mean functional by an independent beta random variable with parameters say , , where we call this operation _beta scaling_. theorem [ thm1beta ] shows that the resulting random variable is again a mean functional , but now of order .in addition , the ggc random variable is equivalently a ggc random variable of order now , keeping in mind that tractable densities of mean functionals of order are the easiest to obtain , theorem [ thm1beta ] shows that by setting , the densities of the uncountable collection of random variables are all mean functionals of order theorem [ thm2beta ] then shows that efforts used to calculate the explicit density of any one of these random variables , via the formulae of , lead to the explicit calculation of the densities of all of them .additionally , theorem [ thm2beta ] shows that the corresponding ggc random variables may all be expressed as ggc random variables of order representable in distribution as .a key point here is that theorem [ thm2beta ] gives a tractable density for without requiring knowledge of the density of which is usually expressed in a complicated manner .these results will also yield some non - obvious integral identities .furthermore , noting that a ggc random variable , is infinitely divisible , we associate it with an independent increment process known as a_ subordinator _ ( a non - decreasing non - negative lvy process ) , where , for each fixed =\mathbb{e}[\mathrm{e}^{-\lambda t_{\theta t}}]=\mathrm{e}^{-t\theta\psi_{f_{x}}(\lambda)}.\ ] ] that is , marginally , and in addition , for is independent of we say that the process is a _ggc subordinator_. 
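As a concrete illustration of the objects just introduced, the following minimal Python sketch draws Dirichlet means using a truncated stick-breaking construction (a standard equivalent of the normalized-gamma construction described above, not spelled out in the text), checks the transform identity in equation (cs) for a uniform base distribution, where the Lévy exponent has the closed form ∫_0^1 log(1+λx) dx = [(1+λ)log(1+λ) − λ]/λ, and shows how increments of the associated GGC subordinator can be drawn from the gamma-times-mean representation. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_mean_samples(theta, sample_base, n_samples=20000, n_atoms=400):
    """Monte Carlo draws of the Dirichlet mean M_theta(F_X) = int x D(dx), where
    D is a Dirichlet process with total mass theta and base distribution F_X,
    via a truncated Sethuraman stick-breaking construction (an equivalent,
    standard alternative to the normalized-gamma construction in the text)."""
    V = rng.beta(1.0, theta, size=(n_samples, n_atoms))
    log_remain = np.cumsum(np.log1p(-V), axis=1)
    weights = V * np.exp(np.hstack([np.zeros((n_samples, 1)), log_remain[:, :-1]]))
    atoms = sample_base((n_samples, n_atoms))                 # i.i.d. draws from F_X
    return np.sum(weights * atoms, axis=1)

theta = 2.0
uniform_base = lambda size: rng.uniform(0.0, 1.0, size)      # F_X = uniform[0,1]
M = dirichlet_mean_samples(theta, uniform_base)

# check of the transform identity (cs): E[(1 + lam*M)^(-theta)] = exp(-theta*psi(lam)),
# with Levy exponent psi(lam) = E[log(1 + lam*X)], computable here in closed form
for lam in [0.5, 1.0, 3.0]:
    lhs = np.mean((1.0 + lam * M) ** (-theta))
    psi = ((1.0 + lam) * np.log1p(lam) - lam) / lam           # int_0^1 log(1+lam*x) dx
    print(f"lambda={lam}: Monte Carlo {lhs:.4f}  vs  exp(-theta*psi) = {np.exp(-theta * psi):.4f}")

# increments of the associated GGC subordinator over a time step dt can be drawn
# from the representation gamma(theta*dt) * M_{theta*dt}(F_X), the factors independent
dt = 0.5
increment = rng.gamma(theta * dt, size=20000) * dirichlet_mean_samples(theta * dt, uniform_base)
```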
proposition [ prop1beta ] shows how theorems [ thm1beta ] and [ thm2beta ] can be used to address the usually difficult problem of explicitly describing the densities of the finite - dimensional distribution of a subordinator ( see ) .this has implications in , for instance , the explicit description of densities of bayesian nonparametric prior and posterior models , but is clearly of wider interest in terms of the distribution theory of infinitely divisible random variables and associated processes . in section [ sec :gamma ] , we describe how the operation of exponentially tilting the density ofa ggc random variable leads to a relationship between the densities of the mean functional and yet another family of mean functionals .this is summarized in theorem [ thm1gamma ] .section [ sec : tiltbeta ] then discusses a combination of the two operations .proposition [ prop1gamma ] describes the density of beta - scaled and tilted mean functionals of order 1 .using this , proposition [ prop2gamma ] describes a method to calculate a key quantity in the explicit description of the densities and cdfs of mean functionals . in section [ sec : example ] , we show how the results in sections [ sec : beta ] and [ sec : gamma ] are used to derive the finite - dimensional distribution and related quantities for classes of subordinators suggested by the recent work of james , lijoi and prnster and bertoin _ et al ._ .suppose that is a positive random variable with distribution and define the function .\ ] ] furthermore , define where , using a lebesgue stieltjes integral , cifarelli and regazzini ( see also ) apply an inversion formula to obtain the distributional formula for as follows . for all , the cdf can be expressed as provided that possesses no jumps of size greater than or equal to one .if we let denote the density of then it takes its simplest form for , which is density formulae for are described as an expression for the density , which holds for all , was recently obtained by james , lijoi and prnster as follows : where for additional formulae , see . throughout , for random variables when we write the product , we will assume , unless otherwise mentioned , that and are independent .this convention will also apply to the multiplication of the special random variables that are expressed as mean functionals .that is , the product is understood to be a product of independent dirichlet means . throughout , we will be using the fact that if is a gamma random variable , then the independent random variables satisfying imply that this is true because gamma random variables are simplifiable . forthe precise meaning of this term and associated conditions , see chaumont and yor , sections 1.12 and 1.13 .this fact also applies to the case where is a positive stable random variable .[ sec : beta]in this section , we investigate the simple operation of multiplying a dirichlet mean functional by certain beta random variables .note , first , that if denotes an arbitrary positive random variable with density then , by elementary arguments , it follows that the random variable where is beta independent of has density expressible as however , it is only in special cases that the density can be expressed in even simpler terms .that is to say , it is not obvious how to carry out the integration . 
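To make the "elementary argument" at the end of the preceding paragraph concrete, the sketch below evaluates the scale-mixture density of a beta-scaled positive random variable by direct numerical integration and compares it with simulation; the choice T ~ Gamma(2) with a Beta(1,2) multiplier is purely illustrative and is not one of the special cases treated in the paper.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(3)

def scaled_density(pdf_T, a, b):
    """Density of Y = B * T with B ~ Beta(a, b) independent of a positive T:
    f_Y(y) = int_0^1 f_B(u) * f_T(y/u) / u du  (the elementary mixture formula)."""
    f_B = stats.beta(a, b).pdf
    return lambda y: quad(lambda u: f_B(u) * pdf_T(y / u) / u, 0.0, 1.0)[0]

a, b = 1.0, 2.0
f_Y = scaled_density(stats.gamma(2.0).pdf, a, b)
samples = rng.beta(a, b, 100000) * rng.gamma(2.0, size=100000)

y, width = 0.8, 0.05
mc = np.mean(np.abs(samples - y) < width) / (2 * width)       # crude density estimate
print(f"numerical integral at y={y}: {f_Y(y):.4f}   Monte Carlo estimate: {mc:.4f}")
```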
in the next results ,we show how remarkable simplifications can be achieved when in particular , for the range and when is a symmetric beta random variable .first , we will need to introduce some additional notation .let denote a bernoulli random variable with success probability then , if is a random variable with distribution , independent of it follows that the random variable has distribution and cdf hence , there exists the mean functional where denotes a dirichlet process with parameters in addition , we have , for =\sigma\phi_{f_{x}}(x)+(1-\sigma)\log(x ) \label{pid}.\ ] ] when and hence let denote a set such that =\sigma. \mbox{if } x\in\mathcal{c}(f_{x}), \mbox{if } x\notin\mathcal{c}(f_{x}). ] the next result yields another surprising property of these random variables .[ thm2beta]consider the setting in theorem [ thm1beta ] .then , when , it follows that for each fixed the random variable has density specified by ( [ sinp ] ) .since ggc this implies that the random variable has density since the density is of the form ( [ m1 ] ) for each fixed . ] furthermore , let denote an arbitrary disjoint partition of the interval . ] it follows that their sizes satisfy and since is a subordinator , the independence of the is a consequence of its independent increment property .in fact , these are essentially equivalent statements .hence , we can isolate each it follows that for each , the laplace transform is given by =\mathrm{e}^{-\theta which shows that each is ggc for hence , the result follows from theorem [ thm2beta ] .in this section , we describe how the operation of _ exponential tilting _ of the density of a ggc random variable leads to a non - trivial relationship between a mean functional determined by and and an entire family of mean functionals indexed by an arbitrary constant additionally , this will identify a non - obvious relationship between two classes of mean functionals .exponential tilting is merely a convenient phrase for the operation of applying an exponential change of measure to a density or more general measure . in mathematical finance and other applications ,it is known as an _ esscher transform _ and is a key tool for option pricing .we mention that there is much known about exponential tilting of infinitely divisible random variables and , in fact , bondesson , example 3.2.5 , explicitly discusses the case of ggc random variables , albeit not in the way we shall describe it . in addition , examining the gamma representation in ( [ gammarep ] ) , one can see a relationship to lo and weng , proposition 3.1 ( see also kchler and sorensen and james , proposition 2.1 ) , for results on exponential tilting of lvy processes ) . 
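Before turning to those results, here is a small self-contained illustration of the exponential change of measure (Esscher transform) just described, applied to a gamma density, for which the tilted law is again gamma with a rescaled rate; this textbook example is included only to make the operation concrete and is not a claim taken from the paper.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def esscher_tilt(pdf, c):
    """Return the density x -> exp(-c*x) * pdf(x) / E[exp(-c*T)], i.e. the
    exponential tilting (Esscher transform) of a nonnegative density."""
    norm, _ = quad(lambda x: np.exp(-c * x) * pdf(x), 0.0, np.inf)
    return lambda x: np.exp(-c * x) * pdf(x) / norm

a, c = 3.0, 2.0
tilted = esscher_tilt(stats.gamma(a).pdf, c)

# for a gamma(a, rate=1) density the tilt is again gamma, now with rate 1 + c
xs = np.linspace(0.1, 3.0, 5)
print(np.allclose(tilted(xs), stats.gamma(a, scale=1.0 / (1.0 + c)).pdf(xs), rtol=1e-6))
```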
however , our focus here is on the properties of related mean functionals , which leads to genuinely new insights .before we elaborate on this , we describe generically what we mean by exponential tilting .suppose that denotes an arbitrary positive random variable with density , say , it follows that for each positive the random variable is well defined and has density exponential tilting refers to the exponential change of measure resulting in a random variable , say defined by the density }.\ ] ] thus , from the random variable , one gets a family of random variables obviously , the density for each does not differ much .however , something interesting happens when is a scale mixture of a gamma random variables , that is , for some random positive random variable independent of in that case , one can show , see , that , where is sufficiently distinct for each value of we demonstrate this for the case where first , note that , obviously , for each which , in itself , is not a very interesting transformation .now , setting with density denoted , the corresponding random variable resulting from exponential tilting has density and laplace transform }{\mathbb{e}[\mathrm{e}^{-cg_{\theta}m_{\theta}(f_{x})}]}=\mathrm{e}^{-\theta[\psi_{f_{x}}(c(1+\lambda))-\psi_{f_{x}}(c)]}. \label{tilt2}\ ] ] now , for each define the random variable that is , the cdf of the random variable can be expressed as in the next theorem , we will show that relates to the family of mean functionals by the tilting operation described above .moreover , we will describe the relationship between their densities .[ thm1gamma ] suppose that has distribution and for each is a random variable with distribution for each , let denote a ggc random variable having density let denote a random variable with density and laplace transform described by ( [ tilt1 ] ) and ( [ tilt2 ] ) , respectively . is then a ggc random variable and hence representable as furthermore , the following relationships exist between the densities of the mean functionals and : a. supposing that the density of , say , is known , then the density of is expressible as for b. conversely , if the density of , is known , then the density of is given by we proceed by first examining the lvy exponent ] then , for the result can be deduced by using proposition [ prop1gamma ] in the case first , note that now , equating the form of the density of given by ( [ m1 ] ) with the expression given in proposition [ prop1gamma ] , it follows that which yields the result .we point out that if represents a gamma random variable for , independent of then it is not necessarily true that is a ggc random variable . for this to be true, would need to be equivalent in distribution to some in that case , our results above would be applied for a ggc model .in this section , we will demonstrate how our results in sections [ sec : beta ] and [ sec : gamma ] can be applied to extend results for two random processes recently studied in the literature .the first involves a class of ggc subordinators that can be derived from a random mean of a two - parameter poisson dirichlet process with a uniform base measure , which was studied as a special case in james , lijoi and prnster ; see pitman and yor for more details of the two parameter poisson dirichlet distribution .the second involves a class of processes recently studied in bertoin _et al . 
_ ; see also maejima for some discussion of this process .a key component will be the ability to obtain an explicit expression for the respective in the first example , we do not have much explicit information on the relevant density , however , we can rely on a general theorem of james , lijoi and prnster to obtain in the second case of the models discussed in bertoin _ et al . _ , this theorem apparently does not apply .however , we will be able to use an explicit form of the density , obtained for a particular value of by bertoin _et al . _ , to obtain as we shall show , both of these processes are connected to a random variable , whose properties we now describe . for denote a positive -stable random variable specified by its laplace transform =\mathrm{e}^{-\lambda^{\alpha}}.\ ] ] in addition , define where is independent of and has the same distribution .the density of this random variable was obtained by lamperti ( see also chaumont and yor , exercise 4.2.1 ) and has the remarkably simple form furthermore ( see fujita and yor and ( james , proposition 2.1 ) , it follows that the cdf of satisfies , for }^{1/2 } } \label{sinid}\ ] ] and \bigr)&=&\frac{\sin(2\curpi\alpha)+2z\sin ( \curpi \alpha)}{1 + 2z\cos(\curpi\alpha)+z^{2 } } \nonumber \\[-8pt ] \\[-8pt ] \nonumber & = & \frac{2\sin(\curpi \alpha)[\cos(\curpi\alpha)+z]}{1 + 2z\cos(\curpi\alpha)+z^{2 } } .\label{idd}\end{aligned}\ ] ] when \bigr)=\frac{z}{z^{2}+1}.\ ] ] for and we define the special case of a two - parameter poisson dirichlet random probability measures as where are i.i.d .uniform[0,1 ] random variables and the are a sequence of independent random variables , independent of .so , in particular , these random variables satisfy =f_{u}(\cdot) ] random variable .in addition , is a dirichlet process .then , consider the random means given as the represent a special case of random variables representable as mean functionals of the class of two - parameter poisson dirichlet random probability measures that is to say , random variables , where is a general distribution .an extensive study of this larger class was conducted by james , lijoi and prnster . in regards to , they show that has an explicit density }}.\ ] ] furthermore , from james , lijoi and prnster , theorem 2.1 , for this implies that are ggc now , from vershik , yor and tslevich ( see also james , lijoi and prnster , equation ( 16 ) ) , it follows that &=&{\biggl(\frac{\lambda ( \alpha+1)}{{(\lambda+1)}^{\alpha+1}-1}\biggr)}^{{\theta}/{\alpha } } \\&=&\exp\bigl(-\theta\mathbb{e}[\log(1+\lambda\mathbb{u}_{\alpha,0})]\bigr),\end{aligned}\ ] ] where this expression follows from the generalized stieltjes transform of order of a uniform[0,1 ] random variable , =\int_{0}^{1}{(1+\lambda x)}^{\alpha}\,\mathrm{d}x=\frac{{(\lambda+1)}^{\alpha+1}-1}{\lambda ( \alpha+1)}.\ ] ] a description of the densities of for is available from the results of . however , with the exceptions of and their densities are generally expressed in terms of integrals with respect to functions that possibly take on negative values . here, by focusing instead on random variables for we can utilize the results in james , lijoi and prnster to obtain explicit expressions for their densities and the corresponding ggc random variables . 
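Before specializing to those two processes, note that the displayed Lamperti density is garbled in this copy. The sketch below restates it in its usual closed form, f(x) = (sin πα / π) x^{α−1} / (x^{2α} + 2 x^{α} cos πα + 1), treats that expression as an assumption, and checks it by simulating the ratio of two independent positive α-stable variables via Kanter's representation.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)
alpha = 0.6

def positive_stable(alpha, size):
    """Kanter's representation of a positive alpha-stable variable (0 < alpha < 1)
    with Laplace transform exp(-lambda^alpha)."""
    U = rng.uniform(0.0, np.pi, size)
    E = rng.exponential(1.0, size)
    num = np.sin(alpha * U) ** alpha * np.sin((1.0 - alpha) * U) ** (1.0 - alpha)
    return (num / np.sin(U)) ** (1.0 / alpha) * E ** (-(1.0 - alpha) / alpha)

def lamperti_pdf(x):
    """Assumed closed form of the density of the ratio of two independent positive
    alpha-stable variables (Lamperti's formula); restated here because the displayed
    expression is garbled in this copy."""
    return (np.sin(np.pi * alpha) / np.pi) * x ** (alpha - 1.0) / (
        x ** (2.0 * alpha) + 2.0 * np.cos(np.pi * alpha) * x ** alpha + 1.0)

ratio = positive_stable(alpha, 200000) / positive_stable(alpha, 200000)

for z in [0.5, 1.0, 2.0]:
    p_mc = np.mean(ratio <= z)
    p_pdf, _ = quad(lamperti_pdf, 0.0, z)
    print(f"P(ratio <= {z}):  Monte Carlo {p_mc:.4f}   assumed density {p_pdf:.4f}")
```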
in particular , we will obtain explicit descriptions for the finite - dimensional distribution of a ggc , say , subordinator , where and hence =\frac{\lambda ( \alpha+1)}{{(\lambda+1)}^{\alpha+1}-1}.\ ] ] although not immediately obvious, one can show that from this , due to the tilting relationship discussed in section 3 , we see that we can also obtain results for the ggc subordinator , say to the best of our knowledge , this process and its mean functionals have not been studied .now , from james , lijoi and prnster , theorem 5.2(iii ) , it follows that }^{{1}/{(2\alpha ) } } } .\label{jlp}\ ] ] this , combined with our results , leads to an explicit description of the finite - dimensional distribution of the relevant subordinators . consider the ggc subordinator and theggc subordinator denote an arbitrary disjoint partition of the interval ] hence , in this case , an application of proposition 3.2 shows that we note that , otherwise , it is not easy to calculate , in this case , by direct arguments .our final example shows how one can apply the results in sections 2 and 3 to obtain new results for subordinators recently studied by bertoin __ .in particular , they investigate properties of the random variables corresponding to the lengths of excursions of bessel processes straddling an independent exponential time , which can be expressed as where , for any , for a bessel process starting from with dimension , with or , equivalently , . additionally , an exponentially distributed random variable with mean see also fujita and yor for closely related work . in order to avoid confusion, we will now denote relevant random variables appearing originally as and in bertoin _ et al . _ as and , respectively . from bertoin _ et al . _ , let denote a subordinator such that &=&{\bigl((\lambda+1)^{\alpha}-\lambda^{\alpha } \bigr)}^{t}\\ & = & \exp\bigl(-t(1-\alpha)\mathbb{e}[\log(1+\lambda/\mathbb{g}_{\alpha})]\bigr ) , \end{aligned}\ ] ] where , from bertoin _ et al . _ , theorems 1.1 and 1.3 , denotes a random variable such that and has density on given by hence , it follows that the random variable takes its values on with probability one and has cdf satisfying as noted by bertoin _ , is a ggc subordinator , where the ggc random variable satisfies where denotes a uniform { \mbox{if } } x > 1, { \mbox{if } } 0<x\leq1. { \mbox{if } } x\leq1, { \mbox{if } } x > 1. ] with lengths and for consider the ggc subordinator and , for each fixed the ggc subordinator .the following results then hold : a. the finite - dimensional distribution of is such that each is independent and has distribution where . furthermore , for any fixed , the density of is given by b. for the ggc process , , it follows that each where for each has density }^{{\sigma\alpha}/{(1-\alpha ) } } \mathcal{d}_{\alpha}({y}/{(c(1-y))})}{\curpi { [ ( c+1)^{\alpha}-c^{\alpha}]}^{\sigma}y^{{\sigma\alpha}/{(1-\alpha)}+1 } } \qquad{\mbox{for } } 0<y<1.\ ] ] from theorem 2.2 , we have that the general form of the density of is given by the proof is completed by applying proposition 4.3 and ( [ sinap ] ) and ( [ cdfid ] ) . the process is well defined for and and presents itself as an interesting class worthy of further investigation . 
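As a consistency check on the subordinator with Laplace transform ((1+λ)^α − λ^α)^t introduced above, the case α = 1/2, t = 1 admits the elementary density x^{−3/2}(1 − e^{−x})/(2√π). This closed form is stated here as an assumption (it follows from a standard computation with the fractional integral ∫ x^{−3/2}(1 − e^{−sx}) dx = 2√(πs) and is not quoted from the paper) and is verified numerically below.

```python
import numpy as np
from scipy.integrate import quad

def density_half(x):
    """Assumed density of the alpha = 1/2, t = 1 member of the family with
    Laplace transform ((1 + lam)^alpha - lam^alpha)^t."""
    return x ** (-1.5) * (1.0 - np.exp(-x)) / (2.0 * np.sqrt(np.pi))

for lam in [0.5, 1.0, 2.0]:
    f = lambda x: np.exp(-lam * x) * density_half(x)
    lt = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]      # split at the singular endpoint
    target = np.sqrt(1.0 + lam) - np.sqrt(lam)
    print(f"lambda={lam}: numerical Laplace transform {lt:.5f}   (1+lam)^0.5 - lam^0.5 = {target:.5f}")
```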
letting ,it is evident that converges to as shown by bertoin _ , section 3.6.3 , for , has a similar interpretation as but where the bessel process is now replaced by a diffusion process whose inverse local time at is distributed as a gamma subordinator furthermore , albeit not explicitly addressed in bertoin_ et al . _ , the random variable has a similar interpretation where is now replaced by a process whose inverse local time is distributed as a generalized gamma subordinator , that is , a subordinator whose lvy density is specified by for this interpretation may be deduced from donati - martin and yor ( , see page 880 ( 1.c ) ) , where equates with a downwards bessel process with drift bertoin _ et al . _ also show that a ggc random variable satisfies hence , the laplace transform of the ggc subordinator , say is given by \biggr)}^{t } .\ ] ] additionally , using the fact that leads to which leads to a description of a ggc subordinator .the above points may also be found in the survey paper of james , roynette and yor .consider the ggc subordinator and the ggc subordinator the following results then hold : a. the finite - dimensional distribution of is such that each is independent and is equivalent in distribution to furthermore , for any fixed , the density of is given by , for b. similarly , each and , for each fixed has density apply theorem 2.2 and theorem 3.1 , where , from ( [ id3 ] ) , note that as hence , they have the same limiting behavior , described in section 4.2 , as the random variables in section 4.1 .this research was supported in part by grants hia05/06.bm03 , rgc - hkust 6159/02p , dag04/05.bm56 and rgc - hkust 600907 of the hksar .( 1992 ) . _generalized gamma convolutions and related classes of distributions and densities_. _ lecture notes in statistics _ * 76*. new york : springer .( 2003 ) . _exercises in probability . a guided tour from measure theory to random processes , via conditioning ._ _ cambridge series in statistical and probabilistic mathematics _ * 13*. cambridge univ . press .( 1979 ) . considerazioni generali sullimpostazione bayesiana di problemi non parametrici .le medie associative nel contesto del processo aleatorio di dirichlet i , ii ._ riv . mat .social _ * 2 * 3952 . ( 2006 ) .gamma tilting calculus for ggc and dirichlet means via applications to linnik processes and occupation time laws for randomly skewed bessel processes and bridges .available at http://arxiv.org/abs/math.pr/0610218 .( 2001 ) . on the markov krein identity and quasi - invariance of the gamma process .. nauchn . sem ..- peterburg .otdel . mat .inst . steklov . _( pomi ) * 283 * 2136 .[ in russian .english translation in _ j. math .* 121 * ( 2004 ) 23032310 ] .
An interesting line of research is the investigation of the laws of random variables known as Dirichlet means. However, there is little information on the interrelationships between different Dirichlet means. Here we introduce two distributional operations: one consists of multiplying a mean functional by an independent beta random variable, the other is an operation involving an exponential change of measure. These operations identify relationships between different means and their densities. This allows one to use the often considerable analytic work required to obtain results for one Dirichlet mean to deduce results for an entire family of otherwise seemingly unrelated Dirichlet means. Additionally, it allows one to obtain explicit densities for the related class of random variables that have generalized gamma convolution distributions, and the finite-dimensional distributions of their associated Lévy processes. The importance of this latter point is that Lévy processes now appear in a wide variety of applications in probability and statistics, but there are relatively few cases where the relevant densities have been described explicitly. We demonstrate how the technique can be used to obtain the finite-dimensional distributions of several interesting subordinators that have recently appeared in the literature.
the improved infrastructure and an increase in the adoption of cyber - technology have led to increased connection and ease of interaction for users across the globe . however , at the same time , these developments have increased users exposure to risk .the importance of investing in security measures in this developing landscape is two - fold : while such expenditure helps entities protect their assets against security threats , by association it also benefits other interacting users , as an investing entity is less likely to be infected and used as a source of future attacks . in other words , a users expenditure in security in an interconnected system provides _ positive externalities _ to other users .consequently , the provision of security is often studied as a problem of public good provision . in particular ,when users are rational , the strategic decision making process leading to security investment decisions is studied as an _ ( interdependent ) security game _ .it is well - known that in an unregulated environment , the provision of public goods is in general inefficient . to eliminate this inefficiency ,the literature has proposed regulating mechanisms for implementing the socially optimal levels of security in these games , see e.g. .specifically , examples of existing mechanisms in the literature include introducing subsidies and fines based on security investments , assessing rebates and penalties based on security outcomes , imposing a level of due care and establishing liability rules , etc .our focus in the current paper is on mechanisms that use monetary payments / rewards to incentivize improved security behavior . within this context, we will examine two incentive mechanisms , namely the _ and _ externality _ mechanisms , both of which induce socially optimal user behavior by levying a monetary tax on each user participating in the proposed mechanism . aside from inducing optimal behavior , incentive mechanisms are often designed so as to maintain a _ ( weakly ) balanced budget ( bb ) _ and ensure _ voluntary participation ( vp ) _ by all users . the budget balance requirement states that the designer of the mechanism prefers to redistribute users payments as rewards , and ideally to eitherretain a surplus as profit or at least to not sustain losses .otherwise , the designer would need to spend external resources to achieve social optimality .the voluntary participation constraint on the other hand ensures that all users voluntarily take part in the proposed mechanism and the induced game , and prefer its outcome to that attained if they unilaterally decide to opt out of the mechanism .a user s decision when contemplating participation in an incentive mechanism is dependent not only on the structure of the game induced by the mechanism , but also on the options available when staying out .the latter is what sets the study of incentive mechanisms for security games apart from other public good problems where similar pivotal and externality mechanisms have been applied , e.g. , . 
to elaborate on this underlying difference, we note that security is a _ non - excludable _ public good .that is , although the mechanism optimizes the investments in a way that participating users are exposed to lower risks , those who stay out of the mechanism can benefit from the externalities of such improved state of security as well .the availability of these spill - overs in turn limits users willingness to pay for the good or their interest in improving their actions .in contrast , with excludable public goods , e.g. transmission power allocated in a communication system , users willingness to participate is determined by the change in their utilities when contributing and receiving the good , as compared to receiving _ no allocation at all_. this means that the designer has the ability to collect more taxes and require a higher level of contribution when providing an excludable good . as a result , tax - based mechanisms , such as the externality mechanism( e.g. ) and the pivotal mechanism ( e.g. ) , can be designed so as to incentivize the socially optimal provision of an excludable good , guarantee voluntary participation , and maintain ( weak ) budget balance . however , in this paper we show that given the non - excludable nature of security , there is no reliable tax - based mechanism that can achieve social optimality , voluntary participation , and ( weak ) budget balance simultaneously in all instances of security games .we show this result through two sets of counter - examples : we first limit the network structure to a star topology , and then consider the commonly studied weakest link model for users risk functions .we then further elaborate on this particular nature of security games by examining the pivotal and externality mechanisms in the special case of a _ weighted total effort _ interdependence model .this interdependence model is of particular interest as it can capture varying degrees and possible asymmetries in the influence of users security decisions on one another .specifically , we evaluate the effects of : ( i ) increasing users self - dependence ( equivalently , decreasing their interdependence ) , ( ii ) having two diverse classes of self - dependent and reliant users , and ( iii ) presence of a single dominant user , on the performance of the pivotal and externality mechanisms .we show that when possible , the selection of equilibria that are less beneficial to the outliers helps the performance of both mechanisms , so that they can achieve optimality , budget balance , and voluntary participation simultaneously .in addition , we see that these incentive mechanisms become of interest when they can facilitate a tax - transfer scheme , such that users who are highly dependent on externalities pay to incentivize improved investments by others who are key to improving the state of security .the main findings of this work can therefore be summarized as follows .first , we show that there is no tax - based incentive mechanism that can simultaneously guarantee social optimality , voluntary participation , and weak budget balance in all instances of security games .this result is applicable to other problems concerning the provision of non - excludable public goods over social and economic networks as well ( see section [ sec : related ] ) .second , we provide further insight on this impossibility by evaluating two incentive mechanisms , namely the pivotal and externality mechanisms , in weighted total effort games . 
we identify some of the parameters affecting the performance of these mechanisms , and instances in which the implementation of each mechanism is of interest .the rest of this paper is organized as follows .we present the model for security games , as well as the pivotal and externality mechanisms , in section [ sec : model ] , followed by the general impossibility result in section [ sec : imp ] .section [ sec : sim ] illustrates this result by applying the pivotal and externality mechanisms to weighted total effort models .we summarize related work in section [ sec : related ] , and conclude in section [ sec : conclusion ] .consider a network of interdependent users . each user can choose to exert effort towards securing its system , consequently achieving the _ level of security _ or _ level of investment _ .let denote the _ state of security _ of the system , i.e. , the profile of security levels of all users .we let denote the _ investment cost function _ of user ; it determines the monetary expenditure required to implement a level of security .we assume this function is continuous , increasing , and convex .the assumption of convexity entails that security measures get increasingly costly as their effectiveness increases .the expected amount of assets user has subject to loss , given the state of security , is determined by the _ risk function _ , and is denoted by .we assume is continuous , non - increasing , and strictly convex , in all arguments . the non - increasing nature of this function in arguments , models the positive externality of users security decisions on one another .the convexity on the other hand implies that the effectiveness of security measures in preventing attacks ( or the marginal utility ) is overall decreasing , as none of the available security measure can guarantee the prevention of all possible attacks. a user s _ ( security ) cost function _ at a state of security is therefore given by : we refer to the one - stage , full information game among the utility maximizing users with utility functions as the security game .the level of investments in the nash equilibrium of these games , and their sub - optimality when compared to the socially optimal investments , has been extensively studied in the literature , see e.g. . here , the socially optimal investment levels are those maximizing the total welfare , or equivalently , minimizing the sum of all users costs , i.e. , the literature has further proposed mechanisms for decreasing the inefficiency gap in security games , by either incentivizing or dictating improved security investments ; see for a survey .our focus in the present paper is on regulating mechanisms that use monetary taxation to incentivize socially optimal security behavior .such mechanisms assess a tax on each user ; this tax may be positive , negative , or zero , indicating payments , rewards , or no transaction , respectively .we further assume that users utilities are quasi - linear .therefore , the _ total cost _ of a user when it is assigned a tax is given by : in addition to implementing the socially optimal solution , incentive mechanisms are often required to satisfy two desirable properties .first , when using taxation , the mechanism designer prefers to maintain _( weak ) budget balance ( bb ) _ ; i.e. , it is desirable to have . 
in particular, implies a budget deficit , such that the implementation of the mechanism would call for the injection of additional resources by the designer .in addition , it is desirable to design the mechanism in a way that users _ voluntary participation ( vp ) _ conditions are satisfied ; i.e. users prefer implementing the socially optimal outcome while being assigned taxes , to the outcome attained had they unilaterally opted out .otherwise , the designer would need to enforce initial cooperation in the mechanism .note that we focus on the notion of voluntary participation instead of the usual _ individual rationality ( ir ) _ constraint , which requires a user to prefer participation to the outcome it attained at the state of anarchy ( i.e. , prior to the implementation of the mechanism ) . as mentioned in section [ sec : intro ] , such distinction is important as security is a non - excludable public good , i.e. , users can still benefit from the externalities of the actions of users participating in the mechanism , even when opting out themselves .this is in contrast to games with excludable public goods , where vp and ir are equivalent .we now proceed to introduce the pivotal and externality tax - based incentive mechanisms for security games . groves mechanisms , also commonly known as vickery - clarke - groves ( vcg ) mechanisms , refer to a family of mechanisms in which , through the appropriate design of taxes for users with quasi - linear utilities , a mechanism designer can incentivize users to reveal their true preferences in dominant strategies , thus implementing the socially optimal solution .however , the ( weak ) budget balance and voluntary participation conditions do not necessarily hold in these mechanisms , and are further dependent on the specifics of the design , as well as the game environment. in general , let be user s utility .here , is user s type ; a user s type determines the preference of the user over possible outcomes . in security games ,a user s type is its risk and investment cost functions , or equivalently , its cost function .users are required to report their types to the mechanism designer , based on which the designer decides on an allocation . in security games , an allocation is the vector of investments prescribed by the mechanism .the vcg family of mechanisms achieve truth revelation and efficiency by assigning the following taxes to users , when their reported types are : here , is the socially optimal allocation given users reported types , and is an arbitrary function that depends on the reported types of users other than .any choice of this function results in truth revelation and a socially efficient outcome , and a careful design may further result in vp and/or ( w)bb .one such choice that can achieve vp in certain environments is the _ pivotal _ , or _, mechanism , with taxes given by : here , , is the outcome maximizing the social welfare in the absence of user .this mechanism satisfies the participation constraints and achieves weak budget balance in many private and public good games ; however , this is not necessarily the case in security games .the taxes in the pivotal mechanism for the security game can be set as follows : where , is user s security cost function , is the socially optimal solution , and is the cost minimizing actions of users given user s action , and is determined by . 
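To make these definitions concrete, the sketch below builds a small two-user instance with linear investment costs and an exponential, total-effort style risk function (a specific choice made only for illustration; the model above keeps the risk functions general), computes its Nash equilibrium and social optimum, and evaluates pivotal-style taxes of the form "others' cost at the social optimum minus others' cost at the exit equilibrium reached when user i opts out", which is one reading of the (garbled) expression above. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# illustrative two-user instance: J_i(x) = c_i*x_i + L_i*exp(-(x_i + a*x_j))
c = np.array([1.0, 1.0])     # unit investment costs
L = np.array([10.0, 6.0])    # expected losses when fully unprotected
a = 0.8                      # interdependence weight

def cost(i, x):
    return c[i] * x[i] + L[i] * np.exp(-(x[i] + a * x[1 - i]))

def best_response(i, x_other):
    return max(np.log(L[i] / c[i]) - a * x_other, 0.0)   # first-order condition, clipped at 0

def nash_equilibrium():
    x = np.zeros(2)
    for _ in range(500):                                  # best-response iteration (a contraction here)
        x = np.array([best_response(i, x[1 - i]) for i in range(2)])
    return x

x_ne = nash_equilibrium()
x_so = minimize(lambda x: cost(0, x) + cost(1, x), x0=[1.0, 1.0],
                bounds=[(0, None)] * 2).x

# with two users, the exit equilibrium when either user opts out is simply the
# Nash equilibrium between the two of them
x_ee = x_ne

# pivotal-style taxes: change in the other user's cost between the social optimum
# and the exit equilibrium triggered by user i opting out
t_piv = np.array([cost(1 - i, x_so) - cost(1 - i, x_ee) for i in range(2)])

print("Nash equilibrium :", np.round(x_ne, 3))
print("social optimum   :", np.round(x_so, 3))
print("pivotal taxes    :", np.round(t_piv, 3), "  sum =", round(t_piv.sum(), 3))
for i in range(2):
    vp = cost(i, x_so) + t_piv[i] <= cost(i, x_ee)
    print(f"user {i}: voluntary participation under the pivotal taxes? {vp}")
```

In this particular instance both users end up with small rewards, so voluntary participation holds, but the rewards are not financed by anyone and the designer runs a small deficit; this previews the budget issue examined below.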
in a game of complete information, will be the nash equilibrium of the game between user and the participating users .it is straightforward to verify that this design of the pivotal mechanism in security games internalizes the externalities of users actions , and can thus lead to the implementation of the socially optimal solution .formally , [ prop1 ] in the pivotal mechanism with taxes given by , investing the socially optimal level of investment will be individually optimal , for all users .therefore , the socially optimal solution is implemented .furthermore , such design will ensure participation by all users .that is , [ prop2 ] the pivotal mechanism with taxes given by satisfies all voluntary participation constraints .the proofs of these propositions follow directly from existing literature , see e.g. .we next examine a taxation mechanism that can achieve the socially optimal solution in security games , while maintaing a balanced budget .this mechanism is adapted from .the components of the mechanism are as follows . _ the message space : _ each user provides a message to the mechanism designer . denotes user s proposal on the public good , i.e. , it proposes the amount of security investment to be made by everyone in the system , referred to as an _ investment profile_. denotes a _ pricing profile _ which suggests the amount to be paid by everyone .as illustrated below , this is used by the designer to determine the taxes of all users .therefore , the pricing profile is user s proposal on the private good . _the outcome function : _ the outcome function takes the message profiles as input , and determines the security investment profile and a tax profile as follows : in , and are treated as and , respectively .note that as by , the budget balance condition is satisfied through this construction .what this means is that the designer will not be spending resources or making profit , as the users whose tax is positive will be financing the rewards for those who have negative taxes . in other words ,the mechanism proposes a tax _ redistribution _ scheme to incentivize improved security investments . to establish that the externality mechanism can implement the socially optimal outcome in security games , we first need to show that a profile , derived at any possible ne of the externality regulated game , is the socially optimal solution .formally , [ th1 ] let be the investment and tax profiles obtained at the nash equilibrium of the regulated security game .then , is the optimal solution to the centralized problem .furthermore , if is any other nash equilibrium of the proposed game , then .furthermore , we have to show the converse of the previous statement , i.e. , given an optimal investment profile , there exists an ne of externality regulated game which implements this solution .formally , we can show the following : [ th2 ] let be the optimal investment profile in the solution to the centralized problem .then , there exists at least one nash equilibrium of the regulated security game such that .the proofs of these theorems follow the method used by .we refer the interested reader to these papers , as well as our earlier work , where we present a sketch of the proof of theorem [ th1 ] , along with an intuitive interpretation for this mechanism . using the proof of [ th1 ], we show that the tax terms at the equilibrium of the externality mechanism are given by : the interpretation is that by implementing this mechanism , each user will be financing part of user s reimbursement . 
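Continuing the two-user sketch above, the block below computes a Lindahl-style redistribution consistent with the interpretation just given: each user pays its own marginal benefit from the other's effort at the optimum, and is reimbursed what the other pays for its effort. The exact tax expression in the paper is garbled in this copy, so this form is an assumption; by construction the taxes sum to zero.

```python
import numpy as np
# continues the two-user instance above (uses c, L, a, cost, x_so, x_ee)

# marginal benefit p[i] that user i derives from the other user's investment at
# the social optimum: p[i] = -dJ_i/dx_j evaluated at x_so, for j != i
p = np.array([a * L[i] * np.exp(-(x_so[i] + a * x_so[1 - i])) for i in range(2)])

# Lindahl-style redistribution: user i pays p[i]*x_j for the externality it enjoys
# and is reimbursed p[j]*x_i for the externality it provides; the sum is zero
t_ext = np.array([p[i] * x_so[1 - i] - p[1 - i] * x_so[i] for i in range(2)])

print("externality taxes:", np.round(t_ext, 3), "  sum =", round(t_ext.sum(), 6))
for i in range(2):
    vp = cost(i, x_so) + t_ext[i] <= cost(i, x_ee)
    print(f"user {i}: voluntary participation under these taxes? {vp}")
```

In this instance the budget balances exactly, but the user who relies mostly on the other's investment is asked to pay more than opting out would cost it, so its participation constraint fails, illustrating the tension formalized in the next section.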
According to the tax expression above, the amount a user pays toward another user's reimbursement is proportional to the positive externality of that user's investment on its own utility.
In the previous section, we presented two well-known tax-based incentive mechanisms for inducing socially optimal actions, namely the pivotal and the externality mechanisms, in the context of security games. The pivotal mechanism is designed to guarantee voluntary participation, while the externality mechanism focuses on budget balance. Following these observations, one may ask whether either of these schemes, or any other tax-based mechanism, can achieve social optimality while guaranteeing both budget balance and voluntary participation simultaneously, in all instances of security games. In this section, we show that no such reliable mechanism exists. We illustrate this impossibility through two families of counter-examples: the first considers games in which the network structure is a star topology, while the second focuses on security games with weakest-link risk functions. In what follows, to evaluate users' voluntary participation conditions, we consider a user, referred to as the _loner_ or _outlier_, who is unilaterally contemplating opting out of the mechanism. As the game considered here is one of full information, the remaining participating users, who choose a welfare-maximizing solution for their (n-1)-user system, are able to predict the best response of the loner to their collective action, and thus choose their investments accordingly. As a result, the equilibrium investment profile when a user opts out is the Nash equilibrium of the game between the participating users and this loner. We will henceforth refer to this equilibrium as the _exit equilibrium_ (EE).
Assume some tax-based incentive mechanism is proposed for security games. Consider users connected through the star topology depicted in Fig. [ex:star], where the security decisions of the root affect all leaves, but each leaf's investment only affects itself and the root. Formally, let the cost function of the root be given by: and that of all leaves be: here, is any function satisfying the assumptions in Section [sec:model]. The investment cost functions are linear, with the same unit investment cost for all users.
[Fig. [ex:star]: a star topology with root user 1 and leaf users 2 through n.]
To find the socially optimal investment profile, we solve the optimization problem of minimizing the sum of all users' costs, subject to non-negative user investments. This profile should satisfy: based on the above, it is easy to see that in the socially optimal investment profile for this graph, only the root will be investing in security, while all leaves free-ride on the resulting externality. This socially optimal investment profile is given by: now, assume the root user is considering stepping out of the mechanism.
to find the investment profile resulting from this unilateral deviation , first note that the leaves security decisions will not affect one another , so that the socially optimal investment profile for the leaves is the same as their individually optimal decisions .user 1 will also be choosing its individually optimal level of investment .therefore , using users first order conditions for cost minimization , the exit equilibrium is : finally , if any leaf user leaves the mechanism , the exit equilibrium will satisfy : again , it is easy to see that .therefore , the exit equilibrium when user unilaterally leaves the mechanism is given by : we now use the socially optimal investment profile and the exit equilibria to evaluate voluntary participation and budget balance in a general mechanism .assume assigns a tax to a participating user .then , voluntary participation will hold if and only if , which reduces to : the sum of these taxes is thus bounded by : however , the above sum could be negative , e.g. , when or , indicating that weak budget balance will fail regardless of how the taxes are determined in a mechanism . in this section , we again assume a general tax - based incentive mechanism is proposed for the security games .we focus on a family of security games which approximate the _weakest link _ risk function .intuitively , this model states that an attacker can compromise the security of an interconnected system by taking over the least protected node . to use this model in our current framework , we need a continuous , differentiable approximation of the minimum function .we use the approximation , where the accuracy of the approximation is increasing in the constant .user s cost function is thus given by : where investment cost functions are assumed to be linear , with the same unit investment cost for all users . in this game , the socially optimal investment profile is given by the solution to the first order condition , which leads to : by symmetry , all users will be exerting the same socially optimal level of effort : next , assume a user unilaterally steps out of the mechanism , while the remaining users continue participating .the exit equilibrium profile can be determined using : solving the above , we get : we now use the socially optimal investment profile and the exit equilibria to analyze users participation incentives in a general mechanism , as well as the budget balance conditions .denote by the tax assigned to user by .a user s total cost functions when participating and staying out are given by : the voluntary participation condition for this user will hold if and only if , which reduces to : on the other hand , for weak budget balance to hold , we need .nevertheless , by , we have : it is easy to see that given and for any , the above sum will always be negative , indicating a budget deficit for a general mechanism , regardless of how taxes are determined . , the externality mechanism with taxes determined by will be budget balanced and guarantee voluntary participation .however , the pivotal mechanism will carry a deficit in both regions . 
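As a numerical illustration of the first (star topology) family of counter-examples, the sketch below instantiates the elided risk function as g(y) = L e^{-y} of the aggregate effort a user is exposed to (the root sees its own plus all leaves' investments, each leaf its own plus the root's), takes the exit equilibria in the form derived above, and adds up the largest taxes compatible with voluntary participation. The functional form and all numbers are assumptions made only for illustration; for this exponential instance the sum works out to c(1 − log n), which is negative for n ≥ 3, reproducing the budget-deficit conclusion.

```python
import numpy as np
from scipy.optimize import minimize

# assumed concrete instance of the star-topology counter-example: linear costs with
# unit price c, and risk g(y) = L*exp(-y) of the aggregate effort seen by each user
n, c, L = 5, 1.0, 10.0

def root_cost(x):
    return c * x[0] + L * np.exp(-(x[0] + np.sum(x[1:])))

def leaf_cost(i, x):
    return c * x[i] + L * np.exp(-(x[i] + x[0]))

def total_cost(x):
    return root_cost(x) + sum(leaf_cost(i, x) for i in range(1, n))

# socially optimal profile: only the root invests, at log(n*L/c)
x_so = np.zeros(n); x_so[0] = np.log(n * L / c)
x_check = minimize(total_cost, x0=np.ones(n), bounds=[(0, None)] * n).x
print("closed-form vs numerical social optimum:", np.round(x_so, 3), np.round(x_check, 3))

# exit equilibrium if the root opts out: leaves free-ride, root invests log(L/c)
x_ee_root = np.zeros(n); x_ee_root[0] = np.log(L / c)
# exit equilibrium if a leaf opts out: participants put everything on the root
x_ee_leaf = np.zeros(n); x_ee_leaf[0] = np.log((n - 1) * L / c)

# largest tax each user could be charged while still preferring to participate
max_tax = np.zeros(n)
max_tax[0] = root_cost(x_ee_root) - root_cost(x_so)
for i in range(1, n):
    max_tax[i] = leaf_cost(i, x_ee_leaf) - leaf_cost(i, x_so)

print("upper bounds on taxes:", np.round(max_tax, 4))
print("sum of upper bounds:  ", round(max_tax.sum(), 4))
print("c*(1 - log n) for this instance:", round(c * (1 - np.log(n)), 4))
```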
] to close this section , we would like to point out that the impossibility result on a simultaneous guarantee of social optimality , voluntary participation , and weak budget balance , is demonstrated through two family of counter - examples .in other words , we have shown that without prior knowledge of the graph structure or users preferences , it is not possible for a designer to propose a _reliable _ mechanism ; that is , one which can promise to achieve so , vp , and wbb , regardless of the realizations of utilities .nevertheless , it may still be possible to design reliable mechanisms under a restricted space of problem parameters ; in fact we identify a few such instances in section [ sec : sim ] by analyzing the class of weighted total effort models .in the remainder of the paper , to further illustrate some of the parameters affecting the performance of incentive mechanisms in security games , we focus on the pivotal and externality mechanisms .we consider the special case of weighted total effort games , and identify some of the factors that affect the total budget and participation incentives in the pivotal and externality mechanisms , respectively .the gap between the nash equilibrium and the socially optimal investment profile of a security game , as well as users participation incentives and possible budget imbalances , are dependent on the specifics of the security cost functions defined in .in particular , an appropriate choice of the risk functions for a given game is based on factors such as the type of interconnection , the extent of interaction among users , and the type of attack .several models of security interdependency have been proposed and studied in the literature ; these include the _ total effort _ , _weakest link _ , and _best shot _ models considered in the seminal work of varian on security games , as well as the _ weakest target _ games proposed in , the _ effective investment _ and _ bad traffic _ models in , and the _ linear influence network _ games in . in this paper, we take the special case of the _ weighted total effort _ games , with exponential risks and linear investment cost functions , to study the effects of interdependency on the performance of the pivotal and externality mechanisms .formally , the total cost function of a user in this model is given by : here , the investment cost function is assumed linear , .the coefficients determine the dependence of user s risk on user s action .consequently , user s risk is dependent on a weighted sum of all users actions . in particular , to isolate the effect of different features of the model on the performance of the two mechanisms , we focus on three sub - classes of the weighted total effort model .we first look at the effects of varying users self - dependence .next , we consider the effects of diversity , by breaking users into two groups of self - dependent and reliant users . finally , we study the effect of making all users increasingly dependent on a single node s action .we present numerical results and intuitive interpretation for each of the above scenarios ; formal analysis is given in the online appendix .consider a collection of users , with total cost functions determined according to , with , and : we assume , so as to ensure the existence of non - zero equilibria ; i.e. 
, at least one user exerts non - zero effort at any equilibrium of the game .the socially optimal and exit equilibria of this game can be determined by using the first order conditions on the users cost minimization problems , subject to non - negative investments .the resulting systems of equations can be solved to determine the possible exit equilibria , as well as parameter conditions under which each equilibrium happens ; the results are summarized in table [ t : vpbb_adiag ] . according to this table , we can identify five sets of parameter conditions under which different exit equilibria are possible . we can further analyze each case separately to find whether the voluntary participation conditions are satisfied under the externality mechanism , as well as whether the pivotal mechanism can operate without a budget deficit .these results are summarized in table [ t : vpbb_adiag ] as well ..can vp and bb hold simultaneously ? - effect of self - dependence [ cols="^,<,<,<,<",options="header " , ] [ t : vpbb_dominant ] to verify the analysis summarized in table [ t : vpbb_dominant ] , we plot a user s benefit from participating in the externality mechanism ( i.e. , ) , the budget of the pivotal mechanism ( i.e. , ) , and the price of anarchy of the game , as the dependence on the dominant user , , increases . in particular , we set , , and increase from 1 to 15 . as a result , we will initially be in case , with and move to case , with once .the results are depicted in fig .[ dominant_a ] . as predicted by our analysis, the pivotal mechanism will always carry a deficit . also , the voluntary participation condition for non - dominant users will fail under both mechanisms . - single dominant user , scaledwidth=80.0% ] we observe that in these family of games , having a less beneficial equilibrium leads to the voluntary participation of the dominant user , as seen in the top plot in fig . [ dominant_a ] . as the exit equilibria for the non - dominant users remains unchanged, so does their participation incentives .furthermore , we see that no equilibrium can lead to budget surplus in the pivotal mechanism . that is , although the pivotal mechanism needs to give out a smaller reward to the dominant user in case as compared to case ( hence the jump in the third plot in fig .[ dominant_a ] ) , it still fails to avoid a deficit in both cases , due to the small willingness of free - riders to pay the taxes required to cover this reward .first , note that we have identified families of _ positive instances _ ; i.e , problem parameters under which one or both mechanisms can achieve participation and maintain a balanced budget simultaneously .these include cases and in table [ t : vpbb_adiag ] , which are positive instances for both mechanisms , as well as the region with small and parameters , fig .[ small_a2_1 ] , which is a positive instance for the pivotal mechanism .it is also worth mentioning the insight behind the existence of each positive instance : * in cases and of table [ t : vpbb_adiag ] , incentive mechanisms allow an _ exchange of favors _ among users : as all users are mainly dependent on others investments , they coordinate to each increase their investments in return for improved investments by other users . 
* in the region with small and parameters in fig .[ small_a2_1 ] , the pivotal mechanism is successful as it facilitates _ the transfer of funds _ from the reliant users to the self - dependent users in return for their improved investments .second , we observe that when possible , the selection of exit equilibria that are less beneficial to the outliers helps the performance of both mechanisms .a less beneficial equilibrium can be one that requires a free - rider to become an investor when leaving the mechanism , or one that requires an investor to continue exerting effort when out ( although possibly at a lower level ) .one instance of this feature can be seen by comparing cases and with case in table [ t : vpbb_adiag ] . the same can be observed from fig .[ small_a2_1 ] , when grows , and also by comparing figs .[ small_a2_1 ] and [ large_a2_1 ] .based on this observation , we can expect that in a repeated game setup of security games , by punishing outliers with an appropriate selection of less beneficial equilibria , social optimality , voluntary participation , and budget balance conditions can be simultaneously guaranteed .the problem of incentivizing optimal security investments in an interconnected system is one example of problems concerning the provision of non - excludable public goods in social and economic networks .other examples include creation of new parks or libraries at neighborhood level in cities , reducing pollution by neighboring towns , or spread of innovation and research in industry .we summarize some of the work most relevant to the current paper . introduces a network model of public goods , and studies different features of its nash equilibria .this model is equivalent to a total effort game with linear investment costs and a general interdependence graph .the authors show that these games always have a _ specialized _nash equilibrium ; i.e. 
, one in which users are either specialists exerting full effort ( equivalent to main investors in our terminology ) , or free - riders .they show that such equilibria correspond to maximal independent sets of the graph , and that specialized equilibira may lead to higher welfare compared to other ( distributed ) nash equilibria .similarly , studies the nash equilibrium of a linear quadratic interdependence model , and relates the equilibrium effort levels to the nodes bonacich centrality in a suitably defined matrix of local complementarities .the work in generalizes these results by studying existence , uniqueness , and closed form of the nash equilibrium in a broader class of games for which best - responses are linear in other players actions .all the aforementioned work focuses on the nash equilibrium in public good provision environments .the work of is the most relevant to our work , as it focuses on implementation of pareto efficient public good outcomes , rather than the nash equilibria on a given network .the authors define a _ benefits matrix _ for any given network graph ; an entry of the matrix is the marginal rate at which s effort can be substituted by the externality of s action .the main result of the paper states that efforts at a lindahl outcome constitute an eigenvalue centrality vector of this benefits matrix .one such pareto efficient outcome , the socially optimal outcome , can be implemented using lindahl taxes determined through the externality mechanism .the current paper differs from in both modeling and results .first , while a user s action in is strictly costly for the user itself , users in our framework benefit from their own investments as well .more importantly , the focus of is on characterizing users effort levels in terms of network structure for lindahl outcomes , the individual rationality of which is established by comparing the pareto efficient outcome with the state of anarchy , rather than considering unilateral deviations from the mechanism .our work on the other hand considers both lindahl and pivotal taxes , and focuses on users voluntary participation incentives when unilaterally opting out , as well as tax balance issues .finally , in the context of security games , our work in section [ sec : sim ] is most related to .the weighted total effort risk model is a generalization of the total effort model in , and is similar to the effective investment model in and the linear influence network game in .the linear influence models in have been proposed to study properties of the interdependence matrix affecting the existence and uniqueness of the nash equilibrium .the effective investment model in has been considered to determine a bound on the price of anarchy gap , i.e. the gap between the socially optimal and nash equilibrium investments , in security games .our work on the above model complements this literature , by considering the effect of users interdependence on the performance of incentive mechanisms .we have shown that in the problem of provision of non - excludable public goods on networks , under general assumptions on the graph structure and users preferences , it is not possible to design a tax - based incentive mechanism to implement the socially optimal solution while guaranteeing voluntary participation and maintaining a ( weakly ) balanced budget . even under a fully connected graph and users with weighted total effort risk functions , we need further conditions on problem parameters ( e.g. 
number of users , the level of interdependence , and cost of investment ) to ensure that the well - known pivotal and externality mechanisms can achieve social optimality , budget balance , and voluntary participation , simultaneously .these positive instances occur when users can exchange favors by agreeing on increasing their investments , or when they can transfer funds to the more influential users in return for their increased efforts .a comprehensive characterization of problem instances in which all requirements can be simultaneously satisfied remains a main direction of future work .the authors would like to thank armin sarabi and hamidreza tavafoghi for many useful discussions .this work is supported by the department of homeland security ( dhs ) science and technology directorate , homeland security advanced research projects agency ( hsarpa ) , cyber security division ( dhs s&t / hsarpa / csd ) , baa 11 - 02 via contract number hshqdc-13-c - b0015 .10 url # 1`#1`urlprefixhref # 1#2#2 # 1#1 a. laszka , m. felegyhazi , l. buttyn , a survey of interdependent security games , crysys 2 . a. mas - colell , m. d. whinston , j. r. green , et al . , microeconomic theory , vol . 1 , oxford university press new york , 1995 . j. grossklags , s. radosavac , a. a. crdenas , j. chuang , nudge : intermediaries role in interdependent network security , in : trust and trustworthy computing , springer , 2010 , pp . 323336 .r. bhme , g. schwartz , modeling cyber - insurance : towards a unifying framework ., in : workshop on the economics of information security ( weis ) , 2010 .l. jiang , v. anantharam , j. walrand , how bad are selfish investments in network security ? ,ieee / acm transactions on networking 19 ( 2 ) ( 2011 ) 549560 .h. kunreuther , g. heal , interdependent security , journal of risk and uncertainty 26 ( 2 - 3 ) ( 2003 ) 231249 .h. varian , system reliability and free riding , economics of information security ( 2004 ) 115 .e. h. clarke , multipart pricing of public goods , public choice 11 ( 1 ) ( 1971 ) 1733 .l. hurwicz , outcome functions yielding walrasian and lindahl allocations at nash equilibrium points , the review of economic studies ( 1979 ) 217225 .d. c. parkes , iterative combinatorial auctions : achieving economic and computational efficiency , ph.d .thesis , university of pennsylvania ( 2001 ) .s. sharma , d. teneketzis , a game - theoretic approach to decentralized optimal power allocation for cellular networks , telecommunication systems 47 ( 1 - 2 ) ( 2011 ) 6580 .j. grossklags , n. christin , j. chuang , secure or insure ? : a game - theoretic analysis of information security games , in : proceedings of the 17th international conference on world wide web , acm , 2008 , pp .209218 .p. naghizadeh , m. liu , budget balance or voluntary participation ? incentivizing investments in interdependent security games , in : 52th annual allerton conference on communication , control , and computing , ieee , 2014 .r. miura - ko , b. yolken , j. mitchell , n. bambos , security decision - making among interdependent organizations , in : computer security foundations symposium , 2008 .ieee 21st , ieee , 2008 , pp .c. ballester , a. calv - armengol , interactions with hidden complementarities , regional science and urban economics 40 ( 6 ) ( 2010 ) 397406 .m. elliott , b. golub , a network approach to public goods , in : proceedings of the fourteenth acm conference on electronic commerce , acm , 2013 , pp .377378 .y. bramoull , r. 
kranton , public goods in networks , journal of economic theory 135 ( 1 ) ( 2007 ) 478494 . c. ballester , a. calv - armengol , y. zenou , who s who in networks. wanted : the key player , econometrica 74 ( 5 ) ( 2006 ) 14031417 .in this appendix , we solve for the socially optimal investment profile , and identify the possible exit equilibria , and parameter conditions under which each equilibrium is possible. the socially optimal investment profile in this game will be given by : to find the exit equilibrium when a user steps out , , we can write the first order conditions on the users cost minimization problems . to simplify notation , we denote and .the system of equation determining and is given by : there are four possible exit equilibria , depending on the whether and/or are non - zero .we look at each case separately . intuitively , when user steps out , both sides continue to invest in security , perhaps at reduced levels , but no user is fully free - riding .we would need the following to hold simultaneously : let and . solving for leads to : to find the range of parameters for which the above holds , we need to ensure that are indeed positive . * if , then . for ,we need : * if , then . for ,we need : in this case , the participating users revert to investing zero , so that the outlier is forced to increase its investment : as a result , we get . for this to be consistent with the second condition ,we require : the above always fails to hold for , as the lhs is always more than 1 , while the rhs is surely less than 1 by the assumption . intuitively , when self - dependence is higher than co - dependence on the outlier , the remaining users will not rely solely on externalities and continue investing when user steps out . for on the other hand , for a small enough ( which in turn leads to higher investment be the outlier ) ,the equation may hold .this means that the loner free - rides , so that we have : as a result , we get . for this to be consistent with the first condition ,we need : note that this always hold for , but not necessarily for .we would need the following to hold simultaneously : which will never hold , as we initially required that . in this appendix , we present a ( partial ) analysis of possible exit equilibria , and parameter conditions under which each is possible , for the wighted total effort family described in section [ sec : sim_2a ] . denoting the investments of the users in and by and , respectively , the socially optimal investment profile in this gameis determined according to the first order conditions on users cost minimization problems : it is easy to see that at this socially optimal solution , , and is given by : in general , the above equation does not have a closed form solution .however , it is possible to find a lower bound and an upper bound on the solution .it is also possible to solve for numerically .once we solve for this socially optimal investment , we can determine the taxes assigned by the externality mechanism : note that the sum of the above taxes is indeed zero .also , it is interesting to note that as expected , the free - riders in always pay a tax , while the main investors in receive a subsidy . 
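since the first order conditions above generally lack a closed form, the socially optimal profile has to be found numerically. the following minimal sketch illustrates this for a generic weighted total effort model; the exponential risk function and the values of the interdependence weight a, the unit investment cost c and the number of users n are illustrative assumptions, not the exact specification analyzed in this appendix.

```python
# illustrative sketch (not this paper's exact specification): a weighted
# total effort game in which user i's risk is exp(-(x_i + a * sum_{j!=i} x_j))
# and effort has unit cost c. the socially optimal profile minimizes the sum
# of all users' costs; it is computed numerically, mirroring the remark above
# that the first order conditions generally have no closed-form solution.
import numpy as np
from scipy.optimize import minimize

n, a, c = 5, 0.6, 0.4      # assumed: number of users, interdependence, unit cost

def total_cost(x):
    x = np.asarray(x)
    risk = np.exp(-(x + a * (x.sum() - x)))   # weighted total effort risk of each user
    return float(np.sum(risk + c * x))

res = minimize(total_cost, x0=np.full(n, 0.1),
               bounds=[(0.0, None)] * n, method="L-BFGS-B")
print("socially optimal investments:", np.round(res.x, 3))
print("total cost at the optimum   :", round(res.fun, 3))
```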
in order to find exit equilibria in this family of games , we would again need to solve equations with a similar format to that of the socially optimal solution , which in general lack a closed form solution .we therefore do not include a full analysis of this scenario , and limit our discussion to some interesting features of the possible exit equilibria .let denote the investment of the deviating user from group , and and denote the investments of users remaining in and .we have the following system of equations : by an argument similar to that in the derivation of the so solution in [ sec : sim_2a ] , we will always have . as a result ,the system of equations reduces to : we consider the following possible cases , depending on whether and/or are non - zero .this would require a solution to the following system of equations : the above equations tell us that : we can now substitute in the second equation to find : \\ & - \frac{a_1 - 1}{n_1 - 1}[\log\frac{a_1}{c}-a_1x ] ) + n_2\exp(-[\log\frac{a_1}{c}-(a_1 - 1)x ] ) = c~ , \end{aligned}\ ] ] which can be solved numerically .this would require : the first equation above can be used to find a lower - bound on . intuitively , the investments should be high enough for the outlier to decide against investing ( i.e. , set ) .we have : given that the lhs of the second equation , which determins , is decreasing in , the above system of equation is consistent if and only if : let denote the investment of the deviating user from group , and and denote the investments of users remaining in and .we have the following system of equations : by an argument similar to that in the derivation of the so solution in [ sec : sim_2a ] , we will always have . as a result, the system of equations reduces to : we consider the following possible cases .this would require a solution to the following system of equations : from the first equation we know that . to solve the system , we substitute in the second equation and obtain : ) \\ & \hspace{1 in } + ( n_2 - 1)\exp(-\log\frac{a_2}{c}-(1-a_2)x ) = c~ , \end{aligned}\ ] ] which can be solved numerically . in this appendix , we solve for the socially optimal investment profile , and identify the possible exit equilibria , and parameter conditions under which each equilibrium is possible .it is easy to show that in a socially optimal investment profile , only user 1 will be exerting effort , so that : we next find the exit equilibria under two different cases .first , if any non - dominant user steps out of the mechanism , user will continue exerting all effort , but at a lower level given by : next , if user 1 steps out of the mechanism , there are two possible exit equilibria : if , there will be enough externality for users to continue free - riding , resulting in the following equilibrium investment levels : however , when , user 1 will free - ride on the externality of other users investments , leading to the exit equilibrium : this appendix , we separately analyze each of the possible cases identified in appendix [ app : adiag ] , summarized in table [ t : vpbb_adiag ] . specifically , we are interested in the budget balance condition under the pivotal mechanism , and users participation incentives in the externality mechanism . in this case , the underlying parameters satisfy and . as a result ,the exit equilibrium ( ee ) is such that , and . 
therefore , the costs of users at the so and ee are given by : note that is a decreasing function of .thus , for all , resulting in , indicating rewards to all users , and thus a budget deficit in all scenarios . intuitively ,although when a user steps out , other users have to invest less in security , thus decreasing their direct investment costs , still their overall security costs go up as a result of the increased risks .consequently , each user should be payed a reward to be kept in the mechanism , resulting in a budget deficit .based on the last inequality , define the function .this function is increasing in . as a result, it obtains its maximum when reaches its maximum value , which by the initial condition is given by .thus , let , and define ( i.e. , we are assuming a fixed ) .the derivative of this function wrt is given by : as the above is positive for all , we conclude that is an increasing function in .furthermore , , which in turn means that , and therefore , is always non - positive .this in turn means that the vp condition can never be satisfied . for this case ,the underlying parameters satisfy and . as a result ,the exit equilibrium ( ee ) is such that , and are given by and .therefore , the costs of users at the so and ee are given by : for the mechanism to have a budget deficit we would need , which holds if and only if : the last line is true because , for all .therefore , the mechanism always carries a budget deficit .the mechanism fails voluntary participation if and only if : let , and define ( i.e. , we are assuming a fixed ) .the derivative of this function wrt is given by : as the above is positive for all , we conclude that is an increasing function in .furthermore , , which in turn means that , and therefore , that the vp condition always fails to hold under these parameter settings . here , we only require that , and all other values of or will guarantee the existence of an equilibrium and .this is thus parallel with case .users costs in the so and ee are similarly given by : note that is a decreasing function of .thus , for all , resulting in , indicating rewards to all users , and thus a budget deficit in all scenarios ( exactly similar to case ) .voluntary participation will fail if and only if , that is : first , we note that the rhs is always greater than 1 , as . on the other hand , since , holds for all , so that the lhs will be less than 1 .therefore , the vp condition always fails . for the mechanism to have budget balance we would need , which holds if and only if : the last line follows from the previous because , and is true because its lhs is and its rhs is .therefore , the mechanism always has budget balance in this scenario .the mechanism has voluntary participation if and only if : the last statement holds because the second element in the logarithm is always less than 1 , due to , and the result follows as , for all . first we use to conclude that .now , for the mechanism to have budget balance we would need , which holds if and only if : where the last line line follows from the previous because , and is true because its lhs is and its rhs is .therefore , the mechanism always has budget balance in this scenario . as is a decreasing function in when , and , the costswhen staying out are higher for user . therefore vp is satisfied in the externality mechanism in this case . 
in this appendix , we look at the performance of the pivotal and externality mechanisms , under the different exit equilibria identified in section [ sec : sim_dominant ] , summarized in table [ t : vpbb_dominant ] . in the externality mechanism ,users taxes are given by : for non - dominant users to voluntarily participate in the mechanism , we require : however , , and .therefore , the voluntary participation constraints will always fail to hold in the externality mechanism .finally , we analyze the total budget in the pivotal mechanism for the current scenario .the taxes for the non - dominant users will be given by : the taxes for user 1 will depend on the realized exit equilibrium .if , this tax is given by : the sum of the pivotal taxes under this parameter conditions will then be given by : note that , and therefore , with , the above sum is always negative . we conclude that the pivotal mechanism will always carry a deficit .on the other hand , when , the tax for the dominant user is given by : the sum of the pivotal taxes will then be given by : by the same argument as before , the above sum will always be negative , indicating a budget deficit in the pivotal mechanism under this scenario as well .
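to make the budget discussion above concrete, the sketch below reuses the illustrative weighted total effort model from the earlier sketch and computes clarke-style pivotal taxes. the tax formula used here (others' cost at the social optimum minus their cost at an exit outcome) and the use of the fully non-cooperative nash equilibrium as a stand-in for the exit equilibrium are textbook simplifications, not necessarily the exact definitions analyzed in this paper; a negative tax is a reward, and a negative sum of taxes signals a budget deficit.

```python
# clarke-style pivotal taxes in the illustrative weighted total effort model
# (all parameter values and functional forms are assumptions for the sketch).
import numpy as np
from scipy.optimize import minimize

n, a, c = 5, 0.6, 0.4

def cost(i, x):
    return np.exp(-(x[i] + a * (x.sum() - x[i]))) + c * x[i]

def social_optimum():
    f = lambda x: sum(cost(i, np.asarray(x)) for i in range(n))
    return minimize(f, np.full(n, 0.1), bounds=[(0.0, None)] * n).x

def nash_equilibrium():
    # crude stand-in for an exit equilibrium: every user best-responds selfishly;
    # the first order condition of exp(-(x_i + s)) + c * x_i gives x_i = -log(c) - s.
    x = np.zeros(n)
    for _ in range(200):
        for i in range(n):
            s = a * (x.sum() - x[i])
            x[i] = max(0.0, -np.log(c) - s)
    return x

x_so, x_ee = social_optimum(), nash_equilibrium()
taxes = [sum(cost(j, x_so) for j in range(n) if j != i)
         - sum(cost(j, x_ee) for j in range(n) if j != i) for i in range(n)]
print("pivotal taxes:", np.round(taxes, 3), " sum:", round(sum(taxes), 3))
```

with these illustrative numbers every tax comes out negative, i.e., each user would have to be rewarded to stay in the mechanism, which reproduces the budget deficit discussed above.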
in a system of interdependent users, the security of an entity is affected not only by that user's own investment in security measures, but also by the positive externality of the security decisions of (some of) the other users. the provision of security in such a system is therefore modeled as a public good provision problem, and is referred to as a security game. in this paper, we compare two well-known incentive mechanisms for incentivizing optimal security investments among users in this context, namely the pivotal and the externality mechanisms. the taxes in a pivotal mechanism are designed to ensure users' voluntary participation, while those in an externality mechanism are devised to maintain a balanced budget. we first show the more general result that, due to the non-excludable nature of security, no mechanism can incentivize the socially optimal investment profile while at the same time ensuring voluntary participation and maintaining a balanced budget for all instances of security games. to further illustrate, we apply the pivotal and externality mechanisms to the special case of _ weighted total effort _ interdependence models, and identify some of the effects of varying interdependency between users on the budget deficit in the pivotal mechanism, as well as on the participation incentives in the externality mechanism. keywords: interdependent security games, budget balance, voluntary participation, mechanism design
consider a functional regression model where is the value of unknown function at the observed covariate and is an error term . to fit an unknown function may consider a process regression model where is a random function and is an error process for .a gpr model assumes a gaussian process ( gp ) for the random function .it has been widely used to fit data when the regression function is unknown : for detailed descriptions see rasmussen and william ( 2006 ) , and shi and choi ( 2011 ) and references therein .gpr has many good features , for example , it can model nonlinear relationship nonparametrically between a response and a set of large dimensional covariates with efficient implementation procedure . in this paperwe introduce an etpr model and investigate advantages in using an extended t - process ( etp ) .blup procedures in linear mixed model are widely used ( robinson , 1991 ) and extended to poisson - gamma models ( lee and nelder , 1996 ) and tweedie models ( ma and jorgensen , 2007 ) .efficient blup algorithms have been developed for genetics data ( zhou and stephens , 2012 ) and spatial data ( dutta and mondal , 2015 ) . in this paper, we show that blup procedures can be extended to gpr models. however , gpr fits are susceptible to outliers in output space ( ) .loess ( cleveland and devlin , 1988 ) has been developed for a robust fit against such outliers .however , it requires fairly large densely sampled data set to produce good models and does not produce a regression function that is easily represented by a mathematical formula . for models with many covariates , it is inevitable to have sparsely sampled regions .wauthier and jordan ( 2010 ) showed that the gpr model tends to give an overfit of data points in the sparsely sampled regions ( outliers in the input space , ) .thus , it is important to develop a method which produces good fits for sparsely sampled regions as well as densely sampled regions .wauthier and jordan ( 2010 ) proposed to use a heavy - tailed process .however , their copula method does not lead to a close form for prediction of as an alternative to generate a heavy - tailed process , various forms of student -process have been developed : see for example yu _ et al_. ( 2007 ) , zhang and yeung ( 2010 ) , archambeau and bach ( 2010 ) and xu _et al_. ( 2011 ) .however , shah _ et al_. ( 2014 ) noted that the -distribution is not closed under addition to maintain nice properties in gaussian models . in this paper , we develop a specific etpr model which is closed under addition to retain many favorable properties of gpr models . due to its special structure of construction ,the resulting etpr model gives computationally efficient algorithm , i.e. a slight modification of the existing blup algorithm provides the robust blup procedure . 
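as a preview of the construction detailed in section 2 below, the following minimal sketch illustrates the scale-mixture idea behind such heavy-tailed process priors: a gaussian process whose covariance is multiplied by an inverse-gamma distributed scale produces occasional large-amplitude draws. the kernel, the evaluation grid and the (nu, omega) values are illustrative assumptions, not the settings used later in the paper.

```python
# minimal sketch of the scale-mixture construction behind a heavy-tailed
# process prior: draw r ~ inverse-gamma(nu, omega) and then a gaussian
# process with covariance r * k. all numerical choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.3**2) + 1e-8 * np.eye(50)
L = np.linalg.cholesky(K)

nu, omega = 2.0, 1.0
gp_draw = L @ rng.standard_normal(50)                    # plain gaussian process draw
r = 1.0 / rng.gamma(shape=nu, scale=1.0 / omega)         # r ~ inverse-gamma(nu, omega)
heavy_draw = np.sqrt(r) * (L @ rng.standard_normal(50))  # scale-mixed, heavier-tailed draw

print("gp draw range         :", gp_draw.min().round(2), gp_draw.max().round(2))
print("scale-mixed draw range:", heavy_draw.min().round(2), heavy_draw.max().round(2))
```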
under the proposed etpr model , marginal and predictive distributionsare in closed forms .furthermore , it gives a robust blup procedure against outliers in both input and output spaces .properties of the robust blup procedure are investigated .the remainder of the paper is as follows .section 2 presents an etp and its properties .section 3 proposes an etpr model and discusses the inference and implementation procedures .robustness properties and information consistency of robust blup predictions are shown in section 3 .numerical studies and real examples are in section 4 , followed by concluding remarks in section 5 .all the proofs are in appendix .as a motivating example , we generated two data sets with sample sizes of and where s are evenly spaced in [ 0 , 1.5 ] for the 9 ( or 48 ) data points and the remaining point is at 2.0 ( or two points at 1.8 and 2.0 ) .thus , the remaining point or the two points are sparse ones , meaning they are far away from the other data points in input space . in addition, we also make the data point 2.0 to be an outlier in output space by adding an extra error from either or , where is the student distribution with two degree of freedom .prediction curves for simulated data are plotted in figure [ fig1 ] , where circles represent data points , solid and dashed lines stand for prediction and their 95% confidence bounds .the true function is zero . for a small sample size ,figure 1(a - f ) shows that loess and gpr predictions are similar and the etpr prediction is the smoothest and shrinks the data point 2.0 the most heavily , i.e. selective shrinkage occurs . for a moderate sample size ,1(g - l ) shows that loess and etpr predictions are similar .however , the etpr prediction still shrinks the most at 2.0 . even though unreported , for a large sample size all give similar predictions .denote observed data set by where and . for random component at a new point ,the best unbiased predictor is it is called a blup if it is linear in its standard error can be estimated with to have an efficient implementation procedure , it is useful to have explicit forms for the predictive distribution and . let be a real - valued random function such that .analogous to double hierarchical generalized linear models ( lee and nelder , 2006 ) , we consider a following hierarchical process , where stands for gp with mean function and covariance function , and stands for an inverse gamma distribution with the density function and is the gamma function .then , follows an etp implying that for any collection of points , we have where means that has an extended multivariate -distribution ( emtd ) with the density function , , and for some mean function and kernel function it follows that at any collection of finite points etp has an analytically representable emtd density being similar to gp having multivariate normal density .note that is defined when and is defined when .when , becomes the multivariate -distribution of lange _et al_. ( 1989 ) .when and , becomes the generalized multivariate -distribution of arellano - valle and bolfarine ( 1995 ) .for it easily obtains that , and thus , we may say that the has a heavier tail than the .* proposition 1 * _ let _ * _ when as , we have _ * _ let be a p random vector such that . for a linear system with , we have with and _ . *_ let be a new data point and then , with , for . 
_ even if the mean and covariance functions of can not be defined when , from proposition 1(iii ) , the mean and covariance functions of the conditional process do always exist if . also from proposition 1(iii ) , the conditional process converges to a gp , as either or goes to .thus , if the sample size is large enough , the etp behaves like a gp . for a new point , we have , where and .note that from lemma 2(iv ) . under various combinations of and ,the etp generates various -processes proposed in the literature .for example , is the -process of shah _ et al_. ( 2014 ) .they showed that if covariance function follows an inverse wishart process with parameter and kernel function , and , then has an extended -process . is the student s -process of rasmussen and william ( 2006 ) and is that of zhang and yeung ( 2010 ) .consider the process regression model ( [ assumed ] ) in this section we introduce an etpr model , where and have a joint etp process , kernel function and is an indicator function .the joint etp above can be constructed hierarchically as and this implies that and to give hence , additivity property of the gp and many other properties hold conditionally and marginally in the etp .when , the etpr model becomes a gpr model . for observed data ,this leads to a functional regression model where and .now it follows that where .consider a linear mixed model where is the design matrix for fixed effects , is the design matrix for random effect and is a white noise .suppose that , , and with and .then , the linear mixed model becomes the functional regression model with this shows that etpr models extend the conventional normal linear mixed models to a nonlinear functional regression . in contrary to loess , this also shows that the etpr method can produce a regression function , easily represented by a mathematical formula . in the hierarchical construction of etp ,there is only one single random effect so that is not estimable , confounded with parameters in covariance matrix .this means that and are not estimable .following lee and nelder ( 2006 ) , we set because if thus , the variance does not depend upon as this is also true when by doing this way , the first two moments of gp and etp have the same parametrization . zellner ( 1976 ) also noted that can not be estimated with a single realization of . in multivariatet - distribution , lange _ et al_. ( 1989 ) proposed to use .zellner ( 1976 ) suggested that can be chosen according to investigator s knowledge of robustness of regression error distribution .as , etp tends to gp .when robustness property is an important issue , a smaller is preferred .we tried various values for and find that works well . from now on we set to have furthermore , in functional regression models it is conventional to assume thus , without loss of generality we assume so far we have assumed that the covariance kernel is given . to fit the etpr model , we need to choose .a way is to estimate the covariance kernel nonparametrically ; see e.g. hall _ et al_. ( 2008 ) . however , this method is very difficult to be applied to problems with multivariate covariates .thus , we choose a covariance kernel from a function family such as a squared exponential kernel and matrn class kernel .this paper employs a combination of a squared exponential and a non - stationary linear covariance kernel as follows , where are a set of hyper - parameters . 
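a minimal sketch of one common way to write such a combined kernel is given below; the exact parametrization of ( [ kerfun ] ) in this paper may differ, so the signal scale v0, the per-covariate weights w and the linear-term scale a0 should be read as placeholders for the hyper-parameters discussed next.

```python
# one common form of a squared exponential plus non-stationary linear kernel;
# treat it as a sketch of ( [ kerfun ] ), not the paper's exact parametrization.
import numpy as np

def kernel(x1, x2, v0=1.0, w=None, a0=0.1):
    """x1: (n, d), x2: (m, d). v0 scales the squared exponential part, w holds
    per-covariate weights (inverse squared length scales), and a0 scales the
    non-stationary linear part."""
    x1, x2 = np.atleast_2d(x1), np.atleast_2d(x2)
    w = np.ones(x1.shape[1]) if w is None else np.asarray(w)
    sq = ((x1[:, None, :] - x2[None, :, :])**2 * w).sum(axis=-1)
    return v0 * np.exp(-0.5 * sq) + a0 * (x1 @ x2.T)

x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
print(np.round(kernel(x, x), 3))    # 5 x 5 covariance matrix on a toy grid
```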
in ( [ kerfun ] ) , measurethe length scale of each input covariate , known as scaling parameter which controls the vertical scale of variations of a typical function of the input , and defines the scale of non - stationary linear trends .the small value of means that the corresponding covariate may have great contribution in the covariance function .more about kernel function can be seen in rasmussen and william ( 2006 ) and shi and choi ( 2011 ) .let where is a parameter for and are those for . herethe joint density of is where and are density functions of emtds .because , the maximum likelihood ( ml ) estimator for can be obtained by solving where , , and the score equations for gpr models above are ml estimating equations in linear mixed models with and thus , a little modification of existing blup procedures gives a parameter estimation for etpr models .since from lemma 2(iii ) we have with thus , given is linear in i.e. the blup for which is an extension of the blup in linear mixed models to etpr models .this blup has a form independent of so that it is the blup for gpr models .however , the conditional variance depends upon except when i.e. under gpr models .thus , the blups for the etpr and gpr models have a common form , but have different predictors and their variance estimations because of different parameter estimations ( and ) .furthermore , all quantities necessary to compute and are available during implementing blup procedures . for a given new data point , we have by lemma 2(iii ) , the predictive distribution is where furthermore , from proposition 1(iii ) , where and from lemma 2(iii ) , we also have with and .consequently , this conditional predictive process can be used to construct prediction of the unobserved response at and its standard error can be formed using the predictive variance , given by , and the proof is in appendix .the predictive variance for in ( [ spec.cov ] ) differs from that for the prediction of and discussed above is the best unbiased predictions under etpr models , and so is under gpr models .however , their standard errors ( variance estimators ) differ .note that where is the blup for .thus , the standard error estimate of the blup under the etpr model increases if the model does not fit the responses well while that under the gpr model does not depend upon the model fit .random - effect models consist with three objects , namely the data , unobservables ( random effects ) and parameters ( fixed unknowns ) for inferences of such models , lee and nelder ( 1996 ) proposed the use of the h - likelihood .lee and kim ( 2015 ) showed that inferences about unobservables allow both bayesian and frequentist interpretations . in this paper , we see that the etpr model is an extension of random - effect models .thus , we may view the functional regression model ( [ assumed ] ) either as a bayesian model , where a gp or an etp as a prior , or as a frequentist model where a latent process such as gp and etp is used to fit unknown function in a functional space ( chapter 9 , lee _ et al_. , 2006 ) . with the predictive distribution above, we may form both bayesian credible and frequentist confidence intervals .estimation procedures in section 3.1 can be viewed as an empirical bayesian method with a uniform prior on in frequentist ( or bayesian ) approach , ( [ margin ] ) is a marginal likelihood for fixed ( or hyper ) parameters .let and be the blup for and its variance estimate , respectively , under the etpr model . 
and let and be those under the gpr model with . let and be two student t-type statistics for a null hypothesis . under a bounded kernel function , if for some , , while remains bounded . therefore , the statistic for etpr is more robust against outliers in output space compared to that for gpr . this property still holds for ml estimators . * proposition 2 * _ if the kernel function is bounded , continuous and differentiable on , then the ml estimator from the etpr has a bounded influence function , while that from the gpr does not . _ let be the density function to generate the data given under the true model ( [ true ] ) , where is the true underlying function of . let be a measure of random process on space . let be the density function to generate the data given under the assumed etpr model ( [ assum1 ] ) . thus , the assumed model ( [ assum1 ] ) is not the same as the true underlying model ( [ true ] ) . here is the common in both models and is the true value of . let be the estimated density function under the etpr model . denote d[p_1 \| p_2] = \int ( \log p_1 - \log p_2 ) \, dp_1 . hence , it follows from ( [ fld ] ) , ( [ kbdist ] ) and ( [ eqp ] ) that , since the covariance function is bounded and continuous in and , we have as . hence , there exist positive constants and such that for large enough . plugging ( [ eqns ] ) in ( [ gpcons-1 ] ) , we have the inequality ( [ gpcons ] ) . for the proof of proposition 3 , we need condition ( a ) : is bounded and . * proof of proposition 3 * : it is easy to show that . under the conditions in lemma 3 and condition ( a ) , it follows from lemma 3 that the expected kullback - leibler divergence tends to 0 as n \rightarrow \infty . that proves proposition 3 . archambeau , c. and bach , f. ( 2010 ) , multiple gaussian process models . _ advances in neural information processing systems _ . 2 . arellano - valle , r. b. and bolfarine , h. ( 1995 ) . on some characterization of the t - distribution . _ statistics & probability letters _ , 25 , 79 - 85 . cleveland , w. s. , devlin , s. j. ( 1988 ) , locally - weighted regression : an approach to regression analysis by local fitting . _ journal of the american statistical association _ 83 : 596 - 610 . dutta , s. and mondal , d. ( 2015 ) , an h - likelihood method for spatial mixed linear models based on intrinsic auto - regressions . _ j. r. statist . soc . b _ 77 : 699 - 726 . 5 . hall , p. , müller , h.- g. , and yao , f. ( 2008 ) , modelling sparse generalized longitudinal observations with latent gaussian processes , _ journal of royal statistical society , ser . b _ , 70 , 703 - 723 . hampel , f. r. , ronchetti , e. m. , rousseeuw , p. j. and stahel , w. a. ( 1986 ) , robust statistics : the approach based on influence functions , wiley . lange , k. l. , little , r. j. a. and taylor , j. m. g. ( 1989 ) , robust statistical modelling using the t distribution , _ journal of the american statistical association _ , 84 , 881 - 896 . lee , y. and kim , g. ( 2015 ) . h - likelihood predictive intervals for unobservables , _ international statistical review _ , doi : 10.1111/insr.12115 . lee , y. and nelder , j. a. ( 1996 ) . hierarchical generalized linear models . _ journal of the royal statistical society b _ , 58 , 619 - 678 . lee , y. and nelder , j. a. ( 2006 ) . double hierarchical generalized linear models ( with discussion ) .
_ journal of the royal statistical society : c ( applied statistics ) _ , 55 , 139 - 185 .lee , y. , nelder , j.a . andpawitan , y. ( 2006 ) .generalized linear models with random effects , unified analysis via h - likelihood .chapman & hall / crc .ma , r. and jorgensen , b. ( 2007 ) . nested generalized linear mixed models : an orthodox best linear unbiased predictor approach ._ journal of the royal statistical society b _ , 69 , 625 - 641 . 13 .rasmussen , c. e. and williams , c. k. i. ( 2006 ) , gaussian processes for machine learning .cambridge , massachusetts : the mit press .robinson , g.k .( 1991 ) . that blup is a good thing : the estimation of random effects ( with discussion ) ._ statistical science _ 6 , 15 - 51 . 15 .seeger m. w. , kakade s. m. and foster d. p. ( 2008 ) , information consistency of nonparametric gaussian process methods , ieee transactions on information theory , 54 , 2376 - 2382 .shah a. , wilson a.g . and ghahramani z. ( 2014 ) , student - t processes as alternatives to gaussian processes .proceedings of the 17th international conference on artificial intelligence and statistics ( aistats ) , 877 - 885 . 17 .shi , j. q. and choi , t. ( 2011 ) , gaussian process regression analysis for functional data , london : chapman and hall / crc .wang , b. and shi , j.q .( 2014 ) , generalized gaussian process regression model for non - gaussian functional data ._ journal of the american statistical association _ , 109 , 1123 - 1133 .wauthier , f. l. and jordan , m. i. ( 2010 ) .heavy - tailed process priors for selective shrinkage . in advances in neural information processing systems , 2406 - 2414 . 20 .xu , p , lee , y. and shi , j. q. ( 2015 ) automatic detection of significant areas for functional data with directional error control . arxiv:1504.08164xu , z. , yan , f. and qi , y. ( 2011 ) , sparse matrix - variate t process blockmodel . proceedings of the 25th aaai conference on artificial intelligence , 543 - 548 .yu s. , tresp v. and yu k. ( 2007 ) , robust multi - tast learning with t - process .proceedings of the 24th international conference on machine learning , 1103 - 1110 . 23 .zellener , a. ( 1976 ) , bayesian and non - bayesian analysis of the regression model with multivariate student - t error terms , _ journal of the american statistical association _, 71 , 400 - 405 . 24 .zhang , y. and yeung , d.y . (2010 ) , multi - task learning using generalized process .proceedings of the 13th international conference on artificial intelligence and statistics ( aistats ) , 964 - 971 . 25 .zhou , x. and stephens , m. ( 2012 ) , genome - wide efficient mixed - model analysis for association studies ._ nature genetics _ 44 : 821 - 824 .
the gaussian process regression (gpr) model has been widely used to fit data when the regression function is unknown, and its nice properties have been well established. in this article, we introduce an extended t-process regression (etpr) model, which gives a robust best linear unbiased predictor (blup). owing to its succinct construction, it inherits many attractive properties from the gpr model, such as closed forms for the marginal and predictive distributions, which give an explicit form for robust blup procedures, and the ability to cope with high-dimensional covariates through an efficient implementation obtained by slightly modifying existing blup procedures. properties of the robust blup are studied. simulation studies and real data applications show that the etpr model gives a robust fit in the presence of outliers in both input and output spaces and performs well in prediction, compared with the gpr and locally weighted scatterplot smoothing (loess) methods. keywords: gaussian process regression, selective shrinkage, robustness, extended t-process regression, functional data
we provide an overview of the r&d focus at the zurich cell of olsen ltd . by detailing the general conceptual framework and by identifying key themes embedded in it, we outline what we believe are the prerequisites and building - blocks for successfully devising trading models ( tms ) and other financial applications .chapters [ sec : lon ] , [ sec : fund ] , and [ sec : cs ] are based on appendix a , chapter [ sec : sl ] on appendix c of .this document is the property of olsen ltd . and is proprietary and confidential .it is not permitted to disclose , copy or distribute the document except by written permission of olsen ltd .laws of nature can be seen as regularities and structures in a highly complex universe .they depend critically on only a small set of conditions , and are independent of many other conditions which could also possibly have an effect .science can be understood as a quest to capture laws of nature within the framework of a formal representation , or model .naively one would expect science to adhere to basic notions of common sense , like logic , empiricism , causality , and rationality . the philosophy of science deals with the assumptions , foundations , methods and implications of science .it tries to describe what constitutes a law of nature and tries to answer the question of what knowledge is and how it is acquired . in the philosophy of science ,the programs of logical empiricism and critical rationalism have been unsuccessful in conclusively answering the above mentioned questions and in providing an ultimate justification for science based on common sense . indeed , the streams of postmodernism , constructivism and relativism explicitly question the notions of objectivity , rationality , absolute truths and empiricism .but then , why has science been so successful at describing reality ? and why is science producing the most amazing technology at breakneck speed ?it is a great feature of reality , that the formal models which the human mind discovers / devises find their match in the workings of nature .we will return to this enigma later on . in being pragmatic and disregarding the conceptual problems , one can identify two domains of nature and two modes of describing reality . in the following section , we give a short overview of this _weltanschauung_. in secs .[ sec : fund ] and [ sec : cs ] the details are described .the functioning of nature can broadly speaking be separated into two categories , either belonging to the domain of _ fundamental processes _ or _complex systems_. as examples , elementary particles in a force - field represent the former , while a swarm of birds constitutes the latter .it has been possible to describe nature with two methods : the _ analytical _ and _ algorithmic _ approach .the analytical approach is what most people are familiar with .physical problems are translated into mathematical equations which , when solved , give new insights .the algorithmic approach simulates the physical system in a computer according to algorithms , where the dynamics of the real system are described by the evolution of the simulation . in fig .[ tbl : ovwerlaws ] a simple illustration of this categorization is seen .four combinations emerge : both fundamental processes and complex systems can be tackled analytically or algorithmically .an obvious challenge is to identify the most successful method to investigate a certain problem .this means not only choosing the formal representation but also identifying the reality domain it belongs to . 
most of science can be seen to be related to strategy * a * , as is discussed in sec .[ sec : fund ] . strategy * b * has been successful in addressing real - world complex systems , which is detailed in sec .[ sec : cs ] .the possibilities * c * and * d * have only been sparsely explored .regarding the former , some authors have recently argued that complex systems can and should be tackled with mathematical analysis .see for instance .strategy * d * is mostly uncharted , and some tentative efforts include describing space - time as a network in some fundamental theories of quantum gravity ( e.g. , spin networks in loop quantum gravity ) or deriving fundamental laws from cellular automaton networks . we make two choices we believe to be instrumental to the success of understanding real - world financial markets and devising profitable tms : 1 .understand the problem as originating from the domain of complex systems ; 2 .tackle the problem with an algorithmic approach .hence , from our point of view , we see * b * as the most promising strategy , see also sec .[ sec : fx ] .as mentioned , science can be understood as the quest to capture the processes of nature within formal mathematical representations . in other words ,`` mathematics is the blueprint of reality '' in the sense that formal systems are the foundation of science .this notion is illustrated in fig .[ fig : fund ] . the left - hand side of the diagram represents the real world , i.e. , the observable universe .scientists focus on a well - defined problem or question , identifying a relevant subset of reality , also called a natural system . to understand more about the nature of the natural system under investigation , experiments are performed yielding new information .robert boyle was instrumental in establishing experiments as the cornerstone of physical sciences around 1660 .approximately at the same time , the philosopher francis bacon introduced modifications to aristotle s ( nearly two thousand year old ) ideas , introducing the so - called scientific method where inductive reasoning plays an important role .this paved the way for a modern understanding of scientific inquiry .in essence , guided by thought , observation and measurement , natural systems can be `` encoded '' into formal systems , depicted on the right - hand side of fig .[ fig : fund ] . representing nature as mathematical abstractionsis understood as a mapping from the real world to the mathematical world .using logic ( e.g. , rules of inference ) in the formal system , predictions about the natural system can be made .these predictions can be understood as a mapping back to the physical world , `` decoding '' the knowledge gained from from the abstract model .checking the predictions with the experimental outcome shows the validity of the formal system as a model for the natural system .the following two examples should underline the great power of this approach , where new features of reality where discovered solely on the requirements of the mathematical model .firstly , in order to unify electromagnetism with the weak force ( two of the three non - gravitational forces ) , the theory postulated two new elementary particles : the w and z bosons .needless to say , these particles where hitherto unknown and it took 10 years for technology to advance sufficiently to prove their existence . 
secondly, the fusion of quantum mechanics and special relativity led to the dirac equation, which demands the existence of an, up to then, unknown flavor of matter: antimatter. four years after the formulation of the theory, antimatter was experimentally discovered. is it possible to isolate a single idea that has been instrumental to the success of describing fundamental processes? it can be argued that the most fruitful paradigm in the study of fundamental processes in nature has been: * p1 . mathematical models of reality are independent of their formal representation . * this idea leads to the notions of symmetry and invariance. to illustrate, imagine an arrow located in space. it has a length and an orientation. in the mathematical world, this can be represented by a vector. by choosing a coordinate system, the abstract entity can be given physical meaning. the problem is, however, that depending on the choice of the coordinate system, which is arbitrary, the same vector is described very differently. the paradigm above states that the physical content of the mathematical model should be independent of the choice of how one represents the mathematical model. although this sounds rather trivial, for physics it has very deep consequences. for instance, the requirements that physical experiments should be unaffected by the time of day and geographic location they are performed at are formalized as time and translational invariance, respectively. these requirements alone give rise to the conservation of energy and momentum (noether's theorem). generally, the requirement of invariance and symmetry results in a great part of physics, from quantum field theories to general relativity, see fig. [fig:mm]. in general relativity, vectors are generalized to multidimensional equivalents called tensors, and the common-sense requirement that calculations involving tensors do not depend on how they are represented in space-time is covariance. see the right-hand side of fig. [fig:mm]. it is quite striking, but there is only one more ingredient needed in order to construct one of the most aesthetic and accurate theories in physics. it is called the equivalence principle and states that the gravitational force is equivalent to the forces experienced during acceleration. again, this may sound trivial, but it has very deep implications. the standard model of elementary particle physics unites the quantum field theories describing the fundamental interactions of particles in terms of their (gauge) symmetries. see the left-hand side of fig. [fig:mm]. we now come back to the questions raised in sec. [sec:lon].
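before doing so, a small numerical illustration of the representation independence stated in p1 may help: the components of a vector depend on the (arbitrary) choice of coordinate system, while its length, the representation-independent content, does not. the specific vector and rotation angle below are of course arbitrary choices for the sketch.

```python
# rotating the coordinate frame changes a vector's components but not its length.
import numpy as np

v = np.array([3.0, 4.0])                      # components in one frame
theta = np.deg2rad(30.0)                      # rotate the frame by 30 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v_rot = R @ v                                 # components in the rotated frame

print("components, frame 1:", v, " length:", np.linalg.norm(v))
print("components, frame 2:", np.round(v_rot, 3), " length:", round(np.linalg.norm(v_rot), 3))
```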
why has science been so successful in describing reality, and why is science producing amazing technology at breakneck speed? the simple answer is that there is no reason for this to be the case, other than the fact that it is the way things are. the nobel laureate eugene wigner captures this salient fact in his essay ``the unreasonable effectiveness of mathematics in the natural sciences'': ``[...] the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and [...] there is no rational explanation for it.'' ``[...] it is not at all natural that `laws of nature' exist, much less that man is able to discover them.'' ``[...] the two miracles of the existence of laws of nature and of the human mind's capacity to divine them.'' ``[...] fundamentally, we do not know why our theories work so well.'' over the past 300 years, physics has been very successful with this approach, describing most of the observable universe. in essence, this formalism works well for the fundamental workings of nature (strategy * a *). however, to explain real-life complex phenomena, one needs to adopt a more systems-oriented focus. this also means that the interactions of entities become an integral part of the formalism. some ideas should illustrate the change in perspective: * most calculations in physics are idealizations and neglect dissipative effects like friction; * most calculations in physics deal with linear effects, as non-linearity is hard to tackle and is associated with chaos; however, most physical systems in nature are inherently non-linear; * the analytical solution of three gravitating bodies in classical mechanics, given their initial positions, masses, and velocities, cannot be found; it turns out to be a chaotic system which can only be simulated in a computer; there are an estimated hundred billion galaxies in the universe. a _ complex system _ is usually understood as being comprised of many interacting or interconnected parts. a characteristic feature of such systems is that the whole often exhibits properties not obvious from the properties of the individual parts. this is called _ emergence _. in other words, a key issue is how the macro behavior emerges from the interactions of the system's elements at the micro level.
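the three gravitating bodies mentioned above are a concrete example of why such systems are handled algorithmically: no general closed-form solution exists, yet the dynamics are easy to step forward in a computer. the following minimal leapfrog sketch uses arbitrary masses and initial conditions in units where the gravitational constant is 1.

```python
# minimal three-body simulation (leapfrog / kick-drift-kick); illustrative only.
import numpy as np

def accelerations(pos, m):
    acc = np.zeros_like(pos)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += m[j] * d / (np.linalg.norm(d)**3 + 1e-9)  # softened inverse-square force
    return acc

m = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])
dt = 0.001
for _ in range(10_000):
    vel += 0.5 * dt * accelerations(pos, m)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, m)
print("positions after 10 time units:\n", np.round(pos, 3))
```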
moreover, complex systems also exhibit a high level of adaptability and self - organization .the domains complex systems originate from are mostly socio - economical , biological or physio - chemical .the study of complex systems appears complicated , as it is different to the reductionistic approach of established science .a quote from illustrates this fact : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ `` at each stage [ of complexity ] entirely new laws , concepts , and generalizations are necessary [ ] .psychology is not applied biology , nor is biology applied chemistry '' . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this means that the knowledge about the constituents of a system does nt reveal any insights into how the system will behave as a whole ; so it is not at all clear how you get from quarks and leptons via dna to a human brain and consciousness . moreover , complex systems are usually very reluctant to be cast into closed - form analytical expressions .this means that it is generally hard to derive mathematical quantities describing the properties and dynamics of the system under study .the paradigms of complex systems are straightforward : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * pi .every complex system is reduced to a set of objects and a set of functions between the objects .macroscopic complexity is the result of simple rules of interaction at the micro level . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ paradigm i is reminiscent of the natural problem solving philosophy of object - oriented programming , where the objects are implementations of classes ( collections of properties and functions ) interacting via functions ( public methods ) .a programming problem is analyzed in terms of objects and the nature of communication between them .when a program is executed , objects interact with each other by sending messages .the whole system obeys certain rules ( encapsulation , inheritance , polymorphism , etc . ) .indeed , in mathematics the field of category theory defines a category as the most basic structure : a set of objects and a set of morphisms ( maps between the sets ) .special types of mappings , called functors , map categories into each other .category theory was understood as the `` unification of mathematics '' in the 1940s .a natural incarnation of a category is given by a complex network where the nodes represent the objects and the links describe their relationship or interaction .now the structure of the network ( i.e. 
, the topology) determines the function of the network. there are many excellent introductory texts, surveys, overviews and books covering the many topics related to complex networks. this also highlights the paradigm shift from mathematical (analytical) models to algorithmic models (computations and simulations performed in computers). in other words, the analytical description of complex systems is abandoned in favor of algorithms describing the interaction of the objects, also called _ agents _, in a system according to rules of local interactions. this approach has given rise to the prominent field of _ agent-based modeling _. in addition, a key realization is that also the structure and complexity of each agent can be ignored when one focuses on their interactional structure. hence the animals in swarms, the ants foraging, the chemicals interacting in metabolic systems, the humans in a market, etc., can all be understood as being comprised of featureless agents and modeled within this paradigm. paradigm ii is perhaps as puzzling as the ``unreasonable effectiveness of mathematics in the natural sciences''. to quote stephen wolfram's reaction to the realization that simplicity encodes complexity: ``and i realized, that i had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.'' ``indeed, even some of the very simplest programs that i looked at had behavior that was as complex as anything i had ever seen. it took me more than a decade to come to terms with this result, and to realize just how fundamental and far-reaching its consequences are.'' in summary, paradigms i and ii can be seen to belong to strategy * b *, as seen in fig. [tbl:ovwerlaws]. it is remarkable that simple interactions result in complex behavior: emergence, adaptivity, structure-formation and self-organization. in essence, complexity does not stem from the number of agents but from the number of interactions. for instance, a grain of rice contains more genes than a human. in fig. [fig:pi-ii] an illustrated overview of an agent-based simulation is given: in a computer simulation, agents interact according to simple rules and give rise to patterns and behavior seen in real-world complex systems. this also highlights the departure from a top-down to a bottom-up approach to complexity. as an example, swarming behavior in nature can be easily modeled by agents obeying three simple and local rules, reproducing its emergent and adaptive characteristics: 1 . move in the same direction as your neighbors ; 2 .
remain close to your neighbors ; 3 . avoid collisions with your neighbors . in addition, biological (temporal-spatial) pattern formation, population dynamics, pedestrian/traffic dynamics, market dynamics etc., which were hitherto impossible to tackle with a top-down approach, are well understood by the bottom-up approach given by the paradigms of complex systems. however, to be precise, there is still some mathematical formalism used in the study of complex systems. for instance, at the macro level, the so-called fokker-planck differential equation gives the collective evolution of the probability density function of a system of agents as a function of time, while at the micro level, a single agent's behavior can be described by the so-called langevin differential equation. the two formalisms can be mapped into each other. however, as an example, 10000 agents following langevin equations in a computer simulation approximate the macro dynamics of the system more efficiently than an analytical investigation attempting to solve the equivalent fokker-planck differential equation. by understanding complexity as arising from the interaction of dynamical systems, it is natural to adopt another paradigm in their study: * piii . the passage of time is defined by events , i.e. , system interactions . * in this time ontology, time rests in the absence of events. in contrast to physical time, only interactions, or events, let a system's clock tick. hence this new methodology is called _ intrinsic time _. implicit in this approach is that a system is made up exclusively of interactions, becoming a dynamic object with a past, present and future. every interaction with other systems is a new system. this event-based approach opens the door to a model that is self-referential, does not rely on static building blocks and has a dynamic frame of reference. this approach was a crucial ingredient in the formulation of the new empirical scaling laws. this is described in secs. [sub:csl] and [sub:dc]. the empirical analysis of real-world complex systems has revealed unsuspected regularities, such as scaling laws, which are robust across many domains. this has suggested that universal or at least generic mechanisms are at work in the structure-formation and evolution of many such systems. tools and concepts from statistical physics have been crucial for the achievement of these findings. in essence, * scaling laws can be seen as laws of nature found for complex systems . *
a scaling law , or power law , is a simple polynomial functional relationship of the form f(x) = c x^k , with constants c and k . two properties of such laws can easily be shown : * a logarithmic mapping yields a linear relationship ( log f(x) = log c + k log x ) ; * scaling the function's argument preserves the shape of the function ( f(\lambda x) = \lambda^k f(x) \propto f(x) ) , a property called scale invariance . see for instance . scaling - law relations characterize an immense number of natural processes , prominently in the form of 1 . scaling - law distributions ; 2 . scale - free networks ; 3 . cumulative relations of stochastic processes . scaling - law distributions have been observed in an extraordinarily wide range of natural phenomena : from physics , biology , earth and planetary sciences , economics and finance , computer science and demography to the social sciences . it is truly amazing that such diverse topics as * the size of earthquakes , moon craters , solar flares , computer files , sand particles , wars and price moves in financial markets ; * the number of scientific papers written , citations received by publications , hits on web - pages and species in biological taxa ; * the sales of music , books and other commodities ; * the population of cities ; * the income of people ; * the frequency of words used in human languages and of occurrences of personal names ; * the areas burnt in forest fires ; are all characterized by scaling - law distributions . such laws were first used to describe the observed income distribution of households by the economist vilfredo pareto in 1897 ; recent advances in the study of complex systems have helped uncover some of the possible mechanisms behind this universal law . however , there is still no conclusive understanding of the origins of scaling - law distributions . some insights have been gained from the study of critical phenomena and phase transitions , stochastic processes , rich - get - richer mechanisms and so - called self - organized criticality . processes following normal distributions have a characteristic scale given by the mean of the distribution . in contrast , scaling - law distributions lack such a preferred scale . measurements of scaling - law processes yield values distributed across an enormous dynamic range ( sometimes many orders of magnitude ) , and for any section one looks at , the proportion of small to large events is the same . historically , the observation of scale - free or self - similar behavior in the changes of cotton prices was the starting point for mandelbrot's research leading to the discovery of fractal geometry . it should be noted that although scaling laws imply that small occurrences are extremely common whereas large instances are quite rare , these large events nevertheless occur much more frequently than they would under a normal ( or gaussian ) probability distribution . for such distributions , events that deviate from the mean by , e.g. , 10 standard deviations ( called `` 10-sigma events '' ) are practically impossible to observe . for scaling - law distributions , extreme events have a small but very real probability of occurring . this fact is summed up by saying that the distribution has a `` fat tail '' ( in the terminology of probability theory and statistics , distributions with fat tails are said to be leptokurtic or to display positive kurtosis ) , which greatly impacts risk assessment . so although most earthquakes , price moves in financial markets , intensities of solar flares , etc.
, will be very small , the possibility that a catastrophic event will happen can not be neglected .the degree distribution of most complex networks follows a scaling - law probability distribution , see also .scale - free networks are characterized by high robustness against random failure of nodes , but susceptible to coordinated attacks on the hubs .theoretically , they are thought to arise from a dynamical growth process , called preferential attachment , in which new nodes favor linking to existing nodes with high degree . although alternative mechanisms have been proposed .next to distributions of random variables , scaling laws also appear in collections of random variables , called stochastic processes .prominent empirical examples are financial time - series , where one finds empirical scaling laws governing the relationship between various observed quantities . as an example, uncovered 18 novel empirical scaling - law relations , 12 of them being independent of each other . in finance ,where frames of reference and fixed points are hard to come by and often illusory , these new scaling laws provide a reliable framework .we believe they can enhance our study of the dynamic behavior of markets and improve the quality of the inferences and predictions we make about the behavior of prices .the new laws represent the foundation of a completely new generation of tools for studying volatility , measuring risk , and creating better forecasting and trading models .the new laws also substantially extend the catalogue of stylized facts and sharply constrain the space of possible theoretical explanations of the market mechanisms .see also and sec .[ sub : dc ] .the foreign exchange ( fx ) market can be characterized as a complex network consisting of interacting agents : corporations , institutional and retail traders , and brokers trading through market makers , who themselves form an intricate web of interdependence . with an average daily turnover of three to four trillion usd , and with price changes nearly every second, the fx market offers a unique opportunity to analyze the functioning of a highly liquid , over - the - counter market that is not constrained by specific exchange - based rules .there has been a long history of attempting to understand finance from an analytical point of view ( i.e. , within strategy * a * ) .indeed , the field of mathematical finance has produced a vast body of mathematical tools characterized by a very high level of abstraction .we believe this is a misleading approach to fundamentally understand markets and to devise tms . on the one hand , to make the equations tractable , often stringent constraints and unrealistic assumptions have to be imposed .on the other hand , the reality of markets being an epitome of a complex system is ignored .hence our shift to strategy * b*. we choose to tackle the problem of financial markets in accordance with paradigm i , viewing it in terms of interacting agents . 
applying the algorithmic approach given by paradigm ii entails understanding the observed market complexity at the macro level as arising from the interaction of heterogeneous agents at the micro level according to simple rules . the heterogeneity is given for instance by the traders' geographical locations , trading time horizons , risk profiles , dealing frequencies , trade sizes , etc . in essence , our tms are agent - based models . an agent is defined by a position , comprised of an average ( or entry ) price and a gearing ( position size ) . each position also has a set of simple , event - based rules . the agents' interactions with each other are constrained by the price curve . there is a very concrete application of paradigm iii , the event - based methodology , to fx time series . as mentioned in sec . [ sec : it ] , it is tightly connected with the existence of cumulative scaling - law relations . [ figure [ fig : dc_to ] : directional - change events ( diamonds ) act as natural dissection points , decomposing a total - price move between two extremal price levels ( bullets ) into so - called directional - change ( solid lines ) and overshoot ( dashed lines ) sections ; note the increase of spread size during the two weekends with no price activity ; physical time ticks evenly across different price - curve activity regimes , whereas intrinsic time triggers only at directional - change events , independent of the notion of physical time . ] the _ direction change _ algorithm was introduced in ref . ; fig . [ fig : dc_to ] depicts how the price curve is dissected into so - called directional - change and overshoot sections . the dissection algorithm measures occurrences of a price change of a given threshold from the last high or low ( i.e. , extremum ) , depending on whether it is in an up or down mode , respectively . at each occurrence of a directional change , the overshoot segment associated with the previous directional change is determined as the difference between the price level at which the last directional change occurred and the extremum , i.e. , the high when in up mode or the low when in down mode . the high and low price levels are then reset to the current price and the mode alternates . in ref . the directional - change count scaling law was presented , relating the number of directional changes measured to the size of the threshold via a power law . extending this event - driven paradigm further enables us to observe new , stable patterns of scaling . in ref . the event - based approach was crucial for discovering eight of the 12 primary scaling - law relations , and four of the six secondary ones . this establishes the fact that moving from the empirical time series to their event - based abstractions provides a unique point of view , from which patterns can be observed which would otherwise remain hidden . this is illustrated in fig . [ fig : piii ] . moreover , the discovery of the overshoot scaling law was instrumental in extending the event - based methodology to accommodate a second type of event . this scaling law relates the length of the average overshoot segment to the directional - change threshold : it turns out that the average overshoot length is about the same size as the threshold itself . this finding motivates the dissection of the price curve into not only directional - change events , but also overshoot events , occurring during the overshoot segments . crucially , both are defined by the same threshold .
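the dissection just described is compact enough to state in a few lines . the sketch below is our own paraphrase rather than the authors' implementation : it scans a price series once per threshold , records the sizes of directional - change and overshoot segments as relative moves , and the final loop indicates how the total length traversed in dc and overshoot sections ( the `` coastline '' introduced below ) could be collected for an array of thresholds . the synthetic random - walk series is purely a stand - in for empirical data :

```python
import numpy as np

# synthetic price path standing in for an empirical FX series (purely illustrative)
rng = np.random.default_rng(1)
prices = 1.30 * np.exp(np.cumsum(0.0002 * rng.standard_normal(100_000)))

def dissect(prices, delta):
    """Dissect a price series into directional-change (DC) and overshoot (OS)
    sections for a relative threshold delta (e.g. 0.005 = 0.5%); returns the
    sizes of the DC and OS segments as relative price moves."""
    mode = 'up'                        # assume an initial up mode
    ext = last_dc = prices[0]          # running extremum and price at the last DC
    dc_sizes, os_sizes = [], []
    for p in prices[1:]:
        if mode == 'up':
            ext = max(ext, p)                        # track the running high
            if p <= ext * (1.0 - delta):             # drop of delta from the high: a DC
                # overshoot of the previous DC (the very first one is measured from the start)
                os_sizes.append((ext - last_dc) / last_dc)
                dc_sizes.append((ext - p) / ext)
                mode, ext, last_dc = 'down', p, p    # reset extremum, alternate mode
        else:
            ext = min(ext, p)                        # track the running low
            if p >= ext * (1.0 + delta):             # rise of delta from the low: a DC
                os_sizes.append((last_dc - ext) / last_dc)
                dc_sizes.append((p - ext) / ext)
                mode, ext, last_dc = 'up', p, p
    return dc_sizes, os_sizes

# coastline for an array of thresholds: total length traversed in DC and OS sections
for delta in (0.001, 0.0025, 0.005, 0.01):
    dc, os_ = dissect(prices, delta)
    print(f"delta={delta:.4f}  n_dc={len(dc)}  coastline={sum(dc) + sum(os_):.3f}")
```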
as a result , by applying the direction change algorithm to empirical price moves , we can reduce the level of complexity of the real - world time series . in detail , the various fixed event thresholds of different sizes define focal points , blurring out irrelevant details of the price evolution . in fig . [ fig : cl_plt1 ] an example of an empirical price curve and its associated array of events for a chosen directional - change threshold is shown . this effectively unveils the key structures of the market . an example of this is highlighted in fig . [ fig : cl_plt2 ] : the _ coastline _ , comprised solely of directional changes and overshoots , with no constraints coming from physical time . this implies that the coastline faithfully maps the activity of the market : during low volatility the coastline is shrunk , whereas active market environments get stretched and all their details are exposed . hence , by construction , this procedure is adaptive . * the coastlines associated with different directional - change thresholds are taken as the basis for our r&d effort , coupled with the _ weltanschauung _ coming from strategy b and paradigms i to iii . * [ figure [ fig : cl_plt1 ] : an empirical price curve and its associated array of events ; the red triangles represent directional - change and the blue bullets overshoot events . ] [ figure [ fig : cl_plt2 ] : the coastline is a pure event - based price curve and lacks any reference to physical time ; its derivation is seen in fig . [ fig : cl_plt1 ] ; by measuring the various coastlines for an array of thresholds , multiple levels of event activity are considered . ]
* research : * the process of discovering fundamental new knowledge and understanding . * development : * the process by which new knowledge and understanding is applied .
photonic crystals , which consist of a periodically arranged lattice of dielectric scatterers , have attracted substantial research interest over the past decade .most commonly these structures are used to trap , guide , and otherwise manipulate pulses of light , and have led to a variety of important applications in modern nanophotonics , including extremely high - q electromagnetic cavities and slow - light propagation of electromagnetic pulses . however , a new and increasingly important application of photonic crystal structures is in the field of photovoltaics .it has long been known that structured materials can be used to achieve photovoltaic conversion efficiency beyond the yablonovitch limit , and indeed researchers have recently proposed photonic crystals and arrays of dielectric nanowires as inexpensive ways to create highly efficient absorbers .recent research in this area has been driven by advances in nanostructure fabrication concurrent with increased investment in renewable energy technology ( for the latest developments see ) .one aspect in which the use of photonic crystals in photovoltaics differs markedly from their employment in optical nanophotonics is the important role played by material absorption . for most nanophotonic applications absorptionis an undesirable effect which must in general be minimized and can in many cases be neglected .this is in stark contrast to photovoltaics , in which the main aim is to exploit the properties of the structure to increase the overall absorption efficiency .modeling of absorbing photonic crystals has thus far been performed using direct numerical methods such as the finite - difference time - domain ( fdtd ) method , the finite element method ( fem ) or using the transfer matrix method .although these methods have produced valuable information about the absorption properties of such structures they do not allow us to gain direct physical insight into the mechanism of the absorption within them .furthermore , these calculations require substantial computational time and resources . herewe present a rigorous modal formulation of the scattering and absorption of a plane wave by an array of absorbing nanowires , or , correspondingly , an absorbing photonic crystal slab ( fig .[ slab ] ) .the approach is a generalization of diffraction by capacitive grids formulated initially in refs for perfectly conducting cylinders .in contrast to conventional photonic crystal calculations the material absorption is taken into account rigorously , using measured values for the real and the imaginary parts of the refractive indices of the materials comprising the array .our formulation can be applied to array elements of arbitrary composition and cross - section .the semi - analytical nature of this modal approach results in a method which is quick , accurate , and gives extensive physical insight into the importance of the various absorption and scattering mechanisms involved in these structures .the method is based on an expansion in terms of the fundamental eigenmodes of the structure in each different region . 
within the photonic crystal slab the fieldsare expanded in terms of bloch modes , while the fields in free space above and below the slab are expanded in a basis of plane waves .these expansions are then matched using the continuity of the tangential components of the fields at the top and the bottom interfaces of the slab to compute fresnel interface reflection and transmission matrices .an important aspect of this approach is that the set of bloch modes must form a complete basis .this is not a trivial matter , as even for structures consisting of lossless materials the eigenvalue problem for the bloch modes is not formally hermitian .however , it is well known from the classical treatment of non - hermitian eigenvalue problems that a complete basis may be obtained by including the bloch modes of the adjoint problem . here, we use the fem to compute both the bloch modes and the adjoint bloch modes . because the mode computation may be carried out in two dimensions ( 2d ) like waveguide mode calculations , this routine is highly efficient .we note that a previous study , undertaken using a different method , failed to locate some of the modes , as is demonstrated in sec .[ section : dispersion ] ; we emphasis that the algorithm presented here is capable of generating a complete set of modes .in addition the fem allows us to compute bloch modes for arbitrary materials and cross sections .we note that this approach differs from earlier formulations in which the photonic crystal had to consist of an array of cylinders with a circular cross - section . with the modesidentified , the transmission through , and reflection from , the slab can be computed using a generalization of fresnel reflection and transmission matrices , and the absorption is found using an energy conservation relation .the organization of the paper is as follows .the theoretical foundation of the method is given in sec .[ formulation ] while the numerical verification , validation and characterization of the absorption properties of a particular silicon nanowire array occurs in sec .[ applica ] .the details of the mode orthogonality , normalization , completeness as well as energy and reciprocity relations are given in the appendices .as mentioned in sec . [ intro ] , we separate the solution of the diffraction problem into three steps , one involving the consideration of the scattering of plane waves at the top interface and the introduction of the fresnel reflection and the transmission matrices for a top interface ( fig . [ slab ] ) .next we introduce the fresnel reflection and transmission matrices for the bottom interface by considering the reflection of the waveguide modes of the semi - infinite array of cylinders at the bottom interface . then the total reflection and transmission through the slab can be calculated using a fabry - perot style of analysis .the approach is based on the calculation of the bloch modes and adjoint bloch modes of an infinite array of cylinders . before doing sohowever , we first provide the field s plane wave expansions above and below the photonic crystal slab . in a uniform media such as free space ,all components of the electromagnetic field of a plane wave must satisfy the helmholtz equation where is the free space wavenumber . herewe consider the diffraction of a plane wave on a periodic square array of cylinders with finite length ( see fig . 
[ slab ] ) .in such a structure , the fields have a quasi - periodicity imposed by the incident plane wave field , where .that is , where and is a lattice vector , where and are integers .all plane waves of the form , must satisfy the bloch condition eq .( [ qp ] ) and so . hence , where is an integer .it follows then that the coefficients and are discretized as follows and form the well known diffraction grating orders .we split the electromagnetic field into its transverse electric and the transverse magnetic components ( see , for example , ref . ) . for the transverse electric mode, the electric field is perpendicular to the plane of incidence , while for the transverse magnetic mode the magnetic field vector is perpendicular to the plane of incidence with the plane of incidence being defined by the -axis and the plane wave propagation direction given by the vector .these te and tm resolutes are given by respectively , where .the te and tm plan wave modes are mutually orthogonal , and are normalized such that where the overline in denotes complex conjugation and the integration is over the unit cell .the general form of the plane wave expansions above and below the grating ( see fig .[ slab ] ) can then be written in terms of these and modes . following the nomenclature of ref . , the plane wave expansions take the forms \bm{r}^e_s(\bm{r } ) \nonumber\\ & + & \chi_s^{1/2 } \left [ f^{m-}_s e^{-i { \gamma}_s(z - z_0 ) } \right.\nonumber\\ & + & \left .f^{m+}_s e^{i { \gamma}_s(z - z_0 ) } \right ] \bm{r}^m_s(\bm{r } ) \label{epw}\\ \bm{e}_z\times \bm{h}_\perp(\bm{r } ) & = & \sum_{s } \chi_s^{1/2 } \left [ f^{e-}_s e^{-i { \gamma}_s(z - z_0 ) } \right.\nonumber\\ & - & \left .f^{e+}_s e^{i { \gamma}_s(z - z_0 ) } \right ] \bm{r}^e_s(\bm{r } ) \nonumber\\ & + & \chi_s^{-1/2 } \left [ f^{m-}_s e^{-i { \gamma}_s(z - z_0 ) } \right.\nonumber\\ & -&\left .f^{m+}_s e^{i { \gamma}_s(z - z_0 ) } \right ] \bm{r}^m_s(\bm{r } ) \label{kpw } \label{pwexpansion}\end{aligned}\ ] ] where and represent the amplitudes of transverse electric and magnetic component of the downward and upward propagating plane waves and where denotes a plane wave channel represented by the pair of integers . in the numerical implementationit is convenient to order the plane waves in descending order of . in eqs( [ epw ] ) and ( [ kpw ] ) , is defined as and the factors are included to normalize the calculation of energy fluxes .the fem presented here is a general purpose numerical method which can handle the square , hexagonal or any other array geometry .the constitutive materials of the array can be dispersive and lossy .we first introduce the eigenvalue problem and then we present a variational formulation and the corresponding fem discretization . at a fixed frequency ,the electric and magnetic fields of the electromagnetic modes satisfy maxwell s equations where and are the relative dielectric permittivity and magnetic permeability respectively .we express the time dependence in the form .the magnetic field has been rescaled as with , the impedance of free space .we now consider the electromagnetic modes of an array of cylinders of infinite length .the cylinder axes are aligned with the .the dielectric permittivity and magnetic permeability of the array are invariant with respect to . 
from this translational invariance , we know that the bloch modes of the array have a and are quasi - periodic with respect to and .this reduces the problem of finding the modes to a unit cell in the ( see fig .[ slab ] ) .as explained in section [ section : adjoint ] , this modal problem for the cylinder arrays is not hermitian and therefore the eigenmodes do not necessarily form an orthogonal set .however , by introducing the modes of the adjoint problem we can form a set of adjoint modes which have a biorthogonality property with respect to the primal eigenmodes . in order to introduce the adjoint problem , we first write maxwell s equations for conjugate material parameters and : the fields and have the same time dependence and quasi - periodicity as and .we define the adjoint modes as the conjugate fields and , and they satisfy maxwell s equations : therefore , the adjoint modes and satisfy the same wave equations as and , but have the opposite quasi - periodicity and time dependence .let denote a unit cell of the periodic lattice . within the array ,a bloch mode is a nonzero solution of the vectorial wave equation which is quasi - periodic in the transverse plane with respect to the wave vector , and has exponential dependence , with the propagation constant , i.e. the longitudinal and transverse components of the electric field are respectively and ^t - ] ( according to the scaling ( [ ez : scaled ] ) ) and a _ downward propagating _wave = [ \bm{e}_{\perp } , -i \ , \zeta^- \hat{e}_z ] ] .if is a piecewise continuous polynomial of degree , i.e. , , then the two components of are piecewise polynomials of degree , i.e. , .we can then verify that , with our choice of and , the first exactness relation of eq .( [ eq1:diagram ] ) , i.e. , , is also reproduced at the discrete level .we do not discuss the second exactness relation since , here , we do not have to build an approximation space for .we recall that in this paper we set , and approximate the transverse and longitudinal components using polynomials of degrees 2 and 3 respectively ; the construction of the basis functions for the fem spaces is described in ref . .modal orthogonality relations , or more correctly biorthogonality relations , in the case of the problem considered here , are important in determining the field expansion coefficients in eq .( [ array : mode : expansion ] ) . although we may recast eq .( [ matrix1 ] ) in a form in which each matrix is hermitian ( for the lossless case ) : \left [ \begin{array}{c } \bm{e}_{\perp , n } \\\hat{\bm{e}}_{z , n } \end{array } \right ] = \zeta_n^2 \left [ \begin{array}{cc } \bm{m}_{tt } & \bm{k}_{zt}^h \\\bm{k}_{zt } & \bm{k}_{zz } \end{array } \right ] \left [ \begin{array}{c } \bm{e}_{\perp , n } \\n } \end{array } \right],\ ] ] this generalized eigenvalue problem is not hermitian in general , since for two hermitian matrices , and , the corresponding eigenproblem , is not necessarily hermitian since the product of two hermitian matrices is not , in general , hermitian .accordingly , the eigenmodes do not necessarily form an orthogonal set .it is also clear that the eigenproblem is not hermitian in the presence of loss .therefore we introduce the adjoint problem that we solve to obtain a set of adjoint modes which have a biorthogonality property with respect to the eigenmodes . in order to define the adjoint operators , we first write( [ mode3b ] ) , the alternate form of eq .( [ mode3 ] ) , as where and are the differential operators defined by \ ! 
, \\ \label{operator :m } \mathcal{m } \, \bm{e } \ ! = \!\ !\left [ \begin{array}{cc } - \mu^{-1 } \bm{e}_{\perp } & \mu^{-1 } \nabla_{\perp } \hat{e}_z \\ \ ! - \nabla_{\perp } \! \cdot \ ! ( \mu^{-1 } \bm{e}_{\perp } \ ! ) & \nabla_{\perp } \! ( \mu^{-1 } \nabla_{\perp } \hat{e}_z ) \ ! + \ !k^2 \varepsilon \hat{e}_z \end{array } \! \right ] \! .\end{aligned}\ ] ] the adjoint operators and , with respect to the inner product eq .( [ eq : inner : product ] ) are \ ! , \\ \mathcal{m}^{\dagger } \bm{f } \ ! = \!\ !\left [ \begin{array}{cc } - \overline{\mu}^{-1 } \bm{f}_{\perp } & \overline{\mu}^{-1 } \nabla_{\perp } \hat{f}_z \\ \ ! - \nabla_{\perp } \ ! \cdot \ !( \overline{\mu}^{-1 } \bm{f}_{\perp } ) & \nabla_{\perp } \! ( \overline{\mu}^{-1 } \nabla_{\perp } \hat{f}_z ) \ ! + \ !k^2 \overline{\varepsilon } \hat{f}_z \end{array } \ !\right ] \!\ ! , \end{aligned}\ ] ] and follow from the definitions although for lossless media , we then have and , as in eq .( [ matrix : sym2 ] ) , eigenproblem eq .( [ primal : mode:1 ] ) is not hermitian and , as we see in section [ applica ] , complex values of can occur even for a lossless photonic crystal .the modes are the eigenmodes of the problem which satisfy the same quasi - periodic boundary conditions as .we now conjugate the boundary value problem eq .( [ adjoint : mode:1 ] ) and take into account the fact that and .consistent with eqs ( [ eq2:a1 ] ) and ( [ eq2:a2 ] ) we redefine the adjoint mode as the eigenmode which satisfies the same partial differential equation as , i.e. , but with the quasi - periodic boundary conditions associated with the adjoint wavevector .this is convenient for the fem programming since the same subprograms can be used to handle the partial differential equations ( [ primal : mode:1 ] ) and ( [ adjoint : mode:2 ] ) while only a few lines of code are needed to manage the sign change for .the spectral theory for non - self - adjoint operators is difficult and in general less developed . in this paperwe assume that the modes form a complete set and that the adjoint modes can be numbered such that , and the following biorthogonality relationship is satisfied in which ] using the relation which is derived directly from maxwell s equations ( [ a2 ] ) .a similar spectral property has been proven for a class of non - self - adjoint sturm - liouville problems ( see for instance , theorem 5.3 of ref .it is not clear if such a theorem can be extended to the vectorial waveguide mode problem , although it has been shown that the spectral theory of compact operators can be applied to a waveguide problem .however , our numerical calculations have generated modes which satisfy the biorthogonality relation and verify the completeness relations eqs ( [ mc5 ] ) and ( [ ip10 ] ) , in appendix [ mc ] , to generally within , and with even a better convergence when the number of plane wave orders and array modes , used in the truncated expansion , increase .we now establish the biorthogonality relation eq .( [ eq1:biorthog ] ) using the operator definitions eqs .( [ def : adjoint : l ] ) and ( [ def : adjoint : m ] ) of the adjoint , and eqs ( [ primal : mode:1 ] ) and ( [ adjoint : mode:2 ] ) .this relation may also be established directly from maxwell s equations , as is shown in appendix [ om ] .we begin with and , since and are eigenfunctions , we obtain now , by taking into account eq .( [ def : adjoint : m ] ) , we can derive the following biorthogonality property : i.e. 
, the integrand of the field product in eq .( [ eq2:biorthog ] ) can be expressed in term of the fields and as in eq .( [ eq1:biorthog ] ) by noting since , ( [ primal : mode:1 ] ) and ( [ operator : l ] ) .the biorthogonality relation eq .( [ eq2:biorthog ] ) is useful for the fem implementation of the field product since the product of two vectors and takes the form where is the discrete version of the operator and is the matrix on the right hand side of eq .( [ matrix : sym2 ] ) .the calculation of the modes and the adjoint modes using this fem approach is highly efficient and numerically stable . in the following section we use modes of the structure for the field expansion inside the photonic crystal slab , and exploit the adjoint modes , which are biorthogonal to the primal modes , in the solution of the field matching problem in a least square sense . in this sectionwe introduce the fresnel reflection and transmission matrices for photonic crystal - air interfaces and calculate the total transmission , reflection and absorption of a photonic crystal slab .first we introduce the fresnel reflection and transmission matrices for an interface between free space and the semi - infinite array of cylinders .we specify an incident plane wave field ( see eqs([epw])([kpw ] ) ) propagating from above onto a semi - infinite slab , giving rise to an upward reflected plane wave field and a downward propagating field of modes in the slab .the field matching equations between the plane wave expansions eqs ( [ epw ] ) and ( [ kpw ] ) and the array mode expansion eq .( [ array : mode : expansion ] ) are obtained by enforcing the continuity of the tangential components of transverse fields on either side of the interface : equations ( [ eq12a ] ) and ( [ eq12b ] ) correspond to the continuity condition of the tangential electric field and magnetic field respectively . here denotes the downward tangential electric field component of mode while denotes the downward tangential magnetic field of mode , which satisfy the orthonormality relation in appendix [ mc ] we derive completeness relations for both the bloch modes basis and the plane wave basis .we now proceed to solve these equations in a least squares sense , using the galerkin - rayleigh - ritz method in which the two sets of equations are respectively projected on the two sets of basis functions . 
in this treatment, we project the electric field equation onto the plane wave basis and the magnetic field equation onto the slab mode basis to derive , in matrix form , where ,\nonumber\\ \qquad j^{e / m}_{sm } & = & \iint \overline{\bm{r}}^{e / m}_s \cdot \bm{e}_{m\perp } \,\ , ds , \\\bm{j}^\dag & = & \left ( \begin{matrix } { \bm{j}}^{\dag e } \,\ , \bm{j}^{\dag m } \end{matrix } \right ) , \qquad \bm{j}^{\dag e / m } = \left [ j^{\dag e / m}_{ms } \right],\nonumber\\ \qquad j^{\dag e / m}_{ms } & = & \iint { \bm{r}}^{e / m}_s \cdot \bm{e}^{\dag}_{m\perp } \,\ , ds , \label{jmat } \\\,\,\bm{\chi } & = & \mathrm{diag~ } \ { \chi_s \}.\end{aligned}\ ] ] then , defining the scattering matrices and according to the definitions we may solve eqs ( [ eqe12 ] ) and ( [ eqk12 ] ) to derive for a structure with inversion symmetric inclusions the following useful relations hold : , and .while the fresnel matrices and given in eqs ( [ eqr12 ] ) and ( [ eqt12 ] ) have been derived by presuming a plane wave field incident from above , identical forms are derived if we presume incidence from below , a consequence of the given symmetries of the modes .we now derive the slab - free space fresnel coefficients , assuming that we have a modal field incident from above and giving rise to a reflected modal field and a transmitted plane wave field below .this time , the field matching equations are and we again project the electric field equation onto the plane wave basis and the magnetic field equation on to the modal basis for the slab . accordingly , we derive then , defining the scattering matrices and according to and , we form these fresnel scattering matrices , , , can now be readily utilized to calculate the total transmission , reflection and the absorption of the slab .note that for inversion symmetric inclusions the following reciprocity relations hold : and .these relations hold independently of the truncation of the field expansions .in addition , in general , the following relations are also true : and . following the diagram in fig .[ fig : diagram ] , we may use the fresnel interface matrices to calculate the transmission and the reflection matrices for the entire structure from the following equations : where ] is the vector containing the magnitudes of components of the incident plane wave in the specular diffraction order , and is the angle between the electric vector and the plane of incidence .the absorptance , , is calculated by energy conservation as ,\end{aligned}\ ] ] where , are the diffraction order components of , and is the set of all propagating orders in free space . 
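a compact sketch of this recombination step may help fix ideas . the names are ours : r12 , t12 denote the free space - to - array interface matrices , r21 , t21 the array - to - free space ones , zeta the array - mode propagation constants and h the slab thickness ; the amplitudes are taken to be flux - normalised , so the absorptance follows from energy conservation over the propagating plane - wave orders . this is one consistent way to assemble the slab response , equivalent to the fabry - perot - style analysis described above , but not necessarily the authors' exact matrix ordering :

```python
import numpy as np

def slab_response(r12, t12, r21, t21, zeta, h):
    """Combine the interface Fresnel matrices into slab reflection and transmission
    matrices via the Fabry-Perot-like multiple-scattering series.
    zeta : propagation constants of the array modes, h : slab thickness."""
    prop = np.diag(np.exp(1j * zeta * h))        # propagation of the modes through the slab
    n = len(zeta)
    # downward modal amplitudes just inside the top interface (all internal reflections summed)
    fp = np.linalg.solve(np.eye(n) - r21 @ prop @ r21 @ prop, t12)
    t_slab = t21 @ prop @ fp                             # transmitted plane waves below
    r_slab = r12 + t21 @ prop @ r21 @ prop @ fp          # reflected plane waves above
    return r_slab, t_slab

def absorptance(r_slab, t_slab, inc, propagating):
    """Energy conservation A = 1 - (reflected + transmitted flux), summing only over
    propagating plane-wave orders; amplitudes are assumed flux-normalised."""
    r, t = r_slab @ inc, t_slab @ inc
    flux = (np.abs(r[propagating]) ** 2).sum() + (np.abs(t[propagating]) ** 2).sum()
    return 1.0 - flux
```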
in the absence of absorption the following relations for the slab reflection and transmission matrices can be deduced ( for details see appendix c ) .given that the photonic crystal slab is up / down symmetric , the slab transmission and the reflection matrices for plane wave incidence from below are the same as for incidence from above : and .therefore the energy conservation relations take the form with denoting a diagonal matrix with the entries for the propagating plane wave channels and zeros for the evanescent plane wave channels , and being a diagonal matrix which has entries in the evanescent channels and in the evanescent channels , and zeros for the propagating channels .the semi - analytic expressions for the transmission eq .( [ tr ] ) and the reflection eq .( [ trr ] ) matrices for the slab can give important insight to improve the overall absorption efficiency .for instance , in eq .( [ tr ] ) the matrices and represent the coupling matrices for a plane wave into and out of the slab , while the scattering matrix describes fabry - perot - like multiple scattering . as demonstrated in ref . , absorption is enhanced if first , there is a strong coupling ( ) , strong scattering amplitudes eq .( [ fpm ] ) that increase the effective path in the slab multiple times , and the field strength is concentrated in the region of high absorption .in this section , we first use our mode solver to compute the dispersion curves of an array of lossless cylinders .then we apply our modal approach to analyze the absorption spectrum of an array of lossy cylinders ( silicon ) and we also examine the convergence of the method with respect to the truncation parameters . finally , we consider the example of a photonic crystal slab which exhibits fano resonances .though our method can be applied to inclusions of any cross section , we first consider an array of lossless circular cylinders , with dielectric constant ( alumina ) , and normalized radius , in an air background ( refractive index ) .we compute the propagation constant of the bloch modes defined in eq .( [ mode1 ] ) using the vectorial fem .figure [ fig : dispersion:1 ] shows the dispersion curves corresponding to a periodic boundary condition in the transverse plane , i.e. , .the solid red curves indicate values of the propagation constant such that is real with positive values of corresponding to propagating modes , while negative values indicate evanescent modes .the dispersion curve for the fundamental propagating mode is at the lower right corner and starts at the coordinate origin .complex values of can also occur , even for a lossless system , and even with ; these modes , which occur in conjugate pairs , are shown by the dashed blue curves and are distinguished by a horizontal axis which is labeled .we can observe that the dashed blue curves connect a maximum point of a solid red curve to a minimum point of another solid red curve .this property of the dispersion of cylinders arrays was observed by blad and sudb .the dispersion curves in fig .[ fig : dispersion:1 ] are plotted using the same parameters as in fig . 4 of ref .all curves shown in fig .4 of also appear in fig .[ fig : dispersion:1 ] of the present paper although our figure reveals many additional curves , for instance , near ( . in , the dispersion curves were plotted using a continuation method whose starting points are on the axis ; most of the curves which do not intersect this axis are missing in fig . 
4 of buttheir solutions are required for the completeness of the modal expansion eqs ( [ eq12a ] ) and ( [ eq12b ] ) .for instance , if the eigenvalues are numbered in decreasing order of , then the index number of the eigenvalues , which appear near ( in fig .[ fig : dispersion:1 ] , are between 10 and 20 and , as explained in the convergence study of the next section , we typically need well over 20 modes to obtain good convergence .we now consider a silicon nanowire ( sinw ) array consisting of absorptive nanowires of radius , arranged in square lattice of lattice constant .this constitutes a dilute sinw array since the silicon fill fraction is approximately 3.1% .the dilute nature of the array can facilitate the identification of the modes which play a key role in the absorption mechanism .the height of the nanowires is . for siliconwe use the complex refractive index of green and keevers .figure [ fig : dilute : sinw:1 ] shows the absorptance spectrum of the dilute sinw array , together with the absorptance of a homogeneous slab of equivalent thickness and of a homogeneous slab of equivalent volume of silicon .the absorption feature between 600 and 700 nm is absent in bulk silicon and is entirely due to the nanowire geometry . using our methodwe have identified some specific bloch modes which play a key role in this absorption behavior . at shorter wavelengths the absorption of the silicon is high and therefore the absorption of the slabdoes not depend on the slab thickness ( see fig.[fig : dilute : sinw:1 ] thin blue curve and thick red curve ) .note that that the geometry of the inclusion does not need to be circular since our fem based method can handle arbitrary inclusion shapes . indeed , in fig .[ fig : dilute : sinw:2 ] the absorptance spectrum of a sinw array consisting of square cylinders is analyzed and compared to the absorptance for circular cylinders of same period and cross sectional area . at long wavelengths ,the absorption for the two geometries is the same , while at shorter wavelengths the absorption is slightly higher for the square cylinders .this can be explained by the field concentration at the corners of the square cylinders .the contour plot in fig .[ fig : dilute : sinw:3 ] shows the absorptance versus wavelength and the cylinder height for the circular sinw array .note that the nanowire height of used in fig .[ fig : dilute : sinw:1 ] is , indeed , in a region of high absorptance for the wavelength band ] , for 409 wavelength values uniformly distributed over the interval ] .it took about 44 hours to generate the full results using 16 cores of a high performance parallel computer with 256 cores ( it is a shared memory system consisting of 128 processors intel itanium 2 1.6ghz ( dual core ) ) . if the absorptance had to be computed independently for the data points , this would have required many months of computer time .we have studied the convergence with respect to the truncation parameters of the plane wave expansions and array mode expansions in eqs ( [ eq12a ] ) and ( [ eq12b ] ) .the array modes are ordered in decreasing order with respect to .we have used a _circular truncation _ for the plane wave truncation number , i.e. 
, for a given value of , only the plane wave orders such that are used in the truncated expansions ; this choice is motivated by the fact that , for normal incidence , it is consistent with the ordering of the array modes since the plane wave propagation constants are given by the dispersion relation ( see eq .( [ eq1:pw : dispersion ] ) ) ; the propagation constants are also numbered in decreasing value of .figures [ fig : dilute : sinw:4 ] and [ fig : dilute : sinw:5 ] illustrate the convergence when the number of plane wave orders and the number of array modes used in modal expansions are increased .the wavelength is set to and the corresponding silicon refractive index is , which is taken from ref .the error is estimated by assuming that the result obtained with the highest discretization is exact " ( for and ) .the calculations are based on a highly refined fem mesh consisting of 8088 triangles and 16361 nodes . in fig .[ fig : dilute : sinw:5 ] , there is a sudden jump in error when ( isolated blue dot ) ; this is due to the chosen truncation cutting through a pair of degenerate eigenvalue i.e. , including one member of the pair but excluding the other ) . indeed the solution is well - behaved when .similar behavior has been observed when a pair of conjugate eigenvalues is cut .thus all members of a family of eigenvalues must be included together in the modal expansion , otherwise there is degradation of the convergence , which is particularly strong when it occurs for a low - order eigenvalue which makes a significant contribution to the modal expansions . in practicewe expect the computed absorptance to have about three digits of accuracy when the truncation parameters of the plane wave expansions and array mode expansions are set respectively to ( giving 29 plane wave orders and basis functions for te and tm polarizations ) and , assuming adequate resolution of the fem mesh . the absorptance curve for the dilute sinw array in fig . [ fig : dilute : sinw:1 ]is obtained using these truncation parameters and an fem mesh which has 1982 triangles and 4061 nodes .figure [ fig : dilute : sinw:6 ] presents the absorptance spectrum for off - normal incidence ( ) .the absorptance is sensitive to the angle of incidence and the light polarization .compared with normal incidence , the absorptance peak in the wavelength band $ ] ( low silicon absorption ) has shifted to shorter wavelengths while the peak near ( high silicon absorption ) has shifted to longer wavelengths .fano resonances are well known from the field of particle physics , and they are observable also in photonic crystals .they are notable for their sharp spectral features and so serve as a good benchmark for the accuracy of new numerical methods .we have carried out a calculation of fano resonances using our modal formulation .we present here an example that was first studied by fan and joannopoulos .the photonic crystal slab consists of a square array of air holes in a background material of relative permittivity .figure [ slabt ] shows the transmittance of a photonic crystal slab as a function of the normalized frequency for a plane wave at normal incidence .the parameters of the slab considered in fig .[ slabt ] are identical to those in fig .12(a ) of ref . 
, and the curves from the two figures are the same to visual accuracy .this is an additional validation of the approach presented here .the transmittance curve reveals a strong transmission resonance which is typical of asymmetric fano resonances .these resonances are very sharp and can be used for switching purposes .we have developed a rigorous modal formulation for the diffraction of plane waves by absorbing photonic crystal slabs .this approach combines the strongest aspects of two methods : the computation of bloch modes is handled numerically , using finite elements in a two - dimensional context where this method excels , while the reflection and transmission of the fields through the slab interfaces are handled semi - analytically , using a generalization of the theory of thin films .this approach can lead to results achieved using a fraction of the computational resources of conventional algorithms : an example of this is given in fig .[ fig : dilute : sinw:3 ] , in which a large number of different computations for different slab thicknesses were able to be computed in a very short time .although the speed - testing of this method against conventional methods ( fdtd and 3d finite element packages such as comsol ) is a subject for future work , we have demonstrated here the method s accuracy and its rapidity of convergence .the method also satisfies all internal checks related to reciprocity and conservation of energy , as well as reproducing known results from the literature in challenging situations , such as the simulation of fano resonances ( fig .[ slabt ] ) .the method is very general with respect to the geometry of the structure .we have demonstrated this by modeling both square and circular shaped inclusions .in addition , because the method is at a fixed frequency , it can handle both dissipative and dispersive structures in a straightforward manner , using tabulated values of the real and imaginary parts of the refractive index .this is in contrast to time - domain methods such as fdtd , or to some formulations of the finite element approach . though we have used a single array type here ( the square array ), other types of structure ( such as hexagonal arrays ) can be dealt with by appropriately adjusting the unit cell , together with the allowed range of bloch modes .it is also easy to see how this method could be extended to multiple slabs containing different geometries , as well as taking into account the effect of one or more substrates ; this extension would involve the inclusion of field expansions for each layer , together with appropriate fresnel matrices , in the equation system ( [ sf1])-([sf4 ] ) . in principlethis approach could then be used to study rods ( or holes ) whose radius or refractive index changed continuously with depth , provided the spacing between the array cells remained constant .our method has an important advantage over purely numerical algorithms in that it gives physical insight into the mechanisms of transmission and absorption in slabs of lossy periodic media .the explanation of the absorption spectrum in arrays of silicon nanorods is vital for the enhancement of efficiency of solar cells , however this spectrum is complicated , with a number of processes , including coupling of light into the structure , fabry - perot effects , and the overlap of the light with the absorbing material , all playing an important role . 
by expanding in the natural eigenmodes of each layer of the structure , it is possible to isolate these different effects and to identify criteria that the structure must satisfy in order to efficiently absorb light over a specific wavelength range .this , we have discussed in a recent related publication .this research was conducted by the australian research council centre of excellence for ultrahigh bandwidth devices for optical systems ( project number ce110001018 ) .we gratefully acknowledge the generous allocations of computing time from the national computational infrastructure ( nci ) and from intersect australia .here we prove the biorthogonality of the modes and adjoint modes .let us consider set of modes and adjoint modes of photonic crystal .these modes satisfy and we multiply both sides of ( [ me11 ] ) by and ( [ me21 ] ) by and correspondingly we multiply each set of ( [ me12 ] ) by and ( [ me22 ] ) by then adding these we deduce next we separate the transverse and longitudinal ( along the cylinder axes ) components according to after the substitution of ( [ nab ] ) into ( [ maine ] ) we obtain \nonumber \\ = \nabla_\perp \cdot \left [ \bm{e}_{n \perp } \times \bm{h}^\dag_{m \parallel } \right. + \left .\bm{e}_{n \parallel } \times \bm{h}^\dag_{m \perp } \right .\\ + \left .\bm{e}^\dag_{m \perp } \times \bm{h}_{n \parallel } \right .\bm{e}^\dag_{m \parallel } \times \bm{h}_{n\perp } \right ] .\label{efo}\end{aligned}\ ] ] taking into account the -dependence on the modes given by the factors , and integrating ( [ efo ] ) over the unit cell we derive since the integral on the left hand side vanishes due to quasi - periodicity .we finally obtain which holds for arbitrary modes such that .the same relation holds for the counter propagating mode the minus sign in the superscript position indicates the direction of the propagation . 
taking into account the relations and we can rewrite ( [ efpm ] ) in the form after subtraction of relation ( [ efpmm ] ) from ( [ efpm ] ) the orthogonality relation takes form which states that two distinct modes propagating in the same direction are orthogonal .it is then clear that these modes can always be normalized such that the field expansions we can derive the condition of the modal completeness .the plane waves can be expanded in the following forms : by projecting eq .( [ mc1 ] ) on the modes and using the biorthogonality relations eq .( [ orto4 ] ) we deduce ( see eq .( [ jmat ] ) ) similarly we project ( [ mc2 ] ) onto the adjoint magnetic mode and deduce thus , next we project ( [ mc2 ] ) onto plane wave basis by multiplying both sides of ( [ mc2 ] ) by and integrating over the unit cell .we obtain the equation ( [ mc4 ] ) represents the completeness relation for the modes .if we introduce the vectors of matrices \text { and } \bm{k } = \left [ \begin{array}{l } \bm{k}^e \\ \bm{k}^m \end{array } \right],\end{aligned}\ ] ] where \text { and } \bm{k}^{e / m } = \left [ k_{s n}^{e / m } \right]\end{aligned}\ ] ] then the completeness relation eq .( [ mc4 ] ) can be written in the matrix form where is the identity matrix .the completeness relation of the rayleigh modes can be established in a similar way .the transverse component of electric and magnetic modal fields can be represented as a series in terms of rayleigh modes in the region above the grid as by multiplying ( [ ip01 ] ) on ( [ ip02 ] ) and integrating we deduce the completeness relation eq .( [ ip09 ] ) can be written in matrix form we briefly outline the derivation of energy conservation relations for the situation when there is no absorption .the flux conservation leads to the certain relations between the fresnel interface reflection and transmission matrices .for some value we can write an expansion and similarly the downward flux is defined by = \hspace{2 cm } \label{aie3 } \\\frac{1}{2}\left [ ( \bm{c}^- - \bm{c}^+)^h\bm{u}(\bm{c}^- + c^+)+(\bm{c}^-+\bm{c}^+)^h\bm{u}^h(\bm{c}^- - c^+)\right ] , \nonumber\end{aligned}\ ] ] where the matrix is given by the relation ( [ aie3 ] ) can be written in the form \bm{v}\left [ \begin{array}{c } \bm{c}^- \\ \bm{c}^+\\ \end{array } \right]\right\ } , \label{aie5}\end{aligned}\ ] ] the slab energy conservation relations can be found in the similar way as for the interface relations as above .furthermore the expressions of the energy relations are very similar to the interface relations .the only difference is now the transmission and the reflection matrices in ( [ a310 ] ) need to be replaced by the slab reflection and transmission matrices .the adjoint modes are defined by with anti - quasi - periodicity condition when there is no absorption given and are then self adjoint operators .note that even though the operators and are self adjoint ( when there is no absorption ) the eigenvalue problem eq .( [ modeadj ] ) is not hermitian because the operator is not self adjoint in general .therefore the eigenvalues can be real representing propagating modes , as well as complex ( pure imaginary or complex ) representing evanescent modes .now from eq .( [ modeadj ] ) we deduce where overline means complex conjugation . 
by comparing eq . ( [ modeadj2 ] ) and the original eigenvalue equation we can relate the two eigenvalue problems . for the real eigenvalues we choose one sign of the square root , while for complex eigenvalues we must choose the branch which ensures that the downward evanescent field is decaying . this fixes the convention for the propagating field and for the evanescent field . from maxwell's equations we deduce a further identity ; thus the equations ( [ efoo ] ) and ( [ efpm ] ) can be rewritten as ( [ efooa ] ) and ( [ efpma ] ) , and by adding and subtracting these relations we may deduce the quantities which are the elements of the matrix introduced earlier . from ( [ efpma ] ) , for real propagation constants , we deduce that the integral is real . when the propagation constant is pure imaginary , then from ( [ efpma ] ) we deduce that the integral is pure imaginary . now let us consider the case where the propagation constant is complex . the matrix is defined by ( [ aie4 ] ) . for the propagating modes , from ( [ aie4 ] ) and ( [ modeadj7 ] ) and the orthonormality condition , we find unit entries ; for the remaining modes , from ( [ aie4 ] ) , the entries vanish . this means that the matrix is diagonal , with unit elements only in the part corresponding to the propagating modes .
s. nishimura , n. abrams , b. a. lewis , l. i. halaoui , t. e. mallouk , k. d. benkstein , j. van de lagemaat , and a. j. frank , `` standing wave enhancement of red absorbance and photocurrent in dye - sensitized titanium dioxide photoelectrodes coupled to photonic crystals , '' j. am. chem. soc. * 125 * , 6306 - 6310 ( 2003 ) .
r. c. mcphedran , d. h. dawes , l. c. botten , and n. a. nicorovici , `` on - axis diffraction by perfectly conducting capacitive grids , '' journal of electromagnetic waves and applications * 10 * , 1085 - 1111 ( 1996 ) .
l. c. botten , r. c. mcphedran , n. a. nicorovici , and a. b. movchan , `` off - axis diffraction by perfectly conducting capacitive grids : modal formulation and verification , '' journal of electromagnetic waves and applications * 12 * , 847 - 882 ( 1998 ) .
b. c. p. sturmberg , k. b. dossou , l. c. botten , a. a. asatryan , c. g. poulton , c. m. de sterke , and r. c. mcphedran , `` modal analysis of enhanced absorption in silicon nanowire arrays , '' opt. express * 19 * , a1067 - a1081 ( 2011 ) .
[ figure [ fig : dispersion:1 ] : dispersion curves ; the dashed blue curves represent solutions whose imaginary part is not zero ; the complex solutions occur as conjugate pairs and , in order to differentiate the pairs , a modified quantity is used for the x - axis . ]
[ figure [ fig : dilute : sinw:4 ] : convergence of the absorptance as the plane wave truncation number increases , for a fixed wavelength and waveguide truncation number ; the value computed at the highest truncation is considered as `` exact '' and used to compute the error curve . ]
[ figure [ fig : dilute : sinw:5 ] : convergence as the number of array modes increases ; the details given in the caption of fig . [ fig : dilute : sinw:4 ] also apply here except that the plane wave truncation number is fixed ; the isolated blue dot corresponds to a truncation value , cutting through a degenerate eigenvalue pair , where the error is unexpectedly high , so the neighboring truncation is used instead for the error curve . ]
[ figure [ fig : dilute : sinw:6 ] : absorptance spectrum for off - normal incidence oriented along the x - axis ( azimuthal angle = 0 ) ; the solid red and dashed blue curves represent incidence by te - polarized and tm - polarized plane waves respectively ; the absorption spectrum for normal incidence ( fig . [ fig : dilute : sinw:1 ] ) is also shown ( dotted black ) . ]
a finite element - based modal formulation of diffraction of a plane wave by an absorbing photonic crystal slab of arbitrary geometry is developed for photovoltaic applications . the semi - analytic approach allows efficient and accurate calculation of the absorption of an array with a complex unit cell . this approach gives direct physical insight into the absorption mechanism in such structures , which can be used to enhance the absorption . the verification and validation of this approach is applied to a silicon nanowire array and the efficiency and accuracy of the method is demonstrated . the method is ideally suited to studying the manner in which spectral properties ( e.g. , absorption ) vary with the thickness of the array , and we demonstrate this with efficient calculations which can identify an optimal geometry .
constraint programming models for combinatorial optimization problems consist of variables on finite domains , constraints on those variables and an objective function to be optimized . in general ,constraint programming solvers use domain value enumeration to solve combinatorial optimization problems . by propagation of the constraints ( i.e. removal of inconsistent values ) ,large parts of the resulting search tree may be pruned . because combinatorial optimization problems are np - hard in general , constraint propagation is essential to make constraint programming solvers practically applicable .another essential part concerns the enumeration scheme , that defines and traverses a search tree .variable and value ordering heuristics as well as tree traversal heuristics greatly influence the performance of the resulting constraint programming solver . in this workwe investigate the possibility of using semidefinite relaxations in constraint programming .this investigation involves the extraction of semidefinite relaxations from a constraint programming model , and the actual use of the relaxation inside the solution scheme .we propose to use the solution of a semidefinite relaxation to define search tree ordering and traversal heuristics .effectively , this means that our enumeration scheme starts at the suggestion made by the semidefinite relaxation , and gradually scans a wider area around this solution .moreover , we use the solution value of the semidefinite relaxation as a bound for the objective function , which results in stronger pruning . by applying a semidefinite relaxation in this way, we hope to speed up the constraint programming solver significantly .these ideas were motivated by a previous work , in which a linear relaxation was proved to be helpful in constraint programming .we implemented our method and provide experimental results on the stable set problem and the maximum clique problem , two classical combinatorial optimization problems .we compare our method with a standard constraint programming solver , and with specialized solvers for maximum clique problems .as computational results will show , our method obtains far better results than a standard constraint programming solver .however , on maximum clique problems , the specialized solvers appear to be much faster than our method .this paper is an extended and revised version of . in the current version , a more general view on the proposed methodis presented .namely , in the method was proposed for stable set problems only , while in this paper we propose the method to be applied to any constraint programming problem , although not all problems will be equally suitable .furthermore , in the method uses a subproblem generation framework on which limited discrepancy search is applied . in the current workit has been replaced by limited discrepancy search on single values , which is more concise while preserving the behaviour of the algorithm . finally , more experimental results are presented , including problem characterizations and instances of the dimacs benchmark set for the maximum clique problem .the outline of the paper is as follows .the next section gives a motivation for the approach proposed in this work .then , in section [ sc : prel ] some preliminaries on constraint and semidefinite programming are given .a description of our solution framework is given in section [ sc : framework ] . 
in section [ sc : formulations ] we introduce the stable set problem and the maximum clique problem , integer optimization formulations and a semidefinite relaxation .section [ sc : results ] presents the computational results .we conclude in section [ sc : conclusions ] with a summary and future directions .np - hard combinatorial optimization problems are often solved with the use of a polynomially solvable relaxation . often ( continuous ) linear relaxations are chosen for this purpose . also within constraint programming , linear relaxationsare widely used , see for an overview .let us first motivate why in this paper a semidefinite relaxation is used rather than a linear relaxation . for some problems , for instance for the stable set problem , standard linear relaxations are not very tight and not informative .one way to overcome this problem is to identify and add linear constraints that strengthen the relaxation .but it may be time - consuming to identify such constraints , and by enlarging the model the solution process may slow down .on the other hand , several papers on approximation theory following have shown the tightness of semidefinite relaxations . however , being tighter , semidefinite programs are more time - consuming to solve than linear programs in practice . hence one has to trade strength for computation time . for some ( large scale ) applications , semidefinite relaxations are well - suited to be used within a branch and bound framework ( see for instance ) . moreover ,our intention is not to solve a relaxation at every node of the search tree .instead , we propose to solve only once a relaxation , before entering the search tree .therefore , we are willing to make the trade - off in favour of the semidefinite relaxation . finally , investigating the possibility of using semidefinite relaxations in constraint programming is worthwile in itself . to our knowledge the cross - fertilization of semidefinite programming and constraint programming has not yet been investigated . hence , this paper should be seen as a first step toward the cooperation of constraint programming and semidefinite programming .in this section we briefly introduce the basic concepts of constraint programming that are used in this paper .a thorough explanation of the principles of constraint programming can be found in .a constraint programming model consists of a set of variables , corresponding variable domains , and a set of constraints restricting those variables . in case of optimization problems , also an objective function is added . 
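as a toy illustration of these ingredients ( the model below is ours, not taken from the solvers discussed later ), a minimal constraint programming model can be solved by plain domain value enumeration:

```python
from itertools import product

# a small illustrative model: three variables with finite domains,
# two constraints and a linear objective to maximize
domains = {"x1": [0, 1, 2], "x2": [0, 1, 2], "x3": [0, 1]}

def satisfies(assign):
    # constraints: x1 != x2 and x2 + x3 <= 3
    return assign["x1"] != assign["x2"] and assign["x2"] + assign["x3"] <= 3

def objective(assign):
    return 2 * assign["x1"] + assign["x2"] + assign["x3"]

best = None
names = list(domains)
for values in product(*(domains[n] for n in names)):   # enumerate all assignments
    assign = dict(zip(names, values))
    if satisfies(assign) and (best is None or objective(assign) > objective(best)):
        best = assign

print(best, objective(best))
```

a real solver interleaves such enumeration with constraint propagation, which removes inconsistent values before variables are instantiated, as described next.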
in this workthe variable domains are assumed to be finite .a constraint is defined as a subset of the cartesian product of the domains of the variables that are in .constraints may be of any form ( linear , nonlinear , logical , symbolic , etcetera ) , provided that the constraint programming solver contains an algorithm to check its satisfiability , or even to identify globally inconsistent domain values .basically , a constraint programming solver tries to find a solution of the model by enumerating all possible variable - value assignments such that the constraints are all satisfied .because there are exponentially many possible assignments , constraint propagation is needed to prune large parts of the corresponding search tree .constraint propagation tries to remove inconsistent values from variable domains before the variables are actually instantiated .hence , one does nt need to generate the whole search tree , but only a part of it , while still preserving a complete ( exact ) solution scheme .the general solution scheme is an iterative process in which branching decisions are made , and the effects are propagated subsequently. variable and value ordering heuristics , which define the search tree , greatly influence the constraint propagation , and with that the performance of the solver .if no suitable variable and value ordering heuristics are available , constraint programming solvers often use a lexicographic variable and value ordering , and depth - first search to traverse the tree .however , when good heuristics are available , they should be applied .when ` perfect ' value and variable ordering heuristics are followed , it will lead us directly to the optimal solution ( possibly unproven ) .although perfect heuristics are often not available , some heuristics come pretty close . in such cases , one should try to deviate from the first heuristic solution as little as possible .this is done by traversing the search tree using a limited discrepancy search strategy ( lds ) instead of depth - first search .lds is organized in waves of increasing discrepancy from the first solution provided by the heuristic .the first wave ( discrepancy 0 ) exactly follows the heuristic .the next waves ( discrepancy , with ) , explore all the solutions that can be reached when derivations from the heuristic are made .typically , lds is applied until a maximum discrepancy has been reached , say 3 or 4 .although being incomplete ( inexact ) , the resulting strategy often finds good solutions very fast , provided that the heuristic is informative . of courselds can also be applied until all possible discrepancies have been considered , resulting in a complete strategy . in this sectionwe briefly introduce semidefinite programming .a large number of references to papers concerning semidefinite programming are on the web pages and .a general introduction on semidefinite programming applied to combinatorial optimization is given in and .semidefinite programming makes use of positive semidefinite matrices of variables .a matrix is said to be positive semidefinite ( denoted by ) when for all vectors .semidefinite programs have the form here denotes the trace of , which is the sum of its diagonal elements , i.e. . the matrix , the cost matrix and the constraint matrices are supposed to be symmetric . 
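written out in a standard way ( the symbols below are generic and chosen by us ), such a program reads:

$$\begin{aligned}
\max\;\; & \mathrm{tr}(CX) \\
\text{s.t.}\;\; & \mathrm{tr}(A_{j}X) = b_{j}, \qquad j = 1,\ldots,m, \\
& X \succeq 0,
\end{aligned}$$

here $X$ is the matrix of variables, $C$ the cost matrix and $A_{1},\ldots,A_{m}$ the constraint matrices.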
the reals and the matrices define constraints .we can view semidefinite programming as an extension of linear programming .namely , when the matrices and are all supposed to be diagonal matrices , the resulting semidefinite program is equal to a linear program , where the matrix is replaced by a non - negative vector of variables . in particular , then a semidefinite programming constraint tr corresponds to a linear programming constraint , where represents the diagonal of .theoretically , semidefinite programs have been proved to be polynomially solvable to any fixed precision using the so - called ellipsoid method ( see for instance ) . in practice , nowadays fast ` interior point ' methods are being used for this purpose ( see for an overview ) .semidefinite programs may serve as a continuous relaxation for ( integer ) combinatorial optimization problems .unfortunately , it is not a trivial task to obtain a computationally efficient semidefinite program that provides a tight solution for a given problem .however , for a number of combinatorial optimization problems such semidefinite relaxations do exist , for instance the stable set problem , the maximum cut problem , quadratic programming problems , the maximum satisfiability problem , and many others ( see for an overview ) .the skeleton of our solution framework is formed by the constraint programming enumeration scheme , or search tree , as explained in section [ ssc : cp ] . within this skeleton ,we want to use the solution of a semidefinite relaxation to define the variable and value ordering heuristics . in this sectionwe first show how to extract a semidefinite relaxation from a constraint programming model .then we give a description of the usage of the relaxation within the enumeration scheme .we start from a constraint programming model consisting of a set of variables , corresponding finite domains , a set of constraints and an objective function . from this modelwe need to extract a semidefinite relaxation . in general , a relaxation is obtained by removing or replacing one or more constraints such that all solutions are preserved .if it is possible to identify a subset of constraints for which a semidefinite relaxation is known , this relaxation can be used inside our framework .otherwise , we need to build up a relaxation from scratch .this can be done in the following way .if all domains are binary , a semidefinite relaxation can be extracted using a method proposed by , which is explained below . in general , however , the domains are non - binary . in that case , we transform the variables and the domains into corresponding binary variables for and : we will then use the binary variables to construct a semidefinite relaxation . 
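one natural way to carry out this transformation ( the notation $y_{ij}$ is ours ) is the direct encoding of each finite-domain variable by a family of 0-1 variables:

$$y_{ij} = \begin{cases} 1 & \text{if } x_{i} = j, \\ 0 & \text{otherwise}, \end{cases}
\qquad i = 1,\ldots,n,\; j \in D_{i}, \qquad \sum_{j \in D_{i}} y_{ij} = 1 .$$

each original variable $x_{i}$ with domain $D_{i}$ is thus replaced by $|D_{i}|$ binary variables, exactly one of which takes the value 1.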
of course , the transformation has consequences for the constraints also , which will be discussed below .the method to transform a model with binary variables into a semidefinite relaxation , presented in , is the following .let be a vector of binary variables , where is a positive integer .construct the variable matrix as then can be constrained to satisfy where the rows and columns of are indexed from 0 to .condition ( [ eq : diag ] ) expresses the fact that , which is equivalent to .note however that the latter constraint is relaxed by requiring to be positive semidefinite .the matrix contains the variables to model our semidefinite relaxation .obviously , the diagonal entries ( as well as the first row and column ) of this matrix represent the binary variables from which we started .using these variables , we need to rewrite ( a part of ) the original constraints into the form of program ( [ eq : sdp_general ] ) in order to build the semidefinite relaxation . in casethe binary variables are obtained from transformation ( [ eq : transform ] ) , not all constraints may be trivially transformed accordingly . especially because the original constraints may be of any form .the same holds for the objective function . on the other hand , as we are constructing a relaxation , we may choose among the set of constraints an appropriate subset to include in the relaxation .moreover , the constraints itself are allowed to be relaxed .although there is no ` recipe ' to transform any given original constraint into the form of program ( [ eq : sdp_general ] ) , one may use results from the literature .for instance , for linear constraints on binary variables a straightforward translation is given in section [ ssc : sdp ] .at this point , we have either identified a subset of constraints for which a semidefinite relaxation exists , or built up our own relaxation .now we show how to apply the solution to the semidefinite relaxation inside the constraint programming framework , also depicted in figure [ fg : method ] . in general , the solution to the semidefinite relaxation yields fractional values for its variable matrix .for example , the diagonal variables of the above matrix will be assigned to a value between 0 and 1 .these fractional values serve as an indication for the original constraint programming variables .consider for example again the above matrix , and suppose it is obtained from non - binary original variables , by transformation ( [ eq : transform ] ) .assume that variable corresponds to the binary variable ( for some integer and ) , which corresponds to , where is a constraint programming variable and .if variable is close to 1 , then also is supposed to be close to 1 , which corresponds to assigning . hence , our variable and value ordering heuristics for the constraint programming variables are based upon the fractional solution values of the corresponding variables in the semidefinite relaxation .our variable ordering heuristic is to select first the constraint programming variable for which the corresponding fractional solution is closest to the corresponding integer solution .our value ordering heuristic is to select first the corresponding suggested value .for example , consider again the above matrix , obtained from non - binary variables by transformation ( [ eq : transform ] ) .we select first the variable for which , representing the binary variable , is closest to 1 , for some and corresponding integer . 
then we assign value to variable .we have also implemented a randomized variant of the above variable ordering heuristic . in the randomized case ,the selected variable is accepted with a probability proportional to the corresponding fractional value .we expect the semidefinite relaxation to provide promising values .therefore the resulting search tree will be traversed using limited discrepancy search , defined in section [ ssc : cp ] .a last remark concerns the solution value of the semidefinite relaxation , which is used as a bound on the objective function in the constraint programming model .if this bound is tight , which is the case in our experiments , it leads to more propagation and a smaller search space .this section describes the stable set problem and the maximum clique problem ( see for a survey ) , on which we have tested our algorithm .first we give their definitions , and the equivalence of the two problems .then we will focus on the stable set problem , and formulate it as an integer optimization problem . from this, a semidefinite relaxation is inferred .consider an undirected weighted graph , where is the set of vertices and a subset of edges of , with . to each vertex a weight is assigned ( without loss of generality , we can assume all weights to be nonnegative ) .a stable set is a set such that no two vertices in are joined by an edge in .the stable set problem is the problem of finding a stable set of maximum total weight in .this value is called the stable set number of and is denoted by usually denotes the unweighted stable set number .the weighted stable set number is then denoted as . in this work , it is not necessary to make this distinction . ] . in the unweighted case ( when all weights are equal to 1 ), this problem amounts to the maximum cardinality stable set problem , which has been shown to be already np - hard .a clique is a set such that every two vertices in are joined by an edge in .the maximum clique problem is the problem of finding a clique of maximum total weight in .this value is called the clique number of and is denoted by is defined similar to and also not distinguished in this paper . ] .the complement graph of is , with the same set of vertices , but with edge set .it is well known that .hence , a maximum clique problem can be translated into a stable set problem on the complement graph .we will do exactly this in our implementation , and focus on the stable set problem , for which good semidefinite relaxations exist .let us first consider an integer linear programming formulation for the stable set problem .we introduce binary variables to indicate whether or not a vertex belongs to the stable set .so , for vertices , we have integer variables indexed by , with initial domains . 
in this way , if vertex is in , and otherwise .we can now state the objective function , being the sum of the weights of vertices that are in , as .finally , we define the constraints that forbid two adjacent vertices to be both inside as , for all edges .hence the integer linear programming model becomes : another way of describing the same solution set is presented by the following integer quadratic program note that here the constraint is replaced by , similar to condition ( [ eq : diag ] ) in section [ sc : framework ] .this quadratic formulation will be used below to infer a semidefinite relaxation of the stable set problem .in fact , both model ( [ eq : ilp_form ] ) and model ( [ eq : quadratic ] ) can be used as a constraint programming model .we have chosen the first model , since the quadratic constraints take more time to propagate than the linear constraints , while having the same pruning power . to infer the semidefinite relaxation , however , we will use the equivalent model ( [ eq : quadratic ] ) .the integer quadratic program ( [ eq : quadratic ] ) gives rise to a well - known semidefinite relaxation introduced by lovsz ( see for a comprehensive treatment ) .the value of the objective function of this relaxation has been named the theta number of a graph , indicated by . for its derivation into a form similar to program ( [ eq : sdp_general ] ), we will follow the same idea as in section [ sc : framework ] for the general case .as our constraint programming model uses binary variables already , we can immediately define the matrix variable of our relaxation as where the binary vector again represents the stable set , as in section [ ssc : int_form ] .first we impose the constraints as described in section [ sc : framework ] .next we translate the edge constraints from program ( [ eq : quadratic ] ) into , because represents . in order to translate the objective function, we first define the weight matrix as then the objective function translates into .the semidefinite relaxation thus becomes note that program ( [ eq : theta1 ] ) can easily be rewritten into the general form of program ( [ eq : sdp_general ] ) .namely , is equal to tr where the matrix consists of all zeroes , except for , and , which makes the corresponding right - hand side ( entry ) equal to 0 ( similarly for the edge constraints ) .the theta number also arises from other formulations , different from the above , see . in our implementation we have used the formulation that has been shown to be computationally most efficient among those alternatives . let us introduce that particular formulation ( called in ) .again , let be a vector of binary variables representing a stable set . 
define the matrix where .furthermore , let the cost matrix be defined as for .observe that in these definitions we exploit the fact that for all .the following semidefinite program has been shown to also give the theta number of , see .when ( [ eq : theta2 ] ) is solved to optimality , the scaled diagonal element ( a fractional value between 0 and 1 ) serves as an indication for the value of ( ) in a maximum stable set ( see for instance ) .again , it is not difficult to rewrite program ( [ eq : theta2 ] ) into the general form of program ( [ eq : sdp_general ] ) .program ( [ eq : theta2 ] ) uses matrices of dimension and constraints , while program ( [ eq : theta1 ] ) uses matrices of dimension and constraints .this gives an indication why program ( [ eq : theta2 ] ) is computationally more efficient .all our experiments are performed on a sun enterprise 450 ( 4 x ultrasparc - ii 400mhz ) with maximum 2048 mb memory size , on which our algorithms only use one processor of 400mhz at a time . as constraint programming solver we use the ilog solver library , version 5.1 . as semidefinite programming solver, we use csdp version 4.1 , with the optimized atlas 3.4.1 and lapack 3.0 libraries for matrix computations .the reason for our choices is that both solvers are among the fastest in their field , and because ilog solver is written in c++ , and csdp is written in c , they can be hooked together relatively easy .we distinguish two algorithms to perform our experiments .the first algorithm is a sole constraint programming solver , which uses a standard enumeration strategy .this means we use a lexicographic variable ordering , and we select domain value 1 before value 0 .the resulting search tree is traversed using a depth - first search strategy . after each branching decision, its effect is directly propagated through the constraints . as constraint programming modelwe have used model ( [ eq : ilp_form ] ) , as was argued in section [ ssc : int_form ] .the second algorithm is the one proposed in section [ sc : framework ] .it first solves the semidefinite program ( [ eq : theta2 ] ) , and then calls the constraint programming solver . in this case, we use the randomized variable ordering heuristic , defined by the solution of the semidefinite relaxation .the resulting search tree is traversed using a limited discrepancy search strategy .in fact , in order to improve our starting solution , we repeat the search for the first solution times , ( where is the number of variables ) , and the best solution found is the heuristic solution to be followed by the limited discrepancy search strategy. we will first identify general characteristics of the constraint programming solver and the semidefinite programming solver applied to stable set problems .it appears that both solvers are highly dependent on the edge density of the graph , i.e. for a graph with vertices and edges .we therefore generated random graphs on 30 , 40 , 50 and 60 vertices , with density ranging from 0.01 up to 0.95 .our aim is to identify the hardness of the instances for both solvers , parametrized by the density . based upon this information, we can identify the kind of problems our algorithm is suitable for .we have plotted the performance of both solvers in figure [ fg : perf_cp ] and figure [ fg : perf_sdp ] . 
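before examining these plots, the control flow of the second algorithm can be summarised in a short sketch ( ours; the real implementation uses ilog solver and csdp, and `feasible` and `objective` below are placeholder callbacks rather than actual library calls ):

```python
def guided_lds(frac, feasible, objective, max_discrepancy=3):
    """Discrepancy-bounded search over 0/1 variables guided by fractional values.

    frac[i] in [0, 1] is the relaxation's suggestion for variable i;
    feasible(partial) and objective(full) are problem-specific callbacks.
    """
    n = len(frac)
    # variable ordering: variables whose fractional value is most decided first
    order = sorted(range(n), key=lambda i: -abs(frac[i] - 0.5))
    best = {"obj": None, "sol": None}

    def search(pos, partial, disc):
        if disc > max_discrepancy or not feasible(partial):
            return
        if pos == n:
            val = objective(partial)
            if best["obj"] is None or val > best["obj"]:
                best["obj"], best["sol"] = val, dict(partial)
            return
        i = order[pos]
        suggested = 1 if frac[i] >= 0.5 else 0   # value ordering from the relaxation
        for value in (suggested, 1 - suggested):
            partial[i] = value
            search(pos + 1, partial, disc + (value != suggested))
            del partial[i]

    search(0, {}, 0)
    return best["sol"], best["obj"]
```

in the actual method the objective value of the relaxation is additionally posted as a bound on the constraint programming objective, and the initial heuristic dive is randomized and repeated before the discrepancy search starts.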
here the constraint programming solver solves the problems to optimality, while the semidefinite programming solver only solves the semidefinite relaxation. for the constraint programming solver, we depict both the number of backtracks and the time needed to prove optimality. for the semidefinite programming solver we only plot the time needed to solve the relaxation; this solver does not use a tree search, but a so-called primal-dual interior point algorithm. note that we use a log-scale for time and number of backtracks in these pictures. from these figures, we can conclude that the constraint programming solver has the most difficulties with instances up to density around 0.2. here we see the effect of constraint propagation: as the number of constraints increases, the search tree can be heavily pruned. on the other hand, our semidefinite relaxation suffers from every edge that is added. as the density increases, the size of the semidefinite program increases accordingly, as well as its computation time. fortunately, for the instances up to density 0.2, the computation time for the semidefinite relaxation is very small. consequently, our algorithm is expected to behave best for graphs that have edge density up to around 0.2. for graphs with a higher density, the constraint programming solver is expected to use less time than the semidefinite programming solver, which makes the application of our method unnecessary. our first experiments are performed on random weighted and unweighted graphs. we generated graphs with 50, 75, 100, 125 and 150 vertices and edge densities of 0.05, 0.10 and 0.15, corresponding to the interesting problem area. the results are presented in table [ tb:random ]. unweighted graphs are named ` g ' according to their number of vertices and edge density; weighted graphs are similarly named ` wg '.

[ table [ tb:random ]: computational results on random graphs; all times are in seconds and the time limit is set to 1000 seconds. ]

in table [ tb:compare ] we compare our method with two methods that are specialized for maximum clique problems. the first method was presented by östergård, and follows a branch-and-bound approach. the second method is a constraint programming approach, using a special constraint for the maximum clique problem, with a corresponding propagation algorithm; this idea was introduced by fahle and extended and improved by régin. since all methods are performed on different machines, we need to identify a time ratio between them. a machine comparison from spec shows that our times are comparable with the times of östergård. we have multiplied the times of régin by 3, following the time comparison made in the literature.
in general, our method is outperformed by the other two methods, although there is one instance on which our method performs best ( san200_0.9_3 ). we have presented a method to use semidefinite relaxations within constraint programming. the fractional solution values of the relaxation serve as an indication for the corresponding constraint programming variables. moreover, the solution value of the relaxation is used as a bound for the corresponding constraint programming objective function. we have implemented our method to find the maximum stable set in a graph. experiments are performed on random weighted and unweighted graphs, structured graphs from coding theory, and on a subset of the dimacs benchmark set for maximum clique problems. computational results show that constraint programming can greatly benefit from semidefinite programming; indeed, the solutions to the semidefinite relaxations turn out to be very informative. compared to a standard constraint programming approach, our method obtains far better results. specialized algorithms for the maximum clique problem, however, generally outperform our method. the current work has investigated the possibility of exploiting semidefinite relaxations in constraint programming. possible future investigations include the comparison of our method with methods that use linear relaxations, for instance branch-and-cut algorithms. moreover, one may investigate the effects of strengthening the relaxation by adding redundant constraints; for instance, for the stable set problem so-called clique constraints ( among others ) can be added to both the semidefinite and the linear relaxation. finally, the proof of optimality may be accelerated, similarly to a method presented in earlier work, by adding so-called discrepancy constraints to the semidefinite or the linear relaxation and recomputing the solution to the relaxation. many thanks to michela milano, monique laurent and sebastian brand for fruitful discussions and helpful comments while writing ( earlier drafts of ) this paper. also thanks to the anonymous referees for useful comments.
constraint programming uses enumeration and search tree pruning to solve combinatorial optimization problems . in order to speed up this solution process , we investigate the use of semidefinite relaxations within constraint programming . in principle , we use the solution of a semidefinite relaxation to guide the traversal of the search tree , using a limited discrepancy search strategy . furthermore , a semidefinite relaxation produces a bound for the solution value , which is used to prune parts of the search tree . experimental results on stable set and maximum clique problem instances show that constraint programming can indeed greatly benefit from semidefinite relaxations .
that quantum mechanical phenomena can be effectively exploited for the storage , manipulation and exchange of information is now a widely recognised fact .the whole field of quantum information poses new challenges for the information theory community and involves several novel applications , especially with respect to cryptology .recent interest in quantum cryptography has been stimulated by the fact that quantum algorithms , such as shor s algorithms for integer factorization and discrete logarithm , threaten the security of classical cryptosystems .a range of quantum cryptographic protocols for key distribution , bit commitment , oblivious transfer and other problems have been extensively studied .furthermore , the implementation of quantum cryptographic protocols has turned out to be significantly easier than the implementation of quantum algorithms : although practical quantum computers are still some way in the future , quantum cryptography has already been demonstrated in non - laboratory settings and is well on the way to becoming an important technology .quantum cryptographic protocols are designed with the intention that their security is guaranteed by the laws of quantum physics .naturally it is necessary to prove , for any given protocol , that this is indeed the case .the most notable result in this area is mayers proof of the unconditional security of the quantum key distribution protocol bb84 .this proof guarantees the security of bb84 in the presence of an attacker who can perform any operation allowed by quantum physics ; hence the security of the protocol will not be compromised by future developments in quantum computing .mayers result , and others of the same kind , are extremely important contributions to the study of quantum cryptography .however , a mathematical proof of security of a _ protocol _ does not in itself guarantee the security of an implemented _ system _ which relies on the protocol .experience of classical cryptography has shown that , during the progression from an idealised protocol to an implementation , many security weaknesses can arise .for example : the system might not correctly implement the desired protocol ; there might be security flaws which only appear at the implementation level and which are not visible at the level of abstraction used in proofs ; problems can also arise at boundaries between systems and between components which have different execution models or data representations .we therefore argue that it is worth analysing quantum cryptographic systems at a level of detail which is closer to a practical implementation .computer scientists have developed a range of techniques and tools for the analysis and verification of communication systems and protocols .those particularly relevant to security analysis are surveyed by ryan _this approach has two key features .the first is the use of formal languages to precisely specify the behaviour of the system and the properties which it is meant to satisfy .the second is the use of automated software tools to either verify that a system satisfies a specification or to discover flaws .these features provide a high degree of confidence in the validity of systems , and the ability to analyse variations and modifications of a system very easily . 
in this paper we present the results of applying the above methodology to the bb84 quantum key distribution protocol .we have carried out an analysis using prism , a probabilistic model - checking system .our results confirm the properties which arise from mayers security proof ; more significantly , they demonstrate the effectiveness of the model - checking approach and the ease with which parameters of the system can be varied . our model could easily be adapted to describe variations and related protocols , such as b92 and ekert s protocol ( describe these protocols in detail ) .also , our model can be modified to account for implementation level concerns , such as imperfections in photon sources , channels , and detectors .the objective of _ key distribution _ is to enable two communicating parties , alice and bob , to agree on a common secret key , without sharing any information initially .once a common secret key has been established , alice and bob can use a symmetric cryptosystem to exchange messages privately . in a classical ( i.e. non quantum ) setting , it is quite impossible to perform key distribution securely unless assumptions are made about the enemy s computational power .the use of quantum channels , which can not be tapped or monitored without causing a noticeable disturbance , makes unconditionally secure key distribution possible .the presence of an enemy is made manifest to the users of such channels through an unusually high error rate .we will now describe the bb84 scheme for quantum key distribution , which uses polarised photons as information carriers .bb84 assumes that the two legitimate users are linked by two specific channels , which the enemy also has access to : 1 . a classical , possibly public channel , which may be passively monitored but not tampered with by the enemy ; 2 . a quantum channel which may be tampered with by an enemy . by its very nature, this channel prevents passive monitoring .the first phase of bb84 involves transmissions over the quantum channel , while the second phase takes place over the classical channel .the pair of quantum states is the _ rectilinear basis _ of the hilbert space , and is denoted by .the pair of quantum states is the _ diagonal basis _ of the hilbert space , and is denoted by .the encoding function where is defined as follows : the bb84 protocol can be summarised as follows : 1 . first phase ( quantum transmissions ) 1 . alice generates a random string of bits , and a random string of bases , where .2 . alice places a photon in quantum state for each bit in and in , and sends it to bob over the quantum channel .3 . bob measures each received , with respect to either or , chosen at random .bob s measurements produce a string , while his choices of bases form .second phase ( public discussion ) 1 . for each bit in : 1 .alice sends the value of to bob over the classical channel .2 . bob responds by stating whether he used the same basis for measurement .if , both and are discarded .2 . 
alice chooses a subset of the remaining bits in and discloses their values to bob over the classical channel .if the result of bob s measurements for any of these bits do not match the values disclosed , eavesdropping is detected and communication is aborted .the common secret key , , is the string of bits remaining in once the bits disclosed in step _2b ) _ are removed .there are two points to note in order to understand bb84 properly .firstly , measuring with the incorrect basis yields a random result , as predicted by quantum theory .thus , if bob chooses the basis to measure a photon in state , the classical outcome will be either 0 or 1 with equal probability ; if the basis was chosen instead , the classical outcome would be 1 with certainty . secondly , in step _2b ) _ of the protocol , alice and bob perform a test for eavesdropping .the idea is that , wherever alice and bob s bases are identical ( i.e. ) , the corresponding bits should match ( i.e. ) . if not , an external disturbance has occurred , and on a noiseless channel this can only be attributed to the presence of an eavesdropper . for more information ,the reader is referred to .we turn now to the formal security requirements for bb84 . among other things , a protocol such as bb84must ensure that an enemy s presence is always made manifest to the legitimate users and that , if a key does result from the procedure , it is unpredictable and common to both users .but most importantly , the protocol must ensure _ privacy : _ an enemy must never be able to obtain the value of the key .moreover , even if an enemy is able to obtain a certain quantity of information by trying to monitor the classical channel , that quantity has to be minimal ; meanwhile , the enemy s uncertainty about the key , , must be maximised .the conditional entropy of the key ( of length ) given the view is defined as : such requirements are usually expressed in terms of _ security parameters_. for quantum key distribution , the security parameters are conventionally written and .the parameter is the number of quantum states transmitted , while denotes collectively the tolerated error rate , the number of bits used to test for eavesdropping , and related quantities .we use the parameter instead of the key length , as these are assumed to be linearly related .for instance , the value of is some function of and : .the proof stipulates that should be exponentially small in and .formally , noting that the choice of over as the parameter only changes the value of the constant , and not the functional relationship. 
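before turning to the formal analysis, it is worth noting how little machinery the protocol steps above require when no eavesdropper is present; the following small simulation ( ours, purely illustrative and unrelated to the prism model described below ) reproduces the quantum transmissions and the sifting phase:

```python
import random

RECTILINEAR, DIAGONAL = "+", "x"   # the two polarisation bases

def bb84_no_eavesdropper(n):
    # first phase: alice's random bits and bases, bob's random measurement bases
    a_bits  = [random.randint(0, 1) for _ in range(n)]
    a_bases = [random.choice([RECTILINEAR, DIAGONAL]) for _ in range(n)]
    b_bases = [random.choice([RECTILINEAR, DIAGONAL]) for _ in range(n)]

    # bob's outcomes: the correct bit when the bases match,
    # a uniformly random bit otherwise (as quantum theory predicts)
    b_bits = [a if ba == bb else random.randint(0, 1)
              for a, ba, bb in zip(a_bits, a_bases, b_bases)]

    # second phase: keep only positions where the bases agree (sifting)
    sifted = [(a, b) for a, ba, b, bb in zip(a_bits, a_bases, b_bits, b_bases)
              if ba == bb]

    # a subset of the sifted bits would be disclosed to test for eavesdropping;
    # with a perfect channel and no enemy, alice's and bob's bits always agree
    assert all(a == b for a, b in sifted)
    return [a for a, _ in sifted]

key = bb84_no_eavesdropper(64)
print(len(key), "sifted key bits")
```

adding an intercept-resend eavesdropper amounts to inserting a third party who measures and retransmits each photon in a random basis, which is exactly the attack analysed below.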
we will demonstrate later for bb84 that , the probability that an enemy succeeds in obtaining more than key bits correctly is a function of the form ( [ decaying - expo ] ) .mayers security proof of bb84 formalises the notion of privacy by defining a quantum key distribution protocol as , if , for every strategy adopted by an enemy , the average of the quantity is less than or equal to some constant .this definition of privacy merely requires the key to be uniformly distributed , when the key length is known .a more conventional privacy definition would have required that the mutual information be less than or equal to , but this is not entirely satisfactory .the theoretical proof of bb84 s security is a significant and valuable result .however , to prove a similar result for a different scheme or cryptographic task is far from trivial and is likely to involve new , ever more specialised derivations .a more flexible approach to analysing the security of quantum cryptographic protocols is clearly desirable .manufacturers of commercial quantum cryptographic systems , for instance , require efficient and rigorous methods for design and testing .a suitable approach should allow for modelling implementation level details and even minor protocol variations with relative ease .we believe that _ model checking _ is such an approach , and we will demonstrate its application to bb84 . model checking is an automated technique for verifying a finite state system against a given temporal specification . using a specialised software tool ( called a _ model checker _ ) , a system implementor can mechanically prove that the system satisfies a certain set of requirements . to do this , an abstract model , denoted ,is built and expressed in a description language ; also , the desired behaviour of the system is expressed as a set of temporal formulae , .the model and the formulae are then fed into the model checker , whose built in algorithms determine conclusively whether satisfies the properties defined by the ( i.e. whether for each property ) .checking should not be confounded with computer based simulation techniques , which do not involve an exhaustive search of all possibilities . for systems which exhibit probabilistic behaviour ,a variation of this technique is used ; a _ probabilistic model checker , _ such as prism , computes the probability for given and .prism models are represented by probabilistic transition systems , and are written in a simple guarded command programming language .system properties for prism models are written in probabilistic computation tree logic ( pctl ) .prism allows models to be parameterised : .thus the probability ( [ el - probabilidad ] ) may be computed for different values of ; this is termed an _ experiment_. by varying one parameter at a time , it is possible to produce a meaningful plot of the variation of ( [ el - probabilidad ] ) .we have built a model of bb84 for use with prism .it is not possible to present the source code for this model here , due to space limitations ; however , the full source code is available online , and is discussed extensively in .a system description in prism is a computer file containing module definitions , each module representing a component of the system . 
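for orientation, quantitative pctl queries take forms such as ( a generic example, not the exact property checked for bb84 )
$$\mathrm{P}_{=?}\left[\, \Diamond\; \mathit{detected} \,\right],$$
which asks for the probability that a state satisfying the atomic proposition $\mathit{detected}$ is eventually reached.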
in our description of bb84, there is a module corresponding to each party involved in the protocol and a module representing the quantum channel. each module has a set of local variables and a sequence of actions to perform; an action typically takes one of the following two forms:
$$[s]\ g \rightarrow (v_{1} := \mathrm{val}_{1}); \qquad \mathrm{( [ firstcase ] )}$$
$$[s]\ g \rightarrow 0.5:(v_{1} := \mathrm{val}_{1}) + 0.5:(v_{1} := \mathrm{val}_{2}); \qquad \mathrm{( [ secondcase ] )}$$
in ( [ firstcase ] ), when the guard $g$ holds, the variable $v_{1}$ is assigned the value $\mathrm{val}_{1}$; in ( [ secondcase ] ), $v_{1}$ is assigned either the value $\mathrm{val}_{1}$ or $\mathrm{val}_{2}$ with equal probability. part of the expressive power of prism comes from the ability to specify arbitrary probabilities for actions; for example, one could model a bias in alice's choice of polarisation basis in bb84 with an action such as:
$$[\,]\ \mathbf{true} \rightarrow 0.7:(\mathit{al\_basis} := \boxplus) + 0.3:(\mathit{al\_basis} := \boxtimes); \qquad \mathrm{( [ varyingalicebasis ] )}$$
in this example, alice is biased towards choosing the rectilinear basis. knowledge of this syntax is sufficient for an understanding of the prism description of bb84. in what follows, we will discuss the properties which we have been able to investigate. as discussed in section [ bb84-sec ], there are two security requirements for bb84 of interest: 1. _ an enemy's presence must not go unnoticed; _ if the legitimate users know that an enemy is trying to eavesdrop, they can agree to use privacy amplification techniques [ 20 ] and/or temporarily abort the key establishment process. 2. _ any quantity of valid information which the enemy is able to obtain through eavesdropping must be minimal. _ we can use our model of bb84 to compute the probability that a given pctl property formula holds. therefore, in order to verify that bb84 satisfies the security requirements just mentioned, we have to reformulate these requirements in terms of probability. firstly, we should be able to compute exactly what the probability of detecting an enemy is. in our prism model, we can vary the number of photons transmitted in a trial of bb84, and so this probability is a function of that number. let us write the probability of detecting an enemy as in ( [ dudakis ] ); there, the relevant pctl formula is the one whose boolean value is true when an enemy is detected. before we give the definition of this formula, we should state the random event that occurs when an enemy is detected; this will allow us to write the detection probability as a classical probability. in bb84, an enemy, eve, is detected as a result of the disturbance inevitably caused by some of her measurements. just as bob, eve does not know which polarisation bases were used to encode the bits in alice's original bit string. eve has to make a random choice of basis, which may or may not match alice's original choice. if the bases match, eve is guaranteed to measure the photon correctly; otherwise, quantum theory predicts that her measurement result will only be correct with probability 0.5.
in a so-called _ intercept resend attack _, eve receives each photon on the quantum channel, measures it with her own choice of basis, obtaining a bit value, and then transmits to bob a new photon which represents that bit in her basis. if eve's basis choice is incorrect, her intervention may disturb the photon and thereby reveal her presence; but for detection to occur, bob must choose the correct basis for his measurement. whenever bob obtains an incorrect bit value despite having used the correct basis, this is because an enemy has caused a disturbance. note that we are assuming a perfect quantum channel here; an imperfect channel would produce noise, causing additional disturbances. so, to summarise, an enemy's presence is made manifest as soon as bob, having used the same basis as alice, obtains a bit value different from the one alice encoded. therefore, the probability of detecting an enemy's presence in bb84 may be written as the probability of this event, and the corresponding pctl formula for prism follows directly ( the prism model of bb84 uses more elaborate variable names than the abbreviations used here ). this probability has been calculated with prism, which computes ( [ dudakis ] ), for a range of values of the number of photons transmitted; the result is shown in figure 1.

[ figure 1: the probability of detecting the eavesdropper as a function of the number of photons transmitted; crosses mark values computed by prism and the dotted curve is a fitted exponential. ]

the first requirement for bb84, namely that it should be possible to detect an enemy's presence, clearly is satisfied. as we can see from figure 1, as the number of photons transmitted is increased, the probability of detection tends toward 1.
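this behaviour agrees with a direct calculation for the intercept resend attack ( a standard back-of-the-envelope argument, using our own notation $n$ for the number of photons and $p_{d}$ for the detection probability ): a given photon reveals eve only if alice and bob happen to use the same basis ( probability $1/2$ ), eve has picked the wrong basis ( probability $1/2$ ) and bob then reads the wrong bit ( probability $1/2$ ), so

$$p_{d}(n) \;=\; 1 - \left(1 - \tfrac{1}{2}\cdot\tfrac{1}{2}\cdot\tfrac{1}{2}\right)^{n} \;=\; 1 - \left(\tfrac{7}{8}\right)^{n} \;=\; 1 - e^{-n\ln(8/7)},$$

which is exactly of the exponential form fitted below, with decay constant $\ln(8/7) \approx 0.134$.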
we will now consider the second security requirement. consider the event in which eve measures a given transmitted photon correctly; the probability that eve measures all photons correctly, and hence is able to obtain the secret key, is the product of the probabilities of these events. we will examine the variation of a related quantity, namely the probability that eve measures _ more than half _ the photons transmitted correctly. according to the second security requirement for bb84, the amount of valid information obtained by an enemy must be minimised; we therefore investigate the variation of this probability as a function of the number of photons transmitted, and we expect it to grow smaller and smaller as that number increases. the prism model of bb84 includes a counter variable, ` nc `, whose value is the number of times that eve makes a correct measurement, and the corresponding pctl formula may be written in terms of this variable. prism produces the plot shown in figure 2; it can be seen from the figure that the probability decays exponentially with the number of photons transmitted.

[ figure 2: the probability that eve correctly measures more than half of the photons transmitted, as a function of the number of photons; crosses mark values computed by prism and the dotted curve is a fitted exponential. ]

figures 1 and 2 each contain two superimposed plots: the data points marked with crosses are actual values produced by prism, and the dotted curves are nonlinear functions to which the data points have been fitted. we have used the levenberg-marquardt nonlinear fitting algorithm for this purpose.
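such a fit is straightforward to reproduce with a standard nonlinear least-squares routine; the sketch below ( ours, using synthetic placeholder data rather than the actual prism output ) uses scipy, whose default unbounded method is levenberg-marquardt:

```python
import numpy as np
from scipy.optimize import curve_fit

# detection probabilities for n = 1..30 photons; here generated from the
# analytic intercept-resend expression purely as placeholder data
n = np.arange(1, 31)
p_detect = 1.0 - (7.0 / 8.0) ** n

def model(n, c1, c2):
    # exponential saturation toward 1, the form used for figure 1
    return 1.0 - c1 * np.exp(-c2 * n)

(c1, c2), _ = curve_fit(model, n, p_detect, p0=(1.0, 0.1))
print(round(c1, 3), round(c2, 3))
```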
the fit determines values $c_{1},\ldots,c_{4}$ such that
$$p_{d}(n) \approx 1 - c_{1}\exp[-c_{2}\,n] \qquad\text{and}\qquad p_{>1/2}(n) \approx c_{3}\exp[-c_{4}\,n];$$
in particular, values for $c_{1},\ldots,c_{4}$ were obtained to three decimal places. it is evident that increasing the number of photons transmitted, or equivalently the length of the bit sequence generated by alice, increases bb84's capability to avert an enemy: the probability of detecting the enemy increases exponentially, while the amount of valid information the enemy has about the key decreases exponentially. these results are in agreement with mayers' claim that, in an information theoretic setting, which is our case, a quantity such as the amount of shannon information available to eve must decrease exponentially fast as the security parameter increases; remember that we have assumed that the number of transmissions is linearly related to that parameter. variations in the protocol can be accommodated easily by modifying the prism model. for example, a bias in alice's choice of basis can be introduced, and this can be described by a prism action such as ( [ varyingalicebasis ] ). this influences the performance of bb84; it alters the variation of both quantities plotted above. it is also possible to vary _ a posteriori _ probabilities with prism, such as the probability that, for any given transmission, the enemy's choice of measurement basis matches alice's original choice. this probability is not usually taken into consideration in manual proofs, and is likely to be useful for modelling more sophisticated eavesdropping attacks. it should be noted that the results presented here are not as general as mayers' proof: for instance, we have assumed that a noiseless channel is being used, and we have only considered a finite number of cases. related techniques from computer science, which are better suited for a full proof of unconditional security, do exist; the most appropriate of these is _ automated theorem proving _, which we leave for future work. this technique is not restricted to finite scenarios, and can provide the generality needed for a more extensive analysis. in this paper we have analysed the security of the bb84 protocol for quantum key distribution by applying _ formal verification techniques _, which are well established in theoretical computer science. in particular, an automated model-checking system, prism, was used to obtain results which corroborate mayers' unconditional security proof of the protocol. compared to manual proofs of security, our approach offers several advantages. firstly, it is easily adapted to cater for other quantum protocols. it also allows us to analyse composite systems, which include both classical and quantum mechanical components. finally, we are not only able to model abstract protocols as presented here but concrete implementations as well.
this paper discusses the use of computer aided verification as a practical means for analysing quantum information systems ; specifically , the bb84 protocol for quantum key distribution is examined using this method . this protocol has been shown to be unconditionally secure against all attacks in an information theoretic setting , but the relevant security proof requires a thorough understanding of the formalism of quantum mechanics and is not easily adaptable to practical scenarios . our approach is based on _ probabilistic model checking ; _ we have used the prism model checker to show that , as the number of qubits transmitted in bb84 is increased , the equivocation of the eavesdropper with respect to the channel decreases exponentially . we have also shown that the probability of detecting the presence of an eavesdropper increases exponentially with the number of qubits . the results presented here are a testament to the effectiveness of the model checking approach for systems where analytical solutions may not be possible or plausible .
the cosmic web is a dynamic system evolving under the action of gravity towards the configuration of minimum energy .tiny density fluctuations in the primordial density field grew accentuated by gravity to form the galaxies and their associations we observe today .the distribution of galaxies forms an interconnected network of flat walls , elongated filaments and compact clusters of galaxies delineating vast empty void cells .voids , walls , filaments and clusters can be intuitively associated to cells , faces , edges and vertices in cellular systems ( see fig .1 ) . the geometry and topology of the network of voids has a tantalizing similarity to other cellular systems observed in nature .even its evolution resembles the coarsening of soap foam where large bubbles grow by the collapse of adjacent smaller bubbles in a similar way as voids grow by their own expansion and by the collapse of smaller voids in their periphery .this similarity in structure and dynamics between voids and other cellular systems is the motivation for this study . in this workwe focus on three well - known scaling laws observed in cellular systems .i).- the lewis law , discovered first in cucumber cells , relates the area of a cell and its number of neighbors ( also known as its degree ) as , where is the area of the central cell and are constants .ii).- the aboav - weaire law was found while studying cellular domains in metallic crystals .it states that there is an inverse relation between the degree of a cell and the degree of its adjacent cells .this is usually expressed as , where is the degree of its adjacent cells .the von neumann law describing the evolution of two - dimensional foam - cells is expressed in its simplest form as : , where is the change in the area of the cell and is a constant .this remarkable relation is purely topological , depending only on the connectivity of cells .the long - sought extension to three dimensions of the von neumann relation was recently derived by ( see also for a recent study of scaling relations on voronoi systems ) .our results are based on a dark - matter cosmological n - body simulation with particles inside a box of mpc side ( see appendix for details ) .figure 2 shows three scaling relations ( lewis , aboav and von neumann laws ) computed from the void network at two different times .we found that , consistent with the lewis law , large voids have on average a higher degree than smaller voids ( figure 2a ) . at the present timethe relation is linear while at high redshift and small radius ( ) there is a small departure from linearity .the peak of the void radius distribution is mpc at ( figure 3 , appendix ) corresponding to in figure 2a .this value is close to the mean number of neighbors for voronoi distributions ( * ? ? ?* ; * ? ? ?* for an extensive study on voronoi cells ) .predictions for a void network arising from a phase transition give also within our measured values . 
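as a rough , self - contained illustration of how the lewis and aboav statistics are measured on a cellular tessellation , the python sketch below evaluates them on a two - dimensional poisson - voronoi pattern rather than on the three - dimensional void network of the paper ; it assumes scipy is available , the helper names are ours , and boundary ( unbounded ) cells are simply skipped . the paper applies the same bookkeeping to the three - dimensional void - graph described in the appendix .

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_cell_stats(points: np.ndarray):
    """Degree (number of neighbours) and area of each 2-D Voronoi cell."""
    vor = Voronoi(points)
    degree = np.zeros(len(points), dtype=int)
    neighbours = [[] for _ in range(len(points))]
    # two seeds are neighbours if they share a ridge (a cell wall)
    for i, j in vor.ridge_points:
        degree[i] += 1
        degree[j] += 1
        neighbours[i].append(j)
        neighbours[j].append(i)
    # cell areas via the shoelace formula; unbounded border cells stay NaN
    area = np.full(len(points), np.nan)
    for k, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        area[k] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return degree, area, neighbours

rng = np.random.default_rng(42)
deg, area, nbrs = voronoi_cell_stats(rng.random((2000, 2)))

# Lewis law: mean cell area as a function of degree n (bounded cells only)
for n in range(4, 10):
    sel = (deg == n) & np.isfinite(area)
    if sel.any():
        print("n =", n, "mean area =", round(float(area[sel].mean()), 5))

# Aboav law: mean degree of the neighbours of cells of degree n
m_of_n = {n: float(np.mean([deg[j] for i in np.where(deg == n)[0] for j in nbrs[i]]))
          for n in range(4, 10) if (deg == n).any()}
print(m_of_n)
```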
at early timesthere is a stronger dependence of on than at the present time .in fact at the mean degree of voids is closer to the voronoi case .this is consistent ( at least qualitatively ) with the study by who found that the mean degree of voronoi cells from a poisson distribution decreases as the poisson seeds become correlated , this being a crude approximation to the correlating effect of gravity on an initial poisson distribution of voronoi seeds .figure 2b shows the average neighbor number of the neighbors cells as function of the number of neighbors of the central cell ( aboav law ) .voids follow the aboav law at all times . at high redshifts and low a small decrease in the slopecan be observed , although we do not see the full downturn measured in voronoi distributions .the higher values of at early times , closer to , suggest a more uniform initial distribution of void sizes .figure 2c shows the rate of change in the void size with respect to their degree ( von neumann law ) .while there is a clear relation at all times , the dispersion increases at low redshift possibly reflecting non - linear processes in the void evolution .there is a small indication that , at very early times voids larger than a couple of mpc did not collapse . at latter timesvoids with a lower number of adjacent voids than a critical value will in average collapse .this critical degree is at the present time .the lewis law in addition to von neumann law implies that below a critical radius mpc voids collapse and above it they expand ( ) .the geometry and dynamics of our own local cosmic environment has some similarities to a collapsing void scenario . in particular the existence of a population of luminous galaxies off the plane of the local wall .if our local wall was formed by the collapse of a small void one would expect galaxies at opposite walls of the collapsing void to pass through the newly formed wall .this scenario is plausible given our position at the edge of a large supercluster in which the milky way and its surrounding voids are embedded inside a large shallow overdensity .we in fact have a dramatic example of such collapsing structures in our own cosmic backyard .just opposite to the local void there is a filament of galaxies , the leo spur , on the farther side of the southern void " ( the small void opposite to the local void ) at a distance of mpc approaching to us as a whole with a radial velocity of km/ . at that distancethe velocity discrepancy with the unperturbed hubble flow is of the order of 700 km / s ! this local velocity anomaly " can not be fully explained by the nearby massive structures .it is , however , consistent with the void - in - cloud scenario described by from figures 2a and 2c we have that and mpc comoving .if we add the hubble expansion factor to convert to an observable velocity then is of the order of mpc which is significantly lower than the radius of the southern void " ( mpc , ) . is the mean radius and so we should not be surprised to find variations due to particular lss configurations .this high value in the rate of collapse of the southern void is worth further study. in general cellular scaling relations assume a euclidean geometry , however other space metrics are possible .the derivation of the von neumann law assumes a flat geometry where the integral of the mean curvature around a cell is . generalized the von neumann law to curved spaces as , where is the radius of curvature . 
found a departure from the euclidean von neumann law in 2d froth on a spherical dome interpreted as a result of the positively curved space being able to accommodate larger angles than the flat space . in the case of the network of voidswe should expect a dependence on the space metric resulting from different geometrical constraints .the lewis , aboav and von neumann laws directly reflect the metric of space by means of the void - connectivity and size and therefore can provide a standard ruler for cosmology .the lewis and aboav relations can be measured from galaxy catalogues with accurate galaxy distance estimations and enough spatial sampling to resolve individual voids . in order to measure the von neumann relationwe need actual physical distances ( in contrast to redshift distances ) which can complicate its measurement .furthermore , the role of spatial curvature is expected to be marginal .for instance , the two - dimensional von neumann law has a dependence on spatial curvature as .the radius of curvature of the universe is given by , where and is the total density of the universe .any effect of spatial curvature on the von neumann law will be very small for reasonable values of .the simulations we present here can not account for non - euclidian geometries .more complete simulations , better theoretical understanding and future massive 3d galaxy surveys with accurate distance estimates are required to apply this relations as an independent standard ruler for cosmology .this research was partly funded by the betty and gordon moore foundation and by a new frontiers of astronomy and cosmology grant from the templeton foundation .the author would like to thank mark neyrinck for stimulating discussions .the results presented in this work are based on a n - body computer simulation containing dark matter particles inside a box of h side with the standard cosmology with , , , .starting at we evolved the box to the present time using the n - body code gadget-2 and stored 32 snapshots in logarithmic intervals of the expansion factor starting at . while the particle number may seem too low in fact for lss studies it is sufficient since the smallest voids we are interested in are larger than a few mpc in radius .the low particle number imposes a low - pass filter , removing unwanted structures arising from over - segmentation in the watershed method used to identify voids . from the particle distribution at each of the 32 snapshots we computed a continuous density field on a regular grid of voxel size using a lagrangian sheet approach as described in .the lagrangian nature of this density estimation and interpolation method allows us to compute accurate densities at very early times and also at latter times inside voids where the particle arrangement is still close to a regular grid ( see fig [ fig : density fields ] ) .next we identified voids using the floating - point implementation of the watershed transform in the spine pipeline . in order to further minimize over - segmentation of voids arising from spurious splitting of voids between snapshots we merged voids in adjacent snapshots if a void at given snapshot had more than of its volume in the next snapshotthe distribution of void sizes at two times is shown in fig .[ fig : radius_distribution ] .the effective void radius was computed from its volume as . 
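the effective radius just mentioned is , under the usual convention , the radius of the sphere with the same volume as the ( generally irregular ) watershed void . a short python sketch of this bookkeeping step is given below ; the voxel counts and the voxel size are made - up stand - ins for the actual watershed output , used only to show the conversion .

```python
import numpy as np

def effective_radius(volume_mpc3: np.ndarray) -> np.ndarray:
    """Effective void radius r_eff = (3 V / 4 pi)^(1/3) for an array of void volumes."""
    return (3.0 * volume_mpc3 / (4.0 * np.pi)) ** (1.0 / 3.0)

def radius_distribution(n_voxels_per_void, voxel_size_mpc, bins=30):
    """Histogram of effective radii from watershed void volumes given in voxels."""
    volumes = np.asarray(n_voxels_per_void, dtype=float) * voxel_size_mpc ** 3
    radii = effective_radius(volumes)
    counts, edges = np.histogram(radii, bins=bins)
    return radii, counts, edges

# toy usage with invented voxel counts per void (illustrative only)
rng = np.random.default_rng(1)
fake_voids = rng.lognormal(mean=8.0, sigma=0.6, size=500)
radii, counts, edges = radius_distribution(fake_voids, voxel_size_mpc=0.5)
print("median r_eff [Mpc]:", round(float(np.median(radii)), 2))
```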
for each void we identified its adjacent voids ( voids that share a common wall ) and created a _void - graph _ with nodes corresponding to void centers and edges joining adjacent voids ( this _ void - graph _ is a triangulation and is the dual of the _ cosmic web - graph _ which is a cellular system ) .this void - graph was used to compute the lewis and aboav relations in fig .2 . in order to trace the evolution of individual voids ( for the von neumann relation ) we created a _ void progenitor line _ in a similar way as in done by but performing the linking across time ( adjacent snapshots ) instead of scale ( hierarchical space ) .abel , t. , hahn , o. , & kaehler , r. 2012 , mnras , 427 , 61 d.a .aboav , mettalography , 13 , 1970 , _ the arrangement of grains in a polycrystal metallography _ , 3 , 383 .aragn - calvo , m. a. , platen , e. , van de weygaert , r. , & szalay , a. s. 2010 , apj , 723 , 364 aragon - calvo , m. a. , van de weygaert , r. , araya - melo , p. a. , platen , e. , & szalay , a. s. 2010 , mnras , 404 , l89 aragon - calvo , m. a. , & yang , l. f. 2014 , mnras , 440 , l46 peebles , p.j. e. , & nusser , a. 2010 , nat , 465 , 565 j. a. f. , plateau , _ statique experimentale et theoretique des liquides suomis aux seules forces moleculaires _ , gauthier - villas ,trubner et f. clemn , paris , 1873 shandarin , s. , habib , s. , & heitmann , k. 2012 , prd , 85 , 083005 sheth , r. k. , & van de weygaert , r. 2004 , mnras , 350 , 517 springel , v. 2005 , mnras , 364 , 1105 sutter , p. m. , elahi , p. , falck , b. , et al .2014 , arxiv:1403.7525 d. , weaire ._ phil . mag ._ , 79:491 495 , 1999 .d. , weaire . & s. , hutzler , 1999 _ the physics of foams _ , clarendon , oxford , 1999 van de weygaert , r. , & icke , v. 1989 , aap , 213 , 1 van de weygaert , r. 1994 , aap , 283 , 361
cellular systems are observed everywhere in nature , from crystal domains in metals , soap froth and cucumber cells to the network of cosmological voids . surprisingly , despite their disparate scale and origin all cellular systems follow certain scaling laws relating their geometry , topology and dynamics . using a cosmological n - body simulation we found that the cosmic web , the largest known cellular system , follows the same scaling relations seen elsewhere in nature . our results extend the validity of scaling relations in cellular systems by over 30 orders of magnitude in scale with respect to previous studies . the dynamics of cellular systems can be used to interpret local observations such as the local velocity anomaly " as the result of a collapsing void in our cosmic backyard . moreover , scaling relations depend on the curvature of space , providing an independent measure of geometry . [ firstpage ] cosmology : large - scale structure of universe ; galaxies : kinematics and dynamics , local group ; methods : data analysis , n - body simulations
oil supply and increasing environmental concerns strongly motivate research efforts toward the electrification of transportation , and technological advances have fostered a rapid arrival of electric vehicles ( evs ) in the market . however , the charging of evs has a tremendous impact on the stakeholders in both the electricity and transportation domains , such as electricity producers , power grid operators , policy makers , retailers , and customers .the ev load can drive electricity prices up , and alter the producers generation portfolios , resulting in an increase of co emission .additionally , high penetration with uncontrolled charging threatens the sustainability of distribution networks .for example , for an ev penetration of 25% , almost 30% of network facilities would need to be upgraded , while this ratio drops to 5% if the charging load can be shifted to less crowded time periods .these research works reach a consensus that ev charging should be controlled to avoid distribution congestion and higher peak - to - average ratios ( i.e. , demand sporadicity ) . at the same time, the power grid is witnessing one of its major evolutions since its conception at the beginning of the past century .the classical structure of electricity being produced in a small number of big , centralized , power plants , and flowing through the transmission and distribution networks to be consumed by end users is being challenged by the increasing penetration of renewable energy sources . the possibility to communicate bidirectionally with all elements of the grid and as a consequence to achieve unprecedented levels of monitoring and control serves as a major technological enabler of the new smart grid , allowing to accommodate new types of demand and production sources as illustrated in figure [ fig : smartgrid ] . in this context ,evs impose new burdens due to the extra demands they constitute , but also open opportunities thanks to the fact that their demands are relatively flexible , and that their batteries can be temporarily used to support the power grid : evs can be active contributors in the smart grid instead of passive consumers .the important aspect stressed in this paper is that evs can not be assumed to be directly coordinated by a central entity controlling all charging processes . indeed , evs belong to individuals with specific preferences and constraints , who would not relinquish control of the charging process without being properly compensated .instead , it is reasonable to assume that they react selfishly to management schemes : only when sufficient incentives are offered may ev owners coordinate their charging time and power , i.e. 
, reschedule ( directly or by giving some control to an external entity ) the charging process rather than recharging their batteries within the shortest delay , which is convenient for them but problematic in the grid operator perspective .those incentives can take several forms , from fixed rewards for letting the grid control the charging , to auctions for energy , or through time - varying prices set by grid operators .therefore , we think ev charging must be managed using market mechanisms , where participants are assumed to have different objectives .hence an appropriate framework to study the ev management schemes is that of economy , and more precisely game theory which provides specific tools to model and analyze the interactions among self - interested actors .this paper reviews the economy - driven schemes for ev charging management proposed in the literature . while the research on that topic is quite flourishing in the last years , there is to our knowledge no work presenting a comprehensive overview of the different approaches considered .this paper classifies the existing models , highlights their main assumptions and results , in order to compare them and identify the most promising types of mechanisms together with the directions that deserve further research .ev charging management requires the support of a corresponding communication structure . in some algorithms, information is broadcasted from grid operators to evs ; bidirectional unicast is sometimes needed to coordinate the charging behaviors of specific evs ; finally evs multicasting to charging stations ( with or without station relaying ) and stations responding ( by unicast , multicast or broadcast ) are necessary in reservation - based systems .the importance of information and communication technologies on the implementation of a so - called smart grid can never be overemphasized , and specially designed communication systems for vehicles are also relevant for better scheduling the charging of evs . hence charging algorithms and the corresponding communication systems should be considered simultaneously to make the best of their economical and environmental potentials .existing works in the literature provide general overviews of the requirements and challenges ; here we further investigate the economic properties of the charging algorithms , but keep track of their prerequisites on communication systems in terms of the volume and the frequency of information exchanges .the remainder of this paper is organized as follows . 
section [ sec : evssystems ] briefly discusses the technical environment of the charging problem , introduces the economic vocabulary and the desirable properties of an ev management scheme .the next two sections present and classify the charging schemes proposed in the literature to exploit the benefits and avoid undesirable outcomes from evs entering the grid ecosystem : section [ sec : unidirectional charging ] focuses on _ unidirectional charging _( energy only goes from the grid to the ev batteries ) while section [ sec : bidirectional energy trading ] allows _ bidirectional energy trading _ ( the grid can also take energy from the on - board ev batteries ) .section [ sec : communication ] summarizes the communication aspects of the schemes ( type of exchanges , volume and frequency ) , while section [ sec : mechanism ] provides a general classification of all models and approaches , stressing their limitations to highlight the need for further research in specific directions .section [ sec : conclusion ] concludes the paper .the term electric vehicle " can refer to a broad range of technologies . generally speaking, the extension of this concept covers all vehicles using electric motor(s ) for propulsion , including road and rail vehicles , surface and underwater vessels , even electric aircrafts . since our paper concerns the charging management schemes and their impacts on the grid as well as on their owners from an economic perspective ,we narrow the use of electric vehicle " to mention a passenger car with a battery that needs refills of electricity from external sources . battery electric vehicles ( bev ) and plug - in hybrid electric vehicles ( phev ) are two types of plug - in electric vehicles ( pev ) ; phevs differ from bevs in that the former have a gasoline or diesel engine coexisting with an electric motor. the economic mechanisms evoked in this paper mainly differ in the way prices are defined , in the mobility models ( if any ) of evs , in the time scale considered , and in the directions for power flows ( from the grid to evs , or both ways ) .the specificities of evs being bevs or phevs do not play a major role with regard to the economic aspects , and often schemes are proposed that can be indifferently applicable to each type of ev . hence in this survey we present mechanisms without always specifying the ev type ; we do it when it has an influence on the performance or applicability of the scheme .note that charging can be performed in diverse ways : evs can use an on - board or off - board charger , or use inductive charging while parked , thanks to inductive power transmission ( ipt ) technology . the ultimate experience of ipt is charging while in motion , of which a prototype named on - line electric vehicle ( _ olev _ ) has been designed in the korea advanced institute of science & technology . those cases being rare, we can consider in this paper that the charging is done via a physical connection with an on - board plug . to insure safe electricity delivery to an ev from the source , some particular ev supply equipment ( evse )is needed , which puts tight constraints on how evs can be recharged ( or discharged if possible ) . 
the charging rate limit , battery capacity and ac / dc conversion efficiency vary among the different charging facilities and patterns . two levels for ac charging and three levels for dc charging are approved by the sae j1772 standard , as shown in table [ fig : sae_j1772 ] , which gives the estimated time needed to fully recharge a battery with usable pack size , starting from an initial state of charge ( soc ) of . [ table fig : sae_j1772 : ac charging ( level 1 , level 2 ) and dc charging ( level 1 , level 2 , level 3 ) with their power ratings and estimated recharge times ] there are other charging standard proposals , which roughly correspond to the categories in table [ fig : sae_j1772 ] : for example chademo falls into dc level 2 , and the tesla supercharger overlaps with dc level 3 . at the other end of the wire stretching out from an ev socket is a charging station . figure [ fig : charging_facilities ] summarizes the main categories in which we can divide charging stations . _ individual stations _ capable of charging a single ev refer to those located in individual homes . _ parking lots _ for evs are yet to be developed to their full potential : they contain many individual evses in physical proximity , belonging to the same entity . public ev parkings are open to any ev , while private ev parkings provide access only to a specific fleet of evs , e.g. , one owned by a single company . _ on - road stations _ are relays for evs on long journeys ; they generally charge evs at the highest possible rate to minimize the delay . _ roadbed infrastructures _ for evs are based on ipt technology ; we already witness roadbed infrastructures that charge evs at traffic intersections or even without stopping . as some evs can use other types of energy sources , they can also be replenished at _ refueling stations _ , e.g. classical petrol stations , compressed air stations , or _ battery - swapping stations _ . those charging solutions are out of the scope of this paper , since they either overlap to some extent with refueling problems for conventional cars , or are still at an experimental stage . the smart grid is an evolution of the power grid which is expected to lead to a more efficient use of grid resources , for example with a reduced peak - to - average power consumption ratio , faster repairs , self - healing and self - optimizing possibilities , and full integration of renewable energy sources . demand response ( dr ) is the possibility for the power grid to alter the consumption patterns of end users ; it can be implemented through various mechanisms . dr was initially aimed primarily at large electricity consumers , but the transition to the smart grid provides a paradigm shift , where every load , no matter how small , can participate in a dr program . energy storage is a key technology for the integration of renewable energy sources into the grid . pumped - storage hydroelectricity ( psh ) accounts for 99% of the world 's bulk storage capacity , but there are physical limitations to the quantity of energy that this type of storage can hold . electric vehicles can both participate in dr and serve as energy storage facilities . they can respond to dr signals , such as price variations or direct control messages , by modulating their power consumption , thus providing necessary flexibility to the grid operator .
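to make the notion of charging flexibility concrete , the sketch below estimates , for a few charger power levels of the kind categorized in table [ fig : sae_j1772 ] , how long a recharge takes and how much scheduling slack remains during a parking period ; the pack size , initial soc , efficiency and power ratings are assumed illustrative values , not figures taken from the standard .

```python
def charge_time_hours(pack_kwh, soc_initial, charger_kw, efficiency=0.90):
    """First-order estimate of the time to charge from soc_initial to full.

    Constant-power charging is assumed; tapering near 100% soc and thermal
    limits are ignored, so real charging times are somewhat longer.
    """
    return pack_kwh * (1.0 - soc_initial) / (charger_kw * efficiency)

def scheduling_slack_hours(parking_hours, pack_kwh, soc_initial, charger_kw):
    """Time during which charging can be deferred or modulated while still
    guaranteeing a full battery at departure (negative means infeasible)."""
    return parking_hours - charge_time_hours(pack_kwh, soc_initial, charger_kw)

# illustrative numbers only (assumed pack size, soc and charger powers)
for label, kw in [("ac level 1 (~1.4 kW)", 1.4),
                  ("ac level 2 (~7.2 kW)", 7.2),
                  ("dc fast (~50 kW)", 50.0)]:
    t = charge_time_hours(pack_kwh=24.0, soc_initial=0.2, charger_kw=kw)
    slack = scheduling_slack_hours(parking_hours=10.0, pack_kwh=24.0,
                                   soc_initial=0.2, charger_kw=kw)
    print(f"{label}: {t:.1f} h to full, {slack:.1f} h of slack over a 10 h stop")
```

the slack column is precisely the resource that demand - response schemes exploit : a slow home charger over a long overnight stop may leave little or no slack , whereas a faster charger leaves most of the parking time available for load shifting .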
in some cases, evs can also inject electricity back to the grid , thus serving as distributed energy sources .these can be leveraged by the network operators to improve renewable energy integration , to help self - healing or to provide ancillary services , so as to reduce the dependency on specialized equipments like diesel generators .figure [ fig : evactors ] shows the major entities related to ev charging .a transmission system operator ( tso , in europe)or in some contexts ( in north america ) an independent system operator ( iso)is responsible for operating , ensuring the maintenance of and , if necessary , developing the transmission system in a given area .consumers equipped with energy sources that can deliver electricity to the distribution network are called prosumers . in a classical electricity market ,end - users have contracts with an electricity _ retailer _ , who buys the electricity produced by _ generators_.the transaction can be brokered via a bilateral agreement or on a wholesale market . as the aggregated energy consumption of a big regioncan be known with satisfactory precision well in advance , contracts for buying the bulk of the necessary electricity can be done a year or a month ahead on the _ futures market_. however , electricity consumption is heavily dependent on the weather , thus requires a significant amount of energy to be traded hours in advance on the _ day - ahead market_. finally , fine adjustments can be made up to an hour ahead , which are traded on the _ intra - day market_. to match supply and demand for electricity instantaneously , iso / tsos operate ancillary services markets ( generally using auctions ) where they purchase ancillary services from generators and/or consumers who have the ability to vary their generation or consumption powers .iso / tsos also keep a close watch on the efficiency and effectiveness of those markets . as elaborated before , ev charging involves many smart grid actors , whose objectives are not necessarily aligned : ev owners want to store enough energy as quickly as possible , and at the lowest cost , whatever the impact on other evs or on generation costs ; electricity producers and retailers are mainly driven by net benefits ; while isos generally aim to ensure the most efficient use of resources and to maintain the supply - demand balance .therefore , when designing mechanisms to decide allocations and prices paid , one has to anticipate that the actors may try to play the system at their advantage .for example , if decisions are made based on signals from users such as their willingness - to - pay , the rules should ensure that reporting untruthful values does not bring any gain to the corresponding actors : such a property is called _ incentive compatibility_. more generally , an appropriate framework to study the interactions among several decision - makers is that of game theory .a key notion is the nash equilibrium , that is an outcome ( a decision made by each actor ) such that no actor can improve his individual payoff ( utility ) through an unilateral move . 
as stable situations ,nash equilibria are often considered to be the expected outcomes from interactions .hence many of the mechanisms described in this paper rely on that notion .nash equilibria can be attained when all actors have perfect knowledge of their opponents , their decision sets , and their preferences .but those strong ( and often unrealistic ) conditions are not necessary : in several cases the nash equilibria can be reached or approached via some limited information exchanges among actors , or even without such exchanges but just by trying out decisions and _ learning _ the best ones . to summarize the ev charging problem setting , we recall the relevant actors and set up the vocabulary as below : * ev : a physical electric vehicle or its owner who will generally be assumed to have a _ utility function _ ( or benefit ) , that represents his preferences .we will mostly use the classical _ quasi - linear utility _model : for a given price and energy allocation , the ev owner utility will be the difference between the owner s _ willingness - to - pay _ ( or _ valuation _ , i.e. , the value of energy for him , expressed in monetary units ) and the price actually paid . *aggregator : an entity acting as an intermediary between the demand ( retailers / users ) and supply ( generators , iso / tso or charging stations in some scenarios ) sides of the electricity market . when an aggregator is designed to be a representative of a group of ev owners, its utility will be the aggregated user utility .otherwise , when it acts in its own interest as an intermediate energy supplier , the measure of utility will similarly be the difference between _ revenues _ ( the monetary gains from their clients ) and _ costs_. that difference is often called _ benefit_. * ev charging station : the owner and/or operator of one or several evses in physical proximity , who allows ev recharging and/or discharging with the aim of maximizing revenue , but always under some physical constraints such as local transformer capacity and standard recharging power level . * iso ( or tso ) : an entity in charge of operating and maintaining the transmission system in a given area .it sets a constraint for the aggregated ev load according to the transformer capacity , and purchases ancillary services when necessary , in order to maintain the supply - demand balance .the aggregated utility of all users ( here , evs ) is called _ user welfare _ , andthe aggregated utility of all suppliers ( ev charging stations or aggregator ) is the _ supplier welfare_. _ social welfare _ ( = user welfare+supplier welfare ) quantifies the global value of the system for the society , and is computed as the sum of all users valuations minus all costs ( production , transportation , if any ) .note that money exchanges do not appear in that measure , since they stay within the society . 
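to make this vocabulary concrete , the python sketch below ( our own toy construction , not one of the surveyed mechanisms ) gives each ev a quasi - linear utility with an assumed logarithmic valuation , lets a grid operator adjust a single energy price iteratively until aggregate demand matches an assumed feeder capacity , and then evaluates user , supplier and social welfare ; all numerical values and function names are illustrative assumptions .

```python
import numpy as np

def ev_demand(a, price, x_max):
    # best response of an EV with quasi-linear utility a*log(1+x) - price*x:
    # the maximiser on [0, x_max] is x = a/price - 1, clipped to the feasible range
    return np.clip(a / price - 1.0, 0.0, x_max)

def price_adjustment(a, x_max, capacity, price=1.0, step=0.01, iters=2000):
    """Iterative price update: raise the price when aggregate demand exceeds capacity."""
    for _ in range(iters):
        x = ev_demand(a, price, x_max)
        price = max(1e-6, price + step * (x.sum() - capacity))
    return price, x

a = np.array([3.0, 5.0, 2.0, 4.0])        # willingness-to-pay parameters (assumed)
x_max = np.array([10.0, 8.0, 6.0, 10.0])  # per-EV energy needs in kWh (assumed)
capacity = 12.0                           # energy available for charging (assumed)

p_star, x_star = price_adjustment(a, x_max, capacity)
user_welfare = float(np.sum(a * np.log(1.0 + x_star) - p_star * x_star))
supplier_welfare = float(p_star * x_star.sum())      # zero production cost assumed
social_welfare = user_welfare + supplier_welfare     # payments cancel out in the sum
print("price:", round(p_star, 3), "allocation:", np.round(x_star, 2),
      "social welfare:", round(social_welfare, 2))
```

with these assumed valuations the iteration settles at the price for which the four best responses exactly exhaust the capacity , and the social welfare reduces to the sum of the users ' valuations , illustrating why money transfers drop out of that measure .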
to provide a guideline for future proposals , we list in table [ tab : criteria ] the main questions raised by ev charging and summarize , from our point of view , the criteria that make a good charging management scheme . we also indicate in which sections of this paper those points are addressed . [ table tab : criteria : the main questions raised by ev charging , the corresponding criteria for a good charging management scheme , and the sections of the paper where each point is addressed ] we summarize in table [ tab : aggregator ] the economic approaches described in section [ sec : unidirectional charging ] and section [ sec : bidirectional energy trading ] . firstly , the models are classified into two categories , namely _ static _ and _ dynamic _ ones , defined in subsection [ subsubsec : sharingfuture ] . static models deal with an isolated time interval in which the performance is determined by the actions taken during this time , and optimal actions can be found based on current state information ( e.g. , a plan for a whole day made on a priori knowledge of prices or consumption ) . on the contrary , in a dynamic model where system information varies over time , actions should be updated as new state information is sequentially revealed , leading to dynamic optimization methods as illustrated in figure [ fig : mdp ] . as such , the static setting can be seen as a special case where the state is constant ( but still depends on the action taken ) . [ figure fig : mdp : the dynamic setting viewed as a markov decision process , in which at each time step the reward r_t depends on the current state s_t and the chosen action a_t , while the next state s_t+1 is determined by s_t , a_t and an exogenous perturbation n_t+1 ] we then distinguish the ways decision - makers interact : _ optimization - based approaches _ correspond to the cases where one central controller imposes his decisions about allocations and/or prices , and is not influenced by any other actor 's actions . ideally , such a central controller has access to all the information needed , so the management problem reduces to a classical optimization problem ; the room for research is therefore * for static models , in improving the optimization methods in terms of computational efficiency and/or approximation of the optimum ; * for dynamic models , in increasing the prediction accuracy and designing algorithms that are robust to unpredictable residuals . in contrast , _ game - theoretic approaches _ refer to the cases where interactions among several rational actors are considered : even if resources are still dispatched by a central controller , the allocations are affected by the other actors ' selfish behaviors ( e.g. , bids sent by evs ) .
here , static problems already lead to complex models , and even for those approaches , analytical proofs of incentive compatibility are only valid for some very specific utility functions . while the need seems to be for incentive - compatible mechanisms in dynamic settings , designing such schemes is still an open research question in many cases . the difficulty often lies in the evolution of the knowledge and beliefs ( and thus the actions ) of actors over time , since the actions taken partly reveal one 's private information ; analyzing the equilibria of such games is extremely complex . the last main criterion is related to the implementation type of the schemes : _ revelation schemes _ imply that actors have to exchange information ( such as their willingness - to - pay ) , and can choose strategically what to reveal , hence the importance of properties such as incentive compatibility . on the opposite , _ tâtonnement schemes _ involve a convergence of allocations ( and often prices ) through iterative methods . a key aspect in several tâtonnement - based mechanisms is the convergence of the method : here the limits we found are in the convergence speed ( especially in dynamic settings : do prices and allocations have time to converge before the setting changes , say , before another ev arrives ? ) . this is barely addressed in the literature , where in addition convergence is only established for some specific types of utility functions , which need validation . the classification highlights the need for game - theoretic models in dynamic settings . while it is extremely difficult to design incentive - compatible schemes in dynamic settings , it seems essential to us to develop game - theoretic approaches , even if based on tâtonnement schemes . table [ tab : aggregator ] groups the surveyed schemes along these lines : * static , revelation schemes : auctions based on willingness - to - pay , with incentive compatibility either assumed or proved ; * static , tâtonnement schemes : a stackelberg game between the aggregator and the evs in which the leader is not omniscient ( i.e. , unaware of the user utility functions ) , and an oligopoly game among charging stations ; * dynamic , revelation schemes : mechanisms that are incentive compatible for dynamically arriving clients ; * dynamic , tâtonnement schemes : dynamically relaunching a static algorithm , and sellers learning the users ' willingness - to - pay over time . electric vehicles , in addition to the prospect of being wholly driven by renewable energy , are not only energy - efficient but also cost - efficient , and emit less greenhouse gas than fossil - fuel based transportation . the main risk they incur comes from the negative impacts they may have on the grid , mostly caused by uncontrolled recharging superimposed on other loads , which exacerbates grid aging . coordinating recharging and/or discharging not only alleviates those negative effects , but can also help improve the grid by participating in services such as frequency regulation and energy storage for ( intermittent ) renewable energy generation . these opportunities can be realized in the smart grid realm , so evs and the smart grid are mutually reinforcing .
from the ev owner s point of view ,organized recharging and discharging offer the possibility to reduce energy costs or even generate profits .this paper surveys the charging managements schemes of the literature , with a focus on economy - driven mechanisms .the proposed models , often based on optimization and/or game - theory tools , range from the simple sharing of a given energy amount among several customers ( a classical problem ) to more complex settings covering aspects such as uncertainty about future events , user mobility constraints , charging station positions , and new grid services like regulation .while some interesting mechanisms have been proposed , and perform well on simulation scenarios , we observed a quite limited amount of analytical results due to the increasing complexity of the settings ( large number of actors , specific constraints of distribution networks and ev batteries ) and the economic constraints ( nonalignment of actors objectives ) .hence we think that further research is needed to better understand the key principles to apply when designing charging management schemes .the present survey highlights the potential of v2 g technology to benefit both ev owners and the grid operator , but also the difficulty of distributing those gains to ev owners to incentivize them to cooperate with the grid operator . from the literature review, we witness that management of ev charging processes in smart grids has attracted researchers from diverse domains , and we envision more effort will be devoted to this topic .several research perspectives are promising from our point of view .firstly , we consider the trend is pointing at microgrids , which are systems with multiple distributed generators and consumers that can switch between island mode and connected mode : the presence of evs is likely to increase the autonomy of such systems .another research perspective regards the charging management of fleets of evs , from a fleet owner perspective .for example , with the technology of driverless cars getting matured , driverless taxi fleet may emerge , offering new possibilities for charging ( and service providing ) management . electric vehiclesis an extremely fast - developing field .technology innovations can reform charging management schemes , for example the roadbed infrastructure would enable charging in motion , which would greatly reduce the reliance on battery capacity and change the understanding ( and modeling ) of plug - in " time .economic models for such scenarios are still to be defined .p. balram , t. le anh , and l. bertling tjernberg , `` effects of plug - in electric vehicle charge scheduling on the day - ahead electricity market price , '' in _ ieee pes international conference and exhibition on innovative smart grid technologies ( isgt europe ) _ , oct .2012 , pp . 18 .r. c. green , l. wang , and m. alam , `` the impact of plug - in hybrid electric vehicles on distribution networks : a review and outlook , '' _ renewable and sustainable energy reviews _ , vol .15 , no . 1 ,544553 , 2011 .s. shafiee , m. fotuhi - firuzabad , and m. rastegar , `` investigating the impacts of plug - in hybrid electric vehicles on power distribution systems , '' _ ieee trans. smart grid _ , vol . 4 , no . 3 , pp .13511360 , 2013 .l. dow , m. marshall , l. xu , j. aguero , and h. willis , `` a novel approach for evaluating the impact of electric vehicles on the power distribution system , '' in _ ieee power and energy society general meeting _, july 2010 , pp . 16 .z. 
fan , p. kulkarni , s. gormus , c. efthymiou , g. kalogridis , m. sooriyabandara , z. zhu , s. lambotharan , and w. h. chin , `` smart grid communications : overview of research challenges , solutions , and standardization activities , '' _ commun .surveys tuts ._ , vol . 15 , no . 1 , pp2138 , first quarter 2013 .y. yan , y. qian , h. sharif , and d. tipper , `` a survey on smart grid communication infrastructures : motivations , requirements and challenges , '' _ commun .surveys tuts ._ , vol .15 , no . 1 , pp .520 , first quarter 2013 .m. yilmaz and p. krein , `` review of integrated charging methods for plug - in electric and hybrid vehicles , '' in _ ieee international conference on vehicular electronics and safety ( icves ) _ , 2012 , pp .346351 . , `` review of battery charger topologies , charging power levels , and infrastructure for plug - in electric and hybrid vehicles , '' _ ieee trans . power electron ._ , vol . 28 , no . 5 , pp .21512169 , may 2013 .h. wu , a. gilchrist , k. sealy , p. israelsen , and j. muhs , `` a review on inductive charging for electric vehicles , '' in _ ieee international electric machines and drives conference ( iemdc ) _ , 2011 , pp .143147 .a. khaligh and s. dusmez , `` comprehensive topological analysis of conductive and inductive charging solutions for plug - in electric vehicles , '' _ ieee trans . veh ._ , vol .61 , no . 8 , pp . 34753489 , oct .j. shin , b. song , s. lee , s. shin , y. kim , g. jung , and s. jeon , `` contactless power transfer systems for on - line electric vehicle ( olev ) , '' in _ ieee international electric vehicle conference ( ievc ) _ , mar .2012 , pp . 14 .s. mohrehkesh and t. nadeem , `` toward a wireless charging for battery electric vehicles at traffic intersections , '' in _14th international ieee conference on intelligent transportation systems ( itsc ) _ , oct .2011 , pp . 113118 .s. lee , j. huh , c. park , n .- s .choi , g .- h .cho , and c .- t .rim , `` on - line electric vehicle using inductive power transfer system , '' in _ ieee energy conversion congress and exposition ( ecce ) _ , sept .2010 , pp . 15981601 .l. gkatzikis , i. koutsopoulos , and t. salonidis , `` the role of aggregators in smart grid demand response markets , '' _ ieee journal on selected areas in communications _ , vol .31 , no . 7 , pp . 12471257 , 2013 .p. samadi , a .-mohsenian - rad , r. schober , v. wong , and j. jatskevich , `` optimal real - time pricing algorithm based on utility maximization for smart grid , '' in _ ieee international conference on smart grid communications ( smartgridcomm ) _ , 2010 , pp .415420 .o. ardakanian , s. keshav , and c. rosenberg , `` real - time distributed control for smart electric vehicle chargers : from a static to a dynamic study , '' _ ieee trans .smart grid _, vol . 5 , no . 5 , pp . 22952305 , sept .h. qin and w. zhang , `` charging scheduling with minimal waiting in a network of electric vehicles and charging stations , '' in _ proc . of the 8th acm international workshop on vehicular inter - networking _ , 2011 , pp .q. guo , s. xin , h. sun , z. li , and b. zhang , `` rapid - charging navigation of electric vehicles based on real - time power systems and traffic data , '' _ ieee trans . smart grid _ ,vol . 5 , no . 4 , pp .19691979 , july 2014 .j. hu , s. you , m. lind , and j. ostergaard , `` coordinated charging of electric vehicles for congestion prevention in the distribution grid , '' _ ieee trans . smart grid _, vol . 5 , no . 2 ,703711 , mar .2014 .j. franco , m. rider , and r. 
romero , `` a mixed - integer linear programming model for the electric vehicle charging coordination problem in unbalanced electrical distribution systems , '' _ ieee trans .smart grid _ , vol . 6 , no . 5 , pp . 22002210 , sept .o. beaude , s. lasaulce , and m. hennebel , `` charging games in networks of electrical vehicles , '' in _6th international conference on network games , control and optimization ( netgcoop ) _ , 2012 ,. 96103 .mohsenian - rad , v. wong , j. jatskevich , r. schober , and a. leon - garcia , `` autonomous demand - side management based on game - theoretic energy consumption scheduling for the future smart grid , '' _ ieee trans .smart grid _ , vol . 1 , no . 3 , pp .320331 , 2010 .w. leterme , f. ruelens , b. claessens , and r. belmans , `` a flexible stochastic optimization method for wind power balancing with phevs , '' _ ieee trans . smart grid _ , vol . 5 , no . 3 , pp .12381245 , may 2014 .m. vasirani , r. kota , r. cavalcante , s. ossowski , and n. jennings , `` an agent - based approach to virtual power plants of wind power generators and electric vehicles , '' _ ieee trans . smart grid _ ,vol . 4 , no . 3 , pp .13141322 , sept .2013 .g. binetti , a. davoudi , d. naso , b. turchiano , and f. lewis , `` scalable real - time electric vehicles charging with discrete charging rates , '' _ ieee trans . smart grid _ ,vol . 6 , no . 5 , pp . 22112220 , sept .2015 . c. hutson ,g. venayagamoorthy , and k. corzine , `` intelligent scheduling of hybrid and electric vehicle storage capacity in a parking lot for profit maximization in grid power transactions , '' in _ ieee energy 2030 conference _ , 2008 , pp .18 .h. liang , b. j. choi , w. zhuang , and x. shen , `` optimizing the energy delivery via v2 g systems based on stochastic inventory theory , '' _ ieee trans .smart grid _ , vol . 4 , no . 4 , pp . 22302243 , dec .2013 . c. quinn , d. zimmerle , and t. h. bradley , `` the effect of communication architecture on the availability , reliability , and economics of plug - in hybrid electric vehicle - to - grid ancillary services , '' _j. power sources _ , vol .195 , no . 5 , pp . 15001509 , 2010 .s. kamboj , k. decker , k. trnka , n. pearre , c. kern , and w. kempton , `` exploring the formation of electric vehicle coalitions for vehicle - to - grid power regulation , '' in _aamas workshop on agent technologies for energy systems ( ates ) _ , 2010 , pp .s. kamboj , w. kempton , and k. s. decker , `` deploying power grid - integrated electric vehicles as a multi - agent system , '' in _ proc .of the 10th international conference on autonomous agents and multiagent systems ( aamas ) _ , 2011 , pp. 1320 . j. escudero - garzas , a. garcia - armada , and g. seco - granados , `` fair design of plug - in electric vehicles aggregator for v2 g regulation , '' _ ieee trans ._ , vol .61 , no . 8 , pp . 34063419 , 2012 .e. h. gerding , v. robu , s. stein , d. c. parkes , a. rogers , and n. r. jennings , `` online mechanism design for electric vehicle charging , '' in _ proc . of the 10th international conference on autonomous agents and multiagent systems ( aamas )_ , 2011 , pp .811818 .e. h. gerding , s. stein , v. robu , d. zhao , and n. r. jennings , `` two - sided online markets for electric vehicle charging , '' in _ proc . of the 12th international conference on autonomous agents and multiagent systems ( aamas )_ , 2013 , pp . 989996. j. escudero - garzas and g. 
seco - granados , `` charging station selection optimization for plug - in electric vehicles : an oligopolistic game - theoretic framework , '' in _ ieee pes innovative smart grid technologies ( isgt ) _ , 2012 , pp .18 . f. p.kelly , a. k. maulloo , and d. k. h. tan , `` rate control in communication networks : shadow prices , proportional fairness and stability , '' _ journal of the operational research society _ ,49 , pp . 237252 , 1998 . h. akhavan - hejazi , h. mohsenian - rad , and a. nejat , `` developing a test data set for electric vehicle applications in smart grid research , '' in _ieee 80th vehicular technology conference ( vtc fall ) _ , sept .2014 , pp . 16 .j. gonder , t. markel , m. thornton , and a. simpson , `` using global positioning system travel data to assess real - world energy use of plug - in hybrid electric vehicles , '' _ transportation research record : journal of the transportation research board _ , vol .2017 , pp . 2632 , 2007 .[ online ] .available : http://dx.doi.org/10.3141/2017-04 l. s. shapley , `` a value for -person games , '' in _ contributions to the theory of games , volume ii , annals of mathematical studies _, h. kuhn and a. tucker , eds.1em plus 0.5em minus 0.4emprinceton university press , 1953 , pp .307317 .andersson , a. elofsson , m. galus , l. gransson , s. karlsson , f. johnsson , and g. andersson , `` plug - in hybrid electric vehicles as regulating power providers : case studies of sweden and germany , '' _ energy policy _ , vol .38 , no . 6 , pp .27512762 , 2010 . c. budischak , d. sewell , h. thomson , l. mach , d. e. veron , and w. kempton , `` cost - minimized combinations of wind power , solar power and electrochemical storage , powering the grid up to 99.9% of the time , '' _ j. power sources _ , vol . 225 , pp .6074 , 2013 .m. soshinskaya , w. h. crijns - graus , j. m. guerrero , and j. c. vasquez , `` microgrids : experiences , barriers and success factors , '' _ renewable and sustainable energy reviews _ , vol .40 , pp . 659672 , 2014 .wenjing shuai received her b.s .degree from northwestern polytechnical university , china , in 2008 and the m.s .degree from xidian university , china , in 2011 , both in telecommunication .she is currently a ph.d .candidate in telecom bretagne , france .her research interests include electric vehicle charging management and electricity pricing in smart grid .patrick maill graduated from ecole polytechnique and telecom paristech , france , in 2000 and 2002 , respectively .he has been an assistant professor at the networks , security , multimedia department of telecom bretagne since 2002 , where he obtained his ph.d .in applied mathematics in 2005 , followed by a 6-month visit to columbia university in 2006 .his research interests are on game theory and economic concepts applied to telecommunication ecosystems : resource pricing , routing , consequences of user selfishness on network performance .alexander pelov is an associate professor of computer networks in the `` networking , multimedia and security '' department a telecom bretagne , france .his research focuses on networking protocols for machine - to - machine communications , energy efficiency in wireless networks , and protocols and algorithms for smart grid applications , most notably related to smart meters , sub - metering and electrical vehicles .he received his m.sc .( 2005 ) from the university of provence , france and ph.d .( 2009 ) from the university of strasbourg , france , both in computer science .
electric vehicles ( evs ) , as their penetration increases , are not only challenging the sustainability of the power grid , but also stimulating and promoting its upgrading . indeed , evs can actively reinforce the development of the smart grid if their charging processes are properly coordinated through two - way communications , possibly benefiting all types of actors . because grid systems involve a large number of actors with nonaligned objectives , we focus on the economic and incentive aspects , where each actor behaves in its own interest . we indeed believe that the market structure will directly impact the actors behaviors , and as a result the total benefits that the presence of evs can earn the society , hence the need for a careful design . this survey provides an overview of economic models considering unidirectional energy flows , but also bidirectional energy flows , i.e. , with evs temporarily providing energy to the grid . we describe and compare the main approaches , summarize the requirements on the supporting communication systems , and propose a classification to highlight the most important results and lacks .
the atmospheric optical turbulence is an effect that acts on the propagation of light waves through the atmosphere .its origin is on random variations of the refractive index associated to temperature fluctuations .atmospheric optical turbulence drastically affects to astronomical observations , limiting the capabilities of ground - based telescopes .the refractive index structure parameter , c , constitutes a quantitative measure of the atmospheric optical turbulence strength [e.g . ] which depends on the position .c as a function of the altitude is commonly referred to as optical turbulence profile , being a relevant variable in the definition of adaptive optics systems in astronomy .the scintillation detection and ranging ( scidar ) technique is an efficient technique to measure the optical turbulence profiles in astronomical observatories .scidar is based on the analysis of the intensity distribution ( scintillation patterns ) at the pupil plane ( classical - scidar ) or at a virtual plane ( generalized - scidar ) of a telescope when observing a double - star .atmospheric turbulence profiles are derived through the inversion of the normalized autocovariance of a large number of scintillation patterns .the scidar technique , in its classical or generalized versions , has been extensively explained in previous papers .the classical - scidar is not sensitive to turbulence at the observatory level while generalized - scidar is able to measure this turbulence by locating the analysis plane in a virtual position a few kilometers below the pupil plane .generalized scidar has been extensively used in astronomical sites to study the atmospheric optical turbulence for applications in the development of adaptive optics systems and site characterization . from the early experimental implementations of the technique, it was noted that generalized - scidar data processing leads to an overestimation of the optical turbulence strength that was assumed as negligible at any altitude . demonstrated that this overestimation might be negligible or relevant strongly depending on the selected observational parameters namely telescope diameter , double - star angular separation and analysis plane conjugation altitude . an analytical expression to calculate the actual errors induced during generalized - scidar data processing as well as a procedure for the correct recalibration of c profiles derived from generalized - scidar observations were also provided in .this procedure has been already applied to re - calibrate atmospheric optical turbulence profiles retrieved in mt graham , el teide , and san pedro mrtir observatories . the atmospheric optical turbulence monitoring programme at the roque de los muchachos observatory ( orm hereafter ) started in 2004 .the classical data processing [see e.g. ] was performed to retrieved the c profiles from the generalized - scidar observations assuming negligible errors induced in the data treatment .results derived from the c profiles in the orm database were already published before the work . 
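to make explicit how a measured c profile translates into the quantities relevant for adaptive optics , the following python sketch evaluates the fried parameter , the seeing and the isoplanatic angle using the standard expressions ; the two - layer profile and the observing wavelength are illustrative assumptions , not orm measurements , and the function names are ours .

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept explicit to avoid depending on a numpy version)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def r0_from_cn2(h_m, cn2, wavelength=500e-9, zenith_deg=0.0):
    """Fried parameter r0 = [0.423 k^2 sec(z) * integral Cn^2(h) dh]^(-3/5)."""
    k = 2.0 * np.pi / wavelength
    sec_z = 1.0 / np.cos(np.radians(zenith_deg))
    return (0.423 * k**2 * sec_z * _trapz(cn2, h_m)) ** (-3.0 / 5.0)

def seeing_arcsec(r0, wavelength=500e-9):
    """Seeing FWHM ~ 0.98 * lambda / r0, converted from radians to arcseconds."""
    return np.degrees(0.98 * wavelength / r0) * 3600.0

def isoplanatic_angle_arcsec(h_m, cn2, wavelength=500e-9, zenith_deg=0.0):
    """theta_0 = [2.914 k^2 (sec z)^(8/3) * integral Cn^2(h) h^(5/3) dh]^(-3/5)."""
    k = 2.0 * np.pi / wavelength
    sec_z = 1.0 / np.cos(np.radians(zenith_deg))
    theta0 = (2.914 * k**2 * sec_z ** (8.0 / 3.0)
              * _trapz(cn2 * h_m ** (5.0 / 3.0), h_m)) ** (-3.0 / 5.0)
    return np.degrees(theta0) * 3600.0

# toy two-layer profile: a strong ground layer plus a weak layer near 10 km (assumed values)
h = np.linspace(0.0, 20e3, 401)                 # altitude above the site [m]
cn2 = 1e-15 * np.exp(-h / 300.0) + 5e-18 * np.exp(-0.5 * ((h - 10e3) / 1e3) ** 2)
r0 = r0_from_cn2(h, cn2)
print("r0 = %.2f m  seeing = %.2f arcsec  theta0 = %.2f arcsec"
      % (r0, seeing_arcsec(r0), isoplanatic_angle_arcsec(h, cn2)))
```

because the isoplanatic angle weights the profile by h^(5/3) , it is dominated by the high - altitude turbulence , whereas the seeing is dominated by the boundary layer ; this is why an overestimation that is not uniform in altitude , as discussed below , affects the two parameters differently .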
in this paper , we present the database of atmospheric optical turbulence profiles recorded at the orm through generalized - scidar observations , the largest database of c(h ) for an astronomical site published so far . we also perform the recalibration of the c profiles in the orm database to compensate for the errors introduced during data processing , and we analyze the implications for the statistical results derived from this database before and after the recalibration . section 2 presents the database of c profiles derived from generalized - scidar measurements at the orm ; we also calculate the impact of the data - processing error on the retrieved c profiles and re - calibrate the full database following the procedure proposed in . section 3 analyzes the implications of the c database recalibration on results derived from the profiles . conclusions are summarized in section 4 . the orm is located at an altitude of meters above sea level ( _ asl _ hereafter ) , at latitude n and longitude w on the island of la palma ( canary islands , spain ) . this astronomical site was one of the final candidates to host the european extremely large telescope ( 42m - eelt ) . a monitoring program of the atmospheric turbulence structure at the orm began in 2004 using the generalized - scidar technique . the 1-m jacobus kapteyn telescope was used in combination with the cute - scidar instrument developed at the instituto de astrofísica de canarias ( tenerife , canary islands , spain ) . each detector pixel of the cute - scidar instrument covers a square 1.935 cm in size on the 1-m jacobus kapteyn telescope pupil . the generalized - scidar data were processed using the traditional procedure [ see e.g. ] of deriving the normalized autocovariance from a series of scintillation patterns ( 1000 images in the orm case ) . the autocovariance peaks allow the determination of c(h ) through a numerical inversion . the systematic c(h ) campaigns at orm were carried out from february 2004 to october 2006 and from january 2008 to august 2009 , with a frequency of about 4 - 6 nights per month . useful generalized - scidar observations were obtained on 211 nights during these campaigns , and 197035 individual c profiles constitute the database of turbulence profiles at orm ( see table [ tab1 ] ) , the largest c database published until now . the dome and mirror turbulence contribution was removed from all the profiles using the procedure in . [ table stat_high_low : statistics of the c(h ) profiles obtained in high- and low - vertical resolution modes ] figure [ median_profiles ] shows the average turbulence profile derived from the combination of c(h ) in high- ( fig . [ median_profiles]a ) and low - resolution ( fig . [ median_profiles]b ) modes , before ( dashed line ) and after ( solid line ) the recalibration of the profiles . as already shown in previous recalibrations , the error induced during data processing produces an overestimation of the turbulence strength , while following the actual turbulence structure . generalized - scidar observations in high - resolution mode were obtained when the normalized autocovariance showed no evidence of high - altitude turbulence layers , but stratified turbulence at low level ( the selection of this mode was subject to observer criteria and experience ) . moreover , the use of high - resolution mode presents a seasonal bias according to fig . [ relfre ] .
for these reasons , the median high - resolution turbulence profile ( fig .[ median_profiles]a ) derived for orm could not be representative of a statistical turbulence profile at this site .[ median_profiles]a shows the median recalibrated profile where % of the detected turbulence is concentrated at the boundary layer ( b - l ) , being 96.6% of this turbulence located in the first 500 meters above the observatory level .a clear turbulence layer is resolved at about 5.2 km _ asl _ , that constitute about 12% of the turbulence in the median high - resolution c profile ( integrating c(h ) from 3.4 to 7 km ) .the turbulence upwards 7 km _ asl _ represents only 11% of the total turbulence measured in the derived high - resolution c profile .the median low - resolution c(h ) derived for orm ( fig . [ median_profiles]b ) presents a smoother structure in altitude compared with the high - resolution profile .76% of the turbulence is concentrated in the b - l , while turbulence above 5 km represents only a % of the total turbulence .a turbulence feature appears at .2 km _ asl _ with a strength at the peak of about 6 m .any other turbulence layer is not clear in this median low - resolution turbulence profile , being the background turbulence at any altitude above 10 km always bellow 3.5 m .there is a 2 km difference in altitude for a mid - altitude ( 3 km h 10 km ) turbulence layer in the median high- and low - vertical resolution profiles .this difference is well - explained taking into account the seasonal variation of the turbulence structure already reported for the canary islands observatories .most of the high - vertical resolution profiles ( 53 % ) were recorded between june and august ( see fig . [relfre]a ) : turbulence structure in these months is characterized by a relatively strong turbulence layer at - 6 km _asl _ that is stable from year to year .this turbulence layer evolves in altitude and strength along the year .the relative frequency of turbulence profiles recorded in low - vertical resolution mode ( fig .[ relfre]b ) has not a clear peak at any month , better smoothing the seasonal evolution of the turbulence structure above orm .the optical atmospheric turbulence structure has been monitoring since 2004 at the roque de los muchachos observatory ( la palma , canary islands , spain ) .useful generalized scidar measurements were obtained during 211 nights .the total number of individual c(h ) profiles recorded at this site is 197035 .the error induced during generalized scidar data processing has been calculated , being more significant when using high - vertical resolution mode ( 500 meters ) .following the procedure proposed by , we have re - calibrated the full database of turbulence profiles recorded at orm , showing the effects of theses errors in the calculation of statistical atmospheric parameters relevant for adaptive optics . combining the corrected turbulence profiles ,we have obtained the statistical high- ( 500 meters ) and low - vertical ( 500 meters ) resolution turbulence profiles to have a view of the turbulence structure at orm .the main conclusions that we have derived from this work can be summarized as follows . *the generalized scidar data processing leads to an overstimation of the optical turbulence strength that it is not negligible for most of the observational configurations used at roque de los muchachos observatory . 
*the error introduced during the processing of the generalized scidar data can drastically affect the statistical values derived for atmospheric parameters relevant for adaptive optics ( namely total seeing , boundary - layer , free - atmosphere contributions and isoplanatic angle ) , being as large as 50 per cent for high - altitude layers in some generalized scidar configurations used at orm . *both the high- and low - vertical resolution profiles obtained for orm show that most of the optical turbulence is concentrated in the first 5 km .the most intense turbulence layer is at the observatory level .a lower strength turbulence layer is detected in mid - altitude levels ( 4 h 8 km ) .the c(h ) set recorded at the roque de los muchachos observatory constitutes the largest database of optical atmospheric turbulence profiles so far .this paper is based on observations obtained at the jacobus kapteyn telescope operated by the isaac newton group at the observatorio de roque de los muchachos of the instituto de astrofsica de canarias .the authors thank all the staff at the observatory for their kind support .thanks are also due to all the observers that have recorded generalized scidar data at this site ( j. castro - almazn , s. chueca , j.m .delgado , e. sanroma , c. hoegemann , m.a.c .rodrguez - hernndez , and h. vzquez - rami ) .we also thank a. eff - darwich for help and useful discussions .we are grateful to the referee , remy avila , whose comments helped to improve this paper .this work was partially funded by the instituto de astrofsica de canarias and by the spanish ministerio de educacin y ciencia ( aya2006 - 13682 and aya2009 - 12903 ) . b. garca - lorenzo thanks the support from the ramn y cajal program by the spanish ministerio de ciencia e innovacin .castro - almazn , j. a. , garca - lorenzo , b. , & fuensalida , j. j. 2009 , , optical turbulance : astronomy meets meteorology , proceedings of the astronomy meets meteorology , proceedings of the optical turbulence characterization for astronomical applications sardinia , italy , 15 - 18september 2008 , edited by elena masciadri ( instituto nazionale di astrofisica , italy ) & marc sarazin ( european southern observatory , germany ) , pp . 350 - 357 ( 2010 ) garca - lorenzo , b. , fuensalida , j. j. , castro - almazn , j. , & rodriguez - hernndez , m. a. c. 2009 , optical turbulance : astronomy meets meteorology , proceedings of the astronomy meets meteorology , proceedings of the optical turbulence characterization for astronomical applications sardinia , italy , 15 - 18 september 2008 , edited by elena masciadri ( instituto nazionale di astrofisica , italy ) & marc sarazin ( european southern observatory , germany ) , pp . 66 - 73 ( 2010 ) . , 66
we present the largest database so far of atmospheric optical-turbulence profiles (197035 individual c profiles) for an astronomical site, the roque de los muchachos observatory (la palma, spain). this c(h) database was obtained through generalized-scidar observations at the 1-m jacobus kapteyn telescope from february 2004 to august 2009, obtaining useful data for 211 nights. the overestimation of the turbulence strength induced during generalized-scidar data processing has been analysed for the different observational configurations. all the individual c(h) profiles have been re-calibrated, following the proposed procedure, to compensate for the errors introduced during data treatment. comparing results from profiles before and after the recalibration, we analyse the impact of the recalibration on the calculation of parameters relevant for adaptive optics. atmospheric effects - instrumentation: adaptive optics - site testing
covariance matrix estimation is of fundamental importance in multivariate analysis. driven by a wide range of applications in science and engineering, the high-dimensional setting, where the dimension can be much larger than the sample size, is of particular current interest. in such a setting, conventional methods and results based on a fixed dimension and a large sample size are no longer applicable, and in particular, the commonly used sample covariance matrix and normal maximum likelihood estimate perform poorly. a number of regularization methods, including banding, tapering, thresholding and $\ell_1$ minimization, have been developed in recent years for estimating a large covariance matrix or its inverse; see, for example, bickel and levina, friedman, hastie and tibshirani, and many others. let $X_1,\ldots,X_n$ be independent copies of a $p$-dimensional gaussian random vector $X$. the goal is to estimate the covariance matrix $\Sigma$ and its inverse based on the sample. it is now well known that the usual sample covariance matrix $\Sigma^* = \frac{1}{n}\sum_{l=1}^{n}(X_l-\bar X)(X_l-\bar X)^{\mathrm T}$, where $\bar X$ is the sample mean, is not a consistent estimator of the covariance matrix when the dimension is large relative to the sample size, and structural assumptions are required in order to estimate $\Sigma$ consistently. one of the most commonly considered classes of covariance matrices is the class of ``bandable'' matrices, whose entries decay as they move away from the diagonal. more specifically, consider the following class of covariance matrices introduced in:
\[
\mathcal{F}_{\alpha} = \biggl\{ \Sigma : \max_{j}\sum_{|i-j|>k}|\sigma_{ij}| \le M k^{-\alpha} \ \mbox{for all } k, \ \mbox{and } 0 < M_0^{-1} \le \lambda_{\min}(\Sigma),\ \lambda_{\max}(\Sigma)\le M_0 \biggr\}.
\]
such a family of covariance matrices naturally arises in a number of settings, including temporal or spatial data analysis; see for further discussions. several regularization methods have been introduced for estimating a bandable covariance matrix. one proposal is to band the sample covariance matrix, estimating $\Sigma$ by the schur product $\Sigma^*\circ B_k$, where $B_k$ is the 0/1 banding matrix with $(B_k)_{ij}=\mathbf{1}\{|i-j|\le k\}$ and the schur product of two matrices of the same dimensions is taken entrywise. see figure [ figbandtaper ](a) for an illustration. with a suitable choice of the bandwidth $k$, the resulting banding estimator was shown to attain a certain rate of convergence in the spectral norm uniformly over this class. this result indicates that even when the dimension exceeds the sample size, it is still possible to consistently estimate $\Sigma$, so long as the dimension does not grow too quickly relative to the sample size.
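as a concrete illustration of the banding estimator discussed above , the sketch below forms the sample covariance matrix and applies the schur ( entrywise ) product with a 0/1 banding matrix . the bandwidth k is left as a tuning parameter ; its optimal choice depends on the unknown decay rate , which is exactly the adaptivity issue taken up later in the text .

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance matrix of an n x p data matrix X."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / X.shape[0]

def banding_estimator(S, k):
    """Band a covariance estimate: keep entries with |i - j| <= k, zero the rest."""
    p = S.shape[0]
    dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    banding_matrix = (dist <= k).astype(float)
    return S * banding_matrix          # Schur (entrywise) product
```

the tapering estimator discussed next replaces the 0/1 banding matrix by a weight matrix whose entries decrease gradually to zero away from the diagonal , but is otherwise applied in the same way .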
established the minimax rate of convergence for estimation over and introduced a tapering estimator where the tapering matrix is given by with .see figure [ figbandtaper](b ) for an illustration .it was shown that the tapering estimator with achieves the rate of convergence uniformly over , which is always faster than the rate in ( [ blrate ] ) .this implies that the rate of convergence given in ( [ blrate ] ) for the banding estimator with is in fact sub - optimal .furthermore , a lower bound argument was given in which showed that the rate of convergence given in ( [ optrate ] ) is indeed optimal for estimating the covariance matrices over .[ cols="^,^ " , ] the rationale behind our block thresholding approach is that although the sample covariance matrix is not a reliable estimator of , its submatrix , , could still be a good estimate of .this observation is formalized in the following theorem .[ thunion ] there exists an absolute constant such that for all , in particular , we can take .theorem [ thunion ] enables one to bound the estimation error block by block .note that larger blocks are necessarily far away from the diagonal by construction .for bandable matrices , this means that larger blocks are necessarily small in the spectral norm . from theorem [ thunion ] , if , with overwhelming probability , for blocks with sufficiently large sizes .as we shall show in section [ secproof ] , and in the above inequality can be replaced by their respective sample counterparts .this observation suggests that larger blocks are shrunken to zero with our proposed block thresholding procedure , which is essential in establishing ( [ eqlarge ] ) .the treatment of smaller blocks is more complicated . in light of theorem [ thunion ] ,blocks of smaller sizes can be estimated well , that is , is close to for of smaller sizes . to translate the closeness in such a blockwise fashion into the closeness in terms of the whole covariance matrix, we need a simple yet useful result based on a matrix norm compression transform .we shall now present a so - called norm compression inequality which is particularly useful for analyzing the properties of the block thresholding estimators .we begin by introducing a matrix norm compression transform .let be a symmetric matrix , and let be positive integers such that .the matrix can then be partitioned in a block form as where is a submatrix .we shall call such a partition of the matrix a regular partition and the blocks regular blocks . denote by a norm compression transform following theorem shows that such a norm compression transform does not decrease the matrix norm .[ thcomp ] for any matrix and block sizes such that , together with theorems [ thunion ] and [ thcomp ] provides a very useful tool for bounding .note first that theorem [ thcomp ] only applies to a regular partition , that is , the divisions of the rows and columns are the same .it is clear that corresponds to regular blocks of size with the possible exception of the last row and column which can be of a different size , that is , .hence , theorem [ thcomp ] can be directly applied .however , this is no longer the case when . 
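a small sketch may help fix ideas about the norm compression transform just defined : the matrix is cut into regular blocks and each block is replaced by its spectral norm , giving a much smaller nonnegative matrix whose spectral norm , by theorem [ thcomp ] , dominates that of the original matrix . the function below only illustrates the definition and is not part of the estimator itself .

```python
import numpy as np

def norm_compression(A, sizes):
    """Norm compression transform of a symmetric matrix A.

    sizes : block sizes (p_1, ..., p_m) with sum(sizes) == A.shape[0].
    Returns the m x m matrix whose (i, j) entry is the spectral norm of the
    (i, j) block in the regular partition of A defined by `sizes`.
    """
    edges = np.concatenate(([0], np.cumsum(sizes)))
    m = len(sizes)
    N = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            block = A[edges[i]:edges[i + 1], edges[j]:edges[j + 1]]
            N[i, j] = np.linalg.norm(block, 2)   # largest singular value
    return N
```

in this notation , theorem [ thcomp ] states that the spectral norm of A is bounded above by the spectral norm of norm_compression(A, sizes) for any regular partition , which is how blockwise error bounds are assembled into a bound for the full matrix .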
to take advantage of theorem [ thcomp ] ,a new blocking scheme is needed for .consider the case when .it is clear that does not form a regular blocking .but we can form new blocks with , that is , half the size of the original blocks in .denote by the collection of the new blocks .it is clear that under this new blocking scheme , each block of size consists of four elements from .thus applying theorem [ thcomp ] to the regular blocks yields which can be further bounded by where stands for the matrix norm .observe that each row or column of has at most 12 nonzero entries , and each entry is bounded by because implies .this property suggests that can be controlled in a block - by - block fashion .this can be done using the concentration inequalities given in section [ secconcentration ] .the case when can be treated similarly .let and for it is not hard to see that each block in of size occupies up to four blocks in this regular blocking . and following the same argument as before , we can derive bounds for . the detailed proofs of theorems [ thmain ] and [ thmain1 ] are given in section [ secproof ] .the block thresholding estimator proposed in section [ secmeth ] is easy to implement . in this sectionwe turn to the numerical performance of the estimator .the simulation study further illustrates the merits of the proposed block thresholding estimator .the performance is relatively insensitive to the choice of , and we shall focus on throughout this section for brevity .we consider two different sets of covariance matrices .the setting of our first set of numerical experiments is similar to those from .specifically , the true covariance matrix is of the form where the value of is set to be to ensure positive definiteness of all covariance matrices , and are independently sampled from a uniform distribution between and .the second settings are slightly more complicated , and the covariance matrix is randomly generated as follows .we first simulate a symmetric matrix whose diagonal entries are zero and off - diagonal entries ( ) are independently generated as .let be its smallest eigenvalue .the covariance matrix is then set to be to ensure its positive definiteness .for each setting , four different combinations of and are considered , and , and for each combination , 200 simulated datasets are generated . on each simulated dataset , we apply the proposed block thresholding procedure with . for comparison purposes , we also use the banding estimator of and tapering estimator of on the simulated datasets . for both estimators, a tuning parameter needs to be chosen .the two estimators perform similarly for the similar values of . for brevity , we report only the results for the tapering estimator because it is known to be rate optimal if is appropriately selected based on the true parameter space .it is clear that for both our settings , with .but such knowledge would be absent in practice . to demonstrate the importance of knowing the true parameter space for these estimators and consequently the necessity of an adaptive estimator such as the one proposed here , we apply the estimators with five different values of , and .we chose for the tapering estimator following .the performance of these estimators is summarized in figures [ figsimres ] and [ fignewsimres ] for the two settings , respectively . 
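before turning to the figures , the mechanics of blockwise thresholding can be sketched as follows . this is a deliberately simplified variant - it uses a single block size and a user - supplied threshold , whereas the estimator analysed above uses blocks whose size grows with the distance from the diagonal together with a data - driven threshold - but it shows the basic operation of zeroing whole off - diagonal blocks whose spectral norm is small .

```python
import numpy as np

def block_threshold(S, block, lam):
    """Simplified block thresholding of a covariance matrix estimate.

    Partitions S into regular `block` x `block` blocks (the last row/column of
    blocks may be smaller) and zeroes every off-diagonal block whose spectral
    norm falls below the threshold `lam`; diagonal blocks are always kept.
    """
    p = S.shape[0]
    out = S.copy()
    edges = list(range(0, p, block)) + [p]
    for a0, a1 in zip(edges[:-1], edges[1:]):
        for b0, b1 in zip(edges[:-1], edges[1:]):
            if a0 == b0:
                continue                          # keep diagonal blocks
            if np.linalg.norm(S[a0:a1, b0:b1], 2) < lam:
                out[a0:a1, b0:b1] = 0.0
    return out
```

because a block and its transpose have the same spectral norm , the output stays symmetric whenever the input is symmetric .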
and dimension .in each panel , boxplots of the estimation errors , measured in terms of the spectral norm , are given for the block thresholding estimator with and the tapering estimator with , and . ] and dimension . in each panel , boxplots of the estimation errors , measured in terms of the spectral norm , are given for the block thresholding estimator with and the tapering estimator with , and . ]it can be seen in both settings that the numerical performance of the tapering estimators critically depends on the specification of the decay rate .mis - specifying could lead to rather poor performance by the tapering estimators .it is perhaps not surprising to observe that the tapering estimator with performed the best among all estimators since it correctly specifies the true decay rate and therefore , in a certain sense , made use of the information that may not be known a priori in practice .in contrast , the proposed block thresholding estimator yields competitive performance while not using such information .in this paper we introduced a fully data - driven covariance matrix estimator by blockwise thresholding of the sample covariance matrix .the estimator simultaneously attains the optimal rate of convergence for estimating bandable covariance matrices over the full range of the parameter spaces for all .the estimator also performs well numerically .as noted in section [ thresholdingsec ] , the choice of the thresholding constant is based on our theoretical and numerical studies .similar to wavelet thresholding in nonparametric function estimation , in principle other choices of can also be used .for example , the adaptivity results on the block thresholding estimator holds as long as where the value comes from the concentration inequality given in theorem [ thunion ] .our experience suggests the performance of the block thresholding estimator is relatively insensitive to a small change of .however , numerically the estimator can sometimes be further improved by using data - dependent choices of . throughout the paper ,we have focused on the gaussian case for ease of exposition and to allow for the most clear description of the block thresholding estimator .the method and the results can also be extended to more general subgaussian distributions .suppose that the distribution of the s is subgaussian in the sense that there exists a constant such that let denote the collection of distributions satisfying both ( [ paraspace ] ) and ( [ subgau ] ) . then for any given , the block thresholding estimator adaptively attains the optimal rate of convergence over for all , and whenever is chosen sufficiently large . in this paperwe have focused on estimation under the spectral norm . the block thresholding procedure ,however , can be naturally extended to achieve adaption under other matrix norms .consider , for example , the frobenius norm . 
in this case , it is natural and also necessary to threshold the blocks based on their respective frobenius norms instead of the spectral norms .then following a similar argument as before , it can be shown that this frobenius norm based block thresholding estimator can adaptively achieve the minimax rate of convergence over every for all .it should also be noted that adaptive estimation under the frobenius norm is a much easier problem because the squared frobenius norm is entrywise decomposable , and the matrix can then be estimated well row by row or column by column .for example , applying a suitable block thresholding procedure for sequence estimation to the sample covariance matrix , row - by - row would also lead to an adaptive covariance matrix estimator .the block thresholding approach can also be used for estimating sparse covariance matrices . a major difference in this case from that of estimating bandable covariance matricesis that the block sizes can not be too large . with suitable choices of the block size and thresholding level, a fully data - driven block thresholding estimator can be shown to be rate - optimal for estimating sparse covariance matrices .we shall report the details of these results elsewhere .in this section we shall first prove theorems [ thunion ] and [ thcomp ] and then prove the main results , theorems [ thmain ] and [ thmain1 ] . the proofs of some additional technical lemmas are given at the end of the section .let and be independent copies of .let be its sample covariance matrix .it is clear that .hence note that therefore , observe that similarly , the proof is now complete . without loss of generality , assume that .let be a matrix , and where is the unit sphere in the dimensional euclidean space .observe that where as before , we use to represent the spectral norm for a matrix and norm for a vector .as shown by brczky and wintsche [ ( ) , e.g. , corollary 1.2 ] , there exists an -cover set of such that for some absolute constant .note that in other words , now consider .let and .then where similarly , .therefore , clearly the distributional properties of are invariant to the mean of . we shall therefore assume without loss of generality that in the rest of the proof .for any fixed , we have where , , and similarly , , .it is not hard to see that and is simply the difference between the sample and population covariance of and .we now appeal to the following lemma : applying lemma [ lecorr ] , we obtain where . by the tail bound for random variables , we have see , for example , lemma 1 of laurent and massart ( ) . in summary , denote by the left and right singular vectors corresponding to the leading singular value of , that is , .let and be partitioned in the same fashion as , for example , .denote by and .it is clear that .therefore , with the technical tools provided by theorems [ thunion ] and [ thcomp ] , we now show that is an adaptive estimator of as claimed by theorem [ thmain ] .we begin by establishing formal error bounds on the blocks using the technical tools introduced earlier .first treat the larger blocks . 
when , large blocks can all be shrunk to zero because they necessarily occur far away from the diagonal and therefore are small in spectral norm .more precisely , we have : together with theorem [ thunion ] , this suggests that with probability at least .therefore , when for a large enough constant , the following lemma indicates that we can further replace and by their respective sample counterparts .in the light of lemma [ letune ] , ( [ eqshrunken ] ) implies that , with probability at least , for any such that and ( [ eqdefbig ] ) holds , whenever is sufficiently large . in other words , with probability at least , for any such that ( [ eqdefbig ] ) holds , .observe that by the definition of , by lemma [ letune ] , the spectral norm of and appeared in the first term on the rightmost - hand side can be replaced by their corresponding population counterparts , leading to where we used the fact that .this can then be readily bounded , thanks to theorem [ thunion]: together with ( [ eqncompsmall ] ) , we get to put the bounds on both small and big blocks together , we need only to choose an appropriate cutoff in ( [ eqsmalllarge ] ) .in particular , we take where stands for the smallest integer that is no less than . if , all blocks are small . from the boundderived for small blocks , for example , equation ( [ eqsmallbd ] ) , we have with probability at least .hereafter we use as a generic constant that does not depend on , or , and its value may change at each appearance .thus it now suffices to show that the second term on the right - hand side is . by the cauchy schwarz inequality , observe that where stands for the frobenius norm of a matrix .thus , when and , by the analysis from section [ seclarge ] , all large blocks will be shrunk to zero with overwhelming probability , that is , when this happens , recall that stands for the matrix norm , that is , the maximum row sum of the absolute values of the entries of a matrix .hence , as a result , it remains to show that by the cauchy schwarz inequality , observe that where the second inequality follows from the fact that or .it is not hard to see that on the other hand , therefore , together with theorem [ thunion ] , we conclude that finally , when is very large in that , we can proceed in the same fashion .following the same argument as before , it can be shown that the smaller blocks can also be treated in a similar fashion as before . from equation ( [ eqsmallbd ] ) , with probability at least .thus , it can be calculated that combining these bounds , we conclude that in summary , for all . in other words , the block thresholding estimator achieves the optimal rate of convergence simultaneously over every for all .observe that where denotes the smallest eigenvalue of a symmetric matrix . under the event that is positive definite and .note also that therefore , by theorem [ thmain ] .on the other hand , note that it suffices to show that now consider the case when observe that for each , it can then be deduced from the norm compression inequality , in a similar spirit as before , that by the triangle inequality , and under the event that we have now by lemma [ letail ] , for some constant , which concludes the proof .note that for any , there exists an integer such that .we proceed by induction on . when , it is clear by construction , blocks of size are at least one block away from the diagonal .see figure [ figblock ] also .this implies that the statement is true for . 
from to , one simply observes that all blocks of size is at least one block away from blocks of size .therefore , which implies the desired statement .we are now in position to prove lemma [ leblocknorm ] which states that big blocks of the covariance matrix are small in spectral norm .recall that the matrix norm is defined as .similarly the matrix norm is defined as immediately from lemma [ lemdist ] , we have which implies . for any , write .then the entries of are independent standard normal random variables . from the concentration bounds on the random matrices [ see , e.g. , ] , we have with probability at least where is the sample covariance matrix of . applying the union bound to all yields that with probability at least , for all observe that thus which implies the desired statement .
estimation of large covariance matrices has drawn considerable recent attention , and the theoretical focus so far has mainly been on developing a minimax theory over a fixed parameter space . in this paper , we consider adaptive covariance matrix estimation where the goal is to construct a single procedure which is minimax rate optimal simultaneously over each parameter space in a large collection . a fully data - driven block thresholding estimator is proposed . the estimator is constructed by carefully dividing the sample covariance matrix into blocks and then simultaneously estimating the entries in a block by thresholding . the estimator is shown to be optimally rate adaptive over a wide range of bandable covariance matrices . a simulation study is carried out and shows that the block thresholding estimator performs well numerically . some of the technical tools developed in this paper can also be of independent interest .
neuroscientists and cognitive scientists have many tools at their disposal to study the brain and neural networks in general , including electroencephalography ( eeg ) , single - photon emission computed tomography ( spect ) , functional magnetic resonance imaging ( fmri ) and microelectrode arrays ( mea ) , to name a few .however , the amount of information and level of control afforded by these tools do not remotely resemble what is available to an engineer working on an artificial neural network .the engineer can manipulate any neuron at any time , force certain excitations , intervene in ongoing processes , and collect as much data about the network as needed , at any level of detail .this wealth of information has enabled reverse engineering research on artificial neural networks , leading to insights into the inner workings of trained artificial neural networks .this suggests an indirect approach to studying the brain : training a biologically plausible neural network model to exhibit complex behavior observed in real brains , and reverse engineering the result . in line with this approach, we designed an artificial visual system based on a fully recurrent unlayered neural network that learns to perform saccadic eye movements .saccadic eye movements are quick , unconscious , task - dependent motions following the demand of attention , that direct the eye to new targets that require the high resolution of the fovea .these targets are usually detected within the peripheral visual system .once a target is centered at the fovea , the eye fixates for a fraction of a second while the visual system extracts the necessary information .most eye movements are proactive rather than reactive , predict actions in advance and do not merely respond to visual stimuli .there is good evidence that much of the active vision in humans results from reinforcement learning ( rl ) , as part of and organism s attempt to maximize its performance while interacting with the environment .accordingly , we train the artificial visual system within the rl paradigm .the network was not explicitly engineered to perform a certain task , and does not contain an explicit memory component rather it has memory only by virtue of its recurrent topology .learning takes place in a model - free setting using policy gradient techniques .we find that the network displays attributes of human learning such as : ( a ) decision making and gradual confidence increase along with accumulated evidence , ( b ) skill transfer , namely the ability to use a pre - learned skill in a certain task in order to improve learning on a related but more difficult task , ( c ) selectively attending information relevant for the task at hand , while ignoring irrelevant objects in the field of view .we designed an artificial visual system ( avs ) with the task of learning an attention model to control saccadic eye movements , and subsequent classification of digits .we refer to this task as the _ attention - classification task_. 
the avs is similar in many ways to that presented in .it is a simplified model of the human visual system , consisting of a small region in the center with high resolution , analogous to the human fovea , and two larger concentric regions which are sub - sampled to lower resolution and are analogous to the peripheral visual system in humans .the avs was trained and tested on the classification of handwritten digits from the mnist data set .only a small part of the image is visible to the avs at any one time .specifically , full resolution is only available at the fovea , which is 9-by-9 pixels , as in , or 5-by-5 pixels ( about 69% smaller ) .the first peripheral region is double the size of the fovea , but sub - sampled with period 2 to match the size of the fovea in pixels .similarly , the second peripheral region is quadruple the size of the fovea but sub - sampled with period 4 . for comparison , a typical digit in the mnistdatabase occupies about 20-by-20 pixels of the image .the location of the observation within the image is not available to the avs ( unlike ) , and movements of the observation location are not constrained to image boundaries .instead , locations outside the image boundaries are observed as black pixels . ]the avs consists of inputs ( observations ) projected upon the network via input weights , a neural network consisting of neurons connected by the recurrent weights , and two outputs : a classifier , responsible for identifying the digit after the avs has explored the image , and the attention model output , responsible for directing the eye to new locations based on the information represented in the network state ( see figure [ fig : artificial - visual - system ] ) .the output consists of one neuron for each possible digit . at the end of the trial ,the identity of the highest valued neuron is interpreted as the network s classification .the progression of a single trial follows these principal stages : 1 .a random digit is selected from the mnist training database .2 . a location across the image is randomly selected .the observation ( called ` glimpse ' ) from the current location is projected upon the network through , along with any pre - existing information within the network state through the recurrent weights .the attention model output is fed back as a saccade of the eye , i.e. as the size of movement from the current location in the horizontal and vertical axes .5 . 
if a predefined number of glimpses has passed ( or by network decision ) , compare the classifier output to the true label , otherwise return to stage 3 .reward the avs if the classification was correct , and continue to the next trial .the avs is implemented by a fully recurrent neural network .its network topology is similar the echo state network ( esn ) in that the recurrent neural connections are drawn randomly and are not constrained to a particular topology such as in layered feedforward networks , or long short - term memory networks .the network state evolves according to where * is the state of network at time step , each element representing the state of a single neuron , * $ ] is the leak rate , * is the internal connections weight matrix , * is the input weight matrix , * is the observation ( network input ) , * and are independent discrete - time gaussian white noise processes with independent components , each having variance respectively , * is the state of the output neurons , * is the output weight matrix ( consisting of blocks for the corresponding output components ) .the gradient of the expected reward is estimated as in , where is a random trajectory of glimpses , is the probability of trajectory , the observed ( usually binary ) reward , a fixed baseline computed as in , is the random location of the first glimpse , and indicates averaging over trajectories . viewed as a partially observable markov decision process ( pomdp ) , we can write the distributions describing the agent : where are the probability density functions of respectively. the pomdp dynamics are deterministic : the glimpse position evolves as . for the avsthe probability density of a trajectory is here only the output probabilities depend on , and , using and the gaussian distribution of the noise , we find that the log likelihood gradient with respect to takes the form where is the number of glimpses .stochastic gradient ascent is performed only for the output weights .recurrent weights are randomly selected , with spectral radius 1 , and remain fixed throughout training . the log likelihood with respect tothe internal weight matrix takes a similar form .however , the recurrent connections were not learned in our simulations .since information accumulated by the neural network over time is mixed into the state of the network , it is not obvious that the potential to extract useful historic information can be exploited within the attention model solution .training uses gradient ascent to the local maxima of the estimated expected reward and therefore may converge to sub - optimal maxima that do not make use of the full potential of the system . in order to test the use of memory by the trained network ,two similar avs were trained on the attention - classification task . in the first avs ,recurrent weights were random , whereas the second avs was set to ` forget ' historic information , by setting the recurrent weights matrix to zero . 
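the sketch below spells out one plausible reading of the state and output update described above , following common echo - state - network conventions : a leaky tanh update of the state driven by the glimpse through the input weights and by the fixed recurrent weights , followed by a linear read - out to which gaussian exploration noise is added . the function name , the default parameter values and the exact placement of the noise terms are illustrative assumptions , not the authors ' implementation .

```python
import numpy as np

def avs_step(x, u, W, W_in, W_out, leak=0.3,
             sigma_state=0.01, sigma_out=0.1, rng=None):
    """One time step of the leaky recurrent network with exploration noise.

    x     : current network state, one entry per neuron
    u     : current observation (glimpse) vector
    W     : fixed recurrent weight matrix (spectral radius about 1)
    W_in  : input weight matrix
    W_out : output weight matrix (classifier rows and saccade rows stacked)
    Returns the new state and the noisy output; the Gaussian noise added to
    the output is the exploration noise used by the policy-gradient update.
    """
    rng = np.random.default_rng() if rng is None else rng
    pre = W @ x + W_in @ u + sigma_state * rng.standard_normal(x.shape)
    x_new = (1.0 - leak) * x + leak * np.tanh(pre)
    y = W_out @ x_new + sigma_out * rng.standard_normal(W_out.shape[0])
    return x_new, y
```

in such a sketch , two of the output components would drive the saccade ( the horizontal and vertical displacement of the next glimpse ) while the remaining ten would form the classifier read - out used at the end of the trial .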
]use of memory was found to depend on the size of the fovea .[ fig : exploitation - of - memory - r5 ] shows the performance of the system across training epochs , for the case of a large ( 9 pixels ) fovea .initially , the avs with memory has the advantage as the attention model is still poor at this stage , leading to relatively uninformative glimpses , so the use of information from several glimpses results in better classification .however , as the attention mechanism improves the last glimpse becomes highly informative , so the memoryless network , where information from the last glimpse is not corrupted by memory of previous glimpses , has the advantage .in fact , we found that information from a well - placed glimpse suffices to classify the digit with over 90% success rate in this case , driving the network to a solution of finding a single good glimpse location across the digit , and classification based on that glimpse , without regard to the rest of the trajectory .the situation is different with a smaller fovea ( 5 pixels ) , where classification from a single glimpse becomes harder .as seen in figure [ fig : exploitation - of - memory - r3 ] , the avs with memory outperforms the one without memory in the small fovea case . ]the human visual system acts to maximize the information relevant to the task . in order to assesswhether our avs behaves similarly , we have to characterize the relevant information in the context of our task .since the network classification depends linearly on the network state in the last time step , we quantify the task - relevant information as the best linear separation of the network state , between each class and the other classes .accordingly , we use linear discriminant analysis ( lda ) , which acts to find the projection that minimizes the distance between samples of the same cluster while at the same time maximizes the distance between clusters . the distance within each classis measured by the variance of samples belonging to that class , and is taken to be the mean of these distances across all classes .the distance between classes is defined as the variance of the set of class centers .we trained an avs on the attention - classification task with 5 glimpses per digit .after the avs was trained , it was tested in two cases . in the first case, the system was run as usual and the network state vector was recorded after the last glimpse of each digit in the test set . in the second case ,the location of the last glimpse was chosen randomly rather than following the learned attention model .the results are illustrated in figure [ fig : gathering - information ] , where the state of the network is projected on the first two eigenvectors of .separation is significantly better with the full attention model compared to the one with the random last glimpse .we conclude that , at the very least , the attention model acts to maximize task - relevant information in the last glimpse better than a random walk .results of linear discriminant analysis of the avs state at the last time step .each dot corresponds to single trial and represents the projection of the network state on the first two eigenvectors of .dots are colored according to the digit presented to the network .left : random last glimpse .right : full attention model . ]biological learning often displays the ability to use a skill learned on a simple task in order to improve learning of a harder yet related task , e.g. 
, proficiency at tennis is beneficial when learning racquetball and even seemingly unrelated tasks such as skiing for example . to test whether transfer learning is possible in the avs, we trained it to learn the attention model and classification of 3 digits ( out of 10 in the mnist database ) .the resulting solution served as an initial condition for learning the full task of classifying all 10 digits .as seen in fig .[ fig : transferable - skill ] , not only did the avs with pre - learned attention learn much faster , but it also achieved a better result at the end of training . ] the eyes are not directed to the most visually salient points in the field of view , but rather to the ones that are most relevant for the task at hand .accordingly , we introduced an highly salient object into the training images .the object is a square , approximately the size of a digit but with maximum brightness , whereas the digits are handwritten and displayed in grayscale .the object is inserted at a fixed position relative to the digit , always on the right hand side of the digit .the trained network successfully avoids unnecessary fixations on the salient object . in cases where the first glimpse falls upon an area where both the digit and the object are within the peripheral visual region, the object seems to be completely ignored .perhaps more interesting is the case where only the object is visible in the first glimpse , within the peripheral view .in such a case , the avs learned to exploit the fact that the digit is always located on the left of the object and consistently performs saccades to the left .thus , not only was the presence of a distracting object not harmful to performance , but it was actually beneficial . ] in fig .[ fig : fixed - distracting - object ] , the colored squares represent the foveal view of 5-by-5 pixels at each time step going from blue to red .the left and middle images show cases where the first glimpse only observes the distracting object within the peripheral view .the left image shows a case where the first glimpse observes both the distracting object and the digit within the peripheral view .next , we test the network with a distracting object in a random position around the digit .the observed behavior was similar when the first glimpse happened to fall on a location where both the object and digit are within the peripheral view : the avs ignored the distraction and directed itself towards the digit .however , in the case where the first glimpse falls on a location where only the distracting object is visible in the peripheral view , the avs failed to locate the digit . 
]an example is seen in fig .[ fig : free - distracting - object ] .the lines are trajectories of the avs each starting from a different point on a test grid and followed until the last glimpse ( blue / magenta dot ) .green lines are trajectories that led to a correct classification while red lines are trajectories that led to a false classification .when the avs happen to fall at a location where the digit is not seen , it directs its gaze towards the square , which it then chooses to classify as `` 1 '' thus earning 10% expected reward which is better than nothing .learning by demonstration ( or learning with guidance ) was implemented in the avs .demonstration differs from supervision in two key ways .first , demonstration is not continuous , and is applied sparsely in time in order to suggest new trajectories to the system .second , demonstration is not required to provide the best solution to the system , because the system maintains its freedom to explore and even improve upon it .demonstration was achieved by providing the network with a sparse and naive suggestion for the attention model . for example , on of trajectories , the system was directed to the center of the digit on the last glimpse .such partial direction resulted in a significant improvement of both speed of learning and the final success rate , as can be observed in figure [ fig : demonstraion - learning ] .demonstration in the avs system was made possible by manipulating the exploration noise .the exploration noise is a gaussian white noise and as such has probability greater than zero to accept any value .since the output of the system at any given time is a function of that noise , we can force the output to a specific value by setting the exploration noise in that particular time step to be where is now the demonstrated output of the attention model and is the determined exploration noise that will bring the system to that desired output .as long as the demonstration is kept sparse enough , it would in practice not break the assumption that the noise is a gaussian white noise .the noise in the system is an essential part of the log likelihood gradient and therefore the system would not only arrive to the desired output at that particular time step , but also learn from that experience . ]we have shown that a simple artificial visual system , implemented through a recurrent neural network using policy gradient reinforcement learning , can be trained to perform classification of objects that are much larger than its central region of high visual acuity .while receiving only classification based reward , the system develops an active vision solution , which directs attention towards relevant parts of the image in a task - dependent way .importantly , the internal network memory plays an essential role in maintaining information across saccades , so that the final classification is achieved by combing information from the current visual input and from previous inputs represented in the network state . within a generic active vision system , without any specifically crafted features ,we have been able to explain several features characteristic of biological vision : _ ( i ) _ good classification performance using reinforcement learning based on highly limited central vision and low resolution peripheral vision , _ ( ii ) _ gathering task - relevant information through active search , _ ( iii ) _ transfer learning , _( iv ) _ ignoring task - irrelevant distractors , _ ( v ) _ learning through guidance . 
beyond providing a model for biological vision, our results suggest possible avenues for cost-effective image recognition in artificial vision systems.
the human visual perception of the world is of a large fixed image that is highly detailed and sharp . however , receptor density in the retina is not uniform : a small central region called the fovea is very dense and exhibits high resolution , whereas a peripheral region around it has much lower spatial resolution . thus , contrary to our perception , we are only able to observe a very small region around the line of sight with high resolution . the perception of a complete and stable view is aided by an attention mechanism that directs the eyes to the numerous points of interest within the scene . the eyes move between these targets in quick , unconscious movements , known as `` saccades '' . once a target is centered at the fovea , the eyes fixate for a fraction of a second while the visual system extracts the necessary information . an artificial visual system was built based on a fully recurrent neural network set within a reinforcement learning protocol , and learned to attend to regions of interest while solving a classification task . the model is consistent with several experimentally observed phenomena , and suggests novel predictions .
computations of lyapunov exponents continue to play a significant role in the study of nonlinear systems .the computations are often numerical because most nonlinear systems prove analytically intractable . in the case of real systems, there may be no mathematical model that adequately describes the system ; hence one is often confronted with a limited amount of data from which to estimate lyapunov exponents .finite observations may afford computations of finite time lyapunov exponents , but not global lyapunov exponents .unlike global lyapunov exponents , finite time lyapunov exponents depend on the initial conditions .this limits their value in characterising the underlying system .fortunately , provides a theorem that guarantees convergence of finite time lyapunov exponents to the global lyapunov exponents .the hypothesis of the theorem ( also found in ) requires the underlying system to have an invariant measure .it behoves us , therefore , to satisfy ourselves that the system of interest has an invariant measure and that our estimates converge to the global lyapunov exponent .yet examples abound in which lyapunov exponent estimates are provided with neither of these two requirements confirmed ( e.g. ) .some have emphasised the need to provide confidence limits along with the estimates , but confidence limits are valuable only when there is evidence of convergence .this paper presents two maps through which it highlights the importance of demonstrating evidence of convergence of lyapunov exponent estimates .one map has a parameter that takes non - negative integer values . for all possible values of this parameter, we can compute the invariant measure in closed form .the other map presents computational challenges , despite its illusive simplicity .the two maps are presented in [sec : two ] , with computations of lyapunov exponents .we conclude with a discussion and summary of the results in [ sec : disc ] .lyapunov exponents are briefly discussed in the following section .lyapunov exponents measure the average rate of separation of two trajectories that are initially infinitely close to each other .consider a map where .for an initial state , the dynamics of its small perturbation , , are governed by the linear propagator , , which is a product of jacobians so that .the finite time average separation / growth rate of two initially nearby trajectories is then given by oseledec provides a theorem that guarantees that the limit is unique .denote this limit by . if is a member of the right singular vectors of , then is a finite time lyapunov exponent and is a global lyapunov exponent .it has been noted that corresponds to non - commuting matrices . on the other hand , when , we take logs of positive scalars so that the ergodic theorem may be used to yield where is the invariant distribution of the dynamics .if a dynamical system settles onto an invariant distribution , then the effect of transients on dynamical invariants averages out .moreover , equation ( [ inv : lya ] ) provides an alternative to computing lyapunov exponents when ; hence , simple bootstrap resampling techniques may be applied in a straight forward way to make lyapunov exponent estimates , with no need for block resampling approaches suggested by ziehmann et al . . 
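because equation ( [ inv : lya ] ) expresses the exponent as an average of the log - derivative against the invariant density , an estimate from a stationary trajectory is just the sample mean of the observed values of ln|f'(x_t)| , and a plain bootstrap amounts to resampling those values with replacement . a minimal sketch , assuming the log - derivatives along an observed ( stationary ) trajectory have already been computed :

```python
import numpy as np

def bootstrap_lyapunov(log_derivs, n_boot=2000, rng=None):
    """Bootstrap distribution of a Lyapunov-exponent estimate.

    log_derivs : 1-D array of ln|f'(x_t)| evaluated along a stationary
    trajectory. Each bootstrap replicate resamples these values with
    replacement and averages them (plain bootstrap, no block resampling).
    """
    rng = np.random.default_rng() if rng is None else rng
    log_derivs = np.asarray(log_derivs, dtype=float)
    n = log_derivs.size
    idx = rng.integers(0, n, size=(n_boot, n))
    return log_derivs[idx].mean(axis=1)
```

the spread of the returned values gives the bootstrap confidence interval for the exponent , which is only meaningful once convergence to the invariant distribution has been established .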
given a set of observations ,bootstrapping is accomplished by repeatedly sampling randomly from the data with replacement to estimate the statistic of interest .this would give a distribution of the statistic of interest , which in this case is .since , for a given , finite time lyapunov exponents are functions of the initial states , it is useful to report values corresponding to a distribution of initial states to determine convergence . for an assessment of convergence of finite time lyapunov exponents , one can sample the initial states from the invariant distribution , . in numerical computations ,the invariant distribution is often not accessible in closed form .nonetheless one can iterate forward an initial distribution that after iterations is an estimate of the invariant distribution . one would then sample initial states from to estimate s .the aim here is to sample the initial states according to the invariant measure , if it exists . as a consequence of oseledec s theorem , the distribution of the s will converge to a delta distribution centred at as .when one is faced with real data , the approach of the previous paragraph can not be used unless there is a reliable mathematical model of the system .nonetheless , one may use bootstrap approaches suggested in for various values of to assess convergence in the distribution of the s .in this section , two novel maps with contrasting behaviours are considered .analytic computations are complemented with numerical results and the importance of establishing convergence is highlighted .let us consider the infinitely piece - wise linear map ,\\\\2^i(x-\frac{1}{2^i } ) , & x\in[2^{-i},2^{1-i}),\quad i=2,3,\ldots\end{array}\right .\label{eqn : inf}\ ] ] defined for fixed .a graph of the map corresponding to is shown in figure [ fig : piece ] . the lyapunov exponents for this map may be sought either analytically or numerically . for a given value of ,an exact knowledge of the invariant measure of this map is sufficient for one to determine the corresponding global lyapunov exponent .denote the invariant density corresponding to a given value of by and the associated invariant measure by .for , the invariant distribution of this map is the uniform distribution ] under the map .again we considered and evolved forward points sampled from ] do not converge to the origin .however , it can be shown that iterates of distributions that are initially uniform converge to a delta distribution centred at the origin .these two contrasting behaviours of trajectories and densities are paradoxical and are due to the weak repeller at the origin .numerically , the invariant density of this map may be determined by evolving forward in time the uniform distribution ] yields the graphs shown in figure [ fig : liapr1 ] on the right panel . from the graphs , ) ] when via equation ( [ inv : lya ] ) .unfortunately , the dynamics are trapped into very long periodic orbits after about iterations , the most dominant period being of the order of .distributions of finite time lyapunov exponents for varying are shown in figure [ fig : few ] on the right panel .the initial conditions of trajectories used to estimate each were sampled from )$ ] . 
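the numerical experiment described above - estimating finite - time exponents from many initial conditions and checking whether their distribution collapses to a delta distribution - can be reproduced with a few lines of generic code . the sketch below uses the logistic map at parameter 4 , whose global exponent is ln 2 , purely as a stand - in example ; the piecewise linear maps of this section can be substituted by supplying their map and derivative functions .

```python
import numpy as np

def finite_time_lyapunov(f, dfdx, x0, n):
    """(1/n) * sum_t ln|f'(x_t)| along a trajectory of length n started at x0."""
    x, total = x0, 0.0
    for _ in range(n):
        total += np.log(abs(dfdx(x)))
        x = f(x)
    return total / n

def exponent_distribution(f, dfdx, n, n_init=1000, rng=None):
    """Finite-time exponents for many initial conditions drawn uniformly on (0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    x0s = rng.uniform(0.0, 1.0, size=n_init)
    return np.array([finite_time_lyapunov(f, dfdx, x0, n) for x0 in x0s])

# Stand-in example: logistic map x -> 4x(1-x), global exponent ln 2.
f = lambda x: 4.0 * x * (1.0 - x)
dfdx = lambda x: 4.0 - 8.0 * x
for n in (10, 100, 1000):
    lams = exponent_distribution(f, dfdx, n)
    print(n, lams.mean(), lams.std())   # mean near ln 2, spread shrinking with n
```

convergence of this empirical distribution towards a delta distribution centred on a single value is the evidence that should accompany any reported exponent .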
whereas it seems the distribution of lyapunov exponents has converged when , the distribution converged to is not a delta distribution .in fact , if iterations are continued further , the distribution ultimately becomes tri - modal , each mode corresponding to each periodic orbit that the dynamics settle onto .if one had only a finite sample of data without a knowledge of the data - generating process , as is typical in practical situations , they could mistakenly attribute the bell shape to uncertainty . finally , graphs of , computed along trajectories starting from different initial conditions , versus indicated that there is indeed no convergence ( see figure [ fig : few ] on the left panel ) .each line on the graph corresponds to a number of initial conditions , .this raises suspicion over lyapunov exponent estimates obtained from a single initial condition .an example of where this was done is , albeit using real data .the aim of this paper was to highlight the importance of demonstrating convergence of finite time lyapunov exponents estimates of the global lyapunov exponent .two contrasting maps were thus presented , one of which its dynamical properties are known accurately , and the other not . the former map has a parameter that takes non - negative integer values .as the sole parameter is varied across all non - negative integer values , the global lyapunov exponent remains fixed at . at a sample of parameter values ,numerical computations confirmed a fast convergence of distributions of finite time lyapunov exponents to a delta distribution .the centre of the delta distributions coincided with the analytically found value of 2 .the other map was more challenging .detailed insights into its dynamics were sought numerically .numerical distributions of finite time lyapunov exponent estimates did not converge to a delta distribution ; even with the use of trajectories as long as .if the length of trajectories is increased , the dynamics ultimately fall onto very long periodic orbits due to finite machine precision .the lack of convergence of distributions of finite time lyapunov exponents to a delta distribution should alarm us against reporting the mode as an estimate of the global lyapunov exponent .indeed any arising confidence limits would equally be nonsensical .in fact , following analytic considerations of a similar map in would indicate that the map has no invariant distribution .hence , for a randomly selected initial condition , the dynamics are transient with probability one .the failure of distributions of finite time lyapunov exponents to converge to a delta distribution is a caution against reporting estimates of lyapunov exponents and confidence limits . in numerical computations ,convergence of the distributions to a delta distribution is achievable if the dynamics settle onto an invariant distribution before being trapped onto a numerical periodic orbit .a higher machine precision may be necessary to ensure this .when using data from a real nonlinear system , stationarity has to be established first .convergence of distributions of finite time lyapunov estimates should then be verified by progressively increasing the length of trajectories from which the estimates are made .block resampling approaches suggested in may help provide the distributions that are used to assess convergence to a delta distribution .i would like to thank prof l. a. 
smith for useful discussions and contributions .i am grateful for comments from two anonymous reviewers that helped improve the manuscript .this work was supported by the rcuk digital economy programme via epsrc grant ep / g065802/1 the horizon digital economy hub .9 j .- p .eckmann , d. ruelle , ergodic theory of chaos and strange attractors , rev .57 ( 1985 ) 617 - 653 .v. i. oseledec , a multiplicative ergodic theorem .lyapunov characteristic numbers for dynamical systems , trans .moscow math .( 1968 ) 179 - 210 . c. ziehmann ,l. a. smith , j. kurths , localised lyapunov exponents and the prediction of predictability , physics letters a 271 ( 2000 ) 237 - 251 .j. q. zhang , a. v. holden , o. monfredi , m. r. boyett , h. zhang , stochastic vagal modulation of cardiac pacemaking may lead to erroneous identification of cardiac chaos " , chaos 19 ( 2009 ) 028509 .h. millan , b. ghanbarian - alavijeh , i. garcia - fornaris , nonlinear dynamics of mean daily temperature and dewpoint time series at babolsar , iran , 1961 - 2005 , atmospheric research 98 ( 2010 ) 89 - 101 .m. d. martinez , x. lana , a. burgueno , c. serra , predictability of the monthly north atlantic oscillation index based on fractal analyses and dynamic system theory , nonlin .processes geophys .17 ( 2010 ) 93 - 101 .m. s. bruijn , j. h. dieen , o. g. meijer , p. j. beek , is slow walking more stable ?journal of biomechanics 42 ( 2009 ) 1506 - 1512 .j. d. reiss , i. djurek , a. petosic , d. djurek , verification of chaotic behaviour in experimental loudspeaker , j. acoust .124 ( 2008 ) 2031 - 2041 .e. d. ubeyli , lyapunov exponents / probabilistic neural networks for analysis of eeg signals , expert systems with applications 37 ( 2010 ) 985 - 992 .r. gencay , a statistical framework for testing chaotic dynamics via lyapunov exponents , physica d 89 ( 1996 ) 261 - 266 .c. ziehmann , l. a. smith , j. kurths , the bootstrap and lyapunov exponents in deterministic chaos , physica d 126 ( 1999 ) 49 - 59 .a. lasota , m. c. mackey , probabilistic properties of deterministic systems , cambridge university press , 1985 .a. lasota , m. c. mackey , chaos , fractals , and noise : stochastic aspects of dynamics , springer , 1994 .* proof of proposition 1 * : when , we note that the invariant density is piecewise constant with , \end{array } \right.\ ] ] where and are constants . in order to determine the constants and , we note that by definition an invariant measure satisfies the property , where is the set of all the points that are mapped onto after iterations of the map .it thus follows that for a probability measure , it is also true that solving ( [ sec1:eq6 ] ) and ( [ sec1:eq7 ] ) yields hence and .+ + * proof of proposition 2 * : let us now consider the general case of . in this case , the invariant density is constant on three intervals . that is .\end{array } \right . \ ] ] again , we use the underlying invariant measure to compute the s . 
if we let , then
\[\begin{aligned}
\mu_1^{(k)} &= \frac{1}{2^k}\left[\mu_1^{(k)}+\mu_2^{(k)}\right]+\mu_3^{(k)}, \label{sec1:eq9}\\
\mu_2^{(k)} &= \left(\frac{1}{2}-\frac{1}{2^k}\right)\left[\mu_1^{(k)}+\mu_2^{(k)}\right], \label{sec1:eq10}\\
\mu_3^{(k)} &= \frac{1}{2}\left[\mu_1^{(k)}+\mu_2^{(k)}\right]. \label{sec1:eq11}
\end{aligned}\]
notice that the sum of the first two equations yields the last equation. therefore, we need another equation in order to determine the s, which is the normalisation condition
\[\begin{aligned}
\mu_1^{(k)}+\mu_2^{(k)}+\mu_3^{(k)}=1. \label{sec1:eq12}
\end{aligned}\]
using any two of equations ( [ sec1:eq9 ] ), ( [ sec1:eq10 ] ) and ( [ sec1:eq11 ] ) together with equation ( [ sec1:eq12 ] ) yields
\[\begin{aligned}
\mu_1^{(k)}=\frac{1+2^{1-k}}{3}, \qquad \mu_2^{(k)}=\frac{1-2^{1-k}}{3}, \qquad \mu_3^{(k)}=\frac{1}{3}.
\end{aligned}\]
from these it follows that . the proof of this proposition considers each of the cases , and separately. when , we get . similarly, for we get . for general ,
\[\begin{aligned}
\cdots+\frac{(1-k)}{3}=2.
\end{aligned}\]
in the above derivation, we used the identity
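as a concrete illustration of the convergence check advocated in this paper, the short sketch below computes distributions of finite time lyapunov exponents for increasing trajectory lengths. since the maps studied here are not reproduced in this sketch, the logistic map with parameter 4, whose global lyapunov exponent is known exactly to be ln 2, is used as a stand-in; the sample sizes and trajectory lengths are likewise illustrative choices only.

```python
import numpy as np

# distributions of finite time lyapunov exponents (ftles) for increasing
# trajectory length n. the logistic map x -> 4x(1-x) is used purely as a
# stand-in example; its global lyapunov exponent is exactly ln 2.
rng = np.random.default_rng(1)

def ftle_logistic(n_steps, n_init):
    """ftles along n_init trajectories of length n_steps from random initial conditions."""
    x = rng.random(n_init)
    acc = np.zeros(n_init)
    for _ in range(n_steps):
        acc += np.log(np.abs(4.0 * (1.0 - 2.0 * x)) + 1e-300)  # log |f'(x)|
        x = 4.0 * x * (1.0 - x)
    return acc / n_steps

for n in (10, 100, 1000):
    lam = ftle_logistic(n, n_init=5000)
    print(f"n = {n:5d}: mean = {lam.mean():+.4f}, spread = {lam.std():.3e}  (ln 2 = {np.log(2.0):.4f})")

# convergence to a delta distribution shows up as the spread shrinking with n
# while the mean approaches ln 2; a spread that stops shrinking, or a histogram
# that becomes multi-modal, is the warning sign discussed in the text.  in much
# longer runs, finite machine precision eventually traps trajectories on
# numerical periodic orbits, as noted above.
```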
in many applications, there is a desire to determine whether the dynamics of interest are chaotic or not. since positive lyapunov exponents are a signature of chaos, they are often used to make this determination. reliable estimates of lyapunov exponents should be accompanied by evidence of convergence, yet the literature abounds with estimates reported without such evidence. this paper presents two maps through which it highlights the importance of providing evidence of convergence of lyapunov exponent estimates. the results suggest that conclusions should be drawn cautiously when confronted with real data. moreover, the maps are interesting in their own right. * keywords * : chaos ; invariant measure ; lyapunov exponents ; nonlinear time series
tumor cells sense very small concentration gradients and act in a collective manner . herewe review the basic theory of concentration and gradient sensing by cells and cell collectives .this theory places physical bounds on sensory precision and allows us to quantitatively compare the capabilities of tumor cells to other cell types .theoretical limits to the precision of concentration sensing were first introduced by berg and purcell almost 40 years ago .berg and purcell began by considering an idealized cell that acts as a perfect counting instrument .their simplest model assumed that the cell is a sphere in which molecules can freely diffuse in and out ( fig .[ sensing]a ) .the concentration of these molecules is uniform in space , and the cell derives all its information about the concentration by counting each molecule inside its spherical body .the expected count is where is the mean concentration and is the cell volume .however , since molecules arrive and leave via diffusion , there will be fluctuations around this expected value .diffusion is a poisson process , meaning that the variance in this count equals the mean .therefore the relative error in the cell s concentration estimate is .the cell can improve upon the relative error in its concentration estimate by time - averaging over multiple measurements .however , consecutive measurements are only statistically independent if they are separated by a sufficient amount of time such that the molecules inside the cell volume are refreshed .the amount of time required is characterized by the diffusion time , , where is the diffusion constant and is the cell diameter . in a time period the cell makes independent measurements , and the variance is reduced by the factor .this gives the long - standing lower limit for the cell s relative error in estimating a uniform concentration .the relative error decreases with and , since the molecule count is larger , and also with and , since more independent measurements can be made .berg and purcell derived this limit more rigorously , and the problem has been revisited more recently to account for binding kinetics , spatiotemporal correlations , and spatial confinement . in all cases a term of the form in eq .[ eq : singleconc ] emerges as the fundamental limit for three - dimensional diffusion .do cells reach this limit ?berg and purcell themselves asked this question in the context of several single - celled organisms , including the _ escheria coli _bacterium .motility of _e. coli _ has two distinct phases : the run phase in which a cell swims in a fixed direction , and the tumble phase in which the cell erratically rotates in order to begin a new run in a different direction .the bacterium biases its motion by continually measuring the chemoattractant concentration , and extending the time of runs for which the change in concentration is positive .the change in concentration over a run time depends on the concentration gradient and the bacterium s velocity .berg and purcell argued that for a change in concentration to be detectable , it must be larger than the measurement uncertainty , . together with eq .[ eq : singleconc ] , this places a lower limit on the run time , ^{1/3}$ ] . using typical values for the sensory threshold of _e. coli _ of mm , mm / cm , m , m/s , and /s , we find s. actual run times are on the order of s. thus we see that _ e. coli _ chemotaxis is consistent with this physical bound . although the end goal of concentration sensing in_ e. 
coli _ is chemotaxis by temporally sampling changes in the chemical concentration , we would like to focus the reader s attention on the remarkable fact that the bacterium s concentration sensing machinery operates very near the predicted physical limits .if _ e. coli _ were to use any shorter run times , chemotaxis would be physically impossible . consequently , the time period for measuring the chemical concentration , in eq .[ eq : singleconc ] , would be so short that the bacterium would be unable to make an accurate measurement of the chemical concentration .cells are not only able to detect chemical concentrations , they are also able to measure spatial concentration gradients .many cells , including amoeba , epithelial cells , neutrophils , and neurons , sense gradients by comparing concentrations between compartments in different locations of the cell body .these compartments are typically receptors or groups of receptors on the cell surface , but in a simple model we may treat these compartments as idealized counting volumes as we did before ( fig .[ sensing]b ) .the difference in counts between two such compartments provides the cell with an estimate of the gradient . what is the relative error in this estimate ?consider two compartments of linear size on either side of a cell with diameter ( fig . [ sensing]b ) .if the compartments are aligned with the gradient of a linear concentration profile , then the mean concentrations at each compartment are and .the mean molecule counts in the two compartments are roughly and , and the difference is .the variance in this difference is , where the first step assumes the two compartments are independent , and the second step uses eq .[ eq : singleconc ] for the variance in each compartment s measurement . for shallow gradients , where the limits on sensing are generally probed , we have , and therefore we may assume , where is the mean concentration at the center of the cell .thus , and the relative error in the cell s estimate of the gradient is then where the factor of is neglected in this simple scaling estimate . as in eq .[ eq : singleconc ] , we see that the relative error decreases with , since the molecule counts in each compartment are larger , and also with and , since more independent measurements can be made .additionally , the relative error decreases with , since the concentrations measured by the two compartments are more different from each other .however , we see that unlike in eq .[ eq : singleconc ] , the relative error increases with the background concentration .the reason is that the cell is not measuring a concentration , but rather a _ difference _ in concentrations , and it is more difficult to measure a small difference on a larger background than on a smaller background .eq . [ eq : g ] has been derived more rigorously , and the problem has been extended to describe rings of receptors or detectors distributed over the surface of a circle or a sphere . in all cases a term of the form in eq .[ eq : g ] emerges as the fundamental limit , with the lengthscale dictated by the particular sensory mechanism and geometry .it is clear that the optimal mechanism would result in an effective compartment size that is roughly half of the cell volume , in which case .do cells reach this limit on gradient sensing ? this question has been directly addressed for the amoeba _dictyostelium discoideum_. 
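before turning to those experiments, the scalings above can be made concrete with a few lines of code. the sketch below evaluates the counting-based relative error in concentration, the implied lower bound on the run time, and the two-compartment gradient-sensing error. the functional forms follow the order-of-magnitude arguments given above ( up to dimensionless prefactors ), and every numerical value is an assumed placeholder rather than one of the thresholds quoted in the text.

```python
import numpy as np

N_A = 6.022e23
per_cm3 = lambda molar: molar * N_A / 1e3      # mol/L -> molecules per cm^3

# illustrative (assumed) parameters, cgs units
D = 1e-5                    # diffusion constant [cm^2 / s]
a = 1e-4                    # cell / detector size: 1 um [cm]
T = 1.0                     # integration time [s]
v = 2e-3                    # run speed: 20 um/s [cm / s]
c = per_cm3(1e-6)           # background concentration: 1 uM
g = per_cm3(1e-6) / 0.1     # gradient: 1 uM per mm [molecules / cm^4]

# counting-based relative error for a single volume of size a, averaged over time T
err_conc = 1.0 / np.sqrt(D * c * a * T)
print(f"relative concentration error ~ {err_conc:.2e}")

# lower bound on the run time so that the change v*T*g exceeds the measurement uncertainty
T_run = (c / (D * a * v**2 * g**2)) ** (1.0 / 3.0)
print(f"minimum run time             ~ {T_run:.2e} s")

# two-compartment gradient estimate: compartments of size s ~ a/2 on a cell of diameter a
s = a / 2.0
err_grad = np.sqrt(2.0 * c / (D * T * s)) / (g * a)
print(f"relative gradient error      ~ {err_grad:.2e}")
```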
experiments have shown that _ dictyostelium _cells exhibit biased movement when exposed to gradients of cyclic adenosine monophosphate as small as nm / mm , on top of a background concentration of nm . bias is typically quantified in terms of the chemotactic index ( ci ) , which is the cosine of the angle between the gradient direction and the direction of a cell s actual motion . by relating the error in gradient sensing ( a term of the form in eq .[ eq : g ] with ) to the error in this angle , endres and wingreen obtained an expression for the optimal ci , which they then fit to the experimental data with one free parameter , the integration time . the inferred value of s serves as the physical lower bound on the response time required to perform chemotaxis .actual response times of _ dictyostelium _ cells , as measured by the time from the addition of a chemoattractant to the peak activity of an observable signaling pathway associated with cell motility , are about s. taken together , these results imply that _ dictyostelium _ operates remarkably close to the physical limit to sensory precision set by the physics of molecule counting .the precision of gradient sensing is often reported in terms of percent concentration change across a cell body .for example , both amoeba and tumor cells are sensitive to a roughly change in concentration across the cell body. however , this method of reporting sensitivity may be misleading .experiments imply very different sensory thresholds for these cells in terms of absolute molecule numbers , as we will now see .the key is that it takes two numbers to specify the conditions for gradient sensing : the mean gradient and the mean background concentration .for the amoeba _ dictyostelium _ , these numbers are nm / mm and nm at the sensory threshold . given a typical cell size of m, these values imply a mean percent concentration change of ( table [ sense_table ] ) . however , we may also compute from these values the mean molecule number difference from one side of the cell to the other , within the effective compartments of size .taking gives the maximal molecule number difference of for _ dictyostelium _ ( table [ sense_table ] ) .together and specify the sensing conditions as completely as and do .experiments have shown that breast cancer tumor cells exhibit a chemotactic response in a gradient nm / mm of the cytokine ccl21 , on top of a background concentration of nm .given a typical cell size of m , this corresponds to a percent difference of , similar to _ dictyostelium_. yet , this also corresponds to a maximal molecule number difference of , which is much higher than that of _ dictyostelium _ ( table [ sense_table ] ) .even though the sensitivities are similar in terms of percent change , they are very different in terms of absolute molecule number .lower molecule numbers correspond to higher relative error .we can see this explicitly by writing eq .[ eq : g ] in terms of the percent change .defining and taking , we have . 
accounting for the fact that tumor cells ( tc ) have roughly twice the diameter as _dictyostelium _ cells ( dc ) , this expression implies that the sensitivities of the two cell types over the same integration time to chemoattractants with the same diffusion constant satisfy .we see that because the _ dictyostelium _ experiments were performed at lower background concentration , corresponding to lower absolute molecule numbers , the relative error in gradient sensing is times that of the tumor cells , despite the fact that both cell types are responsive to concentration gradients . therefore , it is important to take note of the background concentration when studying the precision of gradient sensing .these data imply that _ dictyostelium _ cells can sense noisier gradients than tumor cells . however , _ dictyostelium _cells have been studied more extensively than tumor cells as exemplars of gradient detection .it remains an interesting open question what is the minimum gradient that tumor cells can detect , not only in terms of percent concentration change , but also in terms of absolute molecule number differences .we see that although cancerous cells and _ dictyostelium _ cells are of similar size , their sensory responses to absolute molecule numbers can be very different ( table [ sense_table ] ) .this difference is also reflected in their migration speeds : carcinoma and epithelial cells migrate at whereas _ dictyostelium _ can migrate at speeds of . [ cols="<,^,^,^,^ " , ] although cancer cells may be very sensitive to small drug concentrations , that does not translate to successful treatment . in order to achieve cell death ,much larger drug concentrations are required . in the same study on lung carcinoma cells ,cell death was observed for drug concentrations on the order of nm and greater .more typical drug concentrations required for cell death are on the order of micromolars .for instance , it has been shown _ in vitro _ that anticancer drug concentrations on the order of m are required to kill at least of tumor cells .with a cell length of m , m corresponds to tens of millions of drug molecules in the volume of a cell , four orders of magnitude greater than drug concentrations required to affect cell functionality ( table [ drug_table ] ) . in order to effectively kill a solid tumor ,very high drug doses are required .complicating matters is the fact that the tumor and its surrounding microenvironment comprise a complex and heterogeneous system .although most cells in the human body are naturally within a few cell lengths of a blood vessel , due to high proliferation tumor cells may be upwards of tens of cell lengths away from a vessel .this makes it difficult for drugs to reach the tumor .moreover , the high density of many solid tumors causes gradients of drug concentration to form as a function of tumor radius .this results in a reduced drug concentration at the center of the tumor and makes innermost tumor cells the most difficult to kill . a promising way to overcomethis difficulty is through the use of nanoparticle drug delivery systems , which increase both the specificity and penetration of drugs to the tumor .nanoparticle delivery has been shown to achieve cell death with concentrations as low as nm . 
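both comparisons above, percent change versus absolute molecule-number difference and molar drug dose versus molecules per cell, are short unit conversions. the following sketch works through them with assumed, illustrative numbers ( cell sizes, gradients, background and drug concentrations ); these are placeholders, not the experimental values cited in the text or in tables [ sense_table ] and [ drug_table ].

```python
import numpy as np

N_A = 6.022e23                                   # avogadro's number [1/mol]
per_um3 = lambda molar: molar * N_A / 1e15       # mol/L -> molecules per um^3

def gradient_conditions(grad_nM_per_mm, background_nM, cell_diam_um):
    """percent change across the cell and molecule-number difference between two
    compartments of size s = a/2 (all input values are illustrative assumptions)."""
    g = per_um3(grad_nM_per_mm * 1e-9) / 1e3     # gradient [molecules / um^4]
    c = per_um3(background_nM * 1e-9)            # background [molecules / um^3]
    a = cell_diam_um
    p = a * g / c                                # fractional change across the cell
    dn = (a / 2.0)**3 * g * a                    # molecule-number difference
    return 100.0 * p, dn

def molecules_per_cell(conc_molar, cell_length_um=20.0):
    """molecules of a drug inside a cube of side cell_length_um at a molar concentration."""
    volume_l = (cell_length_um * 1e-4)**3 / 1e3  # um^3 -> cm^3 -> litres
    return conc_molar * N_A * volume_l

# the same ~1% change across the cell can hide very different molecule-number differences
for label, grad, back, diam in [("low background, small cell ", 10.0, 10.0, 10.0),
                                ("high background, large cell", 500.0, 1000.0, 20.0)]:
    p, dn = gradient_conditions(grad, back, diam)
    print(f"{label}: {p:4.1f}% change, ~{dn:,.0f} molecule difference")

# molar drug dose -> molecules in a cell-sized volume
for label, conc in [("1 nM dose ", 1e-9), ("10 uM dose", 1e-5)]:
    print(f"{label}: ~{molecules_per_cell(conc):,.0f} molecules per cell volume")
```

this conversion makes explicit why the micromolar doses associated with comprehensive cell death correspond to tens of millions of molecules per cell volume, whereas the nanomolar doses reported for nanoparticle delivery correspond to orders of magnitude fewer.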
although this concentration is lower than delivery without nanoparticles , it is still two orders of magnitude higher than the minimum concentration that causes physical changes in the cell ( table [ drug_table ] ) .even with targeted delivery , achieving drug - induced tumor cell death remains a challenging task .given this challenge , we hope to draw upon the physical insights reviewed herein to devise therapeutic strategies that are alternative or complementary to comprehensive cell death .specifically , we imagine focusing on the metastatic invasion phase , and targeting the functions of invading tumor cells , including communication and migration , in addition to targeting cells overall viability , to produce better treatment ( fig .[ overview]c ) .communication is a particularly promising candidate , since it has recently been shown that cell - to - cell communication makes cancer cells more resistant to therapy and helps sustain tumor growth .indeed , the exchange of extracellular vesicles , which is a form of communication observed between tumor cells and stromal cells , has been linked to immune suppression , tumor development , angiogensis , and metastasis .this suggests that disrupting cell - to - cell communication could be an effective strategy for stopping tumor progression or curbing metastatic invasion . disrupting communicationmay not require concentrations as large as those necessary for cell death , which are difficult to maintain _ in vivo _ across the whole tumor .for example , as little as nm of the gap - junction - blocking drug endothelin-1 is sufficient to remove collective responses in epithelial cells .this concentration is several orders of magnitude smaller than that required for comprehensive cell death , and it is on the order of concentrations that are effective with targeted nanoparticle delivery ( table [ drug_table ] ) .therefore , it is tempting to suggest that managing metastatic invasion by blocking communication or other cell functions is a more accessible therapeutic strategy than eradicating a tumor outright . although blocking intercellular communication pathways could curb the invasive potential of metastatic cells it is also important to address the ulterior consequences of this strategy. gap junction intercellular communication ( gjic ) is an important way for the environment to affect change on cells , maintaining tissue homeostasis and balancing cell growth and death . in cancerous cellsgjic is reduced , causing unregulated cell growth .interestingly , many existing cancer - combatting drugs are small enough to pass through cell gap junctions which permit molecules of sizes up to 1000 dalton , but there is a lack of _ in vivo _ studies concerning the benefits and effects of gap junctions on cancer treatment . it has been shown _ in vitro _ that gjic can propagate cell - death signals through cancerous cells and that high connexin expression , the proteins that compose gap junctions , corresponds to high anticancer drug sensitivity .therefore , it is important to consider the potential negative consequences of blocking intercellular communication in reducing metastatic invasion . 
it may be sufficient to administer an anticancer drug and a communication - blocking drug at different times in order to avoid negative side - effects .although this puts a caveat on our proposal of communication - blocking drugs as a viable option for treating metastatic invasion it is important to recall that gjic is not the only communication pathway available to cancerous cells : extracellular vesicle - meditated signaling pathways are potential alternates which could be targeted in place of gjic .in this review , we have taken a quantitative look at metastatic invasion as a sensing - and - migration process , which has allowed us to compare metastatic cells to other cell types in terms of their physical capabilities .we have seen that tumor cells can sense very shallow chemoattractant gradients , which may help guide metastatic invasion , but it remains unclear whether tumor cells operate near fundamental sensing limits , as bacteria and amoeba do . recognizing that metastatic invasion can be collective , we have reviewed recent results on the physical limits to collective sensing , and we have identified the overarching mechanisms of collective migration .a key insight that emerges is that collective capabilities rely critically on cell - to - cell communication .this insight opens up alternative strategies for therapy that target specific cell capabilities such as communication , in addition to strategies that aim for comprehensive cell death .a detailed presentation of the underlying physical mechanics for cell motility and chemotaxis are outside the scope of this review .readers interested in these topics are referred to these excellent resources .it is also important to note that in deriving the limits to concentration sensing we have assumed that the molecules of interest diffuse normally with fixed , space - independent diffusion coefficients .however , this may not always be the case in the tumor environment , where molecules can also experience anomalous diffusion . moving forward, it will be important to identify whether the physical theory of sensing reviewed herein can be applied in a predictive manner to tumor cells , and whether gradient sensing plays a dominant role during metastatic invasion .more generally , it will be necessary to integrate the theory of sensing with models of collective migration to predict quantitatively what groups of migratory cells can and can not do . finally , controlled experiments with metastatic cells are required to validate these predictions , and to assess the viability of alternative therapies that target specific cell functions in order to combat metastatic invasion .our hope is that the integrated , physics - based perspective presented herein will help generate innovative solutions to the pervasive problem of metastatic disease .this work was supported by the ralph w. and grace m. showalter research trust and the purdue research foundation .nicola aceto , aditya bardia , david t miyamoto , maria c donaldson , ben s wittner , joel a spencer , min yu , adam pely , amanda engstrom , huili zhu , brian w brannigan , ravi kapur , shannon l stott , toshi shioda , sridhar ramaswamy , david t ting , charles p lin , mehmet toner , daniel a haber , and shyamala maheswaran .circulating tumor cell clusters are oligoclonal precursors of breast cancer metastasis . 
, 158(5):11101122 , 2014 .markus basan , jens elgeti , edouard hannezo , wouter - jan rappel , and herbert levine .alignment of cellular motility forces with tissue flow as a mechanism for efficient wound healing ., 110(7):24522459 , 2013 .mirjam c boelens , tony j wu , barzin y nabet , bihui xu , yu qiu , taewon yoon , diana j azzam , christina twyman - saint victor , brianne z wiemann , hemant ishwaran , petra j ter brugge , jos jonkers , joyce slingerland , and andy j minn .exosome transfer from stromal to breast cancer cells regulates therapy resistance pathways ., 159(3):499513 , 2014 .david ellison , andrew mugler , matthew d brennan , sung hoon lee , robert j huebner , eliah r shamir , laura a woo , joseph kim , patrick amar , ilya nemenman , et al .cell communication enhances the capacity of cell ensembles to sense shallow gradients during morphogenesis ., page 201516503 , 2016 .christine gilles , myriam polette , jean - marie zahm , jean - marie tournier , laure volders , jean - michel foidart , and philippe birembaut .vimentin contributes to human mammary epithelial cell migration ., 112(24):46154625 , 1999 .rama grantab , shankar sivananthan , and ian f tannock .the penetration of anticancer drugs through tumor tissue as a function of cellular adhesion and packing density of tumor cells ., 66(2):10331039 , 2006 .claire legrand , christine gilles , jean - marie zahm , myriam polette , anne - ccile buisson , herv kaplan , philippe birembaut , and jean - marie tournier .airway epithelial cell migration dynamics : mmp-9 role in cell extracellular matrix remodeling . , 146(2):517529 , 1999 .pengfei lu , andrew j ewald , gail r martin , and zena werb .genetic mosaic analysis reveals fgf receptor 2 function in terminal end buds during mammary gland branching morphogenesis ., 321(1):7787 , 2008 .gema malet - engra , weimiao yu , amanda oldani , javier rey - barroso , nir s gov , giorgio scita , and loc dupr .collective cell motility promotes chemotactic prowess and resistance to chemorepulsion ., 25(2):242250 , 2015 .athanasius f. m. mare , vernica a. grieneisen , and paulien hogeweg .the cellular potts model and biophysical properties of cells , tissues and morphogenesis . in alexander r.a. anderson , mark a. j. chaplain , and katarzyna a. rejniak , editors , _ single - cell - based models in biology and medicine _ ,mathematics and biosciences in interaction .birkhuser basel , 2007 .doi : 10.1007/978 - 3 - 7643 - 8123 - 3_5 .marten postma , jeroen roelofs , joachim goedhart , theodorus wj gadella , antonie jwg visser , and peter jm van haastert .uniform camp stimulation of dictyostelium cells induces localized patches of signal transduction and pseudopodia ., 14(12):50195027 , 2003 .alberto puliafito , alessandro de simone , giorgio seano , paolo armando gagliardi , laura di blasio , federica chianale , andrea gamba , luca primo , and antonio celani .three - dimensional chemotaxis - driven aggregation of tumor cells . , 5 , 2015 . william j rosoff , jeffrey s urbach , mark a esrick , ryan g mcallister , linda j richards , and geoffrey j goodhill .a new chemotaxis assay shows the extreme sensitivity of axons to molecular gradients ., 7(6):678682 , 2004 .jacqueline d shields , mark e fleury , carolyn yong , alice a tomei , gwendalyn j randolph , and melody a swartz .autologous chemotaxis as a mechanism of tumor cell homing to lymphatics via interstitial flow and autocrine ccr7 signaling ., 11(6):526538 , 2007 . 
loling song , sharvari m nadkarni , hendrik u bdeker , carsten beta , albert bae , carl franck , wouter - jan rappel , william f loomis , and eberhard bodenschatz .dictyostelium discoideum chemotaxis : threshold for directed motion . , 85(9):981989 , 2006 .katarina wolf , irina mazo , harry leung , katharina engelke , ulrich h von andrian , elena i deryugina , alex y strongin , eva - b brcker , and peter friedl .compensation mechanism in tumor cell migration mesenchymal amoeboid transition after blocking of pericellular proteolysis ., 160(2):267277 , 2003 .
metastasis is a process of cell migration that can be collective and guided by chemical cues . viewing metastasis in this way , as a physical phenomenon , allows one to draw upon insights from other studies of collective sensing and migration in cell biology . here we review recent progress in the study of cell sensing and migration as collective phenomena , including in the context of metastatic cells . we describe simple physical models that yield the limits to the precision of cell sensing , and we review experimental evidence that cells operate near these limits . models of collective migration are surveyed in order understand how collective metastatic invasion can occur . we conclude by contrasting cells sensory abilities with their sensitivity to drugs , and suggesting potential alternatives to cell - death - based cancer therapies . metastasis is one of the most intensely studied stages of cancer progression because it is the most deadly stage of cancer . the first step of metastasis is invasion , wherein cells break away from the tumor and invade the surrounding tissue . our understanding of metastatic invasion has benefited tremendously from genetic and biochemical approaches . however , the physical aspects of metastatic invasion are still unclear . we know that at a fundamental level , metastatic invasion is a physical process . tumor cells sense and respond to chemical gradients provided by surrounding cells or other features of the tumor environment ( fig . [ overview]a ) . indeed , tumor cells are highly sensitive , able to detect a difference in concentration across the cell length . sensing is ultimately a physical phenomenon . therefore , can we build a simple physical theory to understand the sensory behavior of tumor cells , and can this physical theory inform treatment options ? metastatic invasion involves coordinated migration of tumor cells away from the tumor site . in many types of cancer , migration is collective and highly organized , involving the coherent motion of connected groups of cells ( fig . [ overview]b ) . collective migration is ultimately a physical phenomenon , since it relies on mechanical coupling and can often be understood as emerging from simple physical interactions at the cell - to - cell level . can we understand the collective migration of tumor cells with simple physical models ? here we review recent progress on modeling sensing and migration in cells and cell collectives . we discuss metastatic cells explicitly , and emphasize that physical insights gained from other cellular systems can inform our understanding of metastatic invasion . we focus on simple physical models and order - of - magnitude numerical estimates in order to quantitatively probe the extent of , and the limits to , cell sensory and migratory behavior . our hope is that a more quantitative understanding of metastatic invasion will inform treatment protocols , and to that end we conclude by discussing drug sensitivity and potential treatment strategies ( fig . [ overview]c ) .
compressive sensing ( cs ) enables one to sample signals that admit a sparse representation in some transform basis well - below the nyquist rate , while still enabling their faithful recovery .since many natural and man - made signals exhibit a sparse representations , cs has the potential to reduce the costs associated with sampling in numerous practical applications . the single pixel camera ( spc ) and its multi - pixel extensions are spatial - multiplexing camera ( smc ) architectures that rely on cs . in this paper , we focus on such smc designs , which acquire random ( or coded ) projections of a ( typically static ) scene using a spatial light modulator ( slm ) in combination with a small number of optical sensors , such as single photodetectors or bolometers .the use of a small number of optical sensors in contrast to full - frame sensors having millions of pixel elements turns out to be advantageous when acquiring scenes at non - visible wavelengths .since the acquisition of scene information beyond the visual spectrum often requires sensors built from exotic materials , corresponding full - frame sensor devices are either too expensive or cumbersome .obviously , the use of a small number of sensors is , in general , not sufficient for acquiring complex scenes at high resolution .hence , existing smcs assume that the scenes to be acquired are static and acquire multiple measurements over time . for static scenes ( i.e. , images ) and for a single - pixel smc architecture , this sensing strategy has been shown to deliver good results typically at a compression of 2 - 8 .this approach , however , fails for time - variant scenes ( i.e. , videos ) .the main reason is due to the fact that the time - varying scene to be captured is ephemeral , i.e. , _ each _ measurement acquires information of a ( slightly ) _ different _ scene .the situation is further aggravated when we deal with smcs having a very small number of sensors ( e.g. , only one for the spc ) .virtually all existing methods for cs - based video recovery ( e.g. , ) seem to overlook the important fact that scenes are changing while one acquires compressive measurements .in fact , all of the mentioned smc video systems treat scenes as a sequence of _ static _ frames ( i.e. , as piece - wise constant scenes ) as opposed to a continuously changing scene .this disconnect between the real - world operation of smcs and the assumptions commonly made for video cs motivates novel smc acquisition systems and recovery algorithms that are able to deal with the ephemeral nature of real scenes .figure [ fig : static ] illustrates the effect of assuming piece - wise static scenes . put simply , grouping too few measurements for reconstruction results in poor spatial resolution ; grouping too many measurements results in severe temporal aliasing artifacts . high - quality video cs recovery methods for camera designs relying on temporal multiplexing ( in contrast to spatial multiplexing as it is the case for smcs ) are generally inspired by video compression schemes and exploit motion estimation between individually recovered frames .applying such techniques for smc architectures , however , results in a fundamental problem : on the one hand , obtaining motion estimates ( e.g. , the optical flow between pairs of frames ) requires knowledge of the individual video frames . 
on the other hand, recovering the video frames in the absence of motion estimates is difficult, especially when using low sampling rates and a small number of sensor elements ( cf. [ fig : static ] ). attempts to address this `` chicken-and-egg '' problem either perform multi-scale sensing or sense separate patches of the individual video frames. however, both approaches ignore the time-varying nature of real-world scenes and rely on a piecewise static scene model. [ figure caption : the scene, similar to figure [ fig : static ], consists of a pendulum with the letter ` r ' swinging from right to left. a total of 16,384 measurements were acquired and videos were reconstructed under three different signal models; also shown are slices corresponding to the lines marked. in all, cs-muvi delivers high spatial as well as temporal resolution unachievable by both naive frame-to-frame wavelet sparsity and the more sophisticated 3d total variation model. to the best of our knowledge, cs-muvi is the first demonstration of successful video recovery at 128 super-resolution on _ real data _ obtained from an spc. ] in this paper, we propose a novel sensing and recovery method for videos acquired by smc architectures, such as the spc. we start ( in sec. [ sec : overview ] ) with an overview of our sensing and recovery framework. in sec. [ sec : tradeoff ], we study the recovery performance for time-varying scenes and demonstrate that the performance degradation caused by violating the static-scene assumption is severe, even at moderate levels of motion. we then detail a novel video cs strategy for smc architectures that overcomes the static-scene assumption. our approach builds upon a co-design of scene acquisition and video recovery. in particular, we propose a novel class of cs matrices that enables us to obtain a low-resolution `` preview '' of the scene at low computational complexity. this preview video is used to extract robust motion estimates ( i.e., the optical flow ) of the scene at full resolution ( in sec. [ sec : designmeas ] ). we exploit these motion estimates to recover the full-resolution video by using off-the-shelf convex-optimization algorithms typically used for cs ( in sec. [ sec : optical ] ). we demonstrate the performance and capabilities of our smc video-recovery algorithm for different scenes in sec. [ sec : experiments ], show video recovery on real data in sec. [ sec : real ], and discuss our findings in sec. [ sec : discuss ]. given the multi-scale nature of our framework, we refer to it as cs multi-scale video, or cs-muvi for short. we note that a short version of this paper appeared at the ieee international conference on computational photography and the computational optical sensing and imaging meeting. this paper contains an improved recovery algorithm, a more detailed performance analysis, and a larger number of experimental results. most importantly, we show, to the best of our knowledge, the first high-quality video recovery results from real data obtained with a laboratory spc; see fig. [ fig : signalmodels ] for corresponding results. suppose that we have a signal acquisition system characterized by where is the signal to be sensed and is the measurement obtained using the matrix . the entries of the measurement matrix are usually restricted to . the spc leverages the high operating speed of the dmd, i.e.
, the mirror s orientation patterns on the dmd can be reprogrammed at khz rates .the dmd s operating speed defines the measurement bandwidth ( i.e. , the number of measurements / second ) , which is one of the key factors that define the achievable spatial and temporal resolutions .there have been many recovery algorithms proposed for video cs using the spc .wakin et al . use 3-dimensional wavelets as a sparsifying basis for videos and recover all frames of the video jointly under this prior . unlike images , videos are not well represented using wavelets since they have additional temporal properties , like brightness constancy , that are better represented using motion - flow models .park and wakin analyzed the coupling between spatial and temporal bandwidths of a video .in particular , they argue that reducing the spatial resolution of a scene implicitly reduces its temporal bandwidth and hence , lowers the error caused by the static scene assumption .this builds the foundation for the multi - scale sensing and recovery approach proposed in , where several compressive measurements are acquired at multiple scales for each video frame . the recovered video at coarse scales ( low spatial resolution )is used to estimate motion , which is then used to boost the recovery at finer scales ( high spatial resolution ) . other scene models and recovery algorithms for video cs with the spc use block - based models , sparse frame - to - frame residuals , linear dynamical systems , and low rank plus sparse models . to the best of our knowledge ,all of them report results only on synthetic data and use the assumption that each frame of the video remains static for a certain duration of time ( typically of a second)an assumption that is violated when operating with an actual spc .in contrast to smcs that use sensors having low - spatial resolution and seek to spatially super - resolve images and videos , temporal multiplexing cameras ( tmcs ) have low frame - rate sensors and seek to temporally super - resolve videos .in particular , tmcs use slms for temporal multiplexing of videos and sensors with high spatial resolution , such that the intensity observed at each pixel is coded temporally by the slm during each exposure .veeraraghavan et al . showed that periodic scenes could be imaged at very high temporal resolutions by using a global shutter or a `` flutter shutter '' .this idea was extended to non - periodic scenes in where a union - of - subspace models was used to temporally super - resolve the captured scene .reddy et al . proposed the per - pixel compressive camera ( p2c2 ) which extends the flutter shutter idea with per - pixel shuttering .inspired from video compression standards such as mpeg-1 and h.264 , the recovery of videos from the p2c2 camera was achieved using the optical flow between pairs of consecutive frames of the scene .the optical flow between pairs of video frames is estimated using an initial reconstruction of the high frame - rate video using wavelet priors on the individual frames . a second reconstructionis then performed that further enforces the brightness constancy expressions provides by the optical flow fields .the implementation of the recovery procedure described in is tightly coupled to the imaging architecture and prevents its use for smc architectures .nevertheless , the use of optical - flow estimates for video cs recovery inspired the recovery stage of cs - muvi as detailed in sec .[ sec : optical ] .gu et al . 
propose to use the rolling shutter of a cmos sensor to enable higher temporal resolution .the key idea there is to stagger the exposures of each row randomly and use image / video statistics to recover a high - frame rate video .hitomi et al . uses a per - pixel coding , similar to p2c2 , that is implementable in modern cmos sensors with per - pixel electronic shutters ; however , a hallmark of their approach is the use of a highly over - complete dictionary of video patches to recovery the video at high frame rates .this results in highly accurate reconstructions even when brightness constancy the key construct underlying optical flow estimation is violated .llull et al . propose a tmc that uses a translating mask in the sensor plane to achieve temporal multiplexing .this approach avoids the hardware complexity involved with dmds and lcos , and enjoys other benefits including low operational power consumption . in yang et al . , a gaussian mixture model ( gmm ) is used as a signal prior to recovery high - frame rate videos for tmcs ; a hallmark of this approach is that the gmm parameters are not just trained offline but also adapted and tuned in situ during the recovery process .harmany et al . extend coded aperture systems by incorporating a flutter shutter or a coded exposure ; the resuling tmc provides immense flexibility in the choice of measurement matrix .they also show the resulting system provides measurement matrices that satisfy the rip .state - of - the - art video compression methods rely on estimating the motion in the scene , compress a few reference frames , and use the motion vectors that relate the remaining parts of a scene to these reference frames . while this approach is possible in the context of video compression , i.e. , where the algorithm has prior access to the entire video , it is significantly more difficult in the context of compressive sensing . a general strategy to enablethe use of motion flow - based signal models for video cs is to use a two - step approach . in the first step ,an initial estimate of the video is generated by recovering each frame individually using sparse wavelet or gradient priors .the initial estimate is used to derive motion flow between consecutive frames ; this enables a powerful description in terms of relating intensities at pixels across frames . in the second step ,the video is re - estimated but now with the aid of enforcing the extracted motion flow constraints in addition to the measurement constraints .the success of this two step strategy critically depends on the ability to obtain reliable motion estimates , which , in turn , depends on obtaining robust initial estimates in the first step .unfortunately , in the context of smcs , obtaining reliable initial estimates of the frames of the video , in absence of motion knowledge , is inherently hard due to the violation of the static scene model ( recall fig . [fig : static ] ) .the proposed framework , referred to as cs - muvi , enables a robust initial estimate by obtaining the individual frames at a _lower spatial resolution_. this approach has two important benefits towards reducing the violation of the static scene model .first , obtaining the initial estimate at a lower spatial resolution reduces the dimensionality of the video significantly . as a consequence, we can estimate individual frames of the video from _ fewer _ measurements . 
in the context of an smc , this implies a _ smaller _ time window over which these measurements are obtained , and hence , _ reduced _ misfit to the static scene model .second , spatial downsampling naturally reduces the temporal resolution of the video ; this is a consequence of the additional blur due to spatial - downsampling .this implies that the violation of the static scene assumption is naturally _ reduced _ when the video is downsampled . in sec .[ sec : tradeoff ] , we study this strategy in detail and characterize the error in estimating the initial estimates at a lower resolution . specifically , given consecutive measurements from an smc , we are interested in estimating a _ single static _ image at a resolution of pixels .note that varying , which denotes the window length , varies both the spatial resolution of the recovered frame ( since it has a resolution of ) as well as its temporal resolution ( since the acquisition time is proportional to ) .we analyze various sources of error in the recovered low - resolution frame .this analysis provides conditions for stable recovery of the initial estimates that leads to the design of measurement matrices in sec .[ sec : designmeas ] .the proposed cs - muvi framework for video cs relies on three steps .first , we recover a low - resolution video by reconstruction each frame of the video , individually , using simple least - squares techniques .second , this low - resolution video is used to obtain motion estimates between frames .third , we recover a high - resolution video by enforcing a spatio - temporal gradient prior , the constraints induced by the compressive measurements as well as the constraints due to motion estimates . fig . [fig : opflowdiag ] provides an overview schematic of these steps .we now study the recovery error that results from the static - scene assumption while sensing a time - varying scene ( video ) with an smc .we also identify a fundamental trade - off underlying a multi - scale recovery procedure , which is used in sec .[ sec : designmeas ] to identify novel sensing matrices that minimize the spatio - temporal recovery errors . since the spc is the most challenging smc architecture as it only provides a single pixel sensor , we solely focus on the spc in the following .generalizing our results to other smc architectures with more than one sensor is straightforward .the compressive measurements taken by a single - pixel smc at the sample instants can be modeled as where is the total number of acquired samples , is the measurement vector , represents measurement noise , and is the scene ( or frame ) at sample instant . in the remainder of the paper , we assume that the 2-dimensional scene consists of spatial pixels , which , when vectorized , results in the vector of dimension . we also use the notation to represent the vector consisting of a window of successive compressive measurements ( samples ) , i.e , = \left [ \begin{array}{c } { \ensuremath{\left\langle\phi_1,{{\bf x}}_1\right\rangle } } + e_1 \\ { \ensuremath{\left\langle\phi_2,{{\bf x}}_2\right\rangle } } + e_2\\ \vdots \\ { \ensuremath{\left\langle\phi_w,{{\bf x}}_w\right\rangle } } + e_w \end{array } \right].\end{aligned}\ ] ]suppose that we rewrite our ( time - varying ) scene for a window of consecutive sample instants as follows : here , is the static component ( assumed to be invariant for the considered window of samples ) , and is the error at sample instant caused by the static - scene assumption . 
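in code, this measurement model is a single inner product per sample instant; the sketch below forms the windowed measurement vector for a toy scene. the scene content, resolution, window length, and noise level are arbitrary choices made for illustration and are not those used in the experiments reported later.

```python
import numpy as np

rng = np.random.default_rng(0)
n_side, W = 32, 64                    # spatial resolution and window length (illustrative)
n = n_side * n_side

def scene(t):                         # toy time-varying scene: a square moving one pixel every 16 samples
    img = np.zeros((n_side, n_side))
    pos = (t // 16) % (n_side - 8)
    img[12:20, pos:pos + 8] = 1.0
    return img.reshape(-1)

Phi = rng.integers(0, 2, size=(W, n)).astype(float)        # one binary dmd pattern per sample instant
noise = 0.01 * rng.standard_normal(W)                      # measurement noise e_t
y_W = np.array([Phi[t] @ scene(t) for t in range(W)]) + noise
print("windowed measurement vector:", y_W.shape)           # (W,) -- one number per sample instant
```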
by defining ,we can rewrite as where is the sensing matrix whose -th row corresponds to the transposed measurement vector .we now investigate the error caused by spatial downsampling of the static component in . to this end , let be the down - sampled static component , and assume with . by defining a linear up - sampling and down - sampling operator as and , respectively , we can rewrite as follows : since .inspection of reveals three sources of error in the cs measurements of the low - resolution static scene : the _ spatial - approximation error _ caused by down - sampling , the _ temporal - approximation error _ caused by assuming the scene remains static for samples , and the _ measurement error _ note that when , the matrix has at least as many rows as columns and hence , we can get an estimate of .we next study the error induced by this least - squares estimate in terms of the relative contributions of the spatial - approximation and temporal - approximation terms . in order to analyze the trade - off that arises from the static - scene assumption and the down - sampling procedure ,we consider the scenario where the effective matrix is of dimension with ; that is , we aggregate at least as many compressive samples as the down - sampled spatial resolution . if has full ( column ) rank , then we can obtain a least - squares ( ls ) estimate of the low - resolution static scene from as where denotes the pseudo inverse . fromwe observe the following facts : the window length controls a trade - off between the spatial - approximation error and the error induced by assuming a static scene , and the least squares ( ls ) estimator matrix ( potentially ) amplifies all three error sources . the spatial approximation error and the temporal approximation error are both functions of the window length .we now show that carefully selecting minimizes the combined spatial and temporal error in the low - resolution estimate . a close inspection of shows that for , the temporal - approximation error is zero , since the static component is able to perfectly represent the scene at each sample instant .as increases , the temporal - approximation error increases for time - varying scenes ; simultaneously , increasing reduces the error caused by down - sampling ( see fig . [fig : spacevtimea ] ) . for is no spatial approximation error ( as long as is invertible ) .note that characterizing both errors analytically is , in general , difficult as they heavily depend on the on the scene under consideration .figure [ fig : spacevtime ] illustrates the trade - off controlled by and the individual spatial and temporal approximation errors , characterized in terms of the recovery signal - to - noise - ratio ( snr ) .the figure highlights our key observation that there is an optimal window length for which the total recovery snr is maximized .in particular , we see from fig .[ fig : spacevtimeb ] that the optimum window length increases ( i.e. , towards higher spatial resolution ) when the scene changes slowly ; in contrary , when the scene changes rapidly , the window length ( and consequently , the spatial resolution ) should be low . 
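a minimal sketch of this least-squares preview is given below, using the same kind of toy moving scene. the block-replication upsampling operator, the window lengths, and the comparison of random binary patterns against block-replicated hadamard patterns ( the low-resolution component of the dual-scale matrices introduced below ) are all illustrative assumptions; the point is simply that the conditioning of the effective matrix governs how strongly the three error sources are amplified.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n_side = 32
n = n_side * n_side

def scene(t):                       # toy time-varying scene: a square moving one pixel every 16 samples
    img = np.zeros((n_side, n_side))
    pos = (t // 16) % (n_side - 8)
    img[12:20, pos:pos + 8] = 1.0
    return img.reshape(-1)

def upsampler(n_low_side):          # replicate each low-res pixel over its b x b block
    b = n_side // n_low_side
    U = np.zeros((n, n_low_side * n_low_side))
    for i in range(n_side):
        for j in range(n_side):
            U[i * n_side + j, (i // b) * n_low_side + (j // b)] = 1.0
    return U, b

for n_low_side in (4, 8, 16):       # window length W = n_low_side^2 trades space against time
    W = n_low_side * n_low_side
    U, b = upsampler(n_low_side)
    x_ref = np.mean([scene(t) for t in range(W)], axis=0) @ U / b**2   # window-averaged, downsampled scene
    Phi_rand = rng.integers(0, 2, size=(W, n)).astype(float)           # random binary patterns
    Phi_had = hadamard(W).astype(float) @ U.T                          # block-replicated hadamard patterns
    for name, Phi in [("random", Phi_rand), ("hadamard", Phi_had)]:
        y = np.array([Phi[t] @ scene(t) for t in range(W)])            # samples of the moving scene
        x_hat = np.linalg.pinv(Phi @ U) @ y                            # least-squares preview estimate
        err = np.linalg.norm(x_hat - x_ref) / np.linalg.norm(x_ref)
        print(f"preview {n_low_side:2d}x{n_low_side:2d} ({name:8s}): "
              f"cond(phi u) = {np.linalg.cond(Phi @ U):9.1f}, relative error = {err:.3f}")
# with random patterns the matrix (phi u) is typically poorly conditioned,
# which is the motivation for the dual-scale sensing matrices discussed next.
```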
since , the optimal window length dictates the resolution for which accurate low - resolution motion estimates can be obtainedhence , the optimal window length depends on the scene to be acquired , the rate of which measurements can be acquired , and the sensing matrix itself .in order to bootstrap cs - muvi , a low - resolution estimate of the scene is required .we next show that carefully designing the cs sensing matrix enables us to compute high - quality low - resolution scene estimates at low complexity , which improves the performance of video recovery .the choice of the sensing matrix and the upsampling operator are critical to arrive at a high - quality estimate of the low - resolution image .indeed , if the effective matrix is ill - conditioned , then application of the pseudo - inverse amplifies all three sources of errors in , eventually resulting in a poor estimate . forvirtually all sensing matrices commonly used in cs , such as i.i.d .( sub-)gaussian matrices , as well as sub - sampled fourier or hadamard matrices , right multiplying them with an upsampling operator often results in an ill - conditioned matrix or even a rank - deficient matrix .hence , well - established cs matrices are a poor choice for obtaining a high - quality low - resolution preview .figures [ fig : lenal1_compmotion](a ) and [ fig : lenal1_compmotion](b ) show recovery results for nave recovery using ( p1 ) and least - squares ( ls ) , respectively , using a random sensing matrix .we immediately see that both recovery methods result in poor performance , even for large window sizes or for a small amount of motion . and -based recovery algorithms for varying object motion .* the underlying scene corresponds to translating cross over a static background of lena .the speed of translation of the cross is varied across different rows .comparison between ( a ) -norm recovery , ( b ) ls recovery using a random matrix , and ( c ) ls recovery using a dual - scale sensing ( dss ) matrix for various relative speeds ( of the cross ) and window lengths . ] in order to achieve good cs recovery performance _ and _ have minimum noise enhancement when computing a low - resolution preview according to , we propose a novel class of sensing matrices , referred to as _ dual - scale sensing _ ( dss ) matrices .these matrices will ( i ) satisfy the rip to enable cs and ( ii ) remain well - conditioned when right - multiplied by a given up - sampling operator .such a dss matrix enables robust low - resolution as shown in fig .[ fig : lenal1_compmotion](c ) .we next discuss the details .in this section , we detail a particular design that is suited for smc architectures . in smc architectures , we are constrained in the choice of the entries of the sensing matrix .practically , the dmd limits us to matrices having binary - valued entries ( e.g. , ) if we are interested in the highest possible measurement rate .we propose the matrix to satisfy , where is a hadamard matrix is chosen such that a hadamard matrix exists . ] and is a predefined up - sampling operator . recall from section [ sec : hadamard ] , hadamard matrices have the following advantages : they have orthogonal columns , they exhibit optimal snr properties over matrices restricted to entries , and applying the ( forward and inverse ) hadamard transform requires very low computational complexity ( i.e. 
, the same complexity as a fast fourier transform ) .we now show the construction of a such a dss matrix ( see fig .[ fig : meas_design](a ) ) .a simple way is to start with a hadamard matrix and to write the cs matrix as where is a down - sampling matrix satisfying , and is an auxiliary matrix that obeys the following constraints : the entries of are , the matrix has good cs recovery properties ( e.g. , satisfies the rip ) , and should be chosen such that .note that an easy way to ensure that be is to interpret as sign flips of the hadamard matrix .note that one could chose to be an all - zeros matrix ; this choice , however , results in a sensing matrix having poor cs recovery properties .in particular , such a matrix would inhibit the recovery of high spatial frequencies .choosing random entries in such that ( i.e. , by using random patterns of high spatial frequency ) provides excellent performance .to arrive at an efficient implementation of cs - muvi , we additionally want to avoid the storage of an entire matrix .to this end , we generate each row of as follows : associate each row vector to an image of the scene , partition the scene into blocks of size , and associate an -dimensional vector with each block .we can now use the same vector for each block and choose such that the full matrix satisfies .we also permute the columns of the hadamard matrix to achieve better incoherence with the sparsifying bases used in sec .[ sec : optical ] ( see fig . [fig : meas_design](b ) for the details ) . pixels .preview frames are obtained at low computational cost using an inverse hadamard transform , which opens up a variety of new real - time applications for video cs . ]the use of hadamard matrices for the low - resolution part in the proposed dss matrices has an additional benefit .hadamard matrices have fast inverse transforms , which can significantly speed up the recovery of the low - resolution preview frames .such a `` fast '' dss matrix has the key capability of generating a high - quality _ preview _ of the scene ( see fig .[ fig : preview ] ) with very low computational complexity ; this is beneficial for video cs as it allows one to easily and quickly extract an estimate of the scene motion .the motion estimate can then be used to recover the video at its full resolution ( see sec .[ sec : optical ] ) .in addition to this , the use of fast dss matrices can be beneficial in various other ways , including ( but not limited to ) : [ [ digital - viewfinder ] ] digital viewfinder + + + + + + + + + + + + + + + + + + conventional smc architectures do not enable the observation of the scene until cs recovery is performed . due to the high computational complexity of most existing cs recovery algorithms , there is typically a large latency between the acquisition of a scene and its observation .fast dss matrices offer an _ instantaneous _ visualization of the scene ,i.e. 
, they can provide a real - time digital viewfinder ; this capability substantially simplifies the setup of an smc in practice .[ [ adaptive - sensing ] ] adaptive sensing + + + + + + + + + + + + + + + + the immediate knowledge of the scene even at a low resolution is a key enabler for adaptive sensing strategies .for example , one may seek to extract the changes that occur in a scene from one frame to the next or track the locations of moving objects , while avoiding the typically high latency caused by computationally complex cs recovery algorithms .crucial to the design of the dss matrix is the selection of the parameter . while is often scene - specific , a good rule of thumb is as follows : given an scene , choose such that the motion of objects is less than pixels in the amount of time required to get measurements .basically , this would serve to have motion in the preview images restricted to 1 pixel ( at the resolution of the preview image ) .we next detail the second part of cs - muvi , where we obtain the video at a high spatial resolution by estimating and enforcing motion estimates between frames .thanks to the preview mode , we can estimate the optical flow between any two ( low - resolution ) frames and . for cs - muvi , we compute optical - flow estimates at full spatial resolution between pairs of upsampled preview frames . for the results in the paper , we used `` bicubic '' interpolation to upsample the frames .this approach turns out to result in more accurate optical - flow estimates compared to an approach that first estimates the optical flow at low resolution followed by upsampling of the optical flow .let be the upsampled preview frame .the optical flow constraints between two frames , and , can be written as where denotes the pixel in the plane of , and and correspond to the translation of the pixel ( ) between frame and ( see ) . in practice, the estimated optical flow may contain sub - pixel translations , i.e. , and are not necessarily integer valued .if this is the case , then we approximate as a linear combination of its four closest neighboring pixels where denotes rounding towards and the weights are chosen according to the location within the four neighboring pixels . in order to obtain robustness against occlusions, we enforce consistency between the forward and backward optical flows ; specifically , we discard optical flow constraints at pixels where the sum of the forward and backward flow causes a displacement greater than one pixel . before we detail the individual steps of the cs - muvi video - recovery procedure , it is important to specify the rate of the frames to be recovered . when sensing scenes with smc architectures , there is no obvious notion of frame rate .one notion of the frame rate comes from the measurement rate which in the case of the spc is the operating rate of the dmd .however , this rate is extremely high and leads to videos whose dimensions are too high to allow feasible computations .further , each frame would be associated with a _single _ measurement which leads to a severely ill - conditioned inverse problem .a potential definition comes from the work of park and wakin who argue that the frame rate is not necessarily defined by the measurement rate .specifically , the spatial bandwidth of the video often places an upper - bound on its temporal bandwidth as well . 
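the sub-pixel handling described above ( the approximation of a flow-displaced pixel by its four closest neighbours ) boils down to bilinear interpolation weights. before specifying the frame rate and the final recovery program, the sketch below shows how one such brightness-constancy constraint could be assembled; the indexing convention and the example flow values are assumptions made for illustration, not the exact implementation used in cs-muvi.

```python
import numpy as np

def flow_constraint_row(x, y, u, v, n_side):
    """row of a linear constraint expressing x_i(x, y) = x_j(x + u, y + v) for
    sub-pixel (u, v), using bilinear weights over the 4 nearest neighbours.
    returns (indices into frame j, weights); conventions are illustrative."""
    xf, yf = x + u, y + v
    x0, y0 = int(np.floor(xf)), int(np.floor(yf))   # assumed rounding convention
    ax, ay = xf - x0, yf - y0
    neighbours = [(x0,     y0,     (1 - ax) * (1 - ay)),
                  (x0 + 1, y0,     ax       * (1 - ay)),
                  (x0,     y0 + 1, (1 - ax) * ay),
                  (x0 + 1, y0 + 1, ax       * ay)]
    idx = [xi * n_side + yi for xi, yi, _ in neighbours]
    w = [wi for _, _, wi in neighbours]
    return idx, w

idx, w = flow_constraint_row(x=10, y=7, u=0.4, v=-1.3, n_side=64)
print(list(zip(idx, np.round(w, 3))), "sum of weights =", sum(w))   # weights sum to 1
```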
intuitively, the idea here is that the larger the pixel size (or the smaller the spatial bandwidth), the greater the motion needed to register a change in the scene. hence, given a scene motion in terms of pixels/second, a suitable notion of frame rate is one that ensures sub-pixel motion between consecutive frames. this notion is more meaningful since it intuitively weaves the observability of the motion into the definition of the frame rate. under this definition, we wish to find the largest window size such that there is virtually no motion at full resolution ( ). in practice, an estimate of can be obtained by analyzing the preview frames. hence, given a total number of compressive measurements, we ultimately recover full-resolution frames. note that a smaller value of would decrease the amount of motion associated with each recovered frame; this would, however, increase the computational complexity (and memory requirements) substantially as the number of full-resolution frames to be recovered increases. finally, the choice of is inherently scene-specific; scenes with fast-moving, highly textured objects require a smaller value than scenes with slow-moving, smooth objects. the choice could potentially be made time-varying as well and derived from the preview; this showcases the versatility of having the preview and is an important avenue for future research. we are now ready to detail the final stage of cs-muvi. assume that is chosen such that there is little to no motion associated with each preview frame. next, associate a preview frame with a high-resolution frame by grouping compressive measurements in the immediate vicinity of the frame (since ). then, compute the optical flow between successive (up-scaled) preview frames. we can now recover the high-resolution video frames as follows. we enforce sparse spatio-temporal gradients using the 3d total variation (tv) norm. we furthermore consider the following two constraints: consistency with the acquired cs measurements, i.e., , where maps the sample index to the associated frame index, and estimated optical-flow constraints between consecutive frames. together, we arrive at the following convex optimization problem: \begin{aligned} (\text{pv})\quad \underset{\{{\bf x}_i\}}{\text{minimize}}\ \ & \left\|{\bf x}\right\|_{\text{3d-tv}} \\ \text{subject to } & \left\| {\ensuremath{\left\langle\phi_t,{{\bf x}}_{i(t)}\right\rangle}} - y_t \right\|_2 \le \epsilon_1, \\ & \left\| {{\bf x}}_i(x,y) - {{\bf x}}_j(x+u_x, y+v_y) \right\|_2 \le \epsilon_2, \end{aligned} which can be solved using standard convex-optimization techniques. the specific technique we employed combines variable splitting with alm/admm. the parameters and are indicative of the measurement noise level and of the inaccuracies in the brightness constancy, respectively. captures all sources of measurement noise, including photon, dark, and read noise. photon noise is signal dependent. however, in an spc, each measurement is the sum of a random selection of half the micromirrors on the dmd.
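As a sketch of how the optical-flow constraints entering the recovery problem above can be assembled (illustrative code, not the authors' implementation), the following computes brightness-constancy residuals between two frames using the four-neighbour bilinear weights for sub-pixel flow and drops constraints that fail the forward–backward consistency test; these residuals are what the second constraint bounds.

```python
import numpy as np

def bilinear_neighbours(ux, uy):
    # integer neighbours of a sub-pixel displacement and their bilinear weights
    fx, fy = int(np.floor(ux)), int(np.floor(uy))
    ax, ay = ux - fx, uy - fy
    return [((fx,     fy),     (1 - ax) * (1 - ay)),
            ((fx + 1, fy),     ax * (1 - ay)),
            ((fx,     fy + 1), (1 - ax) * ay),
            ((fx + 1, fy + 1), ax * ay)]

def flow_residuals(xi, xj, flow, back_flow=None, tol=1.0):
    # xi, xj: frames i and j; flow[y, x] = (ux, uy) maps pixel (x, y) of xi
    # into xj; a constraint is discarded when the forward plus backward flow
    # displaces the pixel by more than `tol` pixels (occlusion handling)
    h, w = xi.shape
    res = []
    for y in range(h):
        for x in range(w):
            ux, uy = flow[y, x]
            if back_flow is not None:
                ty = min(max(int(round(y + uy)), 0), h - 1)
                tx = min(max(int(round(x + ux)), 0), w - 1)
                bx, by = back_flow[ty, tx]
                if np.hypot(ux + bx, uy + by) > tol:
                    continue
            val, inside = 0.0, True
            for (dx, dy), wgt in bilinear_neighbours(ux, uy):
                xx, yy = x + dx, y + dy
                if 0 <= xx < w and 0 <= yy < h:
                    val += wgt * xj[yy, xx]
                else:
                    inside = False
            if inside:
                res.append(xi[y, x] - val)
    return np.asarray(res)
```

Measurement consistency (the first constraint) is simply the per-measurement residual between the coded projection and the observed value; both sets of residuals are then handed to the ALM/ADMM solver.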
for most natural scenes, we can expect the measurements to be tightly clustered to be more specific , around one - half of the total light - level of the scene .hence , the photon noise will have nearly the same variance across the measurements .hence , for the spc , all sources of measurement noise can be clubbed into one parameter which is set via a calibration process .setting is based on the thresholds used in detecting violation of brightness constancy when estimating brightness constancy . for the results in this paper, is set to , where is the total number of pixel pairs for which we enforce brightness constancy .in this section , we validate the performance and capabilities of the cs - muvi framework using simulations .results on real data obtained from our spc lab prototype are presented in sec .[ sec : real ] .all simulation results were generated from high - speed videos having a spatial resolution of pixels .the preview videos have a spatial resolution of pixels with ( i.e. , ) .we assume an spc architecture as described in with parameters chosen to mimic operation of our lab setup .noise was added to the compressive measurements using an i.i.d .gaussian noise model such that the resulting snr was 60db .optical - flow estimates were extracted using the method described in .the computation time of cs - muvi is dominated by both optical flow estimation and solving .typical runtimes for the entire algorithm are 23 hours on an off - the - shelf quad - core cpu for a video of resolution pixels with frames .however , computation of the low - resolution preview can be done almost instantaneously .[ [ video - sequences - from - a - high - speed - camera ] ] video sequences from a high - speed camera + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the results shown in figs .[ fig : carcar ] and [ fig : cardmonster ] correspond to scenes acquired by a high - speed ( hs ) video camera operating at 250 frames per second .both videos show complex ( and fast ) movement of large objects as well as severe occlusions . for both sequences ,we emulate an spc operating at compressive measurements per second . for each video, we used frames of the hs camera to obtain a total of compressive measurements .the final recovered video sequences consist of frames .both recovered videos demonstrate the effectiveness of cs - muvi .[ [ comparison - with - the - p2c2-algorithm ] ] comparison with the p2c2 algorithm + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in the p2c2 camera , a two - step recovery algorithm similar to cs - muvi is presented .this algorithm is near - identical to cs - muvi except that the measurement model does not use dss measurement matrices ; hence , an initial recovery using wavelet sparse models is used to obtain an initial estimate that plays the role of the preview frames .figure [ fig : comparisonwithp2c2 ] presents the results of both cs - muvi and the recovery algorithm for the p2c2 camera , with the same number of measurements / compression level .it should be noted that the p2c2 camera algorithm was developed for temporal multiplexing cameras and _ not _ for smc architectures .nevertheless , we observe from figs .[ fig : comparisonwithp2c2 ] ( a ) and ( d ) that nave -norm recovery delivers significantly worse initial estimates than the preview mode of cs - muvi .the advantage of cs - muvi for smc architectures is also visible in the corresponding optical - flow estimates ( see figs . 
[fig : comparisonwithp2c2 ] ( b ) and ( e ) ) .the p2c2 recovery algorithm has substantial artifacts , whereas the result of cs - muvi is visually pleasing . in all , this demonstrates the importance of the dss matrix and the ability to robustly obtain a preview of the video .[ [ comparisons - against - single - image - super - resolution ] ] comparisons against single - image super - resolution + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + there has been remarkable progress in single image super - resolution ( sr ) .figure [ fig : sr ] compares cs - muvi to a sparse dictionary - based super - resolution algorithm . from our observations ,the results produced by the super - resolution are comparable to cs - muvi when the upsampling is about . however , in spite of this , the best known results in sr seldom produce meaningful results beyond super - resolution .our proposed technique is in many ways similar to sr except that we obtain multiple coded measurements of the scene and this allows us to obtain higher super - resolution factors at potential loss in temporal resolution . [[ performance - analysis ] ] performance analysis + + + + + + + + + + + + + + + + + + + + finally , we look at quantitative evaluation of cs - muvi for varying compression ratios and input measurement noise level .our metric for performance is reconstruction snr in db defined as follows : where and are the ground truth and estimated video , respectively .the test - data for this is a 250 fps video of vehicles on a highway .a few frames from this video are shown in fig .[ fig : performance](a ) .we establish a baseline for these results using two different algorithms .first , we consider `` nyquist cameras '' that blindly tradeoff spatial and temporal resolution to achieve the desired compression .for example , at a compression factor of , a nyquist camera could deliver full - resolution at -th the temporal resolution or deliver -th the spatial resolution at -th the temporal resolution , and so on .this spatio - temporal trade off is feasible in most traditional imagers by binning pixels at readout .second , we consider videos recovered using nave frame - to - frame wavelet priors . for such reconstructions , we optimized over different window lengths of measurements associated with each recovered frame and chose the setting that provided the best results. figure [ fig : performance](b , c ) show reconstruction snr for cs - muvi and the two baseline algorithms for varying levels of compression . at high compression ratios ,the performance of cs - muvi suffers from poor optical - flow estimates . finally , in fig .[ fig : performance](d ) , we present performance for varying level of measurement or input noise . again , as before , for high noise levels , optical flow estimates suffer leading to poorer reconstructions . in all, cs - muvi delivers high quality reconstructions for a wide range of compression and noise levels . + ( a ) hand : simple motion + + ( b ) hand : complex motion + + ( c ) windmill + + ( d ) pendulumwe now present video recovery results on real data from our spc lab prototype . [[ hardware - prototype ] ] hardware prototype + + + + + + + + + + + + + + + + + + the spc setup we used to image real scenes is comprised of a dmd operating at 10,000 mirror - flips per second .the real measured data was acquired using a swir photodetector for the scenes involving the pendulum and a visible photodetector for the rest ( the hand and windmill scene ) . 
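For reference, the reconstruction-SNR metric used in the performance analysis above, written out under the usual definition (the paper's exact expression did not survive extraction, so this form is an assumption):

```python
import numpy as np

def reconstruction_snr_db(x_true, x_est):
    # 10 log10( ||x_true||^2 / ||x_true - x_est||^2 ) over the whole video cube
    x_true = np.asarray(x_true, dtype=float)
    x_est = np.asarray(x_est, dtype=float)
    return 10.0 * np.log10(np.sum(x_true ** 2) / np.sum((x_true - x_est) ** 2))
```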
while the dmd we used is capable of imaging the scene at a xga resolution ( i.e. , 1024 pixels ), we operate it at a lower spatial resolution mainly , for two reasons .first , recall that the measurement bandwidth of an spc is determined by the speed of operation of the dmd . in our case ,this was 10,000 measurements per second .even if we were to obtain a compression of , then our device would be similar to a conventional sampler whose measurement bandwidth is measurements / sec which would result in a video of approximately pixels at 30 frames / sec .hence , we operate it at a spatial resolution of pixels by grouping pixels together on the dmd as one super - pixel .second , the patterns displayed on the dmd were required to be preloading onto the memory board attached to dmd via a usb port . with limited memory , typically 96 gb , any reasonable temporal resolution with xga resolution would be infeasible on our current spc prototype .we emphasize that both of these are limitations due to the used prototype and not of the underlying algorithms .recent , commercial dmds can operate at least -to- orders of magnitude faster and the increase in measurement bandwidth would enable sensing at higher spatial and temporal resolutions . [ [ gallery - of - real - data - results ] ] gallery of real data results + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ fig : mosaic ] shows a few example reconstructions from our spc lab setup .each video is approximately seconds long and correspond to measurements from the spc . with ,all previews ( the top row in each sub - image in [ fig : mosaic ] ) were each of size pixels .videos were recovered with frames .the supplemental material has videos for each of the results .[ [ role - of - different - signal - priors ] ] role of different signal priors + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figures [ fig : signalmodels ] , [ fig : priors ] , and [ fig : with_without_of ] show the performance of three different signal priors on the same set of measurements . in fig .[ fig : signalmodels ] , we compare wavelet sparsity of the individual frames , 3d total variation , and cs - muvi , which uses optical flow constraints in addition to the 3d total variation model .cs - muvi delivers superior performance in recovery of the spatial statistics ( the textures on the individual frames ) as well as temporal statistics ( the textures on temporal slices ) . in fig .[ fig : priors ] , we look at specific frames across a wide gamut of reconstructions where the target motion is very high . again, we observe that reconstructions from cs - muvi is not just free from artifacts , it also resolves spatial features better ( ring on the hand , palm lines , etc . ) . finally , for completeness , in fig .[ fig : with_without_of ] , we vary the number of measurements associated with each frame for both 3d total variation and cs - muvi .predictably , while the performance of 3d total variation is poor for fast moving objects , cs - muvi delivers high - quality reconstructions across a wide range of target motion .[ [ achieved - spatial - resolution . ] ] achieved spatial resolution .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + in fig . [fig : reschart ] and fig .[ fig : rescmp ] , note that a smc seeks to super - resolve a low resolution sensor using optical coding and spatial light modulators .hence , it is of utmost importance to verify if the device actually delivers on the promised improvement in spatial resolution . 
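The bandwidth argument above reduces to simple arithmetic; the sketch below uses illustrative numbers (the 10,000 measurements/s DMD rate from the text and a nominal compression factor), not figures quoted from the paper.

```python
def nyquist_pixels_per_frame(meas_rate_hz, compression, fps=30.0):
    # a conventional sampler with the same information rate could deliver
    # roughly meas_rate * compression / fps pixels per frame
    return meas_rate_hz * compression / fps

# e.g. 10,000 measurements/s with a 10x compression at 30 frames/s gives
# about 3,300 pixels per frame, i.e. roughly a 57 x 57 image -- far below
# the dmd's native xga resolution, motivating the lower operating resolution
print(nyquist_pixels_per_frame(10_000, 10))
```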
in fig .[ fig : reschart ] , we present reconstruction results on a resolution chart .the resolution chart was translated so as to enter and exit the field - of - view of the spc within 8 seconds providing a total of measurements .a video with frames was recovered from these measurements for an overall compression ratio of .[ fig : reschart ] indicates that the cs - muvi recovers spatial detail to a per - pixel precision validating the claims of achieved compression . for this result, we regularized the optical flow to be translational . specifically , after estimating the flow between the preview frames , we used the median of the flow - vectors as a global translational flow . in fig .[ fig : rescmp ] , we characterize the spatial resolution achieved by cs - muvi by comparing it to the image of a static scene obtained using pure hadamard multiplexing . as expected , we observe that the preview image is the same resolution as the static image downsampled .frames recovered from cs - muvi exhibit sharper texture than a downsampling of the static frame , but slightly worse than the full - resolution static image .note that this scene contained complex non - rigid and fast motion .[ [ variations - in - speed - illumination - and - size ] ] variations in speed , illumination , and size + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + finally , we look at performance on real data for varying levels of scene illumination , object speed and size . for illumination ( fig . [fig : illum ] ) , we use the spc measurement level as a guide to the amount of scene illumination . for object speed ( fig .[ fig : speedo ] ) , we instead slow down the dmd since it indirectly provides finer control on the apparent speed of the object . for size ( fig .[ fig : size ] ) , we vary the size of the moving target . in all cases ,we show the recovered frame corresponding to the object moving at the fastest speed .the performance of cs - muvi degrades gracefully across all variations .the interested reader is referred to supplemental material for videos of these results .compressive measurements were obtained at a dmd resolution of . in each case , we show multiple reconstructions with different number of compressive measurements associated with each frame . that is , in each instance , the number of recovered frame is chosen to satisfy the target value .( a ) reconstructions without optical flow constraints .the top row shows the pendulum at one end of its swing where it is nearly stationary .the bottom row shows the pendulum when it is moving the fastest . as expected ,increasing the number of measurements per frame , , increases the motion blur significantly .( b ) in contrast , use of optical flow preserves the quality of results .the visual quality peaks at ( see supplemental videos ) . ] . in each row, we show frames from the recovered video as well as its xt slice in the color coded box in the last column . ]downsampling but slightly worse than the full resolution image . ][ [ summary ] ] summary + + + + + + + the promise of an smc is to deliver high spatial resolution images and videos from a low - resolution sensor .the most extreme form of such smcs is the spc which poses a single photodetector or a sensor with no resolution by itself . 
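The translational regularization of the flow used for the resolution-chart experiment above amounts to replacing the dense field with its per-component median; a one-line sketch (names illustrative):

```python
import numpy as np

def global_translation(flow):
    # flow has shape (h, w, 2); the median over all flow vectors is robust to
    # outliers and gives a single (ux, uy) translation for the whole frame
    return np.median(flow.reshape(-1, 2), axis=0)
```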
in this paper , we demonstrate for the very first time on real data successful video recovery at super - resolution for fast - moving scenes .this result has important implications for regimes where high - resolution sensors are prohibitively expensive .a example of this is imaging in swir ; to this end , we show results using a spc with a photodetector tuned to this spectral band . at the heart of our proposed framework is the design of a novel class of sensing matrices and an optical - flow based video reconstruction algorithm .in particular , we have proposed dual - scale sensing ( dss ) matrices that exhibit no noise enhancement when performing least - squares estimation at low spatial resolution and preserve information about high spatial frequencies .we have developed a dss matrix having a fast transform , which enables us to compute instantaneous _ preview _ images of the scene at low cost .the preview computation supports a large number of novel applications for smc - based devices , such as providing a digital viewfinder , enabling human - camera interaction , or triggering adaptive sensing strategies .[ [ limitations ] ] limitations + + + + + + + + + + + since cs - muvi relies on optical - flow estimates obtained from low - resolution images , it can fail to recover small objects with rapid motion . more specifically , moving objects that are of sub - pixel size in the preview mode are lost .figure [ fig : carcar ] shows an example of this limitation : the cars are moved using fine strings , which are visible in fig .[ fig : carcar](a ) but not in fig .[ fig : carcar](b ) .increasing the spatial resolution of the preview images eliminates this problem at the cost of more motion blur . to avoid these limitations altogether, one must increase the sampling rate of the smc .in addition , reducing the complexity of solving ( pv ) is of paramount importance for practical implementations of cs - muvi .[ [ faster - implementations ] ] faster implementations + + + + + + + + + + + + + + + + + + + + + + current implementation of cs - muvi take in the order of hours for high - resolution videos with a large number of frames .this large run - time can be attributed to the dss matrix lacking a fast transform as well as the inherent complexity associated with high - resolution signals .faster implementations of the recovery algorithm is an interesting research directions .[ [ multi - scale - preview ] ] multi - scale preview + + + + + + + + + + + + + + + + + + + a drawback of our approach is the need to specify the resolution at which preview frames are recovered ; this requires prior knowledge of object speed . 
an important direction for future work is to relax this requirement via the construction of multi - scale sensing matrices that go beyond the dss matrices proposed here .the recently proposed sum - to - one ( short stone ) transform provides such a multi - scale sensing matrix .specifically , the stone transform is a carefully designed hadamard transform that remains a hadamard transform of a lower - resolution when downsampled .using the stone transform in place of the dss matrix could potentially provide previews of various spatial resolutions .[ [ multi - frame - optical - flow ] ] multi - frame optical flow + + + + + + + + + + + + + + + + + + + + + + + + the majority of the artifacts in the reconstructions stem from inaccurate optical - flow estimates a result of residual noise in the preview images .it is worth noting , however , that we are using an off - the - shelf optical - flow estimation algorithm ; such an approach ignores the continuity of motion across _multiple _ frames .we envision significant performance improvements if we use multi - frame optical - flow estimation .such an approach could potentially alleviate some of the challenges faced in pairwise optical flow including the inability to recover precise flow estimates for both slow - moving and fast - moving targets .[ [ towards - high - resolution - imagers ] ] towards high - resolution imagers + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the spatial resolution of an smc is limited by the resolution of the spatial light modulator .commercially available dmds , lcds and lcoss have a spatial resolution of megapixels .an important direction for future research is the design of imaging architectures , signal models and recovery algorithms to obtain videos at this spatial resolution ( and say , 30 fps temporal resolution ) . the key stumbling block for an spc - based approach for solving this is the measurement bandwidth which , for the spc , is limited by the operating rate of dmd .an approach to increasing the measurement rate is by using a multi - pixel architecture .one way to interpret such imagers is to think of each pixel on the sensor as an spc .hence , with the successful demonstrated in this paper , megapixel videos could potentially be achieved with the use of an photodetector array .however , the very high - dimensionality of the recovered videos raises important computational challenges with regards to the use of optical flow - based recovery algorithms .acs was supported by the nsf grant ccf-1117939 .lx , yl and kfk were supported by onr ( n660011114090 ) , darpa kecom ( # 11darpa1055 ) through lockheed martin , and princeton mirthe ( nsf eec # 0540832 ) .rgb was supported by the grants nsf ccf-0431150 , ccf-0728867 , ccf-0926127,ccf-1117939 , aro muri w911nf-09- 1 - 0383 , w911nf-07 - 1 - 0185 , darpa n66001 - 11 - 1 - 4090 , n66001 - 11-c-4092 , n66001 - 08 - 1 - 2065 , onr n00014 - 12 - 1 - 0124 and afosr fa9550 - 09 - 1 - 0432 .
spatial multiplexing cameras ( smcs ) acquire a ( typically static ) scene through a series of coded projections using a spatial light modulator ( e.g. , a digital micro - mirror device ) and a few optical sensors . this approach finds use in imaging applications where full - frame sensors are either too expensive ( e.g. , for short - wave infrared wavelengths ) or unavailable . existing smc systems reconstruct static scenes using techniques from compressive sensing ( cs ) . for videos , however , existing acquisition and recovery methods deliver poor quality . in this paper , we propose the cs multi - scale video ( cs - muvi ) sensing and recovery framework for high - quality video acquisition and recovery using smcs . our framework features novel sensing matrices that enable the efficient computation of a low - resolution video preview , while enabling high - resolution video recovery using convex optimization . to further improve the quality of the reconstructed videos , we extract optical - flow estimates from the low - resolution previews and impose them as constraints in the recovery procedure . we demonstrate the efficacy of our cs - muvi framework for a host of synthetic and real measured smc video data , and we show that high - quality videos can be recovered at roughly compression . video compressive sensing , optical flow , measurement matrix design , spatial multiplexing cameras 68u10 , 68t45
the kinetic theory of gases is a very vast field which successfully explains the irreversible laws of fluid mechanics through a statistical description of a system composed of large number of particles .kinetic theory based method should preserve the basic properties and characteristics of the boltzmann equation and also comply with the principles of non - equilibrium thermodynamics like i ) positive entropy production , ii ) satisfaction of onsager s relation and maximum entropy production principle .non - equilibrium thermodynamics being a phenomenological theory gives the symmetry relationship between kinetic coefficients as well as general structure of equations describing the non - equilibrium phenomenon .the onsager s symmetry relationship is a consequence of microscopic reversibility condition due to the equality of the differential cross sections for direct and time reversed collision processes . for a prescribed irreversible force the actual flux which satisfies onsager s theoryalso maximizes the entropy production .maximum entropy production principle is an additional statement over the second law of thermodynamics telling us that the entropy production is not just positive , but tends to a maximum .research in this area is yet to enter the domain of computational fluid dynamics , publications concerning maximum entropy principle is still in the realm of physics .the solution of the boltzmann equation is in accordance with the principle of maximum entropy production ( mep ) .non - equilibrium thermodynamics provides a tool for checking the correctness of the kinetic theory based solutions .distribution function derived using kinetic theory has to comply with requirements of non - equilibrium thermodynamics like following onsager s principle and maximization of entropy production under constraint imposed due to conservation laws .this section introduces kinetic theory and describes boltzmann equation and its moments .the section also presents maximum entropy production ( mep ) principle and brings out relationship between onsager s variational principle and linearized boltzmann equation .the boltzmann equation in bogoliubov s generalized form is expressed as follows where , the position vector , is the acceleration vector and is the velocity vector of molecules given in , here is number of directions a molecule is allowed to move .the left hand side describes the streaming operation as , thus it expresses advection of molecules written in conservative form . on the right hand side factor is binary or two particle collision , being the ternary or three particle collision and is quaternary or four particle collision . here in the difference in position between the colliding particles is taken into account .consider dilute polyatomic gas with binary collisions , the boltzmann transport equation in such a case describes the transient single particle molecular distribution where is the degree of freedom .an additional internal energy variable is added as polyatomic gas consists of particles with additional degree of freedom required for conservation of total energy instead of translational energy alone .thus a molecule of a polyatomic gas is characterized by a dimensional space given by its position , molecular velocity vector and internal energy .distribution function expresses the probability of finding the molecules in the differential volume dimensional space and the differential volume in phase space is where is and is .] 
of the phase space .the equilibrium or maxwellian distribution function for the polyatomic gas is given by where with as the specific gas constant and is given as where is the specific heat ratio .many polyatomic gases are calorically imperfect i.e. specific heat varies with temperature and in most of the engineering applications translational , rotational and vibrational partition functions contribute to the thermodynamic properties .for such a case distribution function can be represented as the probability of dominant macrostate with specific heat ratio , as ^{-1 } \end{array}\ ] ] where is the degeneracy of the vibrational mode , is the characteristic temperature and is number of vibrational modes .the moment of a function , is defined as hilbert space of functions generated by the inner product the five moments function defined as ^{t } ] , where , is the fluid velocity vector .when we take moments of the boltzmann equation we get the hyperbolic conservation equation .for example with we get euler equations that are set of inviscid compressible coupled hyperbolic conservation equations written as where ^{t } = \left\langle { \rm \boldsymbol{\psi } \ ; } , f_{0 } \right\rangle \equiv \int _ { \mathbb{r}^{+ } } \int _ { \mathbb{r}^{d } } \boldsymbol{\psi } f_{0 } ( \vec{x},\vec{v},\mathbb{i},t)d\vec{v}d\mathbb{i} ] and {\boldsymbol{x},\boldsymbol{j}}=0 ] gives .using the second kkt condition we obtain and the lagrangian can be written as {\boldsymbol{x } } = 0\ ] ] this lagrangian can be recast in a form similar to onsager s variational principle {\boldsymbol{x } } = 0\ ] ] where derived entropy production term for linear irreversible thermodynamics ( lit ) is similar to onsager s dissipative function density for domain in the flux space and coefficients is equivalent of onsager s phenomenological symmetric tensor .onsager variational principle is one of the corner stone of linear non - equilibrium thermodynamics .it states that each flux is a linear homogeneous function of all the forces of the same tensorial order following curie principle such that flux . in isotropic media vanish if forces couple with fluxes of different tensor types ._ for a prescribed irreversible force the actual flux which satisfies onsager s principle also maximizes the entropy production , . _an alternative gyarmati formulation in force space can be written as {\boldsymbol{j } } = 0\ ] ] the derived entropy in terms of thermodynamic force , is similar to dissipation function density in the force space expressed as where is the phenomenological symmetric tensor in the force space .gyarmati formulation also leads to the same conclusion that for a prescribed thermodynamic fluxes the actual irreversible forces maximize entropy production . consider linearized distribution ] which is not the solution of boltzmann equation but it satisfies additive invariants property and produces entropy .martyushev and seleznev proved that the distribution ] where is the rank - d identity invariant tensor , is the coefficient of bulk viscosity expressed as . from the expression of shear stress derived using kinetic theoryit is evident that _ stokes hypothesis is only valid for monatomic gases _ as for otherwise , as .for polyatomic gas the concept of bulk viscosity term will change if elastic collisions are included e.g. 
when elastic and inelastic collision terms are of the same order eucken correction to heat transfer coefficient and bulk viscosity may appear .non - equilibrium thermodynamics based kinetic scheme[kscheme ] as the state update moves from one time step to another time step it generates entropy which is the product of the thermodynamic forces and its conjugate fluxes .all the research in the development of upwind scheme revolves around the methodology of adding the correct dissipation or entropy e.g. if the amount of dissipation is too less then the solver will fail to capture shocks and if the amount of dissipation is too high then natural viscous behavior will get overshadowed .the correct amount of dissipation and its distribution for each thermodynamic force depends on the physical process through which state update passes , hence it is difficult to have a single monolithic solver operating across the regime from rarefied flow to hypersonic continuum flow . in precise wordsthe state update of a solver has to follow the path laid down by non - equilibrium thermodynamics and address the issue of correct distribution of entropy for each thermodynamic force associated with stress tensor and thermal gradient vector .the research in the development of such a solver will follows a rigorous procedure based on principles of kinetic theory incorporating phenomenological theory of non - equilibrium thermodynamics .the following section of the paper presents kinetic flux vector splitting based on the onsager - bgk kinetic model .the final aim is to have a single monolithic solver that mimics the physics by naturally adding the necessary dissipation for each thermodynamic force such that it is valid across wide range of fluid regimes .pullin initiated the development of kinetic schemes for compressible euler system based on maxwellian distribution using equilibrium flux method ( efm ) .deshpande pioneered kinetic flux vector splitting ( kfvs ) scheme which was further developed by mandal and deshpande for solving euler problems . around the same period perthame developed kinetic scheme and prendergast and xu proposed a scheme based on bgk simplification of the boltzmann equation .the gas kinetic scheme of xu uses method of characteristics and differs from the kfvs scheme mainly in the inclusion of particle collisions in the gas evolution stage .chou and baganoff extended kfvs for navier - stokes - fourier equations by taking moments of the upwind discretized boltzmann equation using first order distribution function .non - equilibrium thermodynamics ( net ) based kinetic flux vector splitting ( net - kfvs ) developed in the paper involves three steps : i ) in the first step the boltzmann equation is rendered into an upwind discretized form in terms of maxwellian distribution and its perturbation term based on microscopic tensor and its conjugate thermodynamic forces , ii ) in the second step inviscid or euler fluxes are obtained by taking moments of split maxwellian distribution , iii ) in the third step viscous fluxes are obtained by taking moments of split microscopic tensors followed by full tensor contraction with its conjugate thermodynamic force to obtain upwind scheme for macroscopic conservation equations . 
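To make step (ii) of the procedure above concrete, here is the classical one-dimensional KFVS split of the Euler fluxes obtained from half-range moments of the Maxwellian (textbook form, with beta = 1/(2RT)); the additional viscous split terms of NET-KFVS and the Onsager-type cross couplings discussed in the paper are not reproduced here.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def kfvs_split_fluxes_1d(rho, u, p, gamma=1.4, R=287.0):
    # half-range moments of the 1-d maxwellian give the split euler fluxes
    # F+ (molecules with v > 0) and F- (v < 0); F+ + F- recovers the exact
    # euler fluxes [rho*u, p + rho*u^2, gamma/(gamma-1)*p*u + 0.5*rho*u^3]
    T = p / (rho * R)
    beta = 1.0 / (2.0 * R * T)
    S = u * sqrt(beta)
    A_plus, A_minus = 0.5 * (1.0 + erf(S)), 0.5 * (1.0 - erf(S))
    B = exp(-S * S) / (2.0 * sqrt(pi * beta))

    def flux(A, sign):
        f_mass = rho * (u * A + sign * B)
        f_mom = (p + rho * u * u) * A + sign * rho * u * B
        f_ene = (gamma / (gamma - 1.0) * p * u + 0.5 * rho * u ** 3) * A \
            + sign * ((gamma + 1.0) / (2.0 * (gamma - 1.0)) * p
                      + 0.5 * rho * u * u) * B
        return np.array([f_mass, f_mom, f_ene])

    return flux(A_plus, +1.0), flux(A_minus, -1.0)
```

Upwinding then differences F+ with a backward stencil and F- with a forward stencil, which is exactly the stencil sub-division mentioned below.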
in order to illustrate net - kfvs , consider two - dimensional boltzmann equation in upwind form as follows \ ] ] in net - kfvs the distribution function at time in a fluid domain is constructed based on half range distribution at time where is the half - range distribution function for and and is the half - range distribution function for and .similarly , is the half - range distribution function for and and is the half - range distribution function for and .the upwind boltzmann equation after taking -moments simplifies to ^{t } \end{array}\ ] ] this leads to upwind equations in macroscopic form i.e. navier - stokes - fourier equations in kinetic upwind form as follows ^{t}\ ] ] the upwinding is enforced by stencil sub - division such that derivative of positive split fluxes are evaluated using negative split stencil .the inviscid part of the split flux is defined as the viscous part of the split flux is defined as for example viscous split mass flux component evaluated using is it contains features of non - equilibrium thermodynamics due to cross coupling of shear stress tensor and thermal gradient vector .similarly , momentum and energy flux will also contain terms due to cross coupling of shear stress tensor and thermal gradient vector defined by macroscopic split tensors and given as the components of split macroscopic tensors ^{t} ] are defined for each ^{t } ] is the state vector and is the time step .the component of state vector is not updated as . represents the split flux based on half range distributions . is the split flux resulting from half range distribution .derivatives of , and are evaluated using points on the left , right and upward side of the stencil .the mass , momentum and energy components of -directional flux can be written as sum of inviscid or euler part and viscous part as follows similarly , components of -directional flux can be written as sum of inviscid part and viscous part as follows where macroscopic split tensors is defined as the viscous fluxes are obtained using macroscopic tensors and associated with shear stress tensor and thermal gradient vector following onsager s reciprocal relationship so as to maximize the entropy production . the fluxes and can also be written in alternative form as where , , and are evaluated based on fluid conditions while and are the inviscid flux corresponding to maxwellian distribution , based on the wall conditions . the kinetic wall boundary condition can also be extended for rarefied regime bordering transition flows using onsager - bgk knudsen layer model described in appendix [ kn2model ] .results and discussions[resdis ] the present kinetic solver uses meshless approach to solve the net - kfvs formulation .consider distribution given at point and _ m _ points surrounding it called its connectivity . finding the derivative at point is a least squares problem where error norm is to be minimized with respect to derivatives , and using stencil . 
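A bare-bones version of that least-squares derivative estimate (plain normal-equations form, illustrative names); the split-stencil variant discussed next exists precisely because this formulation becomes ill-conditioned for highly stretched point distributions.

```python
import numpy as np

def ls_gradient(x0, y0, f0, xs, ys, fs):
    # minimize sum_i (dx_i * fx + dy_i * fy - df_i)^2 over (fx, fy)
    dx = np.asarray(xs, float) - x0
    dy = np.asarray(ys, float) - y0
    df = np.asarray(fs, float) - f0
    A = np.column_stack([dx, dy])
    grad, *_ = np.linalg.lstsq(A, df, rcond=None)
    return grad          # [df/dx, df/dy] at the point (x0, y0)
```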
least square based methodusing normal equations approach fails to handle highly stretched distribution of points .more - so - ever not all ill effects inherent in the normal equation approach can be avoided by using orthogonal transformation as condition number is still relevant to some extent .the meshless kinetic solver uses split least square approach by minimizing , and with respect to , and respectively for each carefully selected sub - stencils , , such that the cross - product matrix is diagonally dominant and well conditioned .split - stencil based least squares approach retains the simplicity of normal equations while avoiding the ill - conditioning of the matrix .based on the formulations described split stencil least square kinetic upwind method or slkns in short uses net - kfvs for fluid domain and implements net - kfvs based kinetic wall boundary condition for non - continuum slip flow as well as continuum flow .slkns solver was validated for variety of test cases including continuum flows and non - continuum slip flows considering formulation based on inelastic collisions .consider transonic continuum flow of air at mach past a naca0012 airfoil at 10 degrees angle of attack at .the knudsen number , evaluated for such a case is based on chord length .this test case is simulated using the kinetic boundary condition with fully diffuse reflecting wall i.e. the accommodation coefficient , .the kinetic boundary condition treats the continuum region in the same way as the non - continuum region admitting velocity slip and temperature jump which becomes negligibly small in the continuum region .figure [ ronaca ] ( a ) shows the plot of coefficient of friction compared with slkns solver with kinetic wall boundary condition , slkns with no - slip boundary condition and fluctuation splitting lda scheme using no - slip boundary condition . dip in coefficient of friction near the leading edge can be observed due to slip flow .figure [ ronaca ] ( b ) shows the small velocity slip existing on the surface .temperature jump for this case was found to be very negligible .this continuum flow test case using kinetic wall boundary condition confirms the observation made by struchtrup that temperature jump and velocity slip will be present for all dissipative walls even in continuum regime .( a),title="fig:",width=264](b),title="fig:",width=264 ] ( a)rarefied flow past naca0012 airfoil at mach=0.8 , re=73 . , kn=0.014 ( a)contours of , ( b)comparison of the slip velocity distribution with dsmc , title="fig:"](b)rarefied flow past naca0012 airfoil at mach=0.8 , re=73 . , kn=0.014 ( a)contours of , ( b)comparison of the slip velocity distribution with dsmc , title="fig:",width=226 ] consider free stream rarefied transonic flow at mach , , density and temperature k past a naca0012 airfoil at zero angle of attack . the reynolds number based on the airfoil chord is 73 and knudsen number is 0.014 . the chord length is and wall of the airfoil is at k. 
the density contours shown in figure [ naca - rar](a ) reveals rise of density near stagnation point and rarefaction towards the tail where the density drops down .the viscosity based mean free path depends on the density , as .the rise in the mean free path near the tail makes the slip influence more pronounced which results in sudden rise of slip velocity .figure [ naca - rar](b ) shows the comparison of the slip velocity distribution for rarefied flow past naca0012 airfoil based on slkns solver and direct simulation monte carlo ( dsmc) using points .dsmc gave better results for nearly same grid size .hypersonic rarefied flow over a flat plate is one of the fundamental problem as it generates wide range of flow phenomena extending from highly non - equilibrium flow near the leading edge through the merged layer to strong and weak interaction regimes to a classical boundary layer flow at the downstream .kinetic flow region exists very near the leading edge caused by collisions between free stream and body reflected molecules . near the leading edgenon - continuum non - equilibrium viscous region exists where molecule - molecule and molecule - body collisions dominate the flow and as a consequence the distribution function is far away from maxwellian .further downstream in the transition region molecule - molecule collisions dominate the flow , this is followed by merged layer region in which wall boundary layer merges with the a non - rankine - hugoniot shock .consider a test case of hypersonic flow of argon at free stream velocity of m / s , with pressure of pascal at temperature of k over a flat - plate held at uniform temperature k at angle of attack .the test case used in this paper consists of flat plate 45 cm long placed along the x - axis in a flow domain of 25 cm 50 cm as shown in figure [ shockro](a ) .the simulation used a mesh of graded from mm at the plate surface to mm ahead of the plate tip .figure [ shockro](b ) shows the profile of the density in the boundary layer at mm from the plate tip .figure [ profileuxt](a ) shows the profile of the tangential velocity and figure [ profileuxt](b ) shows the profile of the temperature in the boundary layer at mm from the plate tip .all of these boundary conditions are compared with the flux based kinetic boundary condition and results of dsmc .figure [ shockslip](a ) shows the plot of velocity slip for dsmc , kinetic boundary condition , maxwell slip and onsager - maxwell slip .figure [ shockslip](b ) shows the plot of temperature jump for dsmc , kinetic , von smoluchowski , onsager - von smoluchowski boundary condition and results of greenshields and reese .temperature boundary condition based on von smoluchowski using net - kfvs gave unphysical temperature jump near the leading edge , hence they were evaluated based on dsmc field data to compare it with kinetic boundary condition .kinetic boundary condition was found to give better agreement with the results of dsmc .as also observed by greenshields and reese that there is discrepancy between the results of dsmc and boundary conditions of maxwell , von smoluchowski and patterson .this discrepancy is because of two factors : i)missing features of non - equilibrium thermodynamics , ii)as well as due to the fact that these expressions are derived under condition of negligible tangential variations .the mass flux due to slip on the surface of the plate is governed both by the tangential as well as normal components of shear stress tensor and thermal gradient vector . 
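The density dependence of the mean free path invoked above can be made explicit with the usual viscosity-based estimate, lambda = (mu/rho) sqrt(pi/(2RT)); this particular formula is an assumption on our part, since the paper's own expression did not survive extraction.

```python
from math import pi, sqrt

def mean_free_path(mu, rho, R, T):
    # viscosity-based mean free path; lambda grows as density drops,
    # which is why slip becomes pronounced near the airfoil tail
    return (mu / rho) * sqrt(pi / (2.0 * R * T))

def knudsen(mu, rho, R, T, L_char):
    # knudsen number based on a characteristic length, e.g. the chord
    return mean_free_path(mu, rho, R, T) / L_char
```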
in order to estimate the order of importance of this _ cross phenomenon due to thermodynamic forces _ pertaining to shear stress tensor and thermal gradient vector a new dimensionless term called reciprocity number , was derived .dimensionless number based on the ratio of viscous split slip fluxes due to the contribution of thermodynamic forces can be a good measure of the non - equilibrium cross phenomenon . in this paper is derived as the ratio of slip mass flux due to shear stress tensor and thermal gradient vector based on half range distribution as follows where expressions of and are given in appendix [ splitflux ] .the plot of in figure [ rp - plot ] shows a sudden variation in the ratio of contribution of shear stress tensor and heat flux vector near the leading edge . near the leading edgethe flow is dominated by the cross - coupling due to thermodynamic forces based on thermal gradient vector and shear stress tensor .tangential variations become insignificant as we move away from the zone of sudden dip .( a) mm from the plate tip.[shockro],title="fig:"](b) mm from the plate tip.[shockro],title="fig:",width=226 ] ( a) mm from the plate tip.[profileuxt],title="fig:",width=226](b) mm from the plate tip.[profileuxt],title="fig:",width=226 ] ( a),title="fig:",width=264](b),title="fig:",width=264 ] couette flow between concentric inner rotating and outer stationary cylinders is one of a classical fluid dynamics problem .consider rarefied flow with a mean free path , of m confined in a rotating inner cylinder of radius and stationary outer cylinder of radius .the motive gas chosen is argon with initial uniform density of and inner cylinder held at k , rotates at frequency of radians / sec . a cloud of size was used to carry out simulation using slkns with net - kfvs based kinetic wall boundary condition for tangential momentum accommodation coefficient , .fig.[invslip](a ) shows the plot of the non - dimensional tangential velocity with respect to non - dimensional radial distance for slkns solver using - , and its comparison with formulation based on axi - symmetric boltzmann equation , results of dsmc and analytical solution using isothermal condition and uniform density .the viscous dissipation may generate faint temperature variations which are difficult to capture using dsmc .one of the objective of the test case was to resolve such weak features associated with viscous dissipation .figure [ invslip](b ) shows the contour of temperature which breaks the symmetry .it should be noted that any dsmc simulation as well as analytical derivation based on axi - symmetric approach may no longer be accurate as symmetry breaks down due to slip flow .when the outer cylinder is specularly reflecting then no circumferential momentum is transferred to the outer cylinder . as a consequence the gas accelerates and tries to reach the stationary state of rigid body rotation ( for which the distribution function is a maxwellian ) , satisfying the onsager s principle of least dissipation of energy valid for processes close to equilibrium . _ for non - inertial flows slip can exists even for non - dissipative specular walls . 
_( a)slkns simulation on r- plane ( a ) comparison of the non - dimensional tangential velocity with , dsmc and analytical result , ( b ) contours of temperature based on simulation on r- plane.,title="fig:",width=264](b)slkns simulation on r- plane ( a ) comparison of the non - dimensional tangential velocity with , dsmc and analytical result , ( b ) contours of temperature based on simulation on r- plane.,title="fig : " ] conclusions and future recommendations[conc ] most of the research in kinetic theory have focused more in the issues related to entropy generation and ignored the crucial aspect of non - equilibrium thermodynamics .kinetic models and kinetic scheme should comply with the requirements of non - equilibrium thermodynamics .non - equilibrium thermodynamics being a phenomenological theory gives the symmetry relationship between kinetic coefficients as well as general structure of non - equilibrium phenomenon derived using kinetic theory .the onsager s symmetry relationship is a consequence of microscopic reversibility condition due to the equality of the differential cross sections for direct and time reversed collision processes . for a prescribed irreversible force the actual flux which satisfies onsager s theoryalso maximizes the entropy production .the solution of the boltzmann equation is in accordance with the principle of maximum entropy production ( mep ) .linearized collision operator and non - equilibrium part of the distribution function can be expressed in terms of microscopic tensors and its associated thermodynamic forces .each thermodynamic force generates non - equilibrium distribution which relaxes with its own specific relaxation time satisfying onsager relation for entropy production . a new kinetic model called onsager - bgk model was formulated based on these principles of non - equilibrium thermodynamics .the boltzmann h - function in such a case can be interpreted as a summation of components of h - function belonging to its thermodynamic force i.e. each thermodynamic force will have its own h - theorem .h - theorem can also be understood as a ratio of mahalanobis distance between non - equilibrium and equilibrium distribution and its associated relaxation time .the positivity property of mahalanobis distance quickly establishes an effortless proof of h - theorem for the onsager - bgk kinetic model .the non - equilibrium part of the distribution function resulting from the new kinetic model can also be expressed in onsager sform i.e. as full tensor contraction of thermodynamic forces and its associated microscopic tensors . 
kinetic scheme and boundary conditions should also follow the principle of non - equilibrium thermodynamics by addressing the issue of correct generation and distribution of entropy for each thermodynamic force associated with its non - equilibrium state .velocity slip and temperature jump follow onsager s variational principle .the simulation results based on the onsager - bgk model and non - equilibrium thermodynamics based kinetic scheme were validated with analytical as well as the results of direct simulation monte carlo ( dsmc ) .a new term called reciprocity number was also derived using the contribution of thermodynamic forces on viscous split fluxes in order to estimate the order of importance of cross phenomenon .non - equilibrium thermodynamics based kinetic schemes , kinetic wall boundary condition and kinetic particle method can simulate continuum as well as rarefied slip flows within navier - stokes - fourier equations in order to avoid costly multi - scale simulations .the future course of action will require validation of onsager - bgk model for multi - component gas mixtures and further investigation on collision probability function based onsager - bgk model for knudsen layer . onsager - bgk model also opens up a possibility of its metamorphosis as a lattice boltzmann model for compressible , non - isothermal flows .we are thankful to shri g. gouthaman and shri t.k .bera for help and support .first author is grateful to prof .s.m . deshpande for being a mentor and a source of inspiration . in case of two fluid approximationwe will have kinetic models for each species .the kinetic model will not only depend on relaxation time associated with self - collisions but also on relaxation time associated with cross collisions of specie with specie .the kinetic model for specie and for specie can be expressed as where and are the non - equilibrium and equilibrium distribution for specie .similarly , and are the non - equilibrium and equilibrium distribution for specie .maxwellian distribution and are based on free temperature parameter .parameters , are the relaxation time due to self collisions of specie while , are the relaxation time due to self collisions of specie . parameters , are the relaxation time due to cross collisions of specie with specie while , are the relaxation time due to self collisions of specie with specie .the cross collision relaxation time and are related to number density as for where and is the number density of specie and specie .the components of split macroscopic tensors ^{t} ] are }\end{array}\ ] ] }\end{array}\ ] ] } \end{array}\ ] ] } \end{array}\ ] ] } \end{array}\ ] ] } \end{array}\ ] ] } \end{array}\ ] ] } \end{array}\ ] ] similarly components of ^{t} ] , ] , ] are inadequate as vector of collision invariant should include ^{t } $ ] giving rise to at - least 13 moment equations such that split mass flux becomes density in the second step and split momentum flux becomes density in the third step .this set of 13-moment grad like system includes evolution of pressure tensor and heat flux vector .however , the present 5 moments based formulation is adequate for the simulation of most of the engineering slip flow problems which lie in the regime of linear irreversible thermodynamics. 
the present approach will not be adequate for cases that involve large mach number in shock waves , high frequencies for sound waves , etc .it should also be noted that the approach based on non - equilibrium thermodynamics may also modify levermore procedure which generates hierarchy of closed systems of moment equations that ensures every member of the hierarchy is symmetric hyperbolic with an entropy , and formally recovers to euler limit .the finite dimensional linear subspace of functions of in levermore procedure should ensure that entropy generation follows onsager s reciprocity principle . in real mediait is the split flux which participates in any physical process and for non - equilibrium flows the split fluxes contain dissipative terms .for example split flux associated with mass flow contains contribution of viscous terms .it is also interesting to note that the presence of dissipative terms due to thermodynamic forces in the split fluxes brings out its relationship and difference with the hydrodynamic theory of brenner and quasi - gas dynamics where dissipative terms were introduced in un - split flux terms such that time - spatial averages are invariant under galileo transform . near the wall at a normal distance of order there exists a knudsen layer parametrized by which is the effectivemean free path depending on the effective viscosity and wall conditions . in the knudsen layersome molecules may collide more with the wall and may not suffer as much collisions with the molecules as compared to the molecules above the knudsen layer . for modeling slip near transition regimeideally we require an approach which is computationally cheap includes higher moments thereby terms of order .it should be noted that validity of chapman - enskog expansion procedure can only be said for , the more correct way to obtain non - linear distribution has to be based on extended irreversible thermodynamics ( eit ) by expanding the distribution function in terms of microscopic tensors and thermodynamic forces . from non - equilibrium thermodynamics point of viewlinear irreversible thermodynamics ( lit ) is no longer valid in the knudsen layer as fluxes are no longer linear functions of its conjugate force , regime shifts to extended irreversible thermodynamics ( eit ) described by .simplest approach may be to approximate eit based flux as a function of lit based flux by using suitable scaling function . consider a function will also depend on the curvature of the surface as it is a volume dependent parameter related to onsager s dissipation function . ] as a measure of probability of collision at any normal distance such that non - equilibrium eit based flux , can be approximated in terms of for curvature free surface as .in such a case the single particle velocity distribution in the knudsen layer at any normal dimensionless distance can be expressed as where is the maxwellian distribution .the total distribution function , at the wall based on maxwell gas - interaction model in terms of accommodation coefficient can be written as where is the total , is the incident knudsen layer chapman - enskog distribution , is the specular reflected knudsen layer chapman - enskog distribution and is the diffuse reflected maxwellian distribution based on wall conditions and conservation . at the wall so the non - equilibrium part of the incident chapman - enskog distribution vanishes and the temperature as well as velocity gradients are singular similar to the findings of lilley and sader . 
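For reference, the textbook Maxwell gas–surface interaction model that the passage above appears to invoke, written with accommodation coefficient \(\sigma\) (the document's own expression was lost in extraction, so this is the standard form rather than a quotation):

\[
f_{\text{wall}}(\vec v,\mathbb{I}) =
\begin{cases}
  f^{\text{in}}(\vec v,\mathbb{I}), & \vec v\cdot\hat n < 0 \ \text{(incident)},\\[1mm]
  (1-\sigma)\, f^{\text{in}}\!\big(\vec v - 2(\vec v\cdot\hat n)\hat n,\ \mathbb{I}\big)
  + \sigma\, f_{0,w}(\vec v,\mathbb{I}), & \vec v\cdot\hat n > 0 \ \text{(reflected)},
\end{cases}
\]

with \(f_{0,w}\) the Maxwellian at the wall temperature and velocity, its density fixed by requiring zero net mass flux through the wall.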
this can also be interpreted as a onsager - bgk knudsen layer model , valid in the knudsen layer with varying relaxation time expressed as the relaxation time varies with the normal distance from the wall based on the collision probability function , .the state update equations in the macroscopic form in the knudsen layer becomes \ ] ] the viscous part of the flux component , and are obtained as this kinetic model is quite easy to implement as the viscous fluxes are just multiplied by the collision probability function , .
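A sketch of that implementation step, with an assumed illustrative form for the collision-probability function (the extracted text does not specify it, so the exponential profile below is purely a placeholder):

```python
import numpy as np

def knudsen_layer_viscous_flux(visc_flux, y, lam):
    # scale the navier-stokes viscous split flux by a wall-distance dependent
    # collision probability psi(y/lambda); psi -> 1 away from the wall so the
    # bulk viscous flux is recovered outside the knudsen layer
    psi = 1.0 - np.exp(-np.asarray(y, float) / lam)   # assumed form
    return psi * np.asarray(visc_flux, float)
```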
boltzmann equation requires some alternative simpler kinetic model like bgk to replace the collision term . such a kinetic model which replaces the boltzmann collision integral should preserve the basic properties and characteristics of the boltzmann equation and comply with the requirements of non equilibrium thermodynamics . most of the research in development of kinetic theory based methods have focused more on entropy conditions , stability and ignored the crucial aspect of non equilibrium thermodynamics . the paper presents a new kinetic model formulated based on the principles of non equilibrium thermodynamics . the new kinetic model yields correct transport coefficients and satisfies onsager s reciprocity relationship . the present work also describes a novel kinetic particle method and gas kinetic scheme based on this linkage of non - equilibrium thermodynamics and kinetic theory . the work also presents derivation of kinetic theory based wall boundary condition which complies with the principles of non - equilibrium thermodynamics , and can simulate both continuum and rarefied slip flow in order to avoid extremely costly multi - scale simulation . presented in _ the tenth international conference for mesoscopic methods in engineering and science _ ( icmmes-2013 ) , 22 - 26 july 2013 , oxford . introduction[intro ] all the research in the development of upwind scheme based on macroscopic theories can be seen in terms of inclusion of physically consistent amount of entropy . in many case , a single solver operating from rarefied flow to hypersonic continuum flow requires corrections and tuning , as most of the time it is not known what is the correct amount of entropy generation for a particular regime and the correct distribution of entropy generation for each thermodynamic force . figure [ entropy ] shows schematic of entropy generation as physical state evolves from time to time . the components of entropy due to thermodynamic forces associated with stress tensor and thermal gradient vector differ in magnitude and vary with locations in the flow domain . _ genuine upwind scheme should resolve these different components of entropy generation due to its conjugate thermodynamic force in order to satisfy thermodynamics while the state update happens . _ most of the upwind schemes basically aim to add the correct dissipation or entropy but fail to resolve and ensure the correct distribution of the entropy associated with its conjugate thermodynamic force . if the solver follows and mimics the physics then we can have a single monolithic solver serving the entire range from rarefied flow to continuum flow , creeping flow to flow with shocks . the entropy generation observed at the macroscopic level is a consequence of molecular collisions at the microscopic level . mesoscopic method based on kinetic theory uses statistical description of a system of molecules and provides model for molecular collisions leading to non - equilibrium phenomena . non - equilibrium thermodynamics ( net ) being a phenomenological theory describes this non - equilibrium phenomena and provides linkage with kinetic theory ( kt ) based coefficients of transport and relaxation . the molecular description is provided by the kinetic theory while the relationship between the entropy generation due to thermodynamic forces associated with stress tensor and thermal gradient vector is a feature of non - equilibrium thermodynamics . 
kinetic theory and non - equilibrium thermodynamics together become a powerful tool to model non - equilibrium processes in a compressible gas . this work introduces the maximum entropy production principle and investigates its relationship with onsager 's reciprocity principle and the boltzmann equation . a new kinetic model based on onsager 's principle is proposed , which gives the correct prandtl number and also complies with the requirements of non - equilibrium thermodynamics . the paper describes a kinetic flux vector splitting scheme and a kinetic particle method based on the new kinetic model , which incorporates features of non - equilibrium thermodynamics . it also gives a derivation of a kinetic theory based wall boundary condition which complies with the principles of non - equilibrium thermodynamics and can simulate both continuum and rarefied slip flow , as an efficient and economical alternative to extremely costly multi - scale simulation . finally , simulation and validation of continuum and rarefied slip flow test cases are presented to illustrate the present formulation .
the buckling of a piece of paper hitting an obstacle when ejected from a printer , the insertion of a catheter in to an artery or of steel piping in to a wellbore , even the so - called inverse spaghetti problem are all examples of mechanical settings where an elastic rod is forced out of a constraint and pushed against an obstacle . during this process the length of the rod subject to deformation changes , and when the rod is injected through a sliding sleeve , a configurational or eshelby - like force is generated , that until now has been ignored in the above - mentioned problems .this force has recently been evidenced for a cantilever beam by bigoni et al .it has also been shown to influence stability , used to design a new kind of elastically deformable scale and to produce a form of torsional locomotion .the aim of this article is to provide direct theoretical and experimental evidence that the effect of configurational forces on the injection of an elastic rod is dominant and can not be neglected .it leads , in the structure that will be analyzed , to a force reversal that otherwise would not exist .the considered setup is an inextensible elastic rod , clamped at one end and injected from the other through a sliding sleeve ( via an axial load ) , so that the rod has to buckle to deflect ( fig .[ variablelength ] , left ) .the elastic system is analytically solved in section [ caricodipunta ] , through integration of the elastica ( ) .it is shown that eshelby - like forces strongly influence the loading path and yield a surprising force reversal , so that certain equilibrium configurations are possible if and only if the applied force changes its sign .the change of sign is shown to occur when the rotation at the inflexion points exceeds ( corresponding to ) , a purely geometric condition independent of the bending stiffness and distance .furthermore , it is also proven that during loading two points of the rod come into contact and , starting from this situation ( again defined by a purely geometric condition ) , the subsequent configurations are all self - intersecting elastica .all theoretical findings have been found to tightly match the experimental results presented in section [ exppost ] and obtained on a structural model ( designed and realized at the instability lab of the university of trento http://ssmg.unitn.it ) , see also the supporting electronic material .the structure shown in fig . [ variablelength ] ( left ) is considered axially loaded by the force .the curvilinear coordinate ] , which may be treated as the cantilever rod shown in the inselt of fig .[ variablelength ] .the equilibrium configuration can be expressed as the rotation field , solution of the following non - linear second - order differential problem \\ [ 3 mm ] \theta_{\textup{eq}}(0 ) = 0 , \\ [ 3 mm ] \theta_{\textup{eq}}^{'}\left(\dfrac{l_{\textup{eq}}}{4}\right ) = 0 , \end{array}\ ] ] where the parameter , representing the dimensionless axial thrust , has been introduced as integration of the elastica ( [ elasticazzipostcr]) and the change of variable where , yields the following differential problem for the auxiliary field \\ [ 3 mm ] \phi(0 ) = 0 , \\ [ 3 mm ] \phi\left(\dfrac{l_{\textup{eq}}}{4}\right ) = \dfrac{\pi}{2}. 
\end{array}\ ] ] further integration of ( [ systemequivalente]) leads to the relation between the load parameter and the rotation measured at the free end of the cantilever ( which is an inflection point ) be neglected , the following solution is obtained ] where is the complete elliptic integral of the first kind .the configurational force , included in , is a function of the curvature at the ends of the rod comprised between the two constraints , namely , which can be obtained , through a multiplication of equation ( [ elasticazzipostcr]) by and its integration , together with the boundary condition , as so that equation ( [ rho ] ) may be rewritten as the rotation field can be expressed through the inversion of the relation ( [ cambiovarpost]) as while the axial and transverse equations describing the shape of the elastica are obtained from integration of the following displacement fields as \right\ } , \\ [ 4 mm ] x_2(s)= \dfrac{2\upsilon}{\rho}\left[1- \mbox{cn}(s\rho)\right ] , \end{array}\ ] ] where the functions am , cn , and sn denote respectively the jacobi amplitude , jacobi cosine amplitude and jacobi sine amplitude functions , while is the incomplete elliptic integral of the second kind of modulus . note that equations ( [ tetapost ] ) and ( [ spostamentipost ] ) are valid for the entire structure , $ ] although the problem under consideration seems to be fully determined by equations ( [ finalpostcritico ] ) , ( [ tetapost ] ) and ( [ spostamentipost ] ) , the length is unknown because it changes after rod s buckling .this difficulty can be bypassed taking advantage of symmetry , because the axial coordinate of the rod , for every unknown cantilever s length , is , so that equation ( [ spostamentipost]) gives \right\}=\dfrac{l}{4},\ ] ] and therefore , inserting equation ( [ finalpostcritico ] ) in equation ( [ pincopalla ] ) , it is possible to obtain the relation between the load parameter and the angle of rotation at the free edge of the cantilever as -\mathcal{k}(\upsilon)\right\}.\ ] ] the applied thrust , normalized through division by the eulerian critical load of the structure , is calculated from equation ( [ caricofinalepostfede ] ) as a function of the kinematic parameter through equation ( [ cambiovarpost]) as -\mathcal{k}(\upsilon)\right\}^2.\ ] ] fig .[ caricoangolopost ] shows load p ( divided by ) versus the rotation at the inflection point .also shown , for comparison , the dashed line representing the structural response when the configurational force is neglected , equation ( [ mistpost ] ) .it is noted that , although the critical load is not affected by the presence of the configurational force ( because the bending moment is null before buckling ) , the behaviour is strongly affected by it , so that the unstable postcritical path exhibits a _ force reversal _ when ( confirmed also by experiments , see sect .[ exppost ] ) , absent when the configurational force is neglected . 
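since the elastica shape above is expressed through jacobi elliptic functions , it can be evaluated numerically in a few lines . the sketch below ( python / scipy ) computes the transverse deflection x2(s) = (2 v / rho) [ 1 - cn(rho s) ] given in the text ; the values of the modulus v and of rho are assumed purely for illustration , as in the paper they follow from the load level and the distance between the constraints .

```python
import numpy as np
from scipy.special import ellipj

# Evaluate the transverse elastica deflection x2(s) = (2*v/rho)*(1 - cn(rho*s)),
# where cn is the Jacobi cosine-amplitude of modulus v (scipy's ellipj takes the
# parameter m = v**2). The values of v and rho below are illustrative only.

def transverse_deflection(s, v, rho):
    sn, cn, dn, ph = ellipj(rho * s, v**2)
    return 2.0 * v / rho * (1.0 - cn)

s = np.linspace(0.0, 1.0, 5)          # arc-length coordinate (assumed range)
print(transverse_deflection(s, v=0.5, rho=6.0))
```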
according to equation ( [ sloffia ] ) , a self - equilibrated , but deformed , configuration ( ) exists for , where the rod is loaded only through the configurational force .the length , measuring the amount of rod injected through the sliding sleeve can be easily computed from equations ( [ finalpostcritico ] ) and ( [ finalrhopostfede ] ) in the following dimensionless form -\mathcal{k}(\upsilon)}-1,\ ] ] whereas , considering the relationship ( [ confforcepost ] ) , the analytical expression for the eshelby - like force ( divided by ) becomes -\mathcal{k}(\upsilon)\right\}^2.\ ] ] when the rotation at the inflexion points exceeds ( corresponding to ) , the elastica self - intersects , so that the presented solution holds true for rods capable of self - intersecting ( as shown in ) .finally , it is noted that the condition -\mathcal{k}(\upsilon)=0\ ] ] occurs for , representing a limit condition for which the injected rod length becomes infinite , , and the applied load becomes null , , as evident from equations ( [ deltallll ] ) and ( [ sloffia ] ) , respectively .the structural system shown in fig . [ variablelength ] ( right ) was designed and manufactured at the instabilities lab ( http://ssmg.unitn.it/ ) of the university of trento , in such a way as to be loaded at prescribed displacement , with a continuous measure of the force .the displacement was imposed on the system with a loading machine midi 10 ( from messphysik ) . during the testthe applied axial force was measured using a mt1041 load cell ( r.c .500n ) and the displacement by using the displacement transducer mounted on the testing machine .data were acquired with a ni compactrio system interfaced with labview 2013 ( from national instruments ) .the elastic rods employed during experiments were realized in solid polycarbonate strips ( white 2099 makrolon uv from bayer , elastic modulus 2350 mpa ) , with dimensions 650 mm 24 mm 2.9 mm .the sliding sleeve , 285 mm in length , was realized with 14 pairs of rollers from misumi europe ( press - fit straight type , 20 mm in diameter and 25 mm in length ) , modified to reduce friction .the sliding sleeve was fixed to the two columns of the load frame , in the way shown in fig .[ variablelength ] ( right ) .the influence of the curvature of the rollers on the amount of the configurational force generated at the end of the sliding sleeve was rigorously quantified in and found to be negligible when compared to the perfect sliding sleeve model .a sequence of six photos taken during the experiment with mm is reported in fig .[ sequenzepost ] , together with the theoretical elastica , which are found to be nicely superimposed to the experimental deformations .experimental results are reported in fig .[ esperimentipost ] and compared with the presented theoretical solution and the solution obtained by neglecting the configurational force ( dashed line ) .while the former solution is in excellent agreement with experimental results performed for two different distances between the two constraints ( mm and mm ) , the latter solution reveals that neglecting the configurational force introduces a serious error .note that two different lengths have been tested only to verify the robustness of the model , because size effects are theoretically not found with the assumed parametrization , which makes results independent also on the bending stiffness .the dimensionless applied load versus dimensionless length ( fig .[ esperimentipost ] , left ) , measuring the amount of elastic rod injected 
into the sliding sleeve , confirms a load reversal at . we note that the configurational force does not influence the qualitative shape of the elastica , but rather the amount of deflection , so that neglecting it leads to a completely unacceptable estimate of the amplitudes . finally , the dimensionless configurational force reported as a function of ( fig . [ esperimentipost ] on the right ) shows its strong influence on the postcritical behaviour , such that this force grows to make up one third of the rod 's critical load . instabilities occurring during the injection of an elastic rod through a sliding sleeve were investigated , showing the presence of a strong configurational force . this force causes a force reversal during a softening post - critical response and has been theoretically determined and experimentally validated , so that it is now ready for exploitation in the design of compliant mechanisms . moreover , configurational forces as investigated in this paper can also be generated through growth or swelling , so that they can play an important role in the description of deformation processes in soft matter . buecker , a. , spuentrup , e. , schmitz - rode , t. , kinzel , s. , pfeffer , j. , hohl , c. , van vaals , j.j . , guenther , r.w . ( 2004 ) use of a nonmetallic guide wire for magnetic resonance - guided coronary artery catheterization . _ investigative radiology _ , 39 ( 11 ) , 656 - 660 .
when an inextensible elastic rod is injected through a sliding sleeve against a fixed constraint , configurational forces are developed , deeply influencing the mechanical response . this effect , which is a consequence of the change in length of the portion of the rod included between the sliding sleeve and the fixed constraint , is theoretically demonstrated ( via integration of the elastica ) and experimentally validated on a proof - of - concept structure ( displaying an interesting force reversal in the load / deflection diagram ) , to provide conclusive evidence of mechanical phenomena relevant to several technologies , including guide wires for artery catheterization or the insertion of steel pipes into wellbores . _ keywords _ : force reversal , elastica , variable length , eshelbian mechanics .
the study of complex systems entered social science in order to understand how self - organization , cooperative effects and adaptation arise in social communities . in this context the use of simple automata or dynamical models often elucidates the underlying dynamics of the observed behaviour ( 1 - 13 ) .in particular some models have been proposed to describe the opinion dynamics and the final decision of a community of voters ( 3,5 - 13 ) .these models focus on the self - organization resulting from a local dynamics , which represents the mutual influence , based on two simple properties : \i ) individuals are more likely to interact with others who already share many of their opinions ; \ii ) interaction increases the number of features that individuals share .however , a deeper analysis of the process which leads to the final decision in a political vote requires an improvement of the proposed models . this can be done in a mathematical way by considering the application of the game theory to political problems or , more simply , by introducing in the decision process other important * global * effects such as , for example , the government policy and/or the propaganda , which are usually neglected . in this paperwe propose a model toward a more realistic description of the opinion dynamics underlying the choice of the voters .the analysis is restricted to a bipolar scheme with a possible third political area , but it can be generalized to the case of many political parties .this initial simplification helps to clarify the model and , anyway , it applies to many european and non european countries .the model is based on the following points to be formalized and discussed in details later on : \1 ) there are initially three different groups of voters : the right coalition ( rc ) , the left coalition ( lc ) and a central group ( cg ) .the voters in the rc or lc do not change their opinion : they represent the cores of the political bipolar scheme .\2 ) the individuals of the cg interact with each other and with the individuals of the rc and of the lc . at the end of the dynamical processthere are three groups : majority , minority and the others .\3 ) the final decision of the single cg voter is based on his / her opinions on the arguments that he / she considers more relevant .\4 ) the mutual local influence among voters is modified by the global degree of satisfaction / dissatisfaction with respect to the government .the scheme ( 1 - 4 ) is , of course , an extremely simplified version of the real processes .for example the possibility that each voter ( lc , rc , cg ) decides not to vote ( abstentionism ) is not taken into account in the dynamics and this implies that the abstentionism is proportionally distributed among the different groups .however the model has many steps of analysis , it requires the application of general techniques of the complex systems and , as we shall see , it is able to give some interesting indications on the political dynamics. the paper is organized as follows : the cognitive political model and the social - political interaction are described in sec . 1 ; in sec .2 we set the criterion of political decision ; in sec .3 the approximations introduced in the numerical simulations are discussed ; the phenomenological analysis of the political vote in italy and germany and the prediction for the next italian vote are carried out in sec .4 ; sec . 
5 is devoted to the conclusions and outlooks .in simulating the opinion dynamics toward a final political vote one has to generalize the available models of social interaction . in the axelrod model of the cultural evolution the interaction is limited to an imitation process in which the agents adapt cultural traits stochastically from each other with a bias toward similar agents .the interesting final result of the evolution is the diversity : there are different cultural domains . in the bounded confidence model , more oriented to the analysis of voting patterns , each voter possesses a single real valued opinion .when two voters interact , they average their opinions only if the opinion difference is within an external pre - fixed threshold otherwise there is no interaction .also in this case , as in the previous models , the system breaks up into distinct opinion clusters . however , in the analysis of the political opinion dynamics there is , ab initio , a community with a non - negligible heterogeneity and the selfish individual convictions play a crucial role not only in the interaction among individuals but also in the degree of influence of global effects ( governments , mass media , social shocks ) on the single voter . in particular , in a bipolar political system ,the number of votes of the two most important ( left and right ) coalitions depends upon an almost constant core of voters , who do not change their opinion , and on the individuals with less strong political convictions ( the cg ) who decide on the basis of a personal political analysis . more precisely , in the political decision process each individual is in front of many questions of social importance ( the contexts ) and he / she has to evaluate the possible alternative choices .this analysis , based on a personal mental representation of the validity of the different alternatives , evolves according to the interaction with the other members of the community and according to the global influence . as a result of this process, a restricted number of more relevant concepts emerges : in the individual decision making mechanism there is a simplification with respect to the social - political complexity and a `` dimensional reduction '' to the most relevant aspects .the previous considerations can be formalized by following the interesting cognitive model proposed in ref . that , with some peculiar modifications ( see later ) , is a good starting point to investigate the opinion dynamics toward a realistic simulation of the final political vote . in ref . one considers agents who divide the world into a number of _ contexts _ in which they evaluate alternative possibilities ( scenarios ) according to their personal opinions .the alternative scenarios are characterized by their objective attributes .a crucial assumption of the model is that the agent s _ theoretical payoff _ from the realization of a possible scenario in context is posited to be a linear function where is the agent s _ context vector _ , reflecting actual circumstances of the context and the agent s personal opinions . for each agentthere are context vectors each of dimension , which are assumed fixed in the model .an agent does not know his / her context vectors explicitly as this would require a detailed understanding of the effect of all attributes on his payoffs .however , by collecting experience on choices he / she has made previously , he / she learns to approximate the payoffs using an appropriate _ mental representation_. 
the mental representation is built around the world s `` most important degrees of freedom '' , constituting the agent s concepts . one assumes again that the _ approximate payoff _ that the agent `` computes '' directly is linear with the agent s _ approximate context vector_. by eq.(3 ) , is decomposed using_ mental weights _ in a reduced subspace of dimension and a number of concept vectors , , assumed normalized , which the agent uses to evaluate alternatives . due to the reduction of dimensionality , , the approximate payoff deviates from the theoretical payoff .the agents goal is to find the best possible set of concepts and mental weights which minimizes the error of the mental representation under the constraint that only concepts can be used .the natural measure of agent s _ representation error _ is the variance where is the average over alternatives in context , that can be written as where the agent s utility is given by and is a positive semi - definite , dimensional matrix , called the _ * world matrix * _ , which encompasses all information about agent s relationship to the world .the criterion for the political decision , that will be discussed in sec .2 , is based on the minimization of which is equivalent to maximize .this is the well - known * principal component analysis * ( pca ) problem . according to this ,the optimal concept vectors are provided by the most significant ( largest eigenvalues ) eigenvectors of in the considered basis of the d dimensional space ( see below ) .thus to achieve the best possible mental representation the agent should choose his / her concept vectors according to the eigenvectors of his / her world matrix in the order of their significance .two important remarks are now in order : \a ) in the application of the model to the political dynamics the previous considerations apply only to the agents in the cg ( see the next subsections ) . the agents in the rc and lcare assumed to have fixed orthogonal concept vectors , called and respectively , with scalar products , , . these vectors form the basis of two fixed orthogonal subspaces . the orthogonality of the subspaces associated with the lc and the rc is also an approximation because in the real dynamics the concept vectors of the lc and rc are not always completely orthogonal ( bipartisan choices ) . with this approximationone assumes that the bipartisan choices are irrelevant for the final political decision .\b ) the identification , sorting and truncation of the degrees of freedom in the model is closely analogous to what occurs in white s density matrix renormalization group method ( dmrg) ) . 
in the dmrgthe optimally renormalized degrees of freedom turn out to be the most significant eigenvectors of the reduced density matrix of the quantum subsystem embedded in the environment with which it interacts .as discussed in the introduction , the opinion dynamics is due to local interactions among voters and to global political effects .the local and global interactions , related with the previous cognitive model , will be now separately analyzed .the representation error is minimal if the agent learns to approximate his / her world matrix in the dimensional subspace spanned by the most significant eigenvectors of his / her world matrix .the final vote decision is due to the social - political network and , in our bipolar scheme , it is useful to cast context vectors into two basic categories : 1 ) context vectors which only depend on a single agent , with a world matrix 2 ) context vectors for agent which depend on the interaction with at least one other agent . for simplicity only pair interactions will be considered and following ref . the world matrix to be used in the utility function is written as where the total agents number is , the agent is in the cg ( see later ) , the parameters , , measure the relative strength of socio - political interactions with agent in the different groups ( cg , lc , rc ) and is given by i.e. by there is in an overlap of the concept vectors of the agents and .it is important to stress that the previous interactions are among the agents of the cg with the other agents in the same group , in the lc and in the rc .indeed , in our scheme the agents in the lc or rc do not change their opinion and their world matrices are fixed to constant .moreover the world matrix of the agents in the lc is orthogonal to the world matrix for the rc agents .* b ) the global effects * the opinion making process in a national political vote depends only partially on the local interaction and is strongly influenced by other important elements such as the decisions of the government and the mass media role . in order to include these effects in the dynamical process ,let us introduce a set of indices , , to describe the satisfaction of the agent with respect to the global political decisions .again the voters in the rc and lc do not change opinion despite their satisfaction / dissatisfaction and then the satisfaction indices , , related with the rating of the global events , are relevant only for the voters in the cg .in the previous subsection , the local influence has been introduced by the political interaction matrices , , . in the political bipolar scheme , a simple way to include the satisfaction indices , , in the dynamics is to consider that a voter who has a positive perception of the government actions increases the strength of its interaction with the coalition which governs while a dissatisfied voter tends to interact more with the opposition. therefore the interaction matrices of an agent of the cg with the agents of the lc or of the rc will depend on the satisfaction index in such a way to increase the interaction with the majority and decrease the interaction with the minority or viceversa .the interaction among voters in the cg remains unchanged . in section 3the , dependence on will be clarified .the most important elements which determine the agent representation of the social - political system have been specified in the previous sections .following ref . 
, one can assume that the system evolves according to a `` gradient adjustment dynamics '' obtained by the time evolution of the concept vectors given by of course , only the cg agents have a dynamical evolution because the agents in the lc and in the rc have fixed world matrices , , and the corresponding concept vectors span orthogonal subspaces . for the agents in the cgit is reasonable to assume that without interaction they choose random concept subspaces , i.e. complete disorder , and this implies that their matrices have a wishart distribution ( see ) . after the evolution according to the best response dynamics , at the equilibrium , each agent of the cg decides his / her vote and one needs a well defined criterion to understand if his / her final `` position '' is closer to the lc or the rc or is too far from both .the most simple idea is a comparison of the final concept vectors of each agent in the cg with the concept vectors of the lc and of the rc and then to define a `` political distance '' by the scalar products of the previous concept vectors .however , as clarified in ref. , the choice of the concept vectors is not unique and only the subspace they span is relevant for the dynamical process. then the criterion has to be related with the subspace spanned by cg agent concept vectors at the equilibrium with respect to the lc subspace ( i.e. spanned by the concept vectors of the lc agent ) or to the rc subspace and it is natural to consider the `` angle '' , , between two subspaces as the agent - agent distance .the procedure is the following one .firstly one considers the , , concept vectors of agent as columns of a new matrix ( x dimensional matrix ) .similarly we construct the matrix of any agent of lc and of the rc ( fixed by definition ) . according to refs . , we calculate the overlap between and in the following way : hence , we calculate the angle ( in $ ] ) as )\end{aligned}\ ] ] and )\end{aligned}\ ] ] where is the biggest singular value of according to the singular value decomposition ( svd ) . with this definitionthe distance is between ( politically equivalent ) and ( politically orthogonal ) .finally the agent will vote for the rc or the lc , according to the fact that the smallest `` angle '' is or , respectively .however , in order to have a more realistic model of the decision process , one has to take into account another important political aspect : the bipolar systems are , indeed , not perfectly bipolar .there is a non negligible part of the individuals in the cg that at the end of the dynamical evolution is still `` too far '' from the political ideas of the l and the r coalition and decide to vote for other possible groups ( og ) .of course one can neglect this point , which , however , is a crucial ingredient of the political dynamics .our definition of `` political distance '' between two agents is related to the angle of the corresponding subspaces spanned by the concept vectors .when two subspaces are orthogonal and then an agent of the cg will be considered voting for the og if * both * the angles and are in the range between a fixed value and .an increase of decreases the number of agents in the cg who vote for the third group : represents the `` mobility '' of the cg toward the political coalitions .the equilibrium and dynamic properties depend on the interaction matrices , , . 
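two of the linear - algebra steps described above lend themselves to a compact numerical sketch : the extraction of an agent 's concept vectors as the most significant eigenvectors of his / her world matrix ( the pca step ) , and the `` political distance '' between two agents as the angle obtained from the singular values of the overlap of their concept subspaces . the sketch below ( python ) uses d = 10 and k = 4 as in the simulations quoted later ; the random matrices are generic placeholders , and taking the arccosine of the largest singular value is one consistent reading of the partially elided formula .

```python
import numpy as np

def concept_vectors(world_matrix, k):
    """k eigenvectors of a symmetric PSD world matrix with largest eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(world_matrix)          # ascending order
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

def political_distance(Qi, Qj):
    """Angle between the subspaces spanned by the columns of Qi and Qj."""
    sigma_max = np.linalg.svd(Qi.T @ Qj, compute_uv=False)[0]
    return np.arccos(np.clip(sigma_max, -1.0, 1.0))          # 0 .. pi/2

d, k = 10, 4
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))
W = A @ A.T                                           # generic PSD "world matrix"
Q_cg = concept_vectors(W, k)                          # concept vectors of a CG agent
Q_lc, _ = np.linalg.qr(rng.standard_normal((d, k)))   # fixed LC subspace (placeholder)
print(political_distance(Q_cg, Q_cg), political_distance(Q_cg, Q_lc))
```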
as a first approximation ,one considers for the * local * interaction a mean - field political network with .this implies that , without global effects , the final voting choice is directly correlated with the number of agents of the lc and of the rc : a larger number of agents in the lc ( rc ) will automatically determine that a larger number of cg agents will we `` closer '' to the lc ( rc ) .moreover , the global effects change the interaction strength of the agent in the cg with the voters of the l and r coalitions , by the satisfaction index ( the interaction within the cg is unmodified ) . for simplicity ,let us assume that is independent on the agent . by definition, it has opposite effect on the interactions with the lc and the rc .then the starting approximation for the world matrix of the agent in the cg , which includes the local and the global interactions , can be written as where has been conventionally assumed positive if the interaction with the lc increases .the parameter drives the opinion dynamics and determines the majority .a more detailed analysis of its meaning and effects is carried out in section 4 .another assumption concerns the initial core distributions of the l and the r coalitions .in fact , it is not difficult to take into account a numerical difference between the core voters of the two coalitions , but this would introduce another parameter in the model and , in the present work , we are mainly interested in analyzing the global effects on the vote decision . therefore , in this first version of the model , we shall consider a symmetrical initial distribution for the core voters of the l and r coalitions .moreover , abstentionism is not of dynamical origin and then , in the numerical simulation , is proportionally divided according to the group initial distributions .the initial core voters of the l and r coalitions in our simulations are assumed equal to the of valid votes while the cg has only the .these numbers are not far from the real political situation : we verified that the lowest level for the l and r coalitions ( assumed as the cores ) in italian national votes , from 1996 to 2004 , is close to the of the valid votes .this implies that the winning coalition is determined by the decision of a relatively small number of individuals of the cg .before analyzing the data of the national vote in italy and in germany , let us note that in the model there are essentially two parameters , and , because , in the data fitting , a change in the parameter gives a rescaling of the previous ones . 
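a hedged sketch of the resulting world matrix of a cg agent is given below . the ( 1 +/- alpha ) rescaling of the couplings to the governing and opposition coalitions is an assumption made here for illustration , consistent with the qualitative description above ( alpha > 0 increases the interaction with the lc by the stated sign convention ) ; the paper 's exact dependence on the satisfaction index is not reproduced in this excerpt .

```python
import numpy as np

# Hedged illustration: mean-field world matrix of a CG agent, with the couplings
# to the two coalitions rescaled by a satisfaction index alpha. The (1 +/- alpha)
# form is an assumption, not the paper's exact expression.

def cg_world_matrix(W_self, W_cg, W_lc, W_rc, alpha, beta=1.0):
    return (W_self
            + beta * W_cg                   # interaction within the CG (unmodified)
            + beta * (1.0 + alpha) * W_lc   # stronger pull toward the LC if alpha > 0
            + beta * (1.0 - alpha) * W_rc)  # correspondingly weaker pull toward the RC

d = 10
rng = np.random.default_rng(1)
W_self, W_cg, W_lc, W_rc = (M @ M.T for M in rng.standard_normal((4, d, d)))
print(cg_world_matrix(W_self, W_cg, W_lc, W_rc, alpha=0.05).shape)
```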
for the simulations of the dynamical evolution one considers the initial configuration previously discussed , with d=10 , k=4 and 10.000 agents .the results are averaged on 200 samples .table [ tab1 ] reports the results of the italian national vote from 1994 to 2004 for the different groups , the difference ( in percent ) , , in the final vote between the l and r coalitions and the fitted values of and which reproduce the data ( starting with the initial configurations previously discussed ) ..[tab1]the results of the italian national vote from 1994 to 2004 for the three different groups ( rc , lc , og ) and the difference ( ) ( in percent ) .we also report the respective values of and which allow us to reproduce the data .the l coalition includes the following parties : democratici di sinistra , margherita , rifondazione comunista , verdi , udeur , sdi , italia dei valori , comunisti italiani .the r coalition contains forza italia , alleanza nazionale , lega nord , udc , pri .the other parties are in the third group . [ cols="^,^,^,^,^,^,^",options="header " , ] as one can see , worsening government rating , which implies a positive , can cause today majority to loose the election and become minority . however , this depends on the numerical value of and the question arises if it is somehow related to the evolution of the opinions on the government decisions as `` detected '' , for example , by polls .the answer to this risky question requires a careful comparison with the rating of the governments obtained by polls immediately before the various elections .this point is interesting also for practical reasons ( see sec .5 ) and will be discussed in a forthcoming paper .the proposed model is a first step towards a more realistic description of the opinion dynamics of the political vote and , indeed , it has many key elements of the decision process .however there are some approximations : however , in the real political arena the winning coalition is determined not only by the vote decision of the agents in the cg but also by the decision not to vote of the agents in the core coalition : an agent with clear political convictions but strongly dissatisfied from his / her government does not vote for the opposite coalition but prefers the abstentionism . since the winning coalition has only a few percent of vote more than the minority , this effect can easily determine the final result .\2 ) for the same reason , the symmetric distribution of the core of the l and r coalitions is an approximation : a small initially asymmetry of the coalitions can change the final result and/or increase the strength of the global parameter needed for the victory . indeed, the opinion dynamics starts soon after a political vote and completes the evolution with the next vote choice .the agents in the cg who voted for the winning coalition are , at least immediately after the vote , predominately interested to interact with the majority . from this point of view, the symmetric local interaction means that the model applies for timeframe close to the next vote or , generally , when the agents of the cg become more independent respect to the previous majority .the points 1 - 3 require the introduction of ( at least ) other three parameters in the model .this last point is interesting because during the political polls , people are more incline to answer on more general and less direct political questions . 
for example , the answer is less uncertain if the question is about the coalition rather than the single political party .now , it is not difficult to think of a set of undirect questions able to determine the parameters of the model ( , plus the others needed for points 1 - 3 ) and put on a more rigorous basis the relation among polls and mathematical models .finally , the next steps include : a ) a more realistic version of the model ; b ) a combined effort with poll experts ; c ) the application of the model to different countries . indeed , the correlation , if any , between and certainly depends on the peculiar characteristics of the considered nation but it should be important to verify if there are similar political behaviors in different places .the authors thank g. fath and d. zappala for useful suggestions and comments , s. fortunato and g. gambarelli for interesting discussions and r. fonda , of the swg , for the help in understanding the relation among the model parameters and the political poll results .g. deffuant , d. neau , f. amblard and g. weisbuch , adv .. syst . * 3 * , 87 ( 2000 ) ; g. weisbuch , g. deffuant , f. amblard and j. p. nadal , complexity * 7*,87 ( 2002 ) ; g. deffuant , f. amblard , g. weisbuch and t. faure , journal of artificial societes and social simulations * 5 * , issue 4 , ( 2002 ) .g. fath and m. sarvary , physica a * 348*,611 ( 2005 ) and in `` economics and heterogeneous interacting agents '' - lecture notes in economic and mathematical systems , a.namate,t.kaizouji and y.aruka eds , springer ( 2005 ) .g. owen , game theory- academic press , san diego 1995 ; for a review see g.gambarelli and g.owen , `` the coming of game theory '' , essay on cooperative games- in honour of g. owen- special issues of theory and decision * 36 * , 1 ( 2004),g.gambarelli ed ., kluwe academic publishers .
a model of the opinion dynamics underlying political decisions is proposed . the analysis is restricted to a bipolar scheme with a possible third political area . the interaction among voters is local , but the final decision strongly depends on global effects such as , for example , the rating of the government . as in the realistic case , the individual decision making process is determined by the most relevant personal interests and problems . a phenomenological analysis of the national vote in italy and germany has been carried out , and a prediction of the next italian vote as a function of the government rating is presented . keywords : disordered systems , opinion dynamics
linear network coding is a promising new approach to information dissemination over networks .the fact that packets may be linearly combined at intermediate nodes affords , in many useful scenarios , higher rates than conventional routing approaches . if the linear combinations are chosen in a random , distributed fashion , then random linear network coding not only maintains most of the benefits of linear network coding , but also affords a remarkable simplicity of design that is practically very appealing .however , linear network coding has the intrinsic drawback of being extremely sensitive to error propagation . due to packet mixing , a single corrupt packet has the potential to contaminate all packets received by a destination node .the problem is better understood by looking at a matrix model for ( single - source ) linear network coding , given by all matrices are over a finite field . here, is an matrix whose rows are packets transmitted by the source node , is an matrix whose rows are the packets received by a ( specific ) destination node , and is a matrix whose rows are the additive error packets injected at some network links .the matrices and are transfer matrices that describe the linear transformations incurred by packets on route to the destination .such linear transformations are responsible for the ( unconventional ) phenomenon of error propagation .there has been an increasing amount of research on error control for network coding , with results naturally depending on the specific channel model used , i.e. , the joint statistics of , and given . under a worst - case ( or adversarial ) error model , the work in ( together with ) has obtained the maximum achievable rate for a wide range of conditions . if is square ( ) and nonsingular , and , then the maximum information rate that can be achieved in a single use of the channel is exactly packets when is known at the receiver , and approximately packets when is unknown .these approaches are inherently pessimistic and share many similarities with classical coding theory .recently , montanari and urbanke brought the problem to the realm of information theory by considering a probabilistic error model .their model assumes , as above , that is invertible and ; in addition , they assume that the matrix is chosen uniformly at random among all matrices of rank .for such a model and , under the assumption that the transmitted matrix must contain an identity submatrix as a header , they compute the maximal mutual information in the limit of large matrix size approximately packets per channel use .they also present an iterative coding scheme with decoding complexity that asymptotically achieves this rate .the present paper is motivated by , and by the challenge of computing or approximating the actual channel capacity ( i.e. , without any prior assumption on the input distribution ) for any channel parameters ( i.e. , not necessarily in the limit of large matrix size ) .our contributions can be summarized as follows : * assuming that the matrix is a constant known to the receiver , we compute the exact channel capacity for any channel parameters . we also present a simple coding scheme that asymptotically achieves capacity in the limit of large field or matrix size . 
* assuming that the matrix is chosen uniformly at random among all nonsingular matrices , we compute upper and lower bounds on the channel capacity for any channel parameters .these bounds are shown to converge asymptotically in the limit of large field or matrix size .we also present a simple coding scheme that asymptotically achieves capacity in both limiting cases .the scheme has decoding complexity and a probability of error that decays exponentially fast both in the packet length and in the field size in bits . * we present several extensions of our results for situations wherethe matrices , and may be chosen according to more general probability distributions .a main assumption that underlies this paper ( even the extensions mentioned above ) is that the transfer matrix is always invertible .one might question whether this assumption is realistic for actual network coding systems .for instance , if the field size is small , then random network coding may not produce a nonsingular with high probability .we believe , however , that removing this assumption complicates the analysis without offering much insight . under an _ end - to - end coding _( or _ layered _ ) approach , there is a clear separation between the network coding protocol which induces a matrix channel and the error control techniques applied at the source and destination nodes .in this case , it is reasonable to assume that network coding system will be designed to be _ feasible _( i.e. , able to deliver to all destinations ) when no errors occur in the network .indeed , a main premise of linear network coding is that the field size is sufficiently large in order to allow a feasible network code .thus , the results of this paper may be seen as conditional on the network coding layer being successful in its task .the remainder of this paper is organized as follows . in section [ sec : matrix - channels ] , we provide general considerations on the type of channels studied in this paper . in section [ sec :mmc ] , we address a special case of ( [ eq : basic - channel - model ] ) where is random and , which may be seen as a model for random network coding without errors . in section [ sec : amc ] , we address a special case of ( [ eq : basic - channel - model ] ) where is the identity matrix .this channel may be seen as a model for network coding with errors when is known at the receiver , since the receiver can always compute . 
the complete channel with a random, unknown is addressed section [ sec : ammc ] , where we make crucial use of the results and intuition developed in the previous sections .section [ sec : extensions ] discusses possible extensions of our results , and section [ sec : conclusion ] presents our conclusions .we will make use of the following notation .let be the finite field with elements .we use to denote the set of all matrices over and to denote the set of all matrices of rank over .we shall write simply when the field is clear from the context .we also use the notation for the set of all full - rank matrices .the all - zero matrix and the identity matrix are denoted by and , respectively , where the subscripts may be omitted when there is no risk of confusion .the reduced row echelon ( rre ) form of a matrix will be denoted by .for clarity and consistency of notation , we recall a few definitions from information theory .a discrete channel consists of an input alphabet , an output alphabet , and a conditional probability distribution relating the channel input and the channel output .an code for a channel consists of an encoding function and a decoding function , where denotes a decoding failure .it is understood that an code is applied to the extension of the discrete memoryless channel . a rate ( in bits ) is said to be achievable if there exists a sequence of codes such that decoding is unsuccessful ( either an error or a failure occurs ) with probability arbitrarily small as .the capacity of the channel is the supremum of all achievable rates .it is well - known that the capacity is given by where denotes the input distribution . here , we are interested in matrix channels , i.e. , channels for which both the input and output variables are matrices . in particular , we are interested in a family of additive matrix channels given by the channel law where , , , , and , and are statistically independent . since the capacity of a matrix channel naturally scales with , we also define a _ normalized capacity _ in the following ,we assume that statistics of , and are given for all . in this case, we may denote a matrix channel simply by the tuple , and we may also indicate this dependency in both and .we now define two limiting forms of a matrix channel ( strictly speaking , of a sequence of matrix channels ) . the first form , which we call the _ infinite - field - size channel _ , is obtained by taking .the capacity of this channel is given by represented in -ary units per channel use .the second form , which we call the _ infinite - rank channel _, is obtained by setting and , and taking .the normalized capacity of this channel is given by represented in -ary units per transmitted -ary symbol .we will hereafter assume that logarithms are taken to the base and omit the factor from the above expressions .note that , to achieve the capacity of an infinite - field - size channel ( similarly for an infinite - rank channel ) , one should find a two - dimensional family of codes : namely , a sequence of codes with increasing block length for each , as ( or for each , as ) .we will simplify our task here by considering only codes with block length , which we call _ one - shot codes_. we will show , however , that these codes can achieve the capacity of both the infinite - field - size and the infinite - rank channels , at least for the classes of channels considered here . in other words , one - shot codes are asymptotically optimal as either or . 
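the reduced row echelon ( rre ) form introduced in the notation above , and the fact that it is invariant under left - multiplication by a nonsingular matrix , are the properties exploited later for the multiplicative channel . the toy sketch below ( python , over gf(2) with small illustrative dimensions ) checks this invariance directly ; it is not the coding scheme of the paper , only a demonstration of the channel invariant used by the decoder .

```python
import numpy as np

# Toy check over GF(2): for Y = A X with A nonsingular, the reduced row echelon
# form (RREF) of Y equals that of X, so the row space (subspace representative)
# survives the channel. Sizes and the field GF(2) are illustrative choices only.

def rref_gf2(M):
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]          # bring the pivot row up
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2       # eliminate the column elsewhere
        r += 1
    return M

rng = np.random.default_rng(2)
n, m = 3, 6
X = rng.integers(0, 2, size=(n, m))
while True:                                    # draw a random nonsingular A over GF(2)
    A = rng.integers(0, 2, size=(n, n))
    if round(np.linalg.det(A)) % 2 == 1:
        break
Y = (A @ X) % 2
print(np.array_equal(rref_gf2(Y), rref_gf2(X)))   # expected: True
```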
for completeness, we define also two more versions of the channel : the _ infinite - packet - length channel _ , obtained by fixing , and , and letting , and the _ infinite - batch - size channel _, obtained by fixing , and , and letting . these channels are discussed in section [ ssec : other - infinite - channels ] .it is important to note that a channel is not the same as the -extension of a channel .for instance , the -extension of a channel has channel law where , and and correspond to independent realizations of a channel .this is not the same as the channel law for a channel , since may not be equal to . to the best of our knowledge ,the -extension of a channel has not been considered in previous works , with the exception of .for instance , and consider only limiting forms of a channel .although both models are referred to simply as `` random linear network coding , '' the model implied by the results in is in fact an infinite - rank channel , while the model implied by the results in is an infinite - packet - length - infinite - field - size channel .we now proceed to investigating special cases of ( [ eq : prob - basic - channel - law ] ) , by considering specific statistics for , and .we define the _ multiplicative matrix channel _ ( mmc ) by the channel law where is chosen uniformly at random among all nonsingular matrices , and independently from .note that the mmc is a channel . in order to find the capacity of this channel, we will first solve a more general problem .[ prop : mmc - group - capacity ] let be a finite group that acts on a finite set . consider a channel with input variable and output variable given by , where is drawn uniformly at random and independently from .the capacity of this channel , in bits per channel use , is given by where is the number of equivalence classes of under the action of .any complete set of representatives of the equivalence classes is a capacity - achieving code . for each ,let denote the orbit of under the action of . recall that for all and all , that is , the orbits form equivalence classes . for ,let .by a few manipulations , it is easy to show that for all .since has a uniform distribution , it follows that = 1/|\mathcal{g}(x)| ] , where is a full - rank matrix chosen uniformly at random. an equivalent way of generating is to first generate the entries of a matrix uniformly at random , and then discard if it is not full - rank .thus , we want to compute ] . by the union bound, it follows that the probability of failure satisfies [ prop : amc - coding - achievability ] the coding scheme described above can achieve both capacity expressions ( [ eq : amc - capacity - limit - q ] ) and ( [ eq : amc - capacity - limit - m ] ) . from ( [ eq : amc - failure - prob ] ) , we see that achieving either of the limiting capacities amounts to setting a suitable .to achieve ( [ eq : amc - capacity - limit - q ] ) , we set and let grow .the resulting code will have the correct rate , namely , in -ary units , while the probability of failure will decrease exponentially with the field size in bits .alternatively , to achieve ( [ eq : amc - capacity - limit - m ] ) , we can choose some small and set , where both and are assumed fixed . by letting grow ,we obtain a probability of failure that decreases exponentially with . 
the ( normalized ) gap to capacity of the resulting codewill be which can be made as small as we wish .consider a channel with , and uniformly distributed and independent from other variables .since is invertible , we can rewrite ( [ eq : prob - basic - channel - law ] ) as now , since acts transitively on , the channel law ( [ eq : ammc - model ] ) is equivalent to where and are chosen uniformly at random and independently from any other variables .we call ( [ eq : ammc - model-2 ] ) _ the additive - multiplicative matrix channel _ ( ammc ). one of the main results of this section is the following theorem , which provides an upper bound on the capacity of the ammc .[ thm : ammc - capacity - upper - bound ] for , the capacity of the ammc is upper bounded by let . by expanding , and using the fact that , and form a markov chain , in that order , we have where ( [ eq : proof - ammc - mutual-1 ] ) follows since and are independent .we now compute an upper bound on .let and write , where and .note that where . since is full - rank , it must contain an invertible submatrix . by reordering columns if necessary ,assume that the left submatrix of is invertible .write , and , where , and have columns , and , and have columns. we have it follows that can be computed if is known .thus , where ( [ eq : proof - ammc - bound-1 ] ) follows since may possibly be any matrix with rank .applying this result in ( [ eq : proof - ammc - mutual-2 ] ) , and using ( [ eq : bound - sum - gc ] ) and ( [ eq : number - matrices - rank ] ) , we have }{0pt}{}{m}{n } } + \log_q ( t+1)\frac{|{\mathcal{t}}_{n\times t}|{\genfrac{[}{]}{0pt}{}{n}{t}}}{|{\mathcal{t}}_{n\times t}|{\genfrac{[}{]}{0pt}{}{m}{t } } } \nonumber \\ & \leq \log_q ( n+1)(t+1 ) { \genfrac{[}{]}{0pt}{}{m - t}{n - t } } \label{eq : proof - ammc - mutual-3 } \\ & \leq ( m - n)(n - t ) + \log_q 4(1+n)(1+t ) .\nonumber\end{aligned}\ ] ] where ( [ eq : proof - ammc - mutual-3 ] ) follows from }{0pt}{}{m}{n } } { \genfrac{[}{]}{0pt}{}{n}{t } } = { \genfrac{[}{]}{0pt}{}{m}{t } } { \genfrac{[}{]}{0pt}{}{m - t}{n - t}} ] . since conclude that the capacities of the amc and the ammc may be reduced by at most .this loss is asymptotically negligible for large and/or large , so the expressions ( [ eq : amc - capacity - limit - q ] ) , ( [ eq : amc - capacity - limit - m ] ) , ( [ eq : ammc - capacity - limit - q ] ) and ( [ eq : ammc - capacity - limit - m ] ) remain unchanged .the steps for decoding and computing the probability of error trapping failure also remain the same , provided we replace by .the only difference is that now decoding errors may occur . more precisely ,suppose that .a necessary condition for success is that .if this condition is not satisfied , then a decoding failure is declared .however , if the condition is true , then the decoder can not determine whether ( an error trapping success ) or ( an error trapping failure ) , and must proceed assuming the former case .if the latter case turns out to be true , we would have an undetected error .thus , for this model , the expression ( [ eq : amc - failure - prob ] ) gives a bound on the probability that decoding is not successful , i.e. , that either an error or a failure occurs .we now extend our results to the infinite - packet - length amc and ammc and the infinite - batch - size amc .( note that , as pointed out in section [ sec : mmc ] , there is little justification to consider an infinite - batch - size ammc . 
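the counting arguments above repeatedly use the gaussian ( q- ) binomial coefficient , which counts the n - dimensional subspaces of an m - dimensional space over gf(q) . a direct , exact evaluation is sketched below ; the example values are arbitrary .

```python
from functools import reduce

# Exact Gaussian binomial coefficient [m choose n]_q: the number of n-dimensional
# subspaces of an m-dimensional vector space over GF(q).

def gaussian_binomial(m, n, q):
    if n < 0 or n > m:
        return 0
    num = reduce(lambda acc, i: acc * (q**(m - i) - 1), range(n), 1)
    den = reduce(lambda acc, i: acc * (q**(n - i) - 1), range(n), 1)
    return num // den

print(gaussian_binomial(4, 2, 2))   # 35 two-dimensional subspaces of GF(2)^4
```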
) from the proof of propositon [ prop : amc - capacity ] and the proof of corollary [ cor : ammc - capacity - limit ] , it is straightforward to see that it is _ not _ straightforward , however , to obtain capacity - achieving schemes for these channels .the schemes described in sections [ sec : amc ] and [ sec : ammc ] for the infinite - rank amc and ammc , respectively , use an error trap whose size ( in terms of columns _ and _ rows ) grows proportionally with ( or ) . while this is necessary for achieving vanishingly small error probability, it also implies that these schemes are not suitable for the infinite - packet - length channel ( where but not ) or the infinite - batch - size channel ( where but not ) . in these situations , the proposed schemes can be adapted by replacing the data matrix and part of the error trap with a _ maximum - rank - distance _ ( mrd ) code .consider first an infinite - packet - length amc .let the transmitted matrix be given by where is a codeword of a matrix code .if ( column ) error trapping is successful then , under the terminology of , the decoding problem for amounts to the correction of _ erasures_. it is known that , for , an mrd code with rate can correct exactly erasures ( with zero probability of error ) .thus , decoding fails if and only if column trapping fails .similarly , for an infinite - batch - size amc , let the transmitted matrix be given by where is a codeword of a matrix code .if ( row ) error trapping is successful then , under the terminology of , the decoding problem for amounts to the correction of _ deviations_. it is known that , for , an mrd code with rate can correct exactly deviations ( with zero probability of error ) .thus , decoding fails if and only if row trapping fails .finally , for the infinite - packet - length ammc , it is sufficient to prepend to ( [ eq : amc - mrd - error - trap ] ) an identity matrix , i.e. , the same reasoning as for the infinite - packet - length amc applies here , and the decoder in is also applicable in this case . for more details on the decoding of an mrd code combined with an error trap ,we refer the reader to .the decoding complexity is in and ( whichever is smaller ) . in all cases ,the schemes have probability of error upper bounded by and therefore are capacity - achieving .we have considered the problem of reliable communication over certain additive matrix channels inspired by network coding .these channels provide a reasonable model for both coherent and random network coding systems subject to random packet errors . in particular , for an additive - multiplicative matrix channel, we have obtained upper and lower bounds on capacity for any channel parameters and asymptotic capacity expressions in the limit of large field size and/or large matrix size ; roughly speaking , we need to use redundant packets in order to be able to correct up to injected error packets .we have also presented a simple coding scheme that achieves capacity in these limiting cases while requiring a significantly low decoding complexity ; in fact , decoding amounts simply to performing gauss - jordan elimination , which is already the standard decoding procedure for random network coding . 
compared to previous work on correction of adversarial errors ( where approximately redundant packets are required ) ,the results of this paper show an improvement of redundant packets that can be used to transport data , if errors occur according to a probabilistic model .several questions remain open and may serve as an interesting avenue for future research : * our results for the ammc assume that the transfer matrix is always nonsingular. it may be useful to consider a model where is a random variable .note that , in this case , one can not expect to achieve reliable ( and efficient ) communication with a one - shot code , as the channel realization would be unknown at the transmitter .thus , in order to achieve capacity under such a model ( even with arbitrarily large or ) , it is strictly necessary to consider multi - shot codes . * as pointed out in section [ ssec : nonuniform - packet - errors ] , our proposed coding scheme may not be even close to optimal when packet errors occur according to a nonuniform probability model . especially in the case of low - weight errors, it is an important question how to approach capacity with a low - complexity coding scheme. it might also be interesting to know whether one - shot codes are still useful in this case . *another important assumption of this paper is the bounded number of packet errors .what if is unbounded ( although with a low number of errors being more likely than a high number ) ? while the capacity of such a channel may not be too hard to approximate ( given the results of this paper ) , finding a low - complexity coding scheme seems a very challenging problem .we would like to thank the associate editor and the anonymous reviewers for their helpful comments .t. ho , m. mdard , r. koetter , d. r. karger , m. effros , j. shi , and b. leong , `` a random linear network coding approach to multicast , '' _ ieee trans .inf . theory _ , vol .52 , no .44134430 , oct . 2006 .s. jaggi , m. langberg , s. katti , t. ho , d. katabi , m. mdard , and m. effros , `` resilient network coding in the presence of byzantine adversaries , '' _ ieee trans .inf . theory _54 , no . 6 , pp . 25962603 , jun .2008 .danilo silva ( s06 ) received the b.sc .degree from the federal university of pernambuco , recife , brazil , in 2002 , the m.sc .degree from the pontifical catholic university of rio de janeiro ( puc - rio ) , rio de janeiro , brazil , in 2005 , and the ph.d .degree from the university of toronto , toronto , canada , in 2009 , all in electrical engineering .frank r. kschischang ( s83m91sm00f06 ) received the b.a.sc .degree ( with honors ) from the university of british columbia , vancouver , bc , canada , in 1985 and the m.a.sc . and ph.d. degrees from the university of toronto , toronto , on , canada , in 1988 and 1991 , respectively , all in electrical engineering .he is a professor of electrical and computer engineering and canada research chair in communication algorithms at the university of toronto , where he has been a faculty member since 1991 . during 19971998 , he was a visiting scientist at the massachusetts institute of technology , cambridge , and in 2005 he was a visiting professor at the eth , zrich , switzerland .his research interests are focused on the area of channel coding techniques .kschischang was the recipient of the ontario premier s research excellence award . 
from 1997 to 2000 , he served as an associate editor for coding theory for the ieee transactions on information theory .he also served as technical program co - chair for the 2004 ieee international symposium on information theory ( isit ) , chicago , il , and as general co - chair for isit 2008 , toronto .ralf ktter ( s92-m95-sm06-f09 ) received a diploma in electrical engineering from the technical university darmstadt , germany in 1990 and a ph.d .degree from the department of electrical engineering at linkping university , sweden . from 1996 to 1997 , he was a visiting scientist at the ibm almaden research laboratory in san jose , ca .he was a visiting assistant professor at the university of illinois at urbana - champaign and a visiting scientist at cnrs in sophia - antipolis , france , from 1997 to 1998 .in the years 1999 - 2006 he was member of the faculty of the university of illinois at urbana - champaign , where his research interests included coding and information theory and their application to communication systems . in 2006he joined the faculty of the technische universitt mnchen , munich , germany , as the head of the institute for communications engineering .he served as an associate editor for both the ieee transactions on communications and the ieee transactions on information theory .he received an ibm invention achievement award in 1997 , an nsf career award in 2000 , an ibm partnership award in 2001 , and a 2006 xerox award for faculty research .he is co - recipient of the 2004 information theory society best paper award , of the 2004 ieee signal processing magazine best paper award , and of the 2009 joint communications society and information theory society best paper award .he received the vodafone innovationspreis in 2008 .
this paper is motivated by the problem of error control in network coding when errors are introduced in a random fashion ( rather than chosen by an adversary ) . an additive - multiplicative matrix channel is considered as a model for random network coding . the model assumes that packets of length are transmitted over the network , and up to erroneous packets are randomly chosen and injected into the network . upper and lower bounds on capacity are obtained for any channel parameters , and asymptotic expressions are provided in the limit of large field or matrix size . a simple coding scheme is presented that achieves capacity in both limiting cases . the scheme has decoding complexity and a probability of error that decreases exponentially both in the packet length and in the field size in bits . extensions of these results for coherent network coding are also presented . error correction , error trapping , matrix channels , network coding , one - shot codes , probabilistic error model .
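The counting arguments behind the capacity bounds above rely on the Gaussian binomial coefficient and on the standard estimate that it lies between q^{n(m-n)} and 4 q^{n(m-n)}; the sketch below checks this estimate numerically on a small grid. The function name and the grid of parameters are assumptions of this illustration only.

```python
from itertools import product

def gaussian_binomial(m, n, q):
    """Number of n-dimensional subspaces of an m-dimensional space over GF(q)."""
    if n < 0 or n > m:
        return 0
    num = den = 1
    for i in range(n):
        num *= q**(m - i) - 1
        den *= q**(n - i) - 1
    return num // den            # exact: the Gaussian binomial is an integer

# check  q^{n(m-n)} <= [m choose n]_q <= 4 * q^{n(m-n)}  on a small grid
for q, m in product([2, 3, 4], [1, 2, 4, 6, 8]):
    for n in range(m + 1):
        g = gaussian_binomial(m, n, q)
        assert q**(n * (m - n)) <= g <= 4 * q**(n * (m - n))
print("bounds hold on all tested (q, m, n)")
```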
brownian motion with purely time dependent drift and diffusion are ubiquitous in geophysical , environmental and biophysical processes .one can identify numerous geophysical and environmental processes which occur under the crucial effect of external time dependent and random forcing , e.g. , the change between the snow - storage and snow - melt phases , outbreak of water - borne diseases , the life cycle of tidal communities , and many more .stochastic models with time dependent drift and diffusion terms are extensively used in the study of neuroscience .one of the most useful tool to tackle such stochastic processes is the fokker- planck formalism . in this formalism ,different realizations of a system are narrated in terms of probability density which denotes the system in a given state at a certain instant and the theoretical description of such 1-d diffusion motion is governed by : in this respect several interesting questions of wide inter- est can be raised such as , ( i)the probability of finding the system in a certain domain at a certain instant ( survival probability ) , ( ii)the pdf of time at which the system exit a certain domain first time ( known as first passage time ) starting from initial point , ( iii)the pdf of the maximum value of a bm process be- fore of its first passage time , and ( iv)the joint probability distribution of the maximum value m and its occurrence time before the first passage time of the bm process . +all the above mentioned pdfs are calculated and discussed for simple wiener and ornstein - uhlenbeck processes as well as in the context of dna breathing dynamics .but , all these discussions are based on constant drift and diffusion terms .however , the extension to a time dependent drift and time dependent diffusion terms are not straightforward .this is mainly because of the fact that the system has broken both the space and time homogeneity .several attempts are made to study bm process with purely time dependent drift and diffusion terms .one of the main work on bm with time dependent drift is barrierless electronic reactions in solutions . generalizing the oster - nishijima model to the low viscosity limit or the inertial limit ,the authors observed a strong dependence on friction and temperature of the decay rate even in the absence of the barrier , which agrees well with numerical simulation of the full lanevin equation .a series of works on stochastic resonance for time dependent sinusoidal drift is analyzed in refs . .the first passage time statistics for a wiener process with an exponential time dependent drift term are analyzed in the context of neu- ron dynamics in refs . .also , recent studies of dna unzipping under periodic forcing need to be mentioned .recently , molini _ et . al _ make a study on bm with purely time dependent drift and diffusion terms .+ in this work , we extend above mentioned works by incorporating several pdfs of brownian motion i.e. 
, and for a bm with purely time dependent drift and diffusion terms .one of the main objective of this work is to incorporate inertial effect in brownian functional study and to our best of knowledge , it is the first attempt to incorporate inertial effect in first passage study which is one of the impor- tant unsolved problem .the other objective of this work is to advertise for the use of the recently studied backward fokker - planck ( bfp ) method and the path decomposition ( pd ) method .both the bfp and pd methods are based on the feynman - kac formalism and both of them are first time used for exploring bm process with purely time dependent drift and diffusion terms . both the techniques are extensively used in study- ing many aspects of classical brownian motion , as well as for exploring different problems in computer science and astronomy . for the first time , we consider these elegant methods to study the brownian functionals for a bm with purely time dependent drift and diffu- sion . unlike the standard fp treatment yields distribution functions directly , we derive and solve differential equations for the laplace transforms of var- ious brownian functionals in the bfp method . on the other hand, we can utilize the pd method to calculate the distribution functions of interest by splitting a rep- resentative path of the dynamics into parts with their appropriate weighage of each part separately .this fact is justifiable by considering the markovian property of the dynamics . + the paper is organized as follows . in sectionii , we dis- cuss our bm process model with purely time dependent drift and diffusion terms .then we discuss several dis- tribution functions of interest and their relevances .the bfp and pd methods are explained in short . in sec .iii , we introduce several pdfs for a bm with power law time dependent drift and diffusion terms .we illustrate the example of fresh water availability in summer in the snowmelt dominated regime with the power law time dependent drift and diffusion terms .we conclude our paper in section iv .we are interested with those kind of problem where time - dependent random forcing is predominant . hence , the fokker - planck description of such problem can be made through eq .the associated stochastic differential equation for the state variable x(t ) is given by : where , is the purely time dependent drift term , denotes the diffusion term , and is a wiener process with gaussian distribution .the wiener process is an idealized statistical descriptions that apply to many physical systems .one of the most elegant theoretical method to tackle such kind of stochastic processes is the fokker - planck ( fp ) formalism . 
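The dynamics of eq. (2) can also be sampled directly with an Euler-Maruyama scheme, which is convenient for checking the analytical pdfs derived later. The power-law coefficients used below (a diffusion k t^alpha and a drift proportional to it) are illustrative assumptions in the spirit of the snowmelt example of the next section, not values prescribed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(x0, mu, sigma, T, dt, n_paths):
    """Euler-Maruyama integration of dX = mu(t) dt + sigma(t) dW
    with absorption at the origin (the first-passage boundary)."""
    x = np.full(n_paths, float(x0))
    alive = np.ones(n_paths, dtype=bool)
    fpt = np.full(n_paths, np.nan)              # first-passage times
    for t in np.arange(0.0, T, dt):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x[alive] += mu(t) * dt + sigma(t) * dw[alive]
        hit = alive & (x <= 0.0)
        fpt[hit] = t + dt
        alive &= ~hit
    return fpt

# illustrative power-law coefficients (assumed, not taken from the text)
k, alpha, qc, h0 = 1.0, 0.5, 0.3, 1.0
mu = lambda t: -qc * k * t**alpha
sigma = lambda t: np.sqrt(k * t**alpha)
fpt = simulate_paths(h0, mu, sigma, T=20.0, dt=1e-3, n_paths=20_000)
print("fraction absorbed:", np.mean(~np.isnan(fpt)))
print("mean first-passage time of absorbed paths:", np.nanmean(fpt))
```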
in this formalism, one can describe different realizations of a system by the probability density .one can find the system in a given state at a certain time , and the corresponding diffusion equation describes its temporal evolution .several interesting questions related to such stochastic systems are of wide interest in several areas .one of the main interest in this field is to find the probability density for the system remains in a certain domain at a given instant and the moment at which the system escapes it for the first time .due to the stochastic nature of the system , different realizations of the system leave a certain domain at different times and it is natural to consider the statistical properties of this random variable .other interesting questions related with such first passage statistics are ( i ) finding the probability density of area under a path ( ii ) probability density of maximum size and ( iii ) the joint probability density of maximum size and its occurrence time .+ in one dimension , the first passage statistics related problem are basically formulated by considering a state variable which evolves stochastically according to a given law in its phase space .we are mainly concerned about the instant when the variable leaves a certain domain for the first time . to deal with such problema number of several methods or approaches had been described in refs . . here , we describe two elegant methods ( i)backward fokker - planck ( bfp ) method and ( ii ) path decomposition method ( pd ) . following ref . , we can introduce a general description to compute the pdf of a brownian functional in a time interval , where is the first passage time of the process .thus , one can introduce a functional to calculate different statistical properties of a brownian functional : where , is a brownian path which follows differential eq.(2 ) and it starts at at time and continues up to . here , is a specified function of the path and its form depends on the quantity we are interested to calculate . for example , if we are interested to calculate first passage time one should choose . on the other hand , for the area distribution one should consider .one can easily understand that is a random variable which can take different values for different brownian paths .the main goal is to calculate probability distribution .now , one may note that the random variable can be only positive for our choice of , thus , one may consider the laplace transform of the distribution : here , the angular bracket denotes the average over all possible paths starting at at and ending at the first time they cross the origin . for simplicity, we will drop the variable p in the function in the rest of our paper .now , to derive a differential equation for , we follow the method described in ref .thus , we split the interval into two parts . during the first interval , the path starts from andpropagates up to . in the second interval ,the path starts at and ends at 0 at time . here , is a fixed , infinitesimally small time interval . to leading order in , we obtain : . as a result ofthat one can obtain from eq .( 4 ) : here , the angular bracket denotes the average over all possible realizations of .now , one can obtain from the dynamical equation for a free langevin particle , i.e. 
from that .now , expanding in powers of , and taking the averages over the noise by using the facts and as , one obtains , to lowest order in , the ordinary differential equation : _ boundary conditions : _ equation ( 6 ) is valid in the regime with the following boundary conditions : ( i ) as the initial position , the first passage time vanishes which gives us , ( ii)on the other hand , as , the first passage time diverges which results in . + thus , our scheme will be as follows .we can solve the differential eq .( 6 ) , termed as the bfp equation . by solving eq.(6 ) with appropriate boundary condition as mentioned above provides us the laplace transformed pdfs of various quantities which are determined by the choice of u(x ) .now , inverting the laplace transform with respect to p , one can obtain the desired pdf . on the other hand , the standard fokker - planck method adopted in refs . yields the distribution function p(x , t ) directly .thus , these two approaches are distinct , providing complementary information .the basic principle of this pd method is very simple .since , our motion in eq.(2 ) is markovian one can break a typical path into two parts .thus , the weightage of the whole path is the product of the weights of the two split parts .thus , the joint probability distribution of the maximum bubble size m and the occurrence time at which this maximum occurs before first passage .now , integrating over m , one can obtain the marginal distribution . the basic process to compute by splitting a typical path into two parts , before and after . here , weights and are the weighage of the path before and after . as a matter of fact ,the total weight w of the whole path is : on the left - hand side of , the path propagates from at to at , without attaining the value or during the interval .now , the weight can be determined by using a path - integral treatment based on the feynman - kac formalism .let us we denote be the probability that the motion described by eq .( 2 ) exits the interval for the first time through the origin .thus , is the cumulative probability that the maximum before the first - passage time is .it is known that this function satisfies two boundary conditions : ( i ) and ( ii ) . let us consider a function which gives us the distribution function of a small displacement in time .now , using the markovian property of the dynamics ( 2 ) , one can show that : now , making a taylor expansion of and averaging over , and using and .thus , to the leading order in we obtain now , solving the above equation with the help of above mentioned boundary boundary condition , one can obtain : now , differentiating with respect to we obtain : now , the is obtained as on the other hand , the weight can be obtained from the white noise is gaussian and the probability of a path is given by : then , the weight is then obtained as a sum over contributions from all possible paths : where , and enforce the requirements that the path does not cross either the level or the level for times between and . now , following feynman - kac , the path integral can be identified with the propagator , corresponding to the quantum hamiltonian of a single particle of unit mass , with potential energy for and for and .note , that the infinite potential energy at and at enforces the requirement that the path never crosses either the level 0 or level m. 
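As a consistency check of the backward equation (6) and its boundary conditions, the following symbolic sketch verifies that, for the first-passage functional of the driftless unit-diffusion problem obtained after the time change, the exponential solution satisfies the constant-coefficient BFP equation, and that the corresponding Levy-Smirnov density solves the backward heat equation in the starting point. This is a sketch of the check only, not a derivation, and the reduction to unit diffusion is the assumption under which it is written.

```python
import sympy as sp

x, t, p = sp.symbols('x t p', positive=True)

# Laplace-transformed first-passage functional, Q(x) = <exp(-p*T)>,
# for the driftless unit-diffusion problem after the time change
Q = sp.exp(-sp.sqrt(2 * p) * x)

# constant-coefficient BFP equation:  (1/2) Q'' - p Q = 0
print(sp.simplify(sp.Rational(1, 2) * sp.diff(Q, x, 2) - p * Q))    # -> 0

# boundary conditions: Q(0) = 1 (instant passage) and Q -> 0 as x -> oo
print(Q.subs(x, 0), sp.limit(Q, x, sp.oo))                           # -> 1, 0

# the inverse Laplace transform of Q is the Levy-Smirnov density; as a
# consistency check it satisfies the backward heat equation f_t = (1/2) f_xx
f = x / sp.sqrt(2 * sp.pi * t**3) * sp.exp(-x**2 / (2 * t))
print(sp.simplify(sp.diff(f, t) - sp.Rational(1, 2) * sp.diff(f, x, 2)))  # -> 0
```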
finally , where , and are the eigenfunctions and eigenenergies , respectively for the hamiltonian .our primary focus is on several first - passage brownian functionals of physical relevance .we consider the following quantities and explore their pdfs . in this context, we explore a physical phenomenon of snowmelt dynamics for the fresh water availability in summer .+ ( i)_first passage time or lifetime of the stochastic process _ : the first - passage time pdf , i.e. , the pdf of the time of touching the origin first time with initial size , provides the information about the lifetime of the stochastic process .a related quantity is the survival probability of the process .this survival probability is an experimentally measurable quantity .for example , in the context of dna breathing dynamics can be inferred from experiments by measuring fluorescence correlations of a tagged dna . in the snowmelt dynamics , our key stochastic variable is the total potential water availability , h ( in terms of water equivalent from both snow and rainfall ) .thus , the survival probability for a given initial snow water equivalent and the pdf of first passage time are very much useful quantities to offer important information about the timing between melting of snow and fresh water availability in summer under different climatic scenarios . + ( ii)_area under a path : _ if we consider a typical path which is described by eq .( 2 ) , one can define the area under such a path before the first - passage time as .the interesting quantity is its pdf with an initial value .this quantity is of interest because it provides a measure for the effectiveness of the corresponding stochastic processes .for example , if we consider the snow melt process , then gives us the information about the average total snow water equivalent with initial value .while the first - passage time distribution provides information about the lifetime , it does not contain any hint of the average total water equivalent before full melting .quantities ( i ) , ( ii)can be calculated below by following the bfp method discussed in sec .+ _ maximum size m : _ the other proposed measure for quantifying reactivity of the process is the distribution of the maximum size before the first - passage time , p(m ) . let us consider again snow melt process .now , the pdf provide us about the information about the maximum total available fresh water equivalent before total melting of snow .+ _ maximum size m and the corresponding time tm : _ the joint probability distribution function can be investigated here by following the pd method , which is based on the feynman - kac formalism ( see sec .iib-2 ) . using this pdf , one can further calculate the distribution function of the time at which the process attains its maximum size before hitting the origin .this latter pdf is of interest because it provides information about the ( average ) time of occurrence of the biggest size before hitting the origin .snowmelt is one of the main source of freshwater for many regions of the world and the snowmelt process is very much sensitive to temperature and precipitation fluctuations .snow dynamics is basically consists of two phases : ( a ) an accumulation phase in which snow water equivalent ( i.e. 
the amount of liquid water available by total and instantaneous melting of the entire snowmass ) rises to its seasonal maximum and the other one is ( b)the depletion phase where the whole snowpack gradually decreases ( release of stored water content ) due to temperature fluctuation . to describe such a complex dynamics one needs a lot of physical parameters .now , we are trying to build a simplified stochastic model which can describe the total water equivalent from both snow and rainfall during the melting season , as driven by both precipitation ( solid to liquid transition ) and increasing air temperature . due to simplification of the stochastic model, we consider the total potential water availability ( in terms of water equivalent ) as the main stochastic variable . here , we neglect any other effects connected with snow percolation and metamorphism etc . .the predominant factors which govern the fresh water availability in the warm season are increasing air temperature and liquid precipitation .accordingly , we assume the melting phase can be described by a power - law time dependent drift directed towards the total melting of the snowpack .on the other hand , positive and negative exponents of power - law diffusion usually represent precipitation events and pure melting periods , respectively .following the `` degree - day '' approach with time - varying melting - rate coefficients , one can assume the melting process can be described by a linear function of time .considering a power - law form for drift and diffusion during the melting season , the dynamics of the total water equivalent from both snow melting and precipitation at a given point in space can be reasonably described by the langevin equation : where , the drift part represents the accumulation or depletion with a rate constant and the diffusion rate is given by . also , both the rainfall and snowmelt contributions are included in . here, we assume that the drift and the diffusion follow the same power law with exponent .this is a reasonable assumption in the sense that the snow melt is most predominant in the summer time i.e. the process is expected to increase its variability during warm season .the initial value of the snow water equivalent ( swe ) , , is the accumulated snow during the cold season .+ the fokker - planck equation corresponding to the differential eq .( 17 ) now , we can use the following transformation equations to go from to space and using the above mentioned transformation equations one can reduce eq . (18 ) into a constant co - efficient free diffusion equation form : using the backward fokker - plank method one can obtain the bfp equation substituting in equation ( 22 ) , we obtain the general solution of equation ( 23 ) is inverting the laplace transform with respect to p gives the pdf of the first passage time for again transforming above equation into original variables and by using equations ( 19 ) and ( 20 ) , we get }{[1/2\int_{0}^{t_{f}}\sigma^{2}(t)dt]^{3/2}}\nonumber \\ & \times&\exp\bigg[-\dfrac{(h_{0}+\int_{0}^{t_{f}}\mu(t)dt)^2}{2\int_{0}^{t_{f}}\sigma^{2}(t)dt } \bigg]\end{aligned}\ ] ] let us consider two different cases for the time dependent drift and diffusion : ( 1) , , and .+ now , substituting and in equation ( 26 ) , we obtain ^{3/2 } } \exp \big[-\dfrac{h_{0}^2}{2k\dfrac{t_f^{\alpha+1}}{\alpha+1}}\big]\ ] ] ( 2)case 2 : proportional power - law diffusion and drift i.e. 
and ; then the first passage time distribution is given by \end{aligned}\ ] ] whereas the can supply the important information about the time of melting and summer fresh water availability , the pdf will supply us the useful information about the total summer fresh water availability under different climatic conditions .+ we can compute the distribution of ,i.e . , by substituting in equation ( 13 ) : the general solution of equation ( 29 ) is where is the airy function .now , applying the boundary conditions : + 1. when + 2. when , we obtain taking the inverse laplace transform \ ] ] again transforming above equation into original variables and by using equation ( 19 ) and ( 20 ) , we obtain &=&d(t)\dfrac{2^{1/3}}{3^{2/3}\gamma(1/3)}\dfrac{h_{0}+\int_{0}^{t_{f}}\mu(t)dt}{[a(t)]^{4/3}}\nonumber \\ & & \exp \big[-\dfrac{2[h_{0}+\int_{0}^{t_{f}}\mu(t)dt]^3}{9a(t)}\big]\end{aligned}\ ] ] _ case ( 1 ) : unbiased power law time dependent diffusion _ + in this case one can consider , , and .now , substituting the above mentioned values of and in eq .( ) we obtain the pdf of area till : ^{4/3}}\nonumber \\ & & \times \exp\big[-\dfrac{2h_{0}^3}{9a(t)}\big]\end{aligned}\ ] ] _ ( 2)case 2 : proportional power - law diffusion and drift _ + in the case of proportional power - law diffusion and drift , one may consider and , the pdf of area till the first - passage time is given by : }{(\alpha+1)[a(t)]^{4/3}}\nonumber \\ & & \exp\big[-\dfrac{2(\alpha h_{0}+h_{0}+qkt_{f}^{\alpha+1})^{3}}{9(\alpha+1)^3a(t)}\big]\end{aligned}\ ] ] the joint probability distribution of maximum and its occurrence before first passage time, , provides important information about the maximum available fresh water equivalent in summer as well as the exact timing of it . in that senseit is one of the important quantity to study .now , following the path decomposition method discussed in section iib-2 as well as in ref . , we can obtain the exact expressions of joint probability distribution for the two cases of power law .+ _ case ( 1)unbiased power law time dependent diffusion _+ in this case one can consider , , and .thus , the joint probability distribution is given by _ ( 2)case 2 : proportional power - law diffusion and drift _ + in the case of proportional power - law diffusion and drift , one may consider and , the joint probability distribution is given by : it is very difficult to plot the joint probability distribution .so , we are interested on the marginal distribution .the marginal distribution is given by now , putting the expression of , one can obtain now , putting , one can show that _ case i : large asymptote ( ) _ + introducing the variable in above equation , we obtain now , again transforming into ( x , t ) space we obtain }{[1/2\int_{0}^{t_{m}}d(t)dt]^{3/2}}\ ] ] _ a. 
unbiased diffusion and _ + in this case , we obtain _ proportional power law drift and diffusion : and _ + in this case , the marginal distribution is given by }{(\alpha+1)(t_{m}^{\alpha+1})^{3/2}}\ ] ] _ case ii : small- : _ + in this limit .now , taking the laplace transform of the eq .( ) let us consider s becomes much larger than and , we get taking the inverse laplace transform integrating the above equation over m in the limit , we get again , transforming into ( x , t ) variables we obtain }\dfrac{1}{\bigg [ \dfrac{1}{2}\int_{0}^{t_{m}}d(t)dt \bigg]^{1/2}}\ ] ] _ unbiased power law diffusion and _ + _ proportional power law time dependent drift and diffusion _ + in this case and }\dfrac{1}{(t_{m}^{\alpha+1})^{1/2}}\ ] ]in this work , we analyze several relevant probability distribution functions of various brownian functionals associated with the stochastic model for the total fresh water availability in mountain region incorporating both the temperature effect , snow accumulation and precipitation in the form of power law dependent drift and diffusion constant . based on the backward fokker - planck method discussed in ref. , we derive ( i ) the first- passage time distribution , providing informa- tion about the lifetime of the stochastic process , ( ii ) the distribution , of the area a covered by the ran- dom walk till the first - passage time , measuring the re- activity of stochastic processes , and ( iii ) the distribution p(m ) , of the maximum size m before first passage time , ( iv ) the joint probability distribution of the maximum size m and the time of its occurrence be- fore the first passage time was also obtained by employing the feynman - kac path integral formulation .the advan- tage of the elegant methods adopted here is that they produce results on various functionals by making proper choices of a single term in a parent differential equation with appropriate boundary condition .we are at present studying these functionals for brownian particle with in- ertia .if we assume initial velocity to be zero the problem is easily tractable .however , if we consider a gibbsian distribution of the initial velocity the problem is really challenging and the work is under progress along this line . + also , this study is helpful in analyzing the effect of periodic forcing in dna unzipping or the study on the effect of terahertz field on dna breathing dynamics .in the context of integrate - fire model with sinusoidal modulation of neu- ron dynamics , the membrane voltage , v(t ) , is the stochastic variable under sinusoidal stimulus . in this context , and provide important information about the timing of firing of neuron after reaching the threshold voltage with an initial value 99 d. marks , j. kimball , d. tingey , and t. link , hydrol . process .* 12 * , 1569 ( 1998 ) .a. hamlet , and d. lettenmaier , j. am .. assoc . * 35 * , 1597 ( 1999 ) .m. pascual , m. bouma , a. dobson , microbes infect . * 4 * , 237 ( 2002 ) .j. patz , d. campbell - lendrum , t. holloway , j. foley , nature * 438 * , 310 ( 2005 ) .c. barranguet , j. kromkamp , j. peene , mar .173 * , 117 ( 1998 ) .m. bertness , g. leonard , , ecology * 78 * , 1976 ( 1997 ) .h. charles , j.s .dukes , ecol .* 19 * , 1758 ( 2009 ) .bulsara , s.b .lowen , c.d .rees , phys .e * 49 * , 4989 ( 1994 ) a. r. bulsara , t. c. elston , c. r. doering , s. b. lowen , and k. lindenberg , phys .e 53 , 3958 ( 1996 ) h. e. plesser , and s. 
tanaka , physics letters a , * 225 * , 228 ( 1997 ) ; j.r.r .duarte , m.v.d .vermelho , and m.l .lyra , physica a , * 387 * , 1446 ( 2008 ) s. chandrasekhar , rev .mod . phys . * 15 * , 1 ( 1943 ) h. risken , _ the fokker - planck equation : methods of solutions and applications _ , 2nd ed .( springer - verlag , berlin , 1989 ) . c. w. gardiner , _ handbook of stochastic methods: for physics , chemistry and the natural sciences _ , 2nd ed .( springer - verlag , berlin , 1985 ) .a. siegert , phys .rev . * 81 * , 617 ( 1951);g .l. gerstein and b. mandelbrot , biophys .j. * 4 * , 41 ( 1964 ) .m. bandyopadhyay , s. gupta , and d. segal , phys .e * 83 * , 031905 ( 2011 ) a. m. jayannavar , chem .199 * , 149 ( 1992 ) .g. v. raviprasad , and a. m. jayannavar , chem .. lett . * 220 * , 353 ( 1994 ) .n. kumar , and a. m. jayannavar , phys .b * 25 * , 4291 ( 1982 ) .g. oster and y. nishijima , j. am .chem . soc .bf 78 , 1581 ( 1956 ) .d. dan , and a. m. jayannavar , physica a : statistical mechanics and its applications , * 345 * , 404 ( 2005 ) d. dan , m. c. mahato , and a. m. jayannavar , phys .e * 60 * , 6421 ( 1999 ) s. saikia , a. m. jayannavar , and m. c. mahato phys .e * 83 * , 061121 ( 2011 ) ; m. c. mahato , t. p. pareek and a. m. jayannavar , int .10 , 28(1996 ) e. urdapilleta , phys .e * 83 * , 021102 ( 2011 ) j. benda , l. maler , and a. longtin , j. neurophysiol .* 104 * , 2806 ( 2010 ) ; b. lindner and a. longtin , j. theor .biol . * 232 * , 505 ( 2005 ) .sanjay kumar and garima mishra , phys .* 110 * , 258102 ( 2013 ) ; sanjay kumar , ravinder kumar , and wolfhard janke phys .e * 93 * , 010402(r ) ( 2016 ) .alexandrov , v. gelev , a.r .bishop , a. usheva , k. .rasmussen , phys .a * 374 * , 1214 ( 2010 ) e. s. swanson phys .e * 83 * , 040901(r ) ( 2011 ) .a. molini , p. talkner , g.g .katul , and a. porporatoa , physica a * 390 * , 1841 ( 2011 ) s. n. majumdar , curr .sci . * 89 * , 2076 ( 2005 ) .j. randon - furling and s. n. majumdar , j. stat .mech . : theory exp .( 2007 ) p10008 .m. kac , trans . am .soc . * 65 * , 1 ( 1949 ) .s. n. majumdar and m. j. kearney , phys .e * 76 * , 031130 ( 2007 ) .p. l. krapivsky , s. n. majumdar , and a. rosso , j. phys .a * 43 * , 315001 ( 2010 ) .a. hanke , and r. metzler , j. phys .a * 36 * , l473 ( 2003 ) .a. bar , y. kafri , and d. mukamel , phys .* 98 * , 038103 ( 2007 ) .a. bar , y. kafri , and d. mukamel , j. phys .matter * 21 * , 034110 ( 2009 ) .n. g. van kampen , _stochastic processes in physics and chemistry _( north - holland , amsterdam , 2007 ) .o. krichevsky and g. bonnet , rep .phys . * 65 * , 251 ( 2002 ) .g. altan - bonnet , a. libchaber , and o. krichevsky , phys .lett . * 90 * , 138101 ( 2003 ) .t. barnett , r. malone , w. pennell , d. stammer , b. semtner , w. washington , clim .change * 62 * , 1 ( 2004 ) .barnett , j.c .adam , d.p .lettenmaier , nature * 438 * , 303 ( 2005 ) d. de walle , a. rango , _ principles of snow hydrology _ , cambridge university press , cambridge , uk , 2008 .bras , _ hydrology : an introduction to hydrological science _ , addison - wesley , reading , ma , 1990 a. dubey , m. bandyopadhyay , and a. m. jayannavar ( in preperation ) .a. dubey , m. bandyopadhyay , and a. m. jayannavar ( in preperation ) .
in this paper , we investigate a brownian motion ( bm ) with purely time dependent drift and diffusion by proposing and examining several brownian functionals which characterize the lifetime and reactivity of such stochastic processes . we introduce several probability distribution functions ( pdfs ) associated with such time dependent bms . for instance , for a bm with a given initial starting point , we derive analytical expressions for : ( i ) the pdf of the first passage time , which specifies the lifetime of the stochastic process , ( ii ) the pdf of the area a till the first passage time , which provides valuable information about the effective reactivity of the process , ( iii ) the pdf associated with the maximum size m of the bm process before the first passage time , and ( iv ) the joint pdf of the maximum size m and its occurrence time before the first passage time . these distributions are examined for power law time dependent drift and diffusion . a simple illustrative example for the stochastic model of water resources availability in snowmelt dominated regions with power law time dependent drift and diffusion is demonstrated in detail . we also motivate our study of the unsolved problem of brownian functionals including inertia with an approximate calculation .
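As a numerical illustration of the first-passage density for the unbiased power-law diffusion case, the sketch below compares a crude Monte Carlo estimate of the first-passage probability with quadrature of the time-changed Levy-Smirnov form that eq. (27) reduces to. The parameter values and the normalising prefactor written here are illustrative assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# unbiased power-law diffusion (case 1):  mu = 0,  sigma^2(t) = k * t**alpha
k, alpha, h0 = 1.0, 1.0, 1.0
tau = lambda t: k * t**(alpha + 1) / (alpha + 1)        # effective time

def fpt_density(t):
    """Time-changed Levy-Smirnov first-passage density (the form eq. (27)
    reduces to, written under the stated assumptions)."""
    s = tau(t)
    return k * t**alpha * h0 / np.sqrt(2 * np.pi * s**3) * np.exp(-h0**2 / (2 * s))

# crude Monte Carlo estimate of P(T_f <= 2) for comparison
dt, T, n = 1e-3, 2.0, 50_000
x = np.full(n, h0)
alive = np.ones(n, dtype=bool)
for t in np.arange(0.0, T, dt):
    x[alive] += np.sqrt(k * t**alpha * dt) * rng.normal(size=alive.sum())
    alive &= x > 0.0
mc = 1.0 - alive.mean()

# analytic value by trapezoidal quadrature of the density
ts = np.linspace(1e-6, T, 4000)
vals = fpt_density(ts)
an = float(np.sum((vals[:-1] + vals[1:]) * np.diff(ts)) / 2)
print(f"P(T_f <= {T}):  Monte Carlo {mc:.3f}   analytic {an:.3f}")
```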
quantum mechanics includes a dualism concerning the principle of state change .the schrdinger equation , on the one hand , governs the state change caused by time evolution .the rule of state reduction , on the other hand , governs the state change caused by measurement .the dualism is justified as long as the state change of only one system is concerned .the schrdinger equation holds true only when the system is isolated , but every measurement accompanies the interaction with the measuring apparatus so that the rule of state reduction holds true only when the system is not isolated .the dualism is therefore justified . accepting that every measurement accompanies the interaction between the object and the apparatus at all , one can expect that the rule of state reduction can be derived from the schrdinger equation holding for the composite system of the object and the apparatus during the measurement .a negative view , however , prevails against this program . according to that view, the schrdinger equation for the composite system transforms the problem of a measurement on the object to the problem of an observation on the apparatus , but in order to derive the rule of state reduction holding for the object one still needs the rule of state reduction applied to the composite system .this implies that the program of deriving the rule of state reduction from the schrdinger equation holding for the object - apparatus composite system falls into a vicious circle sometimes called von neumann s chain ( * ? ? ?* section 11.2 ) .the purpose of this paper is to show that the above argument , usually called the orthodox view of measurement theory , includes a serious physical inconsistency and then to present a consistent argument which derives the rule of state reduction from the schrdinger equation of the composite system without falling into the vicious circle . in this paper ,we are confined to the state reduction caused by a measurement of an observable with nondegenerate purely discrete spectrum satisfying the repeatability hypothesis .sections [ se:2][se:7 ] review with elaboration the most basic part of measurement theory originated with von neumann .section [ se:2 ] presents postulates for quantum mechanics and defines state reduction .section [ se:3 ] introduces the notion of nonselective measurement and shows that a nonselective measurement causes a state change in quantum mechanics whereas it is not the case in classical mechanics .section [ se:4 ] concludes the existence of an interaction between the measured object and the apparatus in every measurement .section [ se:5 ] shows that the rule of state reduction is equivalent to the repeatability hypothesis .section [ se:6 ] introduces the projection postulate as the rule of state reduction in the case where the observable has degenerate spectrum .section [ se:7 ] derives a necessary condition for a unitary operator to represent the measuring interaction .the condition determines the form of the unitary operator representing the measuring interaction leading to state reduction . 
the problem is then formulated as whether the unitary operator of this form is sufficient for deriving the rule of state reduction .section [ se:8 ] reviews the orthodox view along with wigner s argument that claims that the unitary operator does not lead to the rule of state reduction without appealing to the rule of state reduction , the projection postulate , for the measurement of the pointer position .section [ se:9 ] shows that the orthodox view suffers from a serious physical inconsistency concerning the causality between the reading of the outcome and the state reduction .section [ se:10 ] presents a consistent argument which derives the rule of state reduction from the schrdinger equation holding for the object - apparatus composite system without appealing to the projection postulate for the pointer - measurement .concluding remarks are provided in section [ se:11 ] .in this paper , all state vectors are supposed to be normalized , and mixed states are represented by density operators , i.e. , positive operators with unit trace .let be an observable with a nondegenerate purely discrete spectrum .let be a complete orthonormal sequence of eigenvectors of and the corresponding eigenvalues ; by assumption , all different from each other . according to the standard formulation of quantum mechanics , on the result of a measurement of the observable following postulates are posed : ( a1 ) _ if the system is in the state at the time of measurement , the eigenvalue is obtained as the outcome of measurement with the probability ._ ( a2 ) _ if the outcome of measurement is the eigenvalue , the system is left in the corresponding eigenstate at the time just after measurement . _ the postulate ( a1 ) is called the _ statistical formula _ , and ( a2 )the _ measurement axiom_. the state change described by the measurement axiom is called the _ state reduction_.the state reduction can be thought to be caused by the following two factors : the dynamical change of the system and the change of the observer s knowledge . in order to separate these factors ,let us suppose that we terminate the procedure of measurement of the observable just before the observer s reading of the outcome ; this procedure is called a _nonselective measurement_. it follows from postulates ( a1 ) and ( a2 ) that the nonselective measurement leaves the system in the mixture of the states with the probability and therefore yields the following state change : since a nonselective measurement does not change the observer s knowledge , this state change is considered to be caused entirely by the dynamical factor . even in classical mechanics , when the outcome of measurement is obtained , the information on the outcome changes the observer s knowledge and the probabilistic description of the state of the system changes according to the bayes principle , which is formulated as follows : let be two ( discrete ) random variables .suppose that we know the joint probability distribution .then , the prior distribution of is defined as the marginal distribution of , i.e. , if one measures , the _ information_ `` '' changes the probability distribution of for any outcome .the posterior distribution of is defined as the conditional probability distribution of given , i.e. , this principle of changing the probability distribution from the prior distribution to the posterior distribution is called the _bayes principle_. 
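The Bayes update from the prior to the posterior distribution described above can be stated in two lines of code; the joint distribution below is an arbitrary toy example used only for illustration.

```python
import numpy as np

# joint distribution P(X = x_i, Y = y_j) on a small discrete grid (toy numbers)
P = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.30, 0.05],
              [0.05, 0.05, 0.30]])

prior_X = P.sum(axis=1)                          # marginal (prior) distribution of X
posterior_X_given_y1 = P[:, 1] / P[:, 1].sum()   # condition on the outcome Y = y_1

print("prior     :", prior_X)
print("posterior :", posterior_X_given_y1)
```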
nonetheless , a nonselective measurement in classical mechanics causes no change in the system .therefore , it is a characteristic feature of quantum mechanics that a nonselective measurement changes the system dynamically , and it is the origin of von neumann s measurement theory to explain this change .in quantum mechanics the state of an isolated system changes dynamically according to the schrdinger equation , but this state change does not change the entropy of the system . on the other hand , the state change ( [ eq:0 ] ) caused by the nonselective measurement increases the entropy of the measured system , and hence this process of state changecan not be described by the schrdinger equation of the measured system .it follows that this dynamical change of state must be caused by the interaction between the measured object and the measuring apparatus , a system external to the measured object including every system interacting with the measured object .thus , from the basic postulates of quantum mechanics we have derived the existence of measuring interaction , which is neglected in classical mechanics . since our discussion concerns only nonselective measurements , we do not need to mention the function of consciousness , although the conventional argument mentions the psycho - physical parallelism .in the measurement axiom ( a2 ) , state reduction is described as a change of the state of the object . in order to consider state reduction together with the interaction between the object and the apparatus , it is desirable to describe it independently of particular descriptions of states of systems .as one of such descriptions , von neumann showed that ( a2 ) is equivalent to the following _ repeatability hypothesis _ : \(m ) _ if a physical quantity is measured twice in succession in a system , then we get the same value each time ._ in fact , according to ( m ) the state of the object just after the first measurement is the eigenstate corresponding to the outcome , and in the nondegenerate case it is determined uniquely so that we have ( a2 ) .it is obvious that ( m ) follows from ( a2 ) .in this paper , we are devoted to measurements of observables with nondegenerate discrete spectrum . in the conventional discussion explaining state reduction , however , we need to consider a measurement on the object - apparatus composite system and we need the statistical formula and the measurement axiom for observables with degenerate spectrum .the statistical formula for the discrete observable to be measured in the state ( density operator ) is given as follows : ( b1 ) _ the eigenvalue is obtained as the outcome with the probability ] . _this principle is called the _ projection postulate _ , because the eigenstate provoked by the measurement is chosen by the projection onto the corresponding eigenspace so that for the initial state .it is clear that statements ( b1 ) and ( b2 ) imply ( a1 ) and ( a2 ) as special cases .it follows from ( b1 ) and ( b2 ) that the nonselective measurement of leads to the state change such as shall turn to the discussion on the measurement of the discrete observable with nondegenerate eigenvalues . in section [ se:4 ], we have concluded that the state change ( 1 ) in the nonselective measurement must be caused by the interaction between the object and the apparatus .then , what is this interaction ? let us suppose that the object is in the eigenstate of the observable pertaining to the eigenvalue . 
by the statistical formula , the outcome is with probability one .hence , the measurement changes the position of the pointer in the apparatus from the original position to the position on the scale .let be the observable corresponding to the position of the pointer in the apparatus , and the original state of the apparatus .generally it is only required that the eigenvalues of are in one - to - one correspondence with the eigenvalues of , but we assume for simplicity that the observable has also the same eigenvalues , , as . in the hilbert space of the composite system of the object and the apparatus , the observables and are represented by the self - adjoint operators and respectively .the state change due to the interaction is represented by a unitary operator on the hilbert space of the composite system : according to the statistical formula ( a1 ) , the state after the interaction must be the eigenstate of pertaining to the eigenvalue .in fact , the position of the pointer takes the value with probability one after the interaction , and this implies that the state is the eigenvector of pertaining to the eigenvalue . on the other hand, according to the repeatability hypothesis ( m ) the state is the eigenvector of pertaining to the eigenvalue .in fact , if the observer were to measure in this state again , then the observable would be measured twice in succession and hence the outcome would be with probability one .this implies that the state is the eigenstate of pertaining to the eigenvalue .suppose here that the eigenvalues of are also nondegenerate .then the state that satisfies the above two conditions is represented by the state vector , where is an arbitrary eigenvector of with unit length pertaining to the eigenvalue . in order to represent the measurementthe unitary must satisfy the following relation if the unitary operator satisfies the above condition , then for the arbitrarily given original state we have by linearity thus the problem is whether equation ( [ eq:2 ] ) is a sufficient condition for the unitary operator to represent the measuring interaction or whether , in other words , equation ( [ eq:2 ] ) implies ( a1 ) and ( a2 ) even when is a superposition of the eigenstates . if equation ( [ eq:2 ] ) were not a sufficient condition , further interaction though ineffective in the case where the object is initially in the eigenstate might be necessary for the explanation leading to the state reduction .the conventional approach adopted by most of the text books on measurement theory , the so - called orthodox view , is negative about the above problem .the orthodox view holds that ( [ eq:2 ] ) does not imply the measurement axiom ( a2 ) .the argument runs as follows .the state transformation by the unitary , makes a one - to - one correspondence between the state of the object and the state of the apparatus .the state of the object is mirrored by the state of the apparatus , and the problem of a measurement on the object is transformed into the problem of an observation on the apparatus . if the observer observes the pointer position of the apparatus to obtain the outcome of measurement , the state in the right - hand side of ( [ eq : a ] ) changes as follows : the state change in ( [ eq:3 ] ) is derived by the projection postulate ( b2 ) applied to the state change caused by the measurement of the observable of the composite system . 
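A finite-dimensional sketch of a unitary satisfying eq. (2) makes the linearity argument concrete. The modular-shift (generalized CNOT) form used below, the dimension d = 3, and the identification of the pointer eigenvectors with the computational basis are illustrative assumptions of this sketch, not requirements of the measurement model.

```python
import numpy as np

d = 3                                    # dimension of object and pointer
U = np.zeros((d * d, d * d))
for n in range(d):
    for m in range(d):
        # generalized CNOT: |n>|m>  ->  |n>|m + n mod d>
        U[n * d + (m + n) % d, n * d + m] = 1.0

xi0 = np.zeros(d); xi0[0] = 1.0           # apparatus prepared in the ready state

# eq. (2): an eigenstate input is mapped to |phi_n> (x) |xi_n>
for n in range(d):
    phi_n = np.eye(d)[n]
    assert np.allclose(U @ np.kron(phi_n, xi0), np.kron(phi_n, phi_n))

# a superposition input is mapped, by linearity, to sum_n c_n |phi_n>|xi_n>
c = np.array([0.6, 0.8j, 0.0]); c = c / np.linalg.norm(c)
psi_out = U @ np.kron(c, xi0)
expected = sum(c[n] * np.kron(np.eye(d)[n], np.eye(d)[n]) for n in range(d))
assert np.allclose(psi_out, expected)
print("entangled output coefficients |c_n|^2:", np.abs(c)**2)
```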
according to the projection postulate ( b2 ) , the new state is the mixture of the states , and hence when the outcome is obtained , a system in the state is selected from the ensemble described by the right - hand side of ( [ eq:3 ] ) : latexmath:[\[\label{eq : c } \sum_n |c_n|^{2 } |\phi_n \otimes \xi_n\rangle\langle \phi_n \otimes \xi_n| \mapsto the composite system is in the state , and this implies that the object is led to the state .nevertheless , if we describe further the measuring process which leads to the state change ( [ eq:3 ] ) in terms of the coupling with the second apparatus , having an orthonormal system and being prepared in a state , measuring the pointer position in the first apparatus , then instead of ( [ eq:3 ] ) we have the following state change : from this state change , we can not conclude that the measurement leads the object with the outcome to the state .the original problem of explaining the state reduction caused by the first apparatus is not solved but only transferred to the the problem of explaining the state reduction caused by the second apparatus ( * ? ? ?* section 11.2 ) . this vicious circleis often called _ von neumann s chain_.a difficulty in the orthodox view is to apply the projection postulate to the object - apparatus composite system in order to show that the state of the object that leads to the outcome is in the state at the time just after measurement .this argument suffers from the circular argument that assumes the rule of state reduction for the composite system in order to explain the rule of state reduction for the object .the conventional studies of measurement theory , however , have not detected any physical inconsistency or empirical inadequacy of the above argument in the orthodox view and have aimed at circumventing the above circular argument , an epistemological difficulty , by adding , for instance , the element of macroscopic nature of the measuring apparatus .in fact , the above argument leading to the state reduction has been generalized to the following argument for any measurements to determine the state change caused by measurement conditional upon the outcome : let us given the initial state of the object , the initial state of the apparatus , and the unitary evolution operator of the object - plus - apparatus .then , compute the state of the composite system just after the interaction as and apply the projection postulate to the measurement of the pointer observable in the apparatus , and the state just after the measurement conditional upon the outcome is given by } { \mbox{\rm tr}[(i\otimes e^{b}(a_{n}))u(\rho\otimes\sigma ) u^{\dagger}(i\otimes e^{b}(a_{n}))]},\ ] ] where is the projection operator onto the eigenspace of corresponding to the eigenvalue and stands for the partial trace over the hilbert space of the apparatus .this unitary - evolution - plus - projection - postulate argument has been a standard argument for determining the general state reduction , see for example .the purpose of this section is , despite the conventional arguments , to show that the orthodox view suffers from a serious physical inconsistency concerning the causality between the reading of the outcome and the state reduction . 
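The unitary-evolution-plus-projection-postulate rule quoted above can be evaluated explicitly in a toy model; the sketch below computes the conditional post-measurement states and their probabilities, with the measurement unitary and states chosen as in the previous illustration (these choices remain assumptions of the sketch).

```python
import numpy as np

d = 3
# same illustrative measurement unitary as above: |n>|m> -> |n>|m + n mod d>
U = np.zeros((d * d, d * d))
for n in range(d):
    for m in range(d):
        U[n * d + (m + n) % d, n * d + m] = 1.0

c = np.array([0.6, 0.8, 0.0])
rho = np.outer(c, c)                            # object state (pure, for simplicity)
sigma = np.diag([1.0, 0.0, 0.0])                # apparatus ready state
joint = U @ np.kron(rho, sigma) @ U.T

def conditional_state(n):
    """rho_n = Tr_B[(I x E_n) joint (I x E_n)] / normalisation."""
    E_n = np.zeros((d, d)); E_n[n, n] = 1.0
    P = np.kron(np.eye(d), E_n)
    red = P @ joint @ P
    prob = np.trace(red).real
    # partial trace over the apparatus (second tensor factor)
    rho_n = red.reshape(d, d, d, d).trace(axis1=1, axis2=3)
    return prob, rho_n / prob

for n in range(2):
    prob, rho_n = conditional_state(n)
    print(f"outcome {n}: probability {prob:.2f}")
    print(np.round(rho_n, 2))                   # expected: |phi_n><phi_n|
```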
in order to explain the rule of state reduction in terms of the time evolution of the object - plus - apparatus , it is necessary to clarify the meanings of the words the `` time of measurement '' and the `` time just after measurement '' in the context as to what happens in the object and the apparatus .let us suppose that one measures an observable of the object in the state at the time .the measurement , carried out by an interaction with the apparatus , takes finite time .thus , the object interacts with the apparatus from the time to and is free from the apparatus after the time .it follows that the time of measurement is the time , the time just after measurement is , and that the object is in the state at the time of measurement .the statistical formula ( a1 ) means , in this case , that the observer obtains the outcome with probability .the measurement axiom ( a2 ) means that the object that leads to the outcome is in the state at the time .moreover , the repeatability hypothesis ( m ) means that if the observable is measured at the time again in the same object then the outcome coincides with the one obtained by the measurement of at the time . in the orthodox view, the state changes given by ( [ eq : a ] ) and ( [ eq:3 ] ) represent dynamical changes of the system , and the state change ( [ eq : c ] ) represents a change of the knowledge of the observer .the state change ( [ eq : a ] ) represents the interaction between the object and the apparatus . the state change ( [ eq:3 ] ) represents the interaction between the `` apparatus '' and the `` apparatus measuring the apparatus '' .it follows that the state change ( [ eq : a ] ) shows that the object - plus - apparatus is in the state at the time and in the state at the time .suppose that the state change ( [ eq:3 ] ) takes time .then , it is at the time that the object - plus - apparatus turns to be in the state described by the right - hand side of ( [ eq:3 ] ) .since the state change ( [ eq : c ] ) represents the change of the observer s knowledge , it does not accompany the change of time so that at the time the object turns to be in the state as the result of the state reduction . in other words , the orthodox view leads to the conclusion that the state reduction occurs at the time which is later in time than the time just after measurement .thus , if is not negligible in the time scale of the time evolution of the object then this contradicts the measurement axiom that the state reduction leaves the system in the state at the time .since the object is free from the apparatus after the time , one can make the object interact with the second apparatus at the time .if this apparatus also measures , according to the repeatability hypothesis it is predicted , and will be confirmed by experiments , that the outcome from the first apparatus and the outcome from the second are always the same .but , this fact can not be explained by the orthodox view which concludes that the state reduction occurs at the time . is negligible in the time scale of the time evolution of the object ? 
in general, the process of the state change ( [ eq:3 ] ) is regarded as a process in which a macroscopic instrument operates or a directly - sensible variable feature is produced otherwise , the state change in the apparatus measurement might not necessarily satisfy the repeatability hypothesis and hence the duration of this process can not be negligible in the time scale of the time evolution of the microscopic measured object .consider , for instance , the experiment in which the light is scattered by an atom in a low intensity atom beam . regarding the paths before the collision as known , the measurement of the path of the photon after the collision suffices to determine the point of scattering .in order to measure the position of the atom ( at the time of collision ) twice in succession in this method , suppose to use two nearly placed light beams i and ii ; see fig .[ fig:1 ] .suppose that the atom interacts with the beam i from the time to and with the beam ii from to and that is so small that the time evolution of the atom in this period can be neglected hence , we can put .suppose that the photon scattered from the beam i is detected by a photoelectric detector at the time , and the one from the beam ii is detected by another photoelectric detector at the time . in this experiment ,the collision is accomplished in quite a short time and the photoelectric detectors are necessary to place sufficiently far from the light beams , so that it is taken for granted in scattering theory that it is taken for granted from the compton - simons experiment that there is the uncertainty of the position at which the beam i is scattered depending on the initial state of the atom but that the position of the scattering from the beam ii is always near the position of the scattering from the beam i. it follows that this experiment can be considered as the position measurement of the atom satisfying the repeatability hypothesis . in this example , the state change ( [ eq : a ] ) corresponds to the interaction between the atom and the light beam , and hence we have . on the other hand , the apparatus corresponds to the scattered photon , the state change ( [ eq:3 ] ) corresponds to the process including the free propagation of the photon after scattering and the interaction between the photon and the photoelectric detector , and hence we have .thus , from ( [ eq:6.1 ] ) we have and consequently we can not neglect .this means that the orthodox view claims that the state reduction of the atom occurs after the photon is detected despite the fact that the state reduction of the atom occurs just after the scattering of the light . 
the inconsistency of the orthodox view is in the claim of causality between the reading of the outcome and the state reduction such that the state change ( [ eq:3 ] ) of the composite system causes the state reduction of the object system .it is obvious , however , that such causality does not exists , since the result , the state reduction , occurs before the cause , the reading of the outcome or the manifestation of the directly - sensible variable feature .it is not the case that the observer s knowing or reading of the outcome at the time causes the state reduction of the object at the time .but , it is the case that by knowing or reading of the outcome at the time the observer obtains the information to determine the state of the object at the time .the orthodox view confuses the time at which the outcome of measurement is obtained and the time at which the object is left in the state determined by the outcome . or , in other words, it confuses the time just after the reading of the outcome and the time just after the interaction between the object and the apparatus .there is no causality relation between the outcome and the state just after measurement but there is coincidence between them yielded by the measuring interaction .our new interpretation presented in the following does not includes the inconsistency of the orthodox view discussed in the previous section . moreover ,the state reduction can be explained only from ( [ eq : a ] ) without assuming the process ( [ eq:3 ] ) so that the circular argument of the von neumann chain is circumvented . as in the preceding section ,suppose that the observer measures the observable of the object in the state at the time .the object interacts with the apparatus from the time to and is free from the apparatus after the time . the repeatability hypothesis ( m )means that if the observer measures at the time again then the outcome coincides with the outcome of the measurement at the time . as shown previously ,the measurement axiom ( a2 ) is equivalent to the repeatability hypothesis ( m ) .hence , if the statistical formula ( a1 ) and the repeatability hypothesis ( m ) is derived from ( [ eq:2 ] ) , it is demonstrated that the state reduction is derived from ( [ eq:2 ] ) .let be the probability of obtaining the outcome when the pointer position is measured at the time . by ( [ eq:2 ] ) and the statistical formula ( b1 ) for the degenerate observable we have let be the probability that the measurement of at the time leads to the outcome .since this outcome is obtained as the outcome of the measurement of at the time , we have thus if we regard this process as the measurement of namely , if we interpret the outcome of the measurement of at the time as the outcome of the measurement of at the time then it is shown that this measurement satisfies the the statistical formula ( a1 ) .we shall show that this measurement satisfies the measurement axiom ( a2 ) in the following .since the measurement axiom ( a2 ) is equivalent to the repeatability hypothesis ( m ) , we need only to show that this measurement satisfies the repeatability hypothesis ( m ) . 
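before turning to the formal argument, a small numerical illustration may help fix ideas. the sketch below is ours, not part of the original derivation: it assumes an ideal premeasurement interaction of the von neumann type mapping |m>|0> to |m>|m> ( a convenient stand-in for a unitary satisfying ( [ eq:2 ] ) ) and uses only numpy. sampling the pointer reading reproduces the statistical formula ( a1 ), and a second measurement of the same observable on the conditional state always returns the first outcome, which is the content of the repeatability hypothesis ( m ).

```python
# illustrative sketch (not from the paper): a von neumann premeasurement
# |m> (x) |0>  ->  |m> (x) |m>, followed by a projective reading of the pointer.
# the sampled outcome statistics reproduce the statistical formula (a1), and
# reading the pointer and then measuring the object again always gives the
# same outcome, i.e. the repeatability hypothesis (m).
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # dimension of the object system
c = rng.normal(size=d) + 1j * rng.normal(size=d)
c /= np.linalg.norm(c)                  # object state  sum_m c_m |m>

# composite state after the interaction: |m>|0> -> |m>|m>
psi = np.zeros((d, d), dtype=complex)   # psi[m, pointer]
for m in range(d):
    psi[m, m] = c[m]

# probability of reading pointer value m  (statistical formula (a1))
p_pointer = np.sum(np.abs(psi) ** 2, axis=0)
assert np.allclose(p_pointer, np.abs(c) ** 2)

# sample runs: read the pointer, then measure the object observable again
n_runs = 10_000
agree = 0
for _ in range(n_runs):
    m1 = rng.choice(d, p=p_pointer)                  # first outcome (pointer reading)
    post = psi[:, m1] / np.linalg.norm(psi[:, m1])   # conditional object state
    p_second = np.abs(post) ** 2                     # second-measurement distribution
    m2 = rng.choice(d, p=p_second)
    agree += (m1 == m2)

print("fraction of runs with identical outcomes:", agree / n_runs)   # 1.0
```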
in order to show the last statement, it suffices to show that if the observer measures again at the time, immediately after the first measurement, then the outcome of the first measurement at the time and that of the second at the time are always the same. let be the joint probability that the first outcome is and the second outcome is. the outcome of the first measurement of at the time is the same as the outcome of the measurement of the pointer position at the time. since the measurements of and at the time do not interfere with each other, the joint probability distribution of the outcomes of these two measurements satisfies the statistical formula for simultaneous measurements: thus, if, then we have; it follows that the outcome of the first measurement and that of the second are always the same. therefore, this process satisfies the repeatability hypothesis ( m ) and hence satisfies the measurement axiom ( a2 ). we have thus demonstrated that the unitary operator satisfying ( [ eq:2 ] ) represents the interaction between the object and the apparatus and leads to the state reduction in the object at the time just after measurement. it follows from the basic postulate requiring the state reduction that even in the case where the observer obtains no information from the measurement, namely the case of nonselective measurement, the state of the system changes. this change is not accompanied by a change of knowledge, so that it is purely dynamical, but it is irreversible, so that it cannot be described by the schrödinger equation of the object. the only way to explain this by the rules of quantum mechanics is to derive this change from the schrödinger equation of a larger system than the object, which describes the interaction between the object and the apparatus. this interaction is turned on during a finite time interval, from the time of measurement to the time just after measurement. after the object-apparatus interaction, the object becomes free from the apparatus again. thus, the state reduction describes the state change from the time to the time conditional upon the outcome of measurement. the state change in the nonselective measurement is derived without any difficulty from the interaction between the object and the apparatus. in fact, the state change ( [ eq:0 ] ) is explained as the open system dynamics of the object yielded by the unitary evolution of the object-apparatus composite system described by the unitary in ( [ eq:2 ] ), i.e., where is the partial trace over the hilbert space of the apparatus. the problem is to explain the change of state dependent on the outcome, namely the state reduction. the answer of the orthodox view to this problem is that the state reduction of the object results from the state reduction of the object-plus-apparatus caused by the measurement of the pointer position carried out after the object-apparatus interaction.
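the nonselective state change mentioned above can likewise be reproduced numerically. the following sketch is our own illustration ( the premeasurement unitary and all names are assumptions, not taken from the paper ): tracing the apparatus out of the unitarily evolved composite state removes the off-diagonal elements of the object state in the measured basis, which is the open-system form of the nonselective state change.

```python
# illustrative sketch: a premeasurement unitary U maps |m, a> -> |m, (a+m) mod d>.
# the nonselective state change of the object is the partial trace over the
# apparatus of U (rho (x) |0><0|) U^dagger, which decoheres rho in the measured basis.
import numpy as np

d = 3
rng = np.random.default_rng(1)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                      # pure object state, with coherences

# premeasurement unitary on the d*d composite space (a permutation matrix)
U = np.zeros((d * d, d * d))
for m in range(d):
    for a in range(d):
        U[m * d + ((a + m) % d), m * d + a] = 1.0

sigma = np.zeros((d, d)); sigma[0, 0] = 1.0          # apparatus ready state |0><0|
composite = U @ np.kron(rho, sigma) @ U.conj().T

# partial trace over the apparatus (second tensor factor)
rho_after = composite.reshape(d, d, d, d).trace(axis1=1, axis2=3)

print(np.round(rho, 3))        # has off-diagonal coherences
print(np.round(rho_after, 3))  # only the diagonal survives: nonselective measurement
```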
applying the above argument, that state reduction requires the time needed for the nonselective measurement, to the measurement of the pointer position, it is concluded that the state reduction, as explained by the orthodox view, occurs apparently later than the time at which the state reduction should occur. this time difference leads to a detectable difference as to whether the outcomes obtained by measuring the same object twice in succession satisfy the repeatability hypothesis, and hence we can conclude that the inconsistency of the orthodox view can be tested by an experiment. the photon scattering experiment from the atom beam in an atom interferometer has already been realized by chapman et al. the double scattering _ gedanken _ experiment suggested in section [ se:9 ] will be realized in the future with a similar experimental setting, with an additional second laser beam for the repeated measurement of the point of scattering of a single atom, if a conceivable difficulty can be circumvented in distinguishing the case where two detected photons from the two beams have been scattered by a single atom from the other cases.
the argument is re-examined that the program of deriving the rule of state reduction from the schrödinger equation for the object-apparatus composite system falls into a vicious circle, or an infinite regress known as the von neumann chain. it is shown that this argument suffers from a serious physical inconsistency concerning the causality between the reading of the outcome and the state reduction. a consistent argument which accomplishes the above program without falling into circularity is presented.
millions of people use the web on a daily basis to buy products in online shops , perform financial transactions via online banking , or simply browse information systems , media libraries or online encyclopedias , such as imdb , netflix or wikipedia . to find and access relevant information on the web , people either search , navigate , or combine these two activities .a recent study found that of all visits to a website can be attributed to teleports , which are the direct result of clicks on search - engine results , navigation through manually typed urls , or clicks on browser bookmarks .the remaining of the clicks can be attributed to the task of navigating a webpage . in this paper , we direct our attention towards these of actions and tackle the question what potential effects we can expect if we influence the link selection process of websitevisitors by simple link modifications . in particular , we are interested in the effects of different link modification strategies on ( stochastic ) models of web navigation . * problem .* by inserting new links between webpagesof a website , we alter the link structure .this has the potential to change user browsing behavior , since new links create new paths for users to explore the website .alternatively , without changing the link structure of the website , we might be able to influence the link selection process of visitors .studies have shown that the decisions of users for where to navigate next can be influenced by the layout and the position of the links on a webpage . in particular ,due to position bias users are more likely to select links higher up on webpages . as a result , inducing click biases , such as repositioning links on a webpage , highlighting the links , or even making them visually more appealing , can affect the users decision of where to click next on a website , similar to the way that adding new links affects browsing .in this paper we are particularly interested in investigating and comparing the potential consequences of inserting new links and modifying already existing links on the navigational behavior of users .these newly obtained insights are of a significant practical relevance for website owners , as they can be used , for example , by owners of media libraries to increase visits of specific media files in order to reduce the number of different files that need to be cached on fast storage devices .another example includes online encyclopedias , where operators may want to guide users towards articles of a specific category over some period of time ( e.g. , the birthday of an inventor ) . in some of these cases ,link insertion might be more time - consuming than simply changing the layout of the websiteto increase visibility of specific links and vice versa .theoretically , we would like to analyze and compare the effects of such link modification endeavors .practically , new tools are needed to assist websiteoperators in deciding which of the two strategies they should deploy to achieve the desired effects .* methods . * in this paper we study the impact of link modifications on the random surfer , which we apply as a proxy for real user behavior . in the past , a user s decision to click on a link on a webpagewas successfully modeled using the random surfer . 
in this model ,a user selects one of the links on a webpageuniformly at random and navigates to the page to which the link points .apart from the huge success of the google search engine , whose ranking algorithm is based on the random surfer model , empirical studies have shown that this model provides a very precise approximation of real browsing behavior in many situations and for a variety of applications .an important property of a random surfer is its _ stationary distribution _ , which is the probability distribution of finding a random surfer at a specific webpagein the limit of large number of steps .in particular , we investigate how the random surfer s stationary distribution of a subset of pages ( i.e. , _ target pages _ ) of a given websitechanges as a consequence of ( i ) modifying already existing links towards them , ( ii ) introducing new links towards them , or by ( iii ) combining these two approaches . to that end , we introduce a _click bias _ , and a _ link insertion _ strategy .we model the effects of click biases on the intrinsic attractiveness of a link to the user by increasing the weight of that link . in practice, we may introduce such click biases , for example , by locating the corresponding link on the top of a page .with link insertion , we simply introduce new links between webpagesof a website , for example , by linking towards a given target page from the starting page .we introduce quantitative measures that allow us to address the following research questions : _ navigational boost_. how stable is the stationary distribution with respect to the proposed modification strategies , and what are the limits of stationary distributions that can be achieved for a given set of webpages ?is it ( theoretically ) possible to achieve a given stationary probability distribution for an arbitrary subset of webpagesof a website ?what is the connection between simple topological measures of the websitenetwork and stationary probability ?_ influence potential_. what is the relative gain of the stationary probabilities compared to their unmodified counterparts .this provides us with an answer to the `` guidance '' potential of a set of webpages , defining to what extent it is possible to increase the relative stationary probabilities as compared to the initial unmodified values . _combinations_. finally , we are interested how combinations of the two proposed link modification strategies perform in terms of increased stationary probabilities of selected subpages .in particular , we investigate the performance of certain combinations across several different networks and/or selected subpages .* contributions & findings . 
*we find that intuitions about how either modification strategy affects navigation are not always correct .further , our experiments show that the size of a set of targeted subpages is not always a good predictor for the observed effects .rather , other topological features often better reflect the consequences of a modification .practically , we provide an https://github.com/floriangeigl/randomsurfers[open source framework ] ] for websiteadministrators to estimate the effects of link modifications on their website .the random surfer model has received much attention from the research community .while the model is very simple , it became well - established over the last years .it was applied to a variety of problems from graph generators over graph analysis to modeling user navigation .furthermore , the model has been applied to calculate structural node properties in large networks . hits and pagerank rank network nodes according to their values in the stationary distribution of the random surfer model . especially forthe later there exists a detailed analysis ranging from the efficiency of its calculation towards its robustness .bianchini et al . provided an in - depth analysis of how to tweak the cumulative pagerank of a community of websites .they found that splitting up the content of pages onto more highly interlink pages increases the community s cumulative pagerank since the community is larger it consists of more pages which are able to trap the random surfer for a longer period of time .moreover , they suggest to avoid dangling webpages(i.e . ,pages without links to other pages ) . in this paperwe are also interested in the sum of the random surfers visit probabilities in a community , however we do not use ( i ) teleportation as in the pagerank model , and ( ii ) do not modify the network in its size ( i.e. , number of pages ) . on the contrary we modify the transition probabilities of certain links and insert new links into the network . moreover , since all our datasets are strongly connected , we do not face the problem of unwanted high visit probabilities of usually unimportant pages ( i.e. , dangling nodes ) .a random surfer can be steered towards specific nodes in the network by increasing the probability of traversing links towards those nodes .this can be accomplished by biasing random surfer s link selection strategy so that it is not uniformly random anymore , but biased towards specific nodes .for instance , in the field of information retrieval richardson et al . successfully applied biased random surfers to increase the quality of search results compared to those achieved using a simple pagerank . at the same timehaveliwala biased pagerank towards topics retrieved from a search query to rank the query results . utilizing this technique the results where more accurate than those produced using a single , generic pagerank . moreover , gyongyi et al . successfully used trust as bias to detect and filter out spam pages of search results .later al - saffar and heileman showed that biased pagerank algorithms generate a considerable overlap in top results with a simple pagerank . concerning this problemtheir main suggestion was to use external biases which do not rely onto the underlying link structure of the network . in our paperwe randomly decide towards which nodes we bias the random surfer .this allows us to explore the borders of changes in stationary distributions caused by a bias .later helic et al . 
compared click trails characteristics of stochastically biased random surfers with those of humans .their conclusion was , that biased random surfers can serve as valid models of human navigation .further , geigl et al . validated this by showing that the result vector of pagerank and clickdata biased pagerank have a strong correlation in an online encyclopedia .this is especially interesting , since it creates the connection of our simulation to real human navigation on the web .additionally , lerman and hogg already showed that it is possible to bias the link selection of users .in particular , they came to the conclusion that users are subject to a _ position bias _ , making the selection of links higher up on webpagesup to a factor of more likely .hence , it is of practical relevance to investigate also the effects of _ biases _ in the link selection process onto the stationary distribution .concerning link insertion there already exists work in literature which makes use of statistical methods to suggest new links in network structures to , for instance , increase the performance of chip architectures . in particular , the authors use a standard mesh and insert long - range links , converting the network into a small - world network .this reduced packet latency results in a major improvement in throughput .another field of research where link insertion is of interest are recommender systems for social friendship networks .for example , xie et al . characterized interests of users in two dimensions ( i.e. , context and content ) and exploited this information to efficiently recommend potential new friends in an online social network . in this paperwe focus on the effects of inserted links onto the typical whereabouts of the random surfer . in particular , we are interested in inserting links into the network such that the random surfer more frequently visits a predefined subset of pages of a website(i.e . , target pages ) .we base our methodology on the calculations of the stationary distribution of a random surfer on the original and manipulated networks .the networks consist of nodes , which represent webpagesand directed links between nodes , which represent hyperlinks between webpages .we first calculate the transition matrix and the stationary distribution for the original network , which will be used as a baseline for comparing the effects of link modifications .second , we increase the statistical weight of a random surfer visiting a set of predefined nodes ( i.e. , _ target pages _ or _ target nodes _ ) .we do that either by increasing the link weights towards selected nodes ( click bias ) or by adding new links pointing towards those nodes ( link insertion ) .third , we generate the corresponding transition matrix for the modified network .fourth , we calculate the stationary distribution of the new transition matrices . finally , we compare the modified stationary distribution with the original stationary distribution to gain insights into the effects of the different link modifications . figure [ fig : edu ] illustrates these steps on a toy example . in what followswe formalize our approach algebraically .we represent a websiteas a directed network with a weighted adjacency matrix , where is the number of webpagesin the websiteunder investigation .we define the element of the weighted adjacency matrix as the sum of edge weights of all links pointing from node to node . 
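as a concrete illustration of this pipeline, the following sketch ( our own, using numpy; the toy network and variable names are not from the paper, and we use the equivalent row-stochastic convention instead of the left stochastic matrix defined in the text ) builds the transition matrix of a small weighted network and computes the baseline stationary distribution by power iteration.

```python
# sketch of the baseline computation: weighted adjacency matrix -> stochastic
# transition matrix -> stationary distribution via power iteration.
# (illustrative only; the paper's open source framework is the reference.)
import numpy as np

# toy website: page 0 links to 1 and 2, which link back and to page 3;
# page 3 links back to page 0, so the network is strongly connected
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)   # A[i, j] = total weight of links i -> j

def stationary(A, tol=1e-12, max_iter=10_000):
    """stationary distribution of the random surfer on a strongly connected graph."""
    out_deg = A.sum(axis=1)
    T = A / out_deg[:, None]                     # row-stochastic transition matrix
    pi = np.full(A.shape[0], 1.0 / A.shape[0])   # uniform start
    for _ in range(max_iter):
        new = pi @ T                             # one surfer step / power iteration
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi

pi0 = stationary(A)
print(pi0, pi0.sum())   # baseline visit probabilities, sums to 1
```

the vector pi0 obtained this way is the baseline against which the modified networks are compared in the following.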
for example , if there is a single link from page to page with weight , and if there are three links pointing from page to page each with weight . for our analysiswe introduce _ target nodes _ as the nodes whose stationary probability we want to increase .we use vector to specify them : we further define as a fraction of target nodes with respect to the total number of nodes : , means that of nodes from the network are target nodes .[ 1]>m#1px c5 c45 llll & & & & & + & & & & & + & & & & & the stationary distribution represents the probability to find the random surfer on any node in the limit of large number of steps . to compute the stationary distribution we first need to construct a diagonal out - degree matrix , with the weighted node out - degrees on its diagonal . using to denote diagonal matrices with elements of a vector on their diagonalwe define as : using matrix we can calculate the transition matrix , which is a left stochastic matrix of as .the stationary distribution now satisfies the ( right ) eigenvalue equation for the matrix : . to introduce click biases that influence the link selection strategy of the random surfer , we reweigh the links pointing towards target nodes by multiplying their weight by a constant scalar , which we call bias strength .for example , a bias strength of doubles the weight of all links towards target nodes .the final probability of the random surfer to traverse a link is then directly proportional to its weight .algebraically , we induce biases with a diagonal bias matrix which we define as . the adjacency matrix of a biased network is . to compute the stationary distribution of the biased network ,we first calculate the new transition matrix and then its stationary distribution .please note that from the technical perspective , inducing a bias is the same as inserting parallel links towards target nodes it increases the value of specific elements ( i.e. , those representing links towards target nodes ) in the adjacency matrix .the total weight of newly added parallel links due to an induced bias is given by : to allow for a fair comparison between the click bias and the link insertion strategy we insert exactly new links with weight in the latter case .the second link modification strategy consists of inserting new links towards the target nodes from a given set of source nodes .this strategy represents the case where a websiteadministrator inserts links towards target nodes from important subpages of their website .we define the importance of a webpageas its stationary probability in the original network . to insert a given number of new links we proceed as follows .we start by sorting nodes by their stationary probability in a descending order . in the next stepwe insert new links from the top nodes to all target nodes . here is the number of target nodes and we always _ ceil _ the calculated number of source nodes to ensure that there are enough pairs of nodes .if one of the target nodes is itself designated as a source node we do not insert self - loops from the practical point of view , it does not make sense to link a webpageto itself . 
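both modification strategies can be prototyped in a few lines on top of the previous sketch ( reusing A, pi0 and stationary from there ). this is again our own illustration: the bias strength, the choice of target nodes, and the tie-breaking details of the insertion procedure are placeholder assumptions rather than the paper's implementation.

```python
# click bias: multiply the weight of every link pointing to a target node by a
# constant bias strength beta. link insertion: add new links of weight 1 from
# the top-ranked pages (by baseline stationary probability) to the target nodes.
import numpy as np

def click_bias(A, targets, beta):
    B = np.ones(A.shape[0])
    B[targets] = beta                    # diagonal bias matrix, kept as a vector
    return A * B[None, :]                # scales the columns of links towards targets

def link_insertion(A, targets, pi0, n_new_links):
    A = A.copy()
    order = np.argsort(-pi0)             # most "important" source pages first
    n_sources = int(np.ceil(n_new_links / len(targets)))
    added = 0
    for s in order[:n_sources]:
        for t in targets:
            if s != t and added < n_new_links:   # no self-loops
                A[s, t] += 1.0
                added += 1
    return A

targets = [3]
A_bias = click_bias(A, targets, beta=2.0)
A_ins  = link_insertion(A, targets, pi0, n_new_links=2)
print(pi0[targets], stationary(A_bias)[targets], stationary(A_ins)[targets])
```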
in the rare case where we have connected all possible combinations of source and target nodes butdid not reach the required number of links , we simply reiterate the list of the source nodes resulting in parallel links between nodes .please note that we insert parallel links if a link between a source and a target node has already existed in the original network .however , this happens extremely rarely because all of our networks are sparse .in fact , in all our experiments the fraction of inserted parallel links was on average less than .finally , we can combine the two link modification strategies and study the effects of such combinations on the stationary distribution and investigate if an optimal combination of strategies exists , which outperforms the individual approaches . from the practical point of viewthis means that for optimally steering websiteusers , we combine both , the click bias and link insertion mechanisms . to create a combined link modification methodwe first introduce ] . since we got comparable results over that complete interval we present only the results for bias strength .the performance of the click bias is robust across datasets and different with a low variance in both dimensions ( cf .figure [ fig : mod_pot : bias ] ) .we observe a negative correlation between influence potential and fraction of target nodes , meaning the smaller fractions of target nodes profit more from an induced click bias than larger fractions .our calculations of the influence potential confirm once more the results from the previous section , in which smaller fractions with top energy nodes are able to outperform larger fractions of target nodes without top nodes .we once more depict two such examples from figure [ fig : stat_prob : bias ] .target nodes depicted by _ a _ with reach an energy that is almost twice as high as those depicted by _b _ with .however , nodes _ a _ start with a larger initial energy and nodes _ b _ with a smaller one .therefore , in relative terms nodes _ b _ have a higher influence potential than nodes _ a _ ( cf .figure [ fig : mod_pot : bias ] ) .performance of link insertion is again strongly dependent of the dataset .however , similarly to the click bias we observe over all datasets that smaller fractions of target nodes profit significantly more from the link insertion than the larger ones .for example , in dataset for we measure an average influence potential of more than , whereas for influence potential is less than ( cf . figure [ fig : mod_pot : li ] ) .a similar decay , although not as pronounced as in can be seen in the other two datasets .similarly to the navigational boost this high influence potential of smaller fractions of target nodes in the case of link insertion can be explained through the skewness of the initial stationary distributions ( cf .figure [ fig : stat_prob : ccdf ] ) .as previously , we investigated more closely the relation between influence potential of small fractions of target nodes and their structural properties such as in - degree , out - degree and degree ratio .target nodes with a high degree ratio ( i.e. , a small in - degree , a large out - degree or both ) have the largest influence potential . intuitively , such target nodes start with a very small initial energy and therefore can achieve a significant relative increase . 
on contrary , in absolute terms such target nodeskeep a rather small energy even after the modification , whereas target nodes with a large initial energy ( a low degree ratio ) are experiencing a significant navigational boost in absolute terms but possess relatively low influence potential .* finding .* the influence potential of small fractions of target nodes is very high regardless of the link modification strategy . for click biasthe influence potential is limited by the bias strength , whereas for link insertion we do not observe such a limit and influence potential can become as high as . with increasing fraction of target nodesthe influence potential decays drastically .* implications . * as previously ,if possible we should prefer link insertion over click bias in cases where we are interested in utilizing the influence potential of the target nodes .our findings suggest that in practice there is a trade - off that we need to make between optimizing for influence potential and for navigational boost . for the former , we need to aim at target nodes with a high degree ratio and for the latter at target nodes with a low degree ratio . in the previous experiments we found that in some situations link insertionshould be preferred over click bias ( e.g. , small fraction of target nodes ) , whereas sometimes the opposite represents an optimal approach ( e.g. , large ) .for that reason we want now to shed more light onto combinations of both strategies , that is , we are interested in the navigational effects of simultaneously applying click bias and link insertion to varying extent .figure [ fig : combinations ] depicts the results of this experiment .we find consistent best performing mixtures over all datasets .in particular , we observe that for small fractions of target nodes , exclusive link insertion outperforms any other combination ( see figure [ fig : combinations:001 ] ) . for medium sized target nodes ( i.e. , ) we observe a shift of best performing combinations towards for higher bias strengths ( i.e. , and ) .this combination consist of click bias and link insertion . for combinations of large fractions of target nodes (i.e. , ) and small bias strengths ( ) the best performing combination is around ( click bias and link insertion ) and further shifts towards ( click bias and link insertion ) with an increased bias strength .these results confirm our insights from the previous experiments .thus , click biases act as an amplifier and only work well if target nodes initially possess valuable incoming links .this is highly likely for larger and medium sized fractions of target nodes , and very unlikely for the case of smaller fractions of target nodes . on the other hand ,link insertion diffuses a large portion of the energy of top nodes towards target nodes .hence , it works especially well for combinations of small fractions of target nodes and datasets with a highly skewed stationary distribution . ** for small fractions of target nodes with initially low energy , pure link insertion should be preferred over any other combination .however , with increasing bias strength and larger fraction of target nodes , combinations consisting of click bias and link insertion performs best . * implications . * smaller sets of webpages(i.e . 
, small )should focus on introducing new links to achieve the highest browsing guidance .the bigger the set of webpagesand the used bias strength becomes , the more this preference shifts towards a combination of , meaning that % of the modifications should be invested in increasing the transition probability of already existing links towards target nodes ( e.g. , highlighting in the user interface ) .the remaining % should be used to insert new links towards target nodes .the random surfer which navigates forever ( stationary behavior ) may look like a rather unrealistic behavior of users .more realistically , a single user visits a websiteclicks a couple of times on various links and leaves the websiteagain ( transient behavior ) .however , our calculations of the stationary distribution show that , at least on the networks that we have investigated in this paper these two behaviors are quite similar to each other .the stationary distribution is calculated with the power - iteration method .thus , we initialize a probability vector representing an initial probability to find a random surfer on each particular node in the network .we initialize this vector using a uniform distribution .afterwards , we iterate by recalculating the probabilities for the next click of the random surfer . thus , one iteration step of the power - iteration method can be interpreted as a step or a click performed by the random surfer moving from the current node to one of its neighbors .hence , the number of iteration steps that are needed until there are no significant changes in the node probabilities , that is , the convergence rate of the power - iteration method , can be interpreted as the number of clicks needed to model the stationary user behavior . in other words the random surfer does not need to navigate forever it only needs to navigate through the network until the point where the next click does not change the observed stationary distribution . in all our datasets , all networks that we generated and modified for these datasets , all combinations of fractions of target nodes and the bias strength our calculations converge within iterations .thus , the stationary user behavior is in fact a behavior of users who navigate pages in a websiteat most .we believe that these clicks are within realistic boundaries for user behavior in the cases in which users decide to explore and browse a website . however , since many users leave a websiteimmediately upon arrival or within only a single or a small number of clicks this still represents a limitation in our work .this limitation can be easily remedied by introducing a small teleportation probability of jumping to an arbitrary page without following the underlying network structure ( i.e. , calculating pagerank vector instead of the stationary distribution ) .we have already experimented with the calculations of pagerank and our first results are quite similar to results that we have presented in this paper .however , we plan to address this question in more details in our future work .in this paper we have analyzed the effects of two link modification strategies used to influence the typical whereabouts of the random surfer .we investigated how an induced click bias towards a set of webpageschanges the stationary distribution ( i.e. 
, energy ) of those pages .additionally , we compared those effects with the consequences of altering the network structure by inserting new links .we find that both strategies have a high potential to modify the stationary distribution and that for certain situations there exist constantly high performing link modification strategy .in particular , click biases work well on sets of webpagescontaining already highly visible webpages , whereas link insertion should be preferred for sets of webpagesconsisting of pages with low visibility .further , we showed that a simple structural property of target nodes , namely degree ratio , provides a valuable basis for the estimation of the effects of both link modification strategies .administrators of websitescan use our approach and our open source framework to determine the best strategy for their settings without having to implement and test all the different strategies ( e.g. , altering link position , highlighting , or creating new links ) . in future workour analysis can be extended to investigate additional empirical as well as synthetic datasets to broaden the understanding of consequences of manipulating the link selection process in navigation or inserting new links .furthermore , investigating the complex dynamics which arise if we induce two competing link modifications into one network at the same time is an interesting avenue for future work .s. al - saffar and g. heileman .experimental bounds on the usefulness of personalized and topic - sensitive pagerank . in _ web intelligence ,ieee / wic / acm international conference on _ , pages 671675 .ieee , 2007 .g. buscher , e. cutrell , and m. r. morris .what do you see when you re surfing ? : using eye tracking to predict salient regions of web pages . in _ proceedings of the sigchi conference on human factors in computing systems _, pages 2130 .acm , 2009 . c. ding , x. he , p. husbands , h. zha , and h. d. simon .pagerank , hits and a unified framework for link analysis . in _ proceedings of the 25th annual international acm sigir conference on research and development in information retrieval _ , pages 353354 .acm , 2002 .f. geigl , d. lamprecht , r. hofmann - wellenhof , s. walk , m. strohmaier , and d. helic .random surfers on a web encyclopedia . in _ proceedings of the 15th international conference on knowledge technologies and data - driven business _ , i - know 15 , pages 5:15:8 , new york , ny , usa , 2015 .d. f. gleich , p. g. constantine , a. d. flaxman , and a. gunawardana . tracking the random surfer : empirically measured teleportation parameters in pagerank . in _ proceedings of the 19th international conference on world wide web _ , pages 381390 .acm , 2010 .z. gyngyi , h. garcia - molina , and j. pedersen .combating web spam with trustrank . in _ proceedings of the thirtieth international conference on very large data bases - volume 30 _ , pages 576587 . vldb endowment , 2004 . d. helic , m. strohmaier , m. granitzer , and r. scherer .models of human navigation in information networks based on decentralized search . in _ proceedings of the 24th acm conference on hypertext and social media _ , pages 8998 .acm , 2013 .n. li and g. chen .multi - layered friendship modeling for location - based mobile social networks . in _ mobile and ubiquitous systems : networking services , mobiquitous , 2009 .mobiquitous 09 .6th annual international _ , pages 110 , july 2009 .m. moricz , y. dosbayev , and m. berlyant .pymk : friend recommendation at myspace . 
in _ proceedings of the 2010 acm sigmod international conference on management of data _ , sigmod 10 , pages 9991002 , new york , ny , usa , 2010 .acm .n. silva , i .-r . tsang , g. cavalcanti , and i .- j .a graph - based friend recommendation system using genetic algorithm . in_ evolutionary computation ( cec ) , 2010 ieee congress on _ , pages 17 , july 2010 .potential friend recommendation in online social network . in _ green computing and communications ( greencom ) , 2010 ieee / acm intl conference on intl conference on cyber , physical and social computing ( cpscom ) _ , pages 831835 , dec 2010 .
websites have an inherent interest in steering user navigation in order to , for example , increase sales of specific products or categories , or to guide users towards specific information . in general , website administrators can use the following two strategies to influence their visitors navigation behavior . first , they can introduce _ click biases _ to reinforce specific links on their website by changing their visual appearance , for example , by locating them on the top of the page . second , they can utilize _ link insertion _ to generate new paths for users to navigate over . in this paper , we present a novel approach for measuring the potential effects of these two strategies on user navigation . our results suggest that , depending on the pages for which we want to increase user visits , optimal link modification strategies vary . moreover , simple topological measures can be used as proxies for assessing the impact of the intended changes on the navigation of users , even before these changes are implemented .
the admirable features of the kolmogorovian measure-theoretic axiomatization of classical probability theory have led to its being considered the last word on the foundations of classical probability theory, and to a general attitude of forgetting the other axiomatizations, in particular von mises' frequentist one. richard von mises' axiomatization of classical probability theory rests on the mathematical formalization of the following two empirical laws: 1 . * law of stability of statistical relative frequencies * `` it is essential for the theory of probability that experience has shown that in the game of dice, as in all other mass phenomena which we have mentioned, the relative frequencies of certain attributes become more and more stable as the number of observations is increased '' ( cfr. pag. 12 of ) 2 .
* law of excluded gambling strategies * `` everybody who has been to monte carlo, or who has read descriptions of a gambling bank, knows how many absolutely safe gambling systems, sometimes of an enormously complicated character, have been invented and tried out by gamblers; and new systems are still suggested every day. the authors of such systems have all, sooner or later, had the sad experience of finding out that no system is able to improve their chance of winning in the long run, i.e.
to affect the relative frequencies with which different colours or numbers appear in a sequence selected from the total sequence of the game. this experience forms the experimental basis of our definition of probability. '' ( cfr. pagg. 25-26 of ) according to von mises, probability theory concerns properties of collectivities, i.e. of sequences of identical objects. considering each individual object as a letter of an alphabet, we can then say that probability theory concerns elements of the set of the sequences of letters from, or, more properly, a certain subset whose elements are called * collectives *, where: [ def : classical strings on an alphabet ] set of the strings on : [ def : classical sequences on an alphabet ] set of the sequences on : where denotes the _ empty string _. given, let us denote by the string made of n repetitions of and by the sequence made of infinite repetitions of. it is important to remark that: [ th : cardinalities of strings and sequences ] on the cardinalities of strings and sequences over a finite alphabet. let us then introduce the set of the * attributes * of its elements, defined as the set of unary predicates about the generic element. the mathematical formalization of the * law of stability of statistical relative frequencies * results in the following: [ ax : axiom of convergence ] axiom of convergence, where denotes the number of elements of the prefix of c of length n for which the attribute a holds. given an * attribute * of a * collective *, the axiom [ ax : axiom of convergence ] makes consistent the following definition: von mises' frequentist probability of a in c : let us then introduce the following basic definition: [ def : gambling strategy ] gambling strategy : where, following the notation of, denotes a _ partial function _ from a to b, i.e. a total function, with called the _ halting set of f _.
if we will say that _ f does nt halt on the input x _ and will denote it by .given a gambling strategy s : [ def : subsequence extraction function ] subsequence extraction function induced by s : : \sigma^{\infty } \ ; \rightarrow \ ; \sigma^{\infty } ] it gives rise to .clearly we have that : [ cols="^,^",options="header " , ] as we see .the probability distribution of the string is the uniform distribution on : \ ; = \ ; \frac{1}{2^{n } } \ ; \ ; \forall \vec{y } \in \sigma^{n } , \forall n \in \mathbb{n}\ ] ] when such a distribution tends to the following : unbiased probability measure on : ] by proving by induction on n that \ ; = \ ; 0 \ ; \ ; \forall n \in { \mathbb{n } } ] follows immediately by the fact that .we have , conseguentially , simply to prove that \ ; = \ ; 0 \ ; \ ; \rightarrow \ ; \ ; e[payoff(n ) ] \ ; = \ ; 0 \ ; \ ; \forall s ] is the _ commutator _ between a and b. given a * classical set * m , let us introduce the following terminology : m is of type : m is of type : m is of type : m is of type : the real axis and its unitary segment both the whole real axis and the its unitary segment have the continuum power : the lebesgue measure is an unbiased probability measure for the unitary segment while it is not a probability measure on the whole real axis since it is not normalizable .hence : an analogous situation exists in the quantum case , where the rule of a * * quantum set * * is played by a non - abelian von neumann algebra a or , better , the building blocks it is made of , i.e. the * factors * contributing to its * factor decomposition*. introduced the following terminology : quantum unbiased probability measure on a : a finite , faithful , normal trace on a and assumed for simplicity that a is itself a factor we have a classification very similar to the classical one : a is of type : a is of type : a is of type : a is of type : where d is an arbitary * dimension function * on the * complete * , * orthomodular * lattice pr(a ) of the projections of a , , , , , .it must be said , for completeness , that in the quantum case there exist a suppletive one - parameter family of cases $ ] having no corrispective in the classical case that ( fortunately ) wo nt have no rule in this paper .these considerations justify the introduction of the following terminology : [ def : one qubit algebraic alphabet ] one qubit algebraic alphabet : [ def : algebraic space of quantum strings of n qubits ] algebraic space of quantum string of n qubits : [ def : algebraic space of quantum strings of qubits ] algebraic space of quantum strings of qubits : obviously is a -factor ; furthermore , being - isomorphic to , is a -factor .hence it is also - isomorphic to ( with denoting an n - dimensional hilbert space ) and admits the unbiased quantum probability measure .furthermore every state on it is * normal * and hence there exists a density matrix so that : eq.[eq : density matrix of a normal state ] shows that the algebraic characterization of quantum strings is absolutely equivalent to the usual hilbert space one based on the definitions definition[def : space of the quantum strings of n qubits ] and definition[def : space of the quantum strings of qubits ] .the whole operator algebraic machinery would then seem ( as is , indeed , often considered by not enough accultured physicists ) an arbitrary mathematical sophistication to recast in a hieratic mathematical language simple physical statements . 
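a small numerical aside may make the finite-dimensional part of this discussion concrete. the sketch below is ours and only illustrative: on the n-qubit matrix algebra the normalized trace plays the role of the unbiased quantum probability measure, projections have dimensions in the discrete set { 0, 1/2^n, ..., 1 } characteristic of a finite type i factor, and every ( normal ) state is represented by a density matrix.

```python
# sketch (our notation): on the n-qubit matrix algebra M_{2^n}(C) the normalized
# trace tau(a) = Tr(a) / 2^n is the unbiased quantum probability measure, and a
# normal state is given by a density matrix rho via omega(a) = Tr(rho a).
import numpy as np

n = 3
dim = 2 ** n

def tau(a):
    return np.trace(a).real / dim        # unbiased (tracial) state, tau(identity) = 1

# a projection p of rank k has tau(p) = k / 2^n: the dimension function takes
# only the discrete values {0, 1/2^n, ..., 1} of a finite type i factor
p = np.diag([1.0] * 5 + [0.0] * (dim - 5))
print(tau(np.eye(dim)), tau(p))          # 1.0 and 5/8

# a random density matrix rho defines a normal state omega(a) = Tr(rho a)
rng = np.random.default_rng(2)
X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = X @ X.conj().T
rho /= np.trace(rho).real                # positive, trace one
a = X + X.conj().T                       # a self-adjoint "observable"
print(np.trace(rho @ a).real)            # the expectation omega(a)
```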
as far as our issues is concerned ,if for tended to an -factor this would be indeed true since , in this case , there would exist an infinite - dimensional hilbert space such that : 1 . would be - isomorphic to 2 .every state on would be * normal * and hence there would exist a density matrix so that : admitting to recast again the analysis in the usual hilbert space formulation ( at the price of some quantum - logical subtility owed to the fact that the lattice in this case would nt be * modular * ) * but the factor to which tends for is of type . * this can be shown in the following way : the restriction of the unbiased quantum probability measure to is a dimension function so that : since : it follows that the infinite tensor product of ca nt be of type and , conseguentially : 1 .it is not -isomorphic to a 2 .a state on it is not , in general , normal and , hence , ca nt be represented by a density matrix making more rigorous the previous informal arguments let us introduce the following : [ def : algebraic space of quantum sequences of qubits ] algebraic space of quantum sequences of qubits : it can be proved that admits an unbiased quntum probability measure and is then of type (implying that the lattice is * modular * ) .let us finally introduce the following notions : [ def : algebraic quantum coin ] algebraic quantum coin : a * quantum random variable * on the * quantum probability space * [ def : third kind quantum casino ] third kind quantum casino : a quantum casino specified by the following rules : 1 .at each turn n the croupier throws an * algebraic quantum coin * obtaining a value 2 . before each algebraic quantum coin tossthe gambler can decide , by adopting a * quantum gambling strategy * , among the following possibilities : * to bet one fiche on an a letter * not to bet at the turn 3 .if he decides for the first option it will happens that : * he wins a fiche if the distance among and b is less or equal to a fixed quantity . * he loses the betted fiche if the distance among and a is greater than where the adoption of a * tensor gambling strategy * is formalized in terms of the following notion : [ def : third kind quantum gamling straregy ] third kind quantum gambling strategy : the concrete way in which the gambler applies , in every kind of quantum casino , the chosen strategy s is always the same : * if s does nt halt on the * previous game history * he does nt bet at the next turn * if s halts on on the past game history he bets s(previous game history ) lets us denote by the occured quantum sequence of qubits and with its * quantum prefix of length n * , i.e. 
the quantum string of the results of the first n quantum coin tosses .quantum casinos could seem , at this point , an abstruse mathematical concept ; they are , anyway , as concrete as classical casinos and may be concretelly simulated by the following mathematica code : where the initial assignations may be arbitarily variated from their default : a number of quantum coin tosses , an error of , an edge s length of the origin - centered square of the complex plane to which belong random matrices entries of 10 and the adoption of the gambling strategy discussed in the following example : betting on pauli matrices choosing according to the height of the unbiased quantum probability measure let us consider the following * third kind quantum gambling strategy * : where denotes the empty quantum string .let us imagine that the results of the first three quantum coin tosses are : so that : where we have passed from four to zero decimal ciphres to save space .gambler s evening to a third kind quantum casino may be told in the following way : * at the beginning he has ; since at the first turn he does nt bet we have obviously that * since : he bets on . * since : he loses his fiche .consequentially * since : he bets on . * since : he loses his fiche .consequentially * since : he bets on . * since : he loses his fiche .consequentially * since : he bets on . *since : he loses his fiche .consequentially exactly as it happened for the other kinds of quantum casinos , the notion of a * third kind quantum casino * induces naturally the notion of a * third kind collective * : third kind quantum collectives : induced by the axiom[ax : axiom of randomness ] and the assumption that the * third kind quantum admissible gambling strategies * are nothing but the * quantum algorithms * on : algoritmic information theory is a young field of research in which there is not general agreement even on the basic notion , i.e the correct way of defining quantum algorithmic information , but a plethora of different attempts : * karl svozil s original creation of the research field * yuri manin s final remarks in his talk at the 1999 s bourbaki seminar * paul vitanyi s definition * the definition by andr berthiaume , wim van dam and sophie laplante * our proposal * the approach by peter gacs 1 . the characterization of the notion of * quantum algorithmic randomness * as * quantum algorithmic incompressibility * 2 . the formulation and proof of * quantum - algorithmic - information undecidability theorems * analogous to chaitin s undecidability theorems poning constraints on the decidability of , respectively , quantum algorithmic information and the * * quantum halting probability * * since effective - realizable measurements are particular * quantum algorithms * the issue of characterizing the right notion of * quantum algorithmic randomness * is related with the issue of the * classical algorithmic randomness * of quantum measuremnts outcomes recentely analyzed by ulvi yurtsever .exactly as it happened for the classical notion of martin lf solovay chaitin randomness , it is rather natural to think that the right notion of * quantum algorithmic randomness * will emerge as the more stable one , i.e. 
as that notion to which completely independent approaches, belonging to completely different frameworks, collapse. let us observe, by the way, that the theorem [ th : not existence of kolmogorov random sequences of cbits ] does not generalize to the quantum domain, since, given a * quantum probability space * and introducing the following straightforward noncommutative generalization of the previously introduced classical notions: set of the quantum kolmogorov random elements of qps : we have that the unbiased quantum probability of a single quantum sequence is not necessarily null, so that: where is the * unbiased quantum probability space of quantum sequences *, while is the predicate, implying that the proof of the theorem [ th : not existence of kolmogorov random sequences of cbits ] does not hold in the quantum case. it is highly probable that the answer to such a question is negative, since the notion of quantum kolmogorov randomness does not seem to have the features of a notion that is a candidate to be a measure of ( quantum ) algorithmic incompressibility. m. b. pour-el. the structure of computability in analysis and physical theory : an extension of church's thesis. in e. r. griffor, editor, _ handbook of computability theory _, pages 449-472. elsevier science b.v., 1999.
we introduce and analyze a quantum analogue of the law of excluded gambling strategies of classical decision theory through the definition of different kinds of quantum casinos. the necessity of taking entanglement into account ( by the way, we give a straightforward generalization of schmidt's entanglement measure ) forces us to adopt the general algebraic language of quantum probability theory, whose essential points are reviewed. the mathematica code of two packages simulating, respectively, classical and quantum gambling is included. the deep link existing between the censorship of winning quantum gambling strategies and the central notion of quantum algorithmic information theory, namely quantum algorithmic randomness ( by the way, we introduce and discard the naive noncommutative generalization of the original kolmogorov definition ), is analyzed.
the domains of scientific visualization , medical visualization , cad , and virtual reality often require large amounts of time for gpu rendering . at the high end , these computations require many computers in a cluster for highly gpu - intensive graphics rendering . at the medium- and low - endthis rendering may still require large amounts of gpu - intensive graphics rendering , but on a single computer .for such long - running jobs , checkpointing the state of graphics is an important consideration .transparent checkpointing is strongly preferred , since writing application - specific code to save graphics state is highly labor - intensive and error - prone .this work presents an approach to transparent checkpoint - restart for gpu - accelerated graphics based on record - prune - replay .the gpu support is vendor - independent .it is presented in the context of opengl for linux , but the same approach would work for other operating systems and other graphics apis , such as direct3d . previously , lagar - cavilla presented vmgl for vendor - independent checkpoint - restart . that pioneering work employs the approach of a shadow device driver for opengl , which shadows most opengl calls in order to model the opengl state , and restores it when restarting from a checkpoint . while that tour de force demonstrated feasibility for most of opengl 1.5 , the vmgl code to maintain the opengl state grew to 78,000 lines of codethis document proposes a simpler , more maintainable alternative approach whose implementation consists of only 4,500 lines of code , to support opengl 1.5 .the approach is based on replaying a log of library calls to opengl and their parameters .the opengl - specific code is written as a plugin on top of a standard checkpoint - restart package , dmtcp .the role of dmtcp in this work corresponds to the role of virtual machine snapshots in vmgl . the key observation behind the _ record - prune - replay _ approach is that in order to restore the graphics state after restart , it suffices to replay the original opengl calls .more precisely , it suffices to replay just those opengl calls that are relevant to restoring the pre - checkpoint graphics state .for example , opengl provides two functions , glenableclientstate and gldisableclientstate .both functions take a `` capability '' parameter .each capability is initially disabled , and can be enabled or disabled . at the end of a sequence of opengl calls , a capability is enabled or disabled based only on which of the two functions was called last for that capability .so , all previous calls for the same capability parameter can be pruned ._ pruning _ consists of purging all opengl calls not needed to reproduce the state of opengl that existed at the time of the checkpoint .the current work is implemented as a plugin on top of dmtcp .checkpoint - restart is the process of saving to traditional disk or ssd the state of the processes in a running computation , and later re - starting from stable storage ._ checkpoint _ refers to saving the state , _ resume _ refers to the original process resuming computation , and __ refers to launching a new process that will restart from stable storage .the checkpoint - restart is _ transparent _ if no modification of the application is required .this is sometimes called _ system - initiated _ checkpointing . 
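to make the pruning idea concrete , the following sketch ( not the plugin 's actual code , which is written in c and python ) applies the glenableclientstate / gldisableclientstate rule described above ; the log format used here , a list of ( function name , arguments ) tuples , is an assumption made only for illustration .

```python
# minimal sketch of capability-based pruning; the log format (a list of
# (function_name, args) tuples) is an assumption made for illustration.

CAPABILITY_FUNCS = {"glEnableClientState", "glDisableClientState"}

def prune_capability_calls(log):
    last_call_for_capability = {}           # capability -> index of last call
    for i, (func, args) in enumerate(log):
        if func in CAPABILITY_FUNCS:
            last_call_for_capability[args[0]] = i
    keep = set(last_call_for_capability.values())
    return [entry for i, entry in enumerate(log)
            if entry[0] not in CAPABILITY_FUNCS or i in keep]

# example: only the final call per capability survives
log = [("glEnableClientState",  ("GL_VERTEX_ARRAY",)),
       ("glDisableClientState", ("GL_VERTEX_ARRAY",)),
       ("glEnableClientState",  ("GL_COLOR_ARRAY",)),
       ("glEnableClientState",  ("GL_VERTEX_ARRAY",))]
print(prune_capability_calls(log))
```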
finally , this work is demonstrated both for opengl 1.5 and for opengl 3.0 .this is in contrast to vmgl , which was demonstrated only for opengl 1.5 .the complexity of supporting opengl 3.0 is considerably larger than that for opengl 1.5 .this is because opengl 3.0 introduces programmable pipelines , and shaders .shaders are programs compiled from a glsl c - like programming language .shaders are then passed to the server side of opengl ( to the gpu ) .when including the opengl 3.0 functions , the size of our implementation grows from 4,500 lines of code to 6,500 lines of code ( in c and python ) .[ [ organization - of - paper . ] ] organization of paper .+ + + + + + + + + + + + + + + + + + + + + + in the rest of this work , section [ sec : background ] provides background on the software libraries being used .section [ sec : design ] describes the software design , and section [ sec : algorithm ] describes the algorithm for pruning the log .section [ sec : comparison ] compares the two approaches of shadow drivers and record - prune - replay .section [ sec : limitations ] describes the limitations of this approach , while section [ sec : relatedwork ] presents related work .section [ sec : experiments ] describes an experimental evaluation .finally , section [ sec : futurework ] presents future work , and section [ sec : conclusion ] presents the conclusion .the software stack being checkpointed is described in figure [ fig : software ] .software stack of a graphics application to be checkpointed . ] [ [ windowing - toolkits . ] ] windowing toolkits .+ + + + + + + + + + + + + + + + + + + opengl applications typically use a windowing toolkit in order to handle external issues such as providing a graphics context based on a window from x - windows , and handling of input ( mouse , keyboard , joystick ) , and handling of sound .two of the more popular windowing toolkits are glut and sdl .glut is noted for its simplicity , while sdl is noted for its completeness and is often used with games . in particular ,sdl is used by ioquake3 , and glut is used by pymol .both are built on top of lower - level glx functions .this work supports calls both to sdl and to glut , as well as to glx .[ [ sec : dmtcp ] ] dmtcp .+ + + + + + dmtcp ( distributed multithreaded checkpointing ) provides transparent user - space checkpointing of processes .the implementation was built using dmtcp plugins .in particular , two features of plugins were used : 1 .wrapper functions around calls to the opengl library ; and 2 .event notifications for checkpoint , restart , and thread resume during restart ( pre_resume ) .wrapper functions around the opengl calls were used to maintain a log of the calls to the opengl library , along with the parameters passed to that library . at the time of restart ,the log was pruned and replayed , before returning control back to the graphics application .as described in the introduction , the fundamental technique to be used is record - prune - replay .the key to pruning is to identify the dependencies among opengl calls in the log .note that it is acceptable to over - approximate the dependencies ( to include additional dependencies that are not required ) , as long as the size of the pruned log continues to remain approximately constant over time .this is sometimes used to produce a simpler model of the dependencies among opengl calls . 
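before turning to the dependency rules , a conceptual skeleton of the record - and - replay structure just described may be useful . it is a sketch only , written in python for brevity ( the real implementation is a dmtcp plugin in c ) , and the hook names on_checkpoint / on_restart are placeholders rather than dmtcp 's actual plugin api .

```python
# conceptual skeleton of the plugin: wrappers record calls, the log is pruned
# at checkpoint time, and the pruned log is replayed at restart; the hook
# names below are placeholders, not DMTCP's actual event-notification API.

graphics_log = []                           # one entry per intercepted call

def logging_wrapper(real_func, name):
    def wrapper(*args):
        graphics_log.append((name, args))   # record before forwarding
        return real_func(*args)
    return wrapper

def prune(log):
    return log                              # pruning rules are sketched later

def on_checkpoint():
    graphics_log[:] = prune(graphics_log)   # keep the saved log small

def on_restart(real_funcs):
    # after process memory has been restored, a new OpenGL context is created
    # and the pruned log is replayed, so the driver and GPU state match what
    # the application expects.
    for name, args in graphics_log:
        real_funcs[name](*args)
```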
in maintaining dependencies, the key observation is that if the function can only set the single parameter on which depends , then only the last call to prior to calling needs to be preserved .however , the situation can become more complicated when a single function is capable of setting multiple parameters .for example , imagine a function : + .imagine further that and exist .then it is important the last call be retained for each category , where is one of the three functions , , and , and is a value of the parameter .several additional considerations are important for a correct implementation . [[ reinitializing - the - graphics - library - during - restart . ] ] reinitializing the graphics library during restart .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + sdl provides a function sdl_quit ( ) , that resets all state to its initial state .that function is required for the current work , since at the time of restart , the pruned log of calls to opengl is replayed .glut does not appear to provide such a re - initialization function .hence , for glut , we make a copy of the glut library in memory ( both text and data segments ) prior to initialization , and at the time of restart , we overwrite this memory with a copy of the glut library memory as it existed prior to the first call to that glut library .potentially , one must also worry about re - initializing a low - level graphics library such as dri .we posit that this is usually not necessary , because a well - structured graphics library like opengl or a generic dri library will determine which vendor s gpu is present , and then directly load and initialize that vendor - specific library as part of its own initialization .note that a subtle issue arises in overwriting a graphics library during restart .this overwriting occurs only after the checkpoint - restart system has restarted all pre - checkpoint threads , open files , etc . yet, some graphics libraries directly create a new thread .this issue arose , for example , in the case of the dri library for the ati r300 .when it initializes itself , it creates an auxiliary thread to maintain its internal data structures .the auxiliary thread still exists at the time of restart , even though the re - initialization creates an additional thread .hence , one must kill the older thread on restart .this is accomplished through the pre_resume event notification from the dmtcp checkpoint - restart subsystem . otherwise , the older thread would write to its data structures at the same time as the newer thread , causing memory to be corrupted .[ [ sdl - glut - and - glx . ] ] sdl , glut and glx .+ + + + + + + + + + + + + + + + + + in this work , we intercept and log calls to sdl and glut as a proof of principle .these windowing toolkits provide high - level abstractions for calls to glx . in the future ,we intend to directly intercept calls to glx in the opengl library .this will provide support for all windowing toolkits .[ [ virtualization - of - graphics - ids . ] ] virtualization of graphics ids .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + most of the wrappers around opengl functions are used only to maintain the log . 
in a few cases, we also required virtualization of certain ids for graphics objects .this is because the graphics i d could change between checkpoint and a restart in a new session .hence , a translation table between virtual ids and real ids was maintained for these cases .these cases included texture ids , sdl surface ids , and joystick ids .[ [ sec : pointers ] ] saving pointers to memory for later replay .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in most opengl functions , the entire argument is passed to the opengl library .but in a few cases , a pointer to application memory is passed to the opengl library . in these cases , the data in memory at the time of the callmay be altered at the time of checkpointing .the implementation must then copy the data into separately allocated memory .examples occur for calls to glteximage , gltexcoordpointer , glcolorpointer , glvertexpointer , glbufferdata , gltexparameterfv , and others .virtualization of opengl ids follows a standard paradigm for dmtcp plugins . for each opengl function that refers tothe type of i d of interest , the dmtcp plugin defines a wrapper function with the same type signature as the original opengl function .when the graphics application is run under dmtcp , the wrapper function is interposed between any calls from the graphics application to the opengl library .thus , each wrapper function is called by the application , possibly modifies parameters of the i d type of interest , and then calls the real opengl function in the opengl library .if the opengl function normally returns an i d ( possibly as an `` out '' parameter ) , then the wrapper function creates a new virtual i d , enters the ( virtual , real ) pair into a translation table , and then returns the virtual i d to the application . if the opengl function has an argument corresponding to that i d ( an `` in '' parameter ) , then the wrapper function assumes that the application has passed in a virtual i d .the application uses the translation table to replace it by the corresponding real i d , before calling the real opengl function in the opengl library .finally , at the time of restart , the dmtcp plugin must recreate all graphics objects by replaying the log . at that time, the opengl library may pass back a new real i d .since the log retained the original virtual i d , the plugin then updates the translation table by setting the translation of the virtual i d to be the new real i d generated by opengl .the current implementation of the log of opengl calls maintains that log as human - readable strings .further , the code for pruning the log is currently a python program .even worse , the log of calls is maintained solely as a file on disk , even though this is much slower than maintaining the log in ram .all of this is done for simplicity of implementation . while pruning the log in this way undoubtedly adds at least one order of magnitude to the time for pruning , the effect on performance is minor .this is because the plugin periodically creates a child process , which prunes the log in parallel , and then notifies the parent of success .the child process uses an otherwise idle core of the cpu . as figure[ fig : cdf - plot ] shows , the overhead of doing this is too small to be measured , when an idle core is available .nevertheless , while a human - readable log on disk is advantageous for software development and debugging , a future version of this software will employ c code for pruning . 
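returning to the id - virtualization paradigm described above , the following sketch shows the translation - table bookkeeping in python pseudocode ; the real wrappers are c code , and real_glgentextures / real_glbindtexture stand in for calls into the vendor opengl library rather than being actual api names .

```python
# sketch of the virtual/real ID translation performed by the wrappers; the
# real_* callables stand in for the vendor OpenGL library and are assumed
# names used only for illustration.

class IdTranslator:
    def __init__(self):
        self.next_virtual = 1
        self.virtual_to_real = {}

    def new_virtual(self, real_id):
        v = self.next_virtual
        self.next_virtual += 1
        self.virtual_to_real[v] = real_id
        return v                              # the application only sees this

    def to_real(self, virtual_id):
        return self.virtual_to_real[virtual_id]

    def rebind(self, virtual_id, new_real_id):
        # used while replaying the pruned log on restart, when the new
        # X server hands back a different real ID for the same object
        self.virtual_to_real[virtual_id] = new_real_id

textures = IdTranslator()

def wrapper_glGenTexture(real_glGenTextures):
    real_id = real_glGenTextures()                  # "out" parameter
    return textures.new_virtual(real_id)

def wrapper_glBindTexture(real_glBindTexture, target, virtual_id):
    real_glBindTexture(target, textures.to_real(virtual_id))   # "in" parameter
```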
furthermore , the current implementation represents each opengl function and parameter as an ascii string ( and even integers are written out as decimal numbers in ascii ) .the future implementation will use binary integers to represent opengl function names and most parameters .opengl 3.0 simplified future revisions of the api by deprecating some older features , including glpush , glpop and display lists .programmable vertex and fragment processing , which was introduced in opengl 2.0 , became a core feature of opengl 3.0 , and all fixed - function vertex and fragment processing were deprecated .the programmable pipelines have become the standard for modern opengl programming .shaders and vertex buffer objects ( vbos ) are two important concepts in modern opengl .these two types of objects are both stored in the graphics memory ( on the server - side ) .vertex buffer objects were introduced in opengl 1.5 , but they had not been widely used until opengl 3.0 .display lists were an earlier attempt to store program objects on the server - side , but using a fixed program model in opengl .these were deprecated in modern opengl .shaders are a large part of the motivation for opengl 3.0 .a shader is a program written in the glsl language , which runs on the gpu instead of the cpu . a shader program in the glsl language is separately compiled into binary before being used by an opengl program .the plugin described here places a wrapper function around all opengl functions related to shaders .example functions are glcreateshader , glshadersource , glcompileshader , gllinkprogram , and so on . in particular, we virtualize the shader i d parameter for these functions .since the user application runs on the cpu and shaders run on the gpu , a method of copying data from the cpu to the gpu is needed .vertex buffer objects ( originally introduced in opengl 1.5 ) fulfill this role .this allows various types of data ( including shader programs ) to be cached in the graphics memory of the gpu .the approach described here preserves all memory buffers associated with a vertex buffer object by copying them into a temporary buffer in memory . that memory will be restored on restart , thus allowing the plugin to copy the same buffer from memory into graphics memory .we have found the log - based approach to be especially helpful from the point of view of software engineering . in the first stage of implementation , we wrote wrapper functions to virtualize each of the opengl ids , so as to guarantee that record - replay will work .( in certain cases , such as testing on ioquake3 , even this stage could largely be skipped , since the application chooses its own texture and certain other ids . )since the wrapper functions often follow a common pattern ( e.g. , translate the i d parameter from virtual to real , and then call the real opengl function in the opengl library ) , we also wrote a preprocessing language .this language concisely takes the declaration of an opengl function as input , along with a declaration of which parameter is to be virtualized , and then generates the desired wrapper function in c. 
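as an illustration of that preprocessing step , the sketch below generates the c wrapper text from a declaration plus the name of the parameter to be virtualized ; log_call , virtual_to_real_* and the real() macro are hypothetical helper names used only for the example , and only the void - returning case is shown .

```python
# sketch of the wrapper generator: declaration in, C wrapper text out.
# the helper names inside the template (log_call, virtual_to_real_*, REAL)
# are hypothetical; only void-returning functions are handled here.

TEMPLATE = """\
void {name}({params}) {{
    log_call("{name}", {args});               /* record with the virtual id */
    {virt} = virtual_to_real_{virt}({virt});  /* translate the "in" param   */
    REAL({name})({args});                     /* forward to the real OpenGL */
}}"""

def generate_wrapper(name, params, virtualized):
    args = ", ".join(p.split()[-1].lstrip("*") for p in params)
    return TEMPLATE.format(name=name, params=", ".join(params),
                           args=args, virt=virtualized)

print(generate_wrapper("glBindTexture",
                       ["GLenum target", "GLuint texture"], "texture"))
```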
thus , full c code is available for compiling and debugging at all times .however , ensuring that all pointers to memory are saved for later replay ( see section [ sec : pointers ] ) continued to consume significant amounts of time for debugging .finally , the code for pruning the log can be developed iteratively .code is written for pruning a set of related opengl functions , and the record - prune - replay cycle is tested for correctness by running it inside a sophisticated opengl application .then another set of related opengl functions is targeted , and similarly tested .this breaks down the code - writing into small , independent modules .the strategy for pruning the log of opengl calls is based on dependencies among opengl functions .opengl functions for drawing a graphics frame are identified and form the root of a dependency tree for all opengl functions ( along with any glx extensions ) .a node is a child of a parent node if executing an opengl call of the type given by the child can potentially affect a later opengl call of the type given by the parent .hence , in this tree , almost every opengl function is a descendant of gldraw , since there is almost always a sequence of calls beginning with and ending with gldraw , such that changing the parameters of a call to , or removing call entirely , will affect what is displayed through gldraw .an example of such a dependency tree is given in figure [ fig : textures ] . the opengl functions that draw the next graphics frame ( gldraw ) , or flush the current graphics to the framebuffer ( glfinish ) , form the root of the dependency tree .note also that a call to glgentextures can not affect how a call to glclearcolor ( setting the `` background '' color when clearing the buffer ) , and similarly , vice versa .hence , if there are two calls to glclearcolor , with an intervening call to glteximage and no intervening calls to gldraw or glfinish , then it is safe to prune the first call to glclearcolor . only the second call to glcolor can affect a later call to gldraw / glfinish , and the first call to glcolor can not affect the call to glteximage .dependency tree for some opengl functions . ] in general , the algorithm begins by finding the last call to gldraw or glfinish .no earlier call to gldraw or glfinish can have an affect on a final call to gldraw .hence , we search within the log for the last opengl call that drew or flushed a graphics frame .we only prune calls until this last drawing call .this has the benefit of re - drawing the last graphics frame properly upon replay , and additionally avoiding any bad corner cases that might occur by replaying only until the middle of the commands for a graphics frame .after this , branches of this dependency tree are identified , such that calls to a function within a branch may affect the calls for drawing a graphics frame , but do not affect the behavior of any other branch .this is illustrated in figure [ fig : textures ] , where one branch is concerned with clearing a buffer , and the other is concerned with creating , deleting or modifying the effect of textures . [ [ textures - a - running - example . 
] ] textures : a running example .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + next , we discuss a running example , concerning textures in opengl 1.5 .this illustrates the general approach taken toward pruning the log by maintaining a tree of dependencies among opengl calls .a texture can be assigned to a target , such as the targets gl_texture_1d , gl_texture_2d , gl_texture_3d .the glbindtexture binds a texture i d to a target .other opengl functions operate solely on a specific target , where the target acts as an alias for the texture i d that was most recently bound to that target . thus, a call to draw a graphics frame depends only on the texture i d that was most recently bound to a target , for each of the targets .thus , for a given target , one need only retain the most recent call to glbindtexture for that same target .hence , it may _ seem _ that an older call to glbindtexture may be dropped from this dependency tree , if a more recent call to the same target exists .unfortunately , this is not the case , due to the issue of texture parameters .we continue the example of textures in the following .each texture denoted by a texture i d has many associated parameters .these parameters are affected both by the gltexparameter and the glteximage family of opengl functions .( one can think of glteximage as specifying a single texture parameter , the image associated with a texture .note that these functions specify only a target , and not a texture i d . ) in order to incorporate the dependencies on texture parameters , we conceptually view the `` live objects '' prior to drawing a graphics frame as consisting of _ texture triples_:(texture i d , texture target , texture parameter)a function within the gltexparameter or glteximage family directly specifies a texture target and a texture parameter .it also implicitly specifies a texture i d .more precisely , it specifies the texture i d corresponding to the most recent of the earlier calls to glbindtexture that was applied to the same texture target .hence , a call within the gltexparameter family may be dropped if it refers to the same triple as a later call within the log of opengl calls .if a call within the glteximage family is viewed as affecting an extra `` image '' parameter , then the same rules apply .finally , one can now state the rule for when a call to glbindtexture may be dropped . a call within the gltexparameter orglteximage family refers to a given texture triple .this call to gltexparameter or glteximage refers implicitly to a texture i d through an earlier call to glbindtexture , as described earlier .all such earlier calls to glbindtexture must be retained , while all other such calls may be dropped . in the running example of textures, one can see that the size of the log is limited to those calls relevant to a given texture triple .since the number of texture ids of a graphics program will be limited , this will determine the size of the log , in practice .in this section , following a review of vmgl , section [ sec : designchoices ] compares the different design choices of vmgl and the current approach. then section [ sec : features ] compares the features and expected performance that result from these design choices .[ [ review - of - the - design - of - vmgl . 
] ] review of the design of vmgl .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + as mentioned earlier , vmgl uses a shadow driver strategy , similar in spirit to the earlier shadow device driver work of swift .in contrast , the approach described here uses a record - prune - replay strategy . in this section, we will always refer to the three phases of checkpoint - restart as : checkpoint ( writing to disk a checkpoint image ) ; resume ( resuming the original process after writing the checkpoint image ) ; and restart ( restarting a new process , based on a checkpoint image on disk ) . in the vmgl work , they typically refer to checkpoint as `` suspend '' and restart as `` resume '' .they do not use the term `` restart '' at all .further , the vmgl work uses wiregl in its implementation .wiregl has now been absorbed into the current chromium project ) .we continue to refer to wiregl here , for clarity of exposition , although any new implementation would use chromium rather than wiregl .finally , in the vmgl approach , the saved graphics state for opengl contexts falls into three categories . * global context state : including the current matrix stack , clip planes , light sources , fog setting , visual properties , etc . *texture state * display lists this enabled vmgl to support most of opengl 1.5 .the vmgl work is based on a shadow driver design employing a snapshot of a virtual machine , while the the current work is based on the use of record - prune - replay operating directly within the end user s current desktop .these two design choices have a variety of consequences . here, we attempt to codify some of the larger consequences of those design choices . 1. virtual machine versus direct execution : 1 .vmgl is designed to operate with a virtual machine such as xen . at checkpoint time, vmgl uses vnc to detach a graphics viewer at checkpoint time , and to re - attach at restart time .it uses wiregl to pass opengl commands between the guest operating system ( containing the application ) and the host ( where the graphics are displayed ) .the current approach is based on dmtcp , a package for transparent checkpoint - restart .a dmtcp plugin records the opengl calls , and periodically prunes the log .the graphics application executes within the user s standard desktop , with no vnc .checkpoint : ( actions at checkpoint time ) 1 .vmgl calls on the virtual machine snapshot facility to create a checkpoint image ( a virtual machine snapshot ) .the dmtcp plugin deletes the graphics window ( along with the application s opengl contexts ) , and then calls on dmtcp to write a checkpoint image .3 . resume : ( resuming the original process after writing a checkpoint image ) 1 .vmgl has very little to do on resume , and is fully efficient .2 . the dmtcp plugin must recreate the graphics window with the application s opengl context , and then replay the log .restart : ( restarting from a checkpoint image on disk ) 1 .vmgl restores the virtual machine snapshot along with its vmgl library and x server .wiregl connection allows it to re - synchronize with a stub function within a vnc viewer .the dmtcp plugin restores the the graphics application memory from the dmtcp checkpoint image .the plugin then connects to the x server and creates a new graphics window and replays the pruned log , which creates a new opengl context , and restores the opengl state .5 . 
_ restoring opengl ids : _ ( in the opengl standard , many opengl functions return ids for numerous graphics objects , including textures , buffers , opengl programs , queries , shaders , etc .since the graphics application will usually cache the ids , it is important for a checkpointing package to use the same graphics ids on restart that were passed to the application prior to checkpoint . ) 1 .vmgl takes advantage of the fact that a new opengl context will provide ids for graphics object in a deterministic manner .so at restart time , vmgl makes opengl calls to recreate the graphics objects in the same order in which they were created .2 . unlike vmgl ,the dmtcp plugin guarantees that the application always sees the same graphics i d because the plugin only passes virtual ids to the application .the plugin provides a wrapper function around any opengl function that refers to a graphics i d .the wrapper function translates between a virtual i d ( known to the application ) and a real i d ( known to the current x server ) . at restart time , the translation table between virtual and real ids is updated to reflect the real ids of the new x server .next , we discuss some of the differences in features and performance for vmgl and dmtcp plugin .we _ do not _ argue that one approach is always better than the other .our goal is simply to highlight what are the different features and performance to be expected , so that an end user may make an informed choice of which approach is best for a particular application . 1 ._ time for checkpoint - resume : _ the vmgl time for checkpointing and resuming the original process is dominated by the time to write out a snapshot of the entire operating system .the dmtcp time for checkpointing and resuming the original process is dominated by two times : the time to write out the graphics application memory , and the time to prune the log before checkpoint , and then replay the pruned log .note that the plugin periodically prunes the log , so that pruning at checkpoint time does not require excessive time .( currently , the time to replay the pruned log is a little larger .this is expected to change in future work , when the the pruning code will be switched from python to c. ) 2 ._ size of checkpointed data : _ while both approaches must save the server - side data , vmgl also writes out a snapshot of both the graphics application memory and its entire operating system at checkpoint time .in contrast , the dmtcp plugin must write out just the graphics application . however , the plugin has an extra burden in also needing to write out the pruned log ._ rendering over a network : _ since vmgl employs vnc and wiregl , it is trivial for vmgl to operate over a network in which the vnc server and the vnc viewer execute on different machines .( the vmgl project also makes a minor modification to vnc and places a vmgl - specific x extension inside the guest operating system . 
)this mode is also possible for the dmtcp plugin , since dmtcp can checkpoint and restart a vnc server after the vnc viewer has disconnected .however , most vnc servers do not support gpu - accelerated opengl implementations .thus , the dmtcp plugin approach would have to be extended to include a compatible vnc that uses an analog of wiregl ( chromium ) .this has _ not _ been implemented in the current work .support for heterogeneous gpu architectures : _ while both vmgl and the dmtcp plugin are vendor - independent , vmgl has the further advantage of being able to checkpoint on a computer with one type of gpu and to restart on another computer with a different type of gpu .this is because the application always `` talks '' to the same graphics driver ( the one saved as part of the guest virtual machine snapshot ) , and wiregl insulates this graphics driver within the guest from the graphics driver on the host .one can restart on a different host with a different graphics driver . in the case of the dmtcp plugin , see section [ sec : limitations ] for a discussion of future work in which the plugin could also capture heterogeneous operation ._ support for distinct operating systems : _ the plugin approach operates within x windows , and requires dmtcp , which has been implemented only for linux .the vmgl approach runs well over multiple operating systems , including linux .vmgl can also make use of wined3d to translate from the windows - oriented direct3d graphics to opengl ._ gpu - accelerated opengl graphics within a virtual machine : _ the vmgl project was created specifically to provide gpu - accelerated graphics for a virtual machine .although dmtcp has been used to checkpoint a kvm machine , it is not clear how to extend that work to support gpu - accelerated opengl for virtual machines ._ window manager : _ since the dmtcp plugin operates within an existing desktop , it is possible to restart a graphics application on a different desktop _ with a different window manager_. this is not possible in vmgl , since the window manager is part of the virtual machine snapshot , and can not be changed on restart ._ virtualization of opengl ids : _ vmgl must maintain every i d ever created since the startup of the graphics application .it then recreates them in the original order at the time of restart .it is usually not a burden to maintain all ids , since typical applications create their graphics objects only near the beginning of the program .but an atypical application might choose to frequently create and destroy graphics ids , and this would force vmgl to carefully model the choice of new ids made by an opengl implementation during creation and deletion. this model might be vendor - specific , thereby losing its desirable vendor - independent quality .in contrast , the dmtcp plugin virtualizes the opengl ids ( see section [ sec : virtualization ] ) .this allows the plugin to insulate itself from the particular pattern of opengl ids passed back by the x server .several limitations are discussed .certain graphics operations take pointers to large data objects .a user program may modify these data objects , or even delete them from memory entirely .if the data object is not available in memory at the time of restart , then replaying the graphics operation will not faithfully reproduce the pre - checkpoint operation .this can be compensated for by saving a copy of the data object at the time that the operation is executed . 
butthis is problematic if the graphics operation is called repeatedly on the same data object .a hash on the memory of the data object can be used to determine if the data object has been modified since the last call , but this creates significant overhead .a good example of this occurs for the opengl function glbufferdataarb , which enables a graphics client to store the data associated with a server - side buffer object .the approach presented here works well in homogeneous architectures ( same gpu and drivers on the pre - checkpoint machine and the restart machine ) .it does not support heterogeneous architectures .this is because the graphics driver library in system memory ( in ram ) is saved with the rest of the checkpoint image .if the checkpoint image is transferred to another computer with a different gpu architecture , then the original graphics driver library will fail to operate correctly .the current approach could be generalized to heterogeneous architectures in the future by loading and unloading appropriate graphics driver libraries , but only at the expense of additional complication .opengl 4.0 adds support for arb_get_program_binary , which retrieves the contents of vendor - specific program objects for later reuse .this illustrates a situation in which any checkpointing package for gpus is likely to lose the ability to do heterogeneous checkpointing .the current work stresses checkpointing of hardware - accelerated opengl state for a standalone application .vmgl presented a pioneering approach that demonstrated checkpointing of hardware - accelerated opengl state for the first time .that work was motivated by an approach of swift et al . within the linux kernel . in that work , a separate shadow device driver within the kernel records inputs to the real driver , and models the state of the real driver .if the device driver temporarily fails due to unexpected input , the shadow driver intercepts calls between the real device driver and the rest of the kernel , and passes its own calls to the real device driver , in order to place the driver back into a `` sane '' state .this is comparable to the approach of vmgl , except that swift discuss a shadow device driver inside the kernel , and vmgl maintains a library within the graphics application itself to model the entire opengl state ( drivers and gpu hardware ) , and to restore that state during restart .the current work is distinguished from the previous two examples in that we do not maintain shadow driver state , but instead we log the calls to the opengl api , along with any parameters .both vmgl and the current work potentially suffer from having to save the state of large data buffer objects .both approaches are potentially limited when a user - space vendor - dependent library ( such as dri ) is saved as part of the application , and the application is then restarted under another vendor s graphics hardware . both approaches could unload the old library and load a new library at the time of restart . this issue does not arise in virtual machines for which the user - space vendor - dependent library may be eliminated through paravirtualization . there is also a rich literature concerned with extending virtual machine support to include run - time access from within the virtual machine to hardware - accelerated graphics .vmgl also appears to have been the first example in this area . 
to date , aside from vmgl , none of these approaches provide the ability to checkpoint and restore the state of the hardware - accelerated graphics .several open source or free packages , including wiregl ( since incorporated into chromium ) , virtualgl , apitrace provide support for extending opengl and other 3d graphics apis , so that calls to the graphics api for a client within the guest o / s of the virtual machine are passed on to a graphics server outside of the virtual machine .this external graphics server is often part of a host virtual machine , but other schemes are available , including passing graphics commands across a network .schmitt present virtgl for virtualizing opengl while preserving hardware acceleration . among the solutions for providing virtual access to hardware - accelerated gpusare those of dowtyetal ( vmware workstation and vmware fusion ) , with a virtualized gpu for access to the gpu of a host virtual machine .similarly , duato et al . have presented a gpu virtualization middleware to make remote gpus available to all cluster nodes .gupta present gvim , which demonstrates access to nvidia - based gpu hardware acceleration from within the xen virtual machine .along similar lines , wegger present virtgl .lin present live migration of virtual machines across physical machines , while supporting opengl .finally , the approach of record - prune - replay has also been applied in the context of deterministic record - replay for a virtual machine . in that context ,one applies time slices and abstraction slices in order to limit the record log to those calls that do not reveal sensitive information to an unprivileged user .there are two larger questions that we wish to answer in this experimental evaluation .first , we measure the overhead of logging the opengl calls ( section [ sec : logging ] ) , and also of periodically pruning the log ( section [ sec : pruning ] ) to prevent the log from growing too large. these two forms of overhead are measured .second , in section [ sec : pruning ] we measure the growth of the size of the log over a long - running graphics program , in order to verify that the size of the log approaches a plateau and does not grow excessively .finally , section [ sec : ckptrestarttimes ] shows that checkpointing requires up to two seconds , even at the larger resolutions , while restarting requires up to 17 seconds .while the above results are concerned with a detailed analysis of opengl 1.5 , section [ sec : opengl3experiments ] demonstrates that the same approach extends to opengl 3.0 .that section relies on pymol ( molecular visualization ) for real - world testing .-10pt the detailed testing for opengl 1.5 relies on ioquake3 , the well - known open source game quake3 ( see figure [ fig : ioquake - screen ] ) .ioquake3 is chosen due to its existing use as an open source benchmark , and particularly for comparability with the use of vmgl .the implementation of ioquake3 is described by stefyn . for each of the experiments , we ran the freely available 60 second quake3 demo .[ [ configuration . 
] ] configuration .+ + + + + + + + + + + + + + the experiments were run on an 8-core intel core - i7 laptop computer with 8 gb of ram .the host operating system was a 64-bit version of ubuntu-13.10 with linux kernel 3.11 .for these experiments , we used an nvidia gt 650 m graphics card and the open - source nouveau driver based on mesa version 9.2.1 .dmtcp-2.0 was used for the plugin implementation .table [ tab : performance ] shows the impact on the performance of ioquake3 at different resolutions when running under dmtcp . at higher resolutions , the time spent is dominated by the time in the gpu . in this casethe overhead of recording calls to the log is negligible . .[ tab : performance ] frames per second when running natively and under dmtcp for ioquake3 [ cols="^,^,^ " , ] to test the proposed approach with opengl 3.0 we used a widely - used open - source molecular visualization software , pymol .figure [ fig : pymol ] shows the application running a demonstration with different representations of a protein .we observe that the application runs with a 9% overhead on the frames - per - second because of the dmtcp plugin . the time to checkpoint , and the time to restart ( that includes replaying of the logs ) are 1.4 seconds , and 10 seconds , respectively .the performance overhead , and the restart times are better because of the opengl 3.0 features calls to shader processing units , which do more work for the same log entry by the plugin . pymol screenshot ]a production - quality version of this software will be contributed to the dmtcp open source project for checkpoint - restart . in that version, the log of opengl calls will be stored in a binary format , instead of using ascii strings .two other enhancements will make it still faster .first , the program for pruning the log will be re - written in c , and made into a software module in the main executable .second , the log will be maintained in ram , instead of being stored as a file on disk .next , support for heterogeneous gpu architectures will be considered , in which the gpu on the restart machine is different from the gpu on the pre - checkpoint machine . see section [ sec : limitations ] for a further discussion of this issue . 
since software such as wiregl can be used as part of a larger rendering farm , andsince dmtcp directly supports checkpointing of networks , this work can be extended to support checkpointing over a cluster of gpu - enabled workstations for visualization .the work of vmgl has already demonstrated the possibility of using wiregl and a vnc implementation to render the 3d graphics inside a virtual machine through transport to a gpu - enabled host .the same scheme could be applied in this approach .while vnc had used wiregl and tightvnc , it is currently possible to use either wiregl with a choice of vnc implementations , or else to use virtualgl and its recommended pairing with turbovnc .support for direct3d under windows will be considered by running windows inside a virtual machine , and making use of the wined3d library for translation of direct3d to opengl .the approach of this work will be extended to work with xlib .this will eliminate the need for supporting glut and sdl .furthermore , this will directly support all 2d applications based on xlib .this improves on the current practice of using vnc to support checkpointing of 2d graphics .this work shows feasibility for a record - prune - replay approach to checkpointing 3d graphics for both opengl 1.5 and opengl 3.0 .checkpointing requires up to two seconds . while the current run - time overhead is high ( for example , 40% overhead for periodically pruning the log ) , this is attributed to the current inefficient representation of opengl calls in the log as ascii strings in a file on disk .a binary data representation is planned for the future .this will eliminate the need to do pruning in a second cpu core .previously , the only approach to transparently saving the state of gpu - accelerated opengl as part of checkpoint - restart was to use vmgl s shadow device driver along with a virtual machine snapshot .that approach was demonstrated for opengl through 78,000 lines of code . as an indication of the relative levels of effort , the current approach required only 4,500 lines of code for opengl 1.5 and and 6,500 lines in total .the current approach is implemented as a plugin that extends the functionality of the dmtcp checkpoint - restart package .this allows the graphics programs to be checkpointed directly , without the intervention of a virtual machine and its snapshot capability .we would especially like to thank james shargo for demonstrating that apitrace supports a record - replay approach , and then suggesting that although he did not personally have the time , we should pursue a record - prune - replay approach for checkpoint - restart .we would also like to thank andrs lagar - cavilla for his earlier discussions on the design and implementation of vmgl .finally , we would also like to thank daniel kunkle for his insights during an earlier investigation into the complexities of checkpointing opengl in 2008 , at a time when much of the software landscape was less mature . an efficient implementation of gpu virtualization in high performance clusters . in _ proc .of 2009 international conference on parallel processing _( berlin , heidelberg , 2010 ) , euro - par09 , springer - verlag , pp . 385394 .adaptive display algorithm for interactive frame rates during visualization of complex virtual environments . 
in _ proceedings of the 20th annual conference on computer graphics and interactive techniques _ ( new york , ny , usa , 1993 ) , siggraph '93 , acm , pp . 247–254 . opengl application live migration with gpu acceleration in personal cloud . in _ proceedings of the 19th acm international symposium on high performance distributed computing _ ( new york , ny , usa , 2010 ) , hpdc '10 , acm , pp .
providing fault - tolerance for long - running gpu - intensive jobs requires application - specific solutions , and often involves saving the state of complex data structures spread among many graphics libraries . this work describes a mechanism for transparent gpu - independent checkpoint - restart of 3d graphics . the approach is based on a record - prune - replay paradigm : all opengl calls relevant to the graphics driver state are recorded ; calls not relevant to the internal driver state as of the last graphics frame prior to checkpoint are discarded ; and the remaining calls are replayed on restart . a previous approach for opengl 1.5 , based on a shadow device driver , required more than 78,000 lines of opengl - specific code . in contrast , the new approach , based on record - prune - replay , is used to implement the same case in just 4,500 lines of code . the speed of this approach varies between 80 per cent and nearly 100 per cent of the speed of the native hardware acceleration for opengl 1.5 , as measured when running the ioquake3 game under linux . this approach has also been extended to demonstrate checkpointing of opengl 3.0 for the first time , with a demonstration for pymol , for molecular visualization .
many empirical studies report broad distributions of income and wealth of individuals and these distributions are often claimed to have power - law tails with exponents around two for most countries .the first models attempting to explain the observed properties appeared over fifty years ago .much more recently , physics - motivated kinetic models based on random pairwise exchanges of wealth by agents have attracted considerable interest .an alternative point of view is adopted in the wealth redistribution model ( wrm ) where agents continuously exchange wealth in the presence of noise .there are also several specific effects which can lead to broad wealth distributions .( for reviews of power laws in wealth and income distributions see , while for general reviews of power laws in science see . ) in this paper we analyze the wrm with two complementary goals in mind .firstly we investigate the simplest case when exchanges of all agents are identical , focusing on the validity of the mean - field approximation which is the standard tool to solve the model and derive the stationary wealth distribution .in particular , we show that for any finite number of agents there is no such stationary distribution ( other finite - size effects are discussed for a similar model in ) .secondly we investigate the model s behaviour when the network of agent exchanges is heterogeneous .previous attempts to investigate the influence of network topology on the model were all based on the mean - field approximation .we show that this is questionable because heterogeneity of the exchange network strongly limits the validity of results obtained using the mean - field approximation .adopting the notation used in , we study a simple model of an economy which is composed of agents with wealth ( ) . the agents are allowed to mutually exchange their wealth ( representing trade ) and they are also subject to multiplicative noise ( representing speculative investments ) .the time evolution of agents wealth is given by the system of stochastic differential equations ( sdes ) where controls the noise strength .the coefficient quantifies the proportion of the current wealth that agent spends on the production of agent per unit time .we assume the it convention for sdes and is standard white noise .hence , denoting averages over realisations by , we have , , and .by summing over all agents one can see that the average wealth is not influenced by wealth exchanges and obeys the sde .therefore and is constant . for simplicitywe assume ( ) and thus and .( the influence of the initial conditions is discussed in . )the system behaviour is strongly influenced by the exchange coefficients .the simplest choice is where all exchanges are equally intensive we say that the exchange network is homogeneous . by rescaling the time we can set which means that during unit time agents exchange all their wealth .consequently , simplifies to where is the average wealth of all agents but agent . in the limit ,fluctuations of are negligible and one can replace as in .agents then effectively interact only with the `` mean field '' and their wealth levels are independent . using the fokker - planck equation for the wealth distribution , the stationary solution can be found in the form ^{-1-\lambda},\quad \lambda:=1 + 1/\sigma^2.\ ] ] for , decays approximately as a power - law with exponent , while the cumulative distribution has exponent .when is well described by , we say that the system is in the _ power - law regime_. 
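to illustrate the dynamics , here is a minimal euler - maruyama sketch of the homogeneous - exchange model . since the displayed sdes are not reproduced here , the noise amplitude sqrt(2)*sigma*v_i used in the code is an assumption , chosen so that the mean - field stationary density has the quoted tail exponent lambda = 1 + 1/sigma^2 .

```python
# minimal Euler-Maruyama sketch of the homogeneous wealth-exchange model.
# assumption: noise enters as sqrt(2)*sigma*v_i*dW_i, a normalization for
# which the mean-field stationary density has tail exponent 1 + 1/sigma^2.

import numpy as np

def simulate_wrm(N=1000, sigma=1.0, T=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    v = np.ones(N)                               # homogeneous initial wealth
    for _ in range(int(T / dt)):
        mean_others = (v.sum() - v) / (N - 1)    # average wealth of the others
        drift = mean_others - v                  # unit-rate exchange term
        noise = np.sqrt(2.0) * sigma * v * rng.normal(0.0, np.sqrt(dt), N)
        v = np.maximum(v + drift * dt + noise, 0.0)   # clip: discretization guard
    return v

v = simulate_wrm()
print("mean wealth:", v.mean(), "sample variance:", v.var())
```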
the empirical studies mentioned above report power - law exponents around , indicating that in this model , is needed to obtain realistic power - law behaviour of the wealth distribution .in our analytical calculations we assume ; strong noise ( ) is discussed separately at the end of the following section .to examine when the power - law regime is realised , we first investigate the time needed to reach the mean - field solution .such relaxation times were studied very recently in kinetic models of wealth distribution . given the homogeneous initial conditions ( ) , the exchange terms proportional to are zero at and can be neglected for small times .hence when is small , each evolves independently due to multiplicative noise , is lognormally distributed , and its variance is (t)=\exp[2\sigma^2t]-1=2\sigma^2t+o(t^2) ] grows without limit , in contradiction with the variance of which is finite for . to resolve this disagreement we have to accept that as given by is not a stationary solution .but what comes after the power - law regime ?since the fokker - planck equation for the joint probability distribution can not be solved analytically , we answer this question by investigating the average quantities and ( ) ; now we are considering and hence both are well defined .due to the assumed homogeneous network of interactions and the chosen initial conditions , all averages are identical and the same holds for the cross - terms ; effectively we are left with only two variables . from the it lemmait follows that and .after substitution of and averaging over all possible realisations , we obtain the exact set of equations,\\ \frac{{\mathrm{d}}{\langle v_i(t)v_j(t)\rangle}}{{\mathrm{d}}t}&=\frac2{n-1}\,\big [ { \langle v_i^2(t)\rangle}-{\langle v_i(t)v_j(t)\rangle}\big].}\ ] ] since we set ( ) , and the initial conditions are and ; for the general case see. independently of the initial conditions , for , has only the trivial stationary solution .this confirms that for a finite , there is no stationary distribution . by solving oneobtains the variance (t)={\langle v_i^2(t)\rangle}-{\langle v_i(t)\rangle}^2 ] .apart from a constant , it contains only terms proportional to ] cause the initial saturation of (t) ] eventually take over and cause the divergence of (t) ] the variance is almost constant and correlations are still small the system is in the power - law regime ( due to large computational complexity , no numerical results are shown here ) . 
eventually , for , the synchronized regime is established .the transition times given by , , and are shown as vertical dotted lines and agree well with the described changes of the system behaviour .( t) ] and for agents ( a ) and for agents ( b ) .analytical results following from are shown as lines , numerical results obtained by averaging over realisations are shown as symbols , .vertical dotted lines indicate the transition times , , and , left to right , respectively.,title="fig : " ] we should sound here a note of caution about the interpretation of the averages and and the wealth distribution .all these quantities are ensemble - based : if many copies of the system evolve independently for time , by examining the final wealths of agent one can estimate both the distribution and the averages .by contrast , when one speaks about an empirical wealth distribution , that is based on the wealth of all agents in one realisation only , it is population - based .however , when the number of realisations and the number of agents are large and the wealth correlations are small , ensemble- and population - based quantities are alike .such behaviour was observable also in the numerical simulations presented above .in the free and power - law regimes , the variance of wealth in each realisation was similar to ] is increasing . in the initial regime , this increase is due to growing variances of all agents wealth . in the power - law regime ,variances of wealth levels are approximately constant but their growing correlations lead to increasing ] is caused by exponentially growing variances of wealths . since correlations are large , ensemble- and population - based quantities are no longer equivalent .finally we remark that since , is fixed , and ] and diverge and must be replaced by different quantities . instead of the variance , one can use the mean absolute deviation which avoids second moments of the wealth distribution and hence can be used for any .the pearson s correlation coefficient can be replaced by a rank correlation coefficient ( kendall s or spearman s ) .all three proposed quantities are hard to handle in analytical calculations and with strong noise , numerical simulations of the system are extremely time - demanding .while we have obtained no definite results yet , preliminary outcomes suggest that in this case too the transition from the power - law regime occurs at a time proportional to the number of agents .now we generalize the exchange network to an arbitrary graph : denoting the set of neighbours of agent by , the number of neighbours by , the average number of neighbours by .we assume that each agent interacts equally with all neighbours and per unit time exchanges the whole wealth , hence notice that the matrix of exchanges is asymmetric .now , generalizes to where . by averaging over realisationswe obtain the set of equations for the stationary values of the average wealths which is solved by . 
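as a small illustration of the statistics proposed for the strong - noise case , the sketch below computes the mean absolute deviation and a spearman rank correlation on a heavy - tailed sample ; the lognormal test data are arbitrary and serve only to show the calculations .

```python
# robust statistics for the strong-noise case: mean absolute deviation in
# place of the variance, Spearman rank correlation in place of Pearson's.

import numpy as np
from scipy.stats import spearmanr

def mean_absolute_deviation(v):
    return np.mean(np.abs(v - v.mean()))

def rank_correlation(v1, v2):
    rho, _ = spearmanr(v1, v2)
    return rho

rng = np.random.default_rng(1)
v1 = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)     # heavy-tailed sample
v2 = v1 * rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
print(mean_absolute_deviation(v1), rank_correlation(v1, v2))
```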
assuming average wealth equal to , has the unique solution .this means that the topology of the exchange network is crucial for the distribution of wealth among the agents .consequently , when is small and hence wealth fluctuations are negligible , a power - law distribution of wealth can be purely a topological effect of a scale - free degree distribution in the network of agent exchanges .to proceed , and are again the key quantities .they fulfill the equations which can be derived similarly to .we set the initial conditions according to the stationary wealths as and thus and ( the general case is studied in ) .from follows which means that the growth of (t) ] . by comparing this stationary variance with , we obtain the transition time from the free regime to the power - law regime as which is identical to .further , from we see that the transition time from the power - law regime to the synchronized regime is proportional to and thus for the whole network it can be estimated as which is a generalization of .we see that for networks with a relatively small average degree , the power - law regime appears only for a limited time or not at all .we were unable to obtain an equivalent of the transition time for a general network .considering , for example , a simple star - like structure with one agent in the center and the remaining agents connected only to him , one can see that the transition time is small and does not scale with .this suggests that similarly to , is also of the order .this contradicts the findings presented in ( page 541 ) where they report stationary power - law tails for ; it is possible that their numerical results are influenced by finite - time and finite - size effects .there is still one more transition time to investigate .when the initial conditions are not set in line with the stationary wealths given by eq . , a certain time is needed to redistribute the excessive wealth levels over the network ; we say that the system is in the _equilibration regime_. since , noise terms do not contribute to the redistribution .thus , effectively simplifies to which leads to the exponential convergence of to the stationary value . 
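the stationary mean wealths on an arbitrary exchange network can also be obtained numerically by iterating the noise - free averaged dynamics to its fixed point . in the sketch below the exchange coefficients are taken as 1/k_i for each of agent i 's k_i neighbours , as described above ; under that assumption the fixed point comes out proportional to the node degree , consistent with the remark that a scale - free degree distribution alone can produce a power - law wealth distribution .

```python
# sketch: stationary mean wealths on an arbitrary exchange network, obtained
# by iterating the noise-free averaged dynamics; J[i, j] = 1/k_i for each of
# agent i's neighbours, so every agent's total outflow per unit time is v_i.

import numpy as np

def stationary_mean_wealth(adj, n_iter=10_000, dt=0.01):
    """adj: symmetric 0/1 adjacency matrix of a connected exchange network."""
    k = adj.sum(axis=1)                       # node degrees
    J = adj / k[:, None]                      # row-stochastic exchange matrix
    v = np.ones(len(k), dtype=float)          # start from equal wealth
    for _ in range(n_iter):
        inflow = J.T @ v                      # sum_j J[j, i] * v[j]
        v = v + dt * (inflow - v)             # outflow of agent i is v[i]
    return v

# star graph: a hub connected to 5 leaves; the hub ends up with most wealth
adj = np.zeros((6, 6), dtype=int)
adj[0, 1:] = adj[1:, 0] = 1
v = stationary_mean_wealth(adj)
print(v / v.mean())                           # roughly proportional to degree
```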
by an appropriate substitution we obtain an explicit solution whose time scale is set by the initial deviations from the stationary wealths. thus, the initial wealth distribution equilibrates in a finite time. since the transition from the free regime occurs roughly at the same time, the system passes from the equilibration regime directly to the power-law regime. we have shown that in the investigated model, agent wealths have no stationary distribution and the power-law tailed distribution reported in previous works is only transient. in addition, for any finite number of agents, their average wealth follows a multiplicative process with a fixed expected value and an increasing variance. hence, as illustrated earlier, the probability that a given agent's wealth falls below any fixed positive fraction of the expected wealth approaches one. we can conclude that the simple economy produced by the model is an uneasy one: the longer it evolves, the higher the probability that a given agent has wealth much smaller than any positive fraction of the expected wealth. there is also a more general lesson to be learned. in essence, the mean-field approximation here anchors the agent wealths to their expected values and thus weakens the diffusive nature of the studied stochastic system. mathematically speaking, the system behaviour depends on the order of the limits of infinite system size and infinite time: when the size limit is taken first there is a stationary wealth distribution, when the time limit is taken first there is none. this is an undesired consequence of the mean-field approximation which, as with other stochastic models, should be used with great caution. in particular, when using it, one should check whether the nature of the studied system is changed. to achieve this, in this paper we have used an aggregate quantity (the average wealth) and a quantity obtained using the mean-field approximation (the wealth variance). on the other hand, in some cases an anchoring term may be appropriate. for example, a simple taxation of wealth can be achieved by introducing a linear anchoring term into the evolution equation, with a coefficient representing the tax rate. then the set of equations for the variances and correlations has a nontrivial stationary solution for a positive tax rate; one can say that the proposed taxation stabilizes the system. notably, systems of coupled stochastic equations with multiplicative noise and negative feedback are common in the study of nonequilibrium phase transitions in magnetic systems. our work shows that this negative feedback is crucial for mean-field studies of such systems. in addition to the presented results, several questions remain open. first, for large times, the analytical form of the wealth distribution is unknown. second, for an arbitrary network of exchanges, the limiting value of the correlation and also the transition time are of interest. third, the strong noise case deserves more attention and perhaps an attempt at approximate analytical results.
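to illustrate numerically the central claim above (the expected wealth stays fixed while the typical wealth collapses), here is a minimal sketch, ours rather than the paper's, of a bare multiplicative process with unit expected value:

```python
# hedged illustration: geometric random walk v -> v * exp(sigma*xi - sigma^2/2),
# which keeps E[v] = 1 exactly while the median decays as exp(-sigma^2 * t / 2).
import numpy as np

rng = np.random.default_rng(0)
sigma, steps, samples = 0.2, 100, 100_000
v = np.ones(samples)                     # wealths of independent copies of one agent

for _ in range(steps):
    v *= np.exp(sigma * rng.normal(size=samples) - 0.5 * sigma**2)

print("mean      :", v.mean())           # remains close to 1 (fixed expected value)
print("median    :", np.median(v))       # about exp(-2) ~ 0.14 here, shrinking with time
print("P(v < 0.5):", (v < 0.5).mean())   # fraction below half the mean keeps growing
```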
finally, the studied model is simplistic, since it combines two ingredients of economy, trade and speculation, in a very unrealistic way. devising a more adequate model remains a future challenge. we acknowledge the hospitality of the comenius university (bratislava, slovakia) and the university of fribourg (fribourg, switzerland). we thank františek slanina for early discussions and zoltán kuscsik and joseph wakeling for helpful comments.

pareto v 1897 _cours d'économie politique_ (lausanne: rouge)
piggott j 1984 _economic record_ *60* 252-265
aoyama h et al 2000 _fractals_ *8* 293-300
drăgulescu a and yakovenko v m 2001 _physica a_ *299* 213-221
sinha s 2006 _physica a_ *359* 555-562
champernowne d g 1953 _the economic journal_ *63* 318-351
wold h o a and whittle p 1957 _econometrica_ *25* 591-595
stiglitz j e 1969 _econometrica_ *37* 382-397
ispolatov s et al 1998 _eur. phys. j. b_ *2* 267-276
drăgulescu a and yakovenko v m 2000 _eur. phys. j. b_ *17* 723-729
slanina f 2004 _phys. rev. e_ *69* 046102
patriarca m et al 2006 the abcd's of statistical many-agent economy models _preprint_ arxiv:physics/0611245
chatterjee a and chakrabarti b k 2007 _eur. phys. j. b_ *60* 135-149
bouchaud j-p and mézard m 2000 _physica a_ *282* 536-545
solomon s and richmond p 2001 _physica a_ *299* 188-197
di matteo t et al 2004 in _the physics of complex systems (new advances and perspectives)_ eds. mallamace f and stanley h e (amsterdam: ios press)
sornette d 1998 _phys. rev. e_ *57* 4811-4813
huang z-f and solomon s 2001 _physica a_ *294* 503-513
reed w j 2001 _economics letters_ *74* 15-19
quadrini v and ríos-rull j-v 1997 models of the distribution of wealth, research department, federal reserve bank of minneapolis
davies j b and shorrocks a f 2000 the distribution of wealth, in _handbook of income distribution (handbooks in economics)_ eds. atkinson a b and bourguignon f (amsterdam: north holland)
garlaschelli d and loffredo m i 2004 _physica a_ *338* 113-118
garlaschelli d and loffredo m i 2008 _j. phys. a: math. theor._ *41* 224018
gupta a k 2008 _physica a_ *387* 6819-6824
gardiner c w 2004 _handbook of stochastic methods, 3rd edition_ (berlin: springer)
one of the key socioeconomic phenomena to explain is the distribution of wealth. bouchaud and mézard have proposed an interesting model of economy [bouchaud and mézard (2000)] based on trade and investments of agents. in the mean-field approximation, the model produces a stationary wealth distribution with a power-law tail. in this paper we examine characteristic time scales of the model and show that for any finite number of agents, the validity of the mean-field result is time-limited and the model in fact has no stationary wealth distribution. further analysis suggests that for heterogeneous agents, the limitations are even stronger. we conclude with general implications of the presented results.
topological surgery is a mathematical technique used for changing the homeomorphism type of a topological manifold , thus for creating new manifolds out of known ones .more precisely , manifolds with homeomorphic boundaries may be attached together via a homeomorphism between their boundaries which can be used as ` glue ' .an _ -dimensional topological surgery _ on an -manifold is , roughly , the topological procedure whereby an appropriate -manifold with boundary is removed from and is replaced by another -manifold with the same boundary , using a ` gluing ' homeomorphism along the common boundary , thus creating a new -manifold .for example , all orientable surfaces may arise from the 2-dimensional sphere using surgery . in this paperwe extend significantly the preliminary results and early ideas presented in , , and . as we observe, topological surgery is exhibited in nature in numerous , diverse processes of various scales for ensuring new results .surgery in nature is usually performed on basic manifolds with or without boundary that undergo merging and recoupling .such processes are initiated by attracting ( or repelling ) forces acting on a sphere of dimension 0 or 1 . a large part of this work is dedicated to setting the topological ground for modeling such phenomena in dimensions 1,2 and 3 .namely , we introduce new theoretical concepts which are better adapted to the phenomena and which enhance the formal definition of surgery .more precisely , the new concepts are : * the introduction of forces : * a sphere of dimension 0 or 1 is selected in space and attracting ( or repelling ) forces act on it .these dynamics explain the intermediate steps of the formal definition and extend surgery to a continuous process caused by local forces .note that these intermediate steps can also be explained through morse theory .the theoretical forces that we introduce are also observed in the phenomena exhibiting surgery .for example , in dimension 1 , during chromosomal crossover ( see figure [ chromcrossover ] ) , the pairing is caused by mutual attraction of the parts of the chromosomes that are similar or homologous . in dimension 2 , the creation of tornadoes ( see figure [ tornado ] ( a ) ) is caused by attracting forces between the cloud and the earth while soap bubble splitting ( see figure [ bubbles ] ) is caused by the surface tension of each bubble which acts as an attracting force . * solid surgery : * the interior of the initial manifold is now filled in . for example , in dimension 1 this allows to model phenomena happening on surfaces such as the merging of oil slicks .an oil slick is seen as a disc , that is a continuum of concentric circles together with the center .an example in dimension 2 is the process of mitosis ( see figure [ mitosis ] ) , whereby a cell splits into two new cells .the cell is seen as a 3-ball , that is , a continuum of concentric spheres together with the central point .other examples comprise the formation of waterspouts ( see figure [ tornado ] ( b ) ) where we see the formation of the tornado s cylindrical ` cork ' and the creation of falaco solitons ( see figure [ falaco ] ) where the creation of two discs joined with an invisible thread is taking place in a pool of water . * embedded surgery : * all phenomena exhibiting surgery take place in 3-space . 
for this reason we introduce the notion of embedded 1- or 2-dimensional surgery which is taking place on an embedding of the initial manifold in 3-space .the ambient 3-space leaves room for the initial manifold to assume a more complicated configuration and allows the complementary space of the initial manifold to participate actively in the process .for example , in dimension 1 any embedding of is a knot and a related phenomenon is the dna recombination of circular dna ( see figure [ dna_recomb ] ) .an example in dimension 2 is the formation of black holes ( see figure [ blackhole ] ) where the whole space is involved in the process .note that the appearance of forces , enhanced with the notions of solid 1- and 2-dimensional surgery , can be all viewed within the context of embedded surgery .in fact all the above culminate to the notion of embedded solid 2-dimensional surgery and can be derived from there . * connection with a dynamical system : * finally , we establish a connection between these new notions applied on 2-dimensional topological surgery and the dynamical system presented in .we analyze how , with a slight perturbation of parameters , trajectories pass from spherical to toroidal shape through a ` hole drilling ' process .we show that our new topological notions are verified by both the local behavior of the steady state points of the system and the numerical simulations of its trajectories .this result give us on the one hand a mathematical model for 2-dimensional surgery and on the other hand a system that can model natural phenomena exhibiting these types of surgeries .this paper is organized as follows : in section [ intr ] we recall the topological notions that will be used and provide specific examples that will be of great help to readers that are not familiar with topological surgery . in section [ 1d ] , we introduce dynamics to 1-dimensional surgery , we define solid 1-dimensional surgery and we discuss 1-dimensional natural processes exhibiting these types of surgeries . in section [ 2d ]we extend these definitions to 2-dimensional surgery and discuss related 2-dimensional natural processes .we then use these new theoretical concepts in section [ connecting ] to pin down the relations among topological surgeries of different dimensions . as all natural phenomena exhibiting surgery ( 1 or 2-dimensional , solid or usual )take place in the ambient 3-space , in section [ s3 ] we present our 3-space and the duality of its descriptions .this allows us to define in section [ es3sp ] the notion of embedded surgery in .finally , our connection of solid 2-dimensional surgery with a dynamical system is established in section [ ds ] .in each dimension the basic closed ( compact without boundary ) , connected , oriented ( c.c.o . ) -manifold , is the -sphere , , which may be viewed as with all points at infinity compactified to one single point . for ,the space is the disjoint union of two one - point spaces and : .the product space is the disjoint union .we also need to recall that the basic connected , oriented -manifold with boundary is the solid -ball , . 
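for concreteness, the standard spaces recalled above can be written out as follows (a routine reconstruction in conventional notation rather than the authors' own symbols):

\[
S^{m}=\mathbb{R}^{m}\cup\{\infty\},\qquad
S^{0}=\{-1\}\sqcup\{+1\},\qquad
S^{0}\times D^{m}=D^{m}\sqcup D^{m},\qquad
D^{m}=\{x\in\mathbb{R}^{m}:\lVert x\rVert\le 1\}.
\]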
in particular for , other 3-manifolds with boundary , that we will be referring to ,are : the solid torus , which can be described as the product set , and the handlebodies , which generalize the solid torus , having higher genus .we recall the following topological notions : 1 .an embedding of a submanifold is _ framed _ if it extends to an embedding .an _ -embedding in an -dimensional manifold _ is an embedding .a _ framed -embedding in _ is an embedding , with core -embedding .[ surgery ] an _ m - dimensional n - surgery _ is the topological procedure of creating a new -manifold out of a given -manifold by removing a framed -embedding , and replacing it with , using the ` gluing ' homeomorphism along the common boundary .that is : note that from the definition , we must have .further , the _ dual m - dimensional -surgery _ on removes a dual framed -embedding such that , and replaces it with , using the ` gluing ' homeomorphism ( or ) along the common boundary .that is : note that resulting manifold may or may not be homeomorphic to . from the above definition , it follows that . for preliminary definitions behind the definitions of surgery such as topological spaces , homeomorphisms , embeddings and other related notionssee appendix [ appendix ] .for further reading , excellent references on the subject are .[ 1d_formale ] we only have one kind of surgery on a 1-manifold , the _1-dimensional 0-surgery _ where and : the above definition means that two segments are removed from and they are replaced by two different segments by reconnecting the four boundary points in a different way .there are two ways of reconnecting four points , both of which are illustrated in figure [ 1d_formal ] .for example , if we start with the circle , depending on the type of reconnection , we obtain two circles if is the standard embedding , see figure [ 1d_formal ] ( a ) , or one circle if is modified by twisting one of the emdeddings of , see figure [ 1d_formal ] ( b ) . more specifically , if we define the homeomorphism , the embedding used in ( b ) is .note that in one dimension , the dual case is also a 1-dimensional 0-surgery .for example , as seen in both figure [ 1d_formal ] ( a ) and ( b ) , the reverse processes are 1-dimensional 0-surgeries which give back . more specifically , starting with two circles ,if each segment of is embedded in a different circle , the result of 1-dimensional 0-surgery is one circle ( see the reverse process of figure [ 1d_formal ] ( a ) ) .[ 2d_formale ] starting with a 2-manifold , there are two types of surgeries . for and , we have the _2-dimensional 0-surgery _ whereby two discs are removed from and are replaced in the closure of the remaining manifold by a cylinder , which gets attached via a homeomorphism along the common boundary , comprising two copies of .the gluing homeomorphism of the common boundary can twist one or both copies of . for , the above operationchanges the homeomorphism type from the 2-sphere to that of the torus , see figure [ 2d_formal ] ( a ) . in fact ,every c.c.o .surface arises from the 2-sphere by repeated surgeries and each time the above process is performed the genus of the surface is increased by one .the other possibility of 2-dimensional surgery on is the _2-dimensional 1-surgery _ for and : here an annulus ( perhaps twisted ) is removed from and is replaced in the closure of the remaining manifold by two discs attached along the common boundary . for the result is two copies of .see figure [ 2d_formal ] ( b ) . 
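to fix notation, the general operation of definition [surgery] and the examples just given can be summarised in one formula (a standard formulation; the symbols M, g and chi are our own choices):

\[
\chi(M)=M'=\bigl(M\setminus g(S^{n}\times D^{m-n})\bigr)\;\cup_{\,g|_{S^{n}\times S^{m-n-1}}}\;\bigl(D^{n+1}\times S^{m-n-1}\bigr).
\]

in particular, a 1-dimensional 0-surgery removes \(S^{0}\times D^{1}\) (two arcs) and glues back \(D^{1}\times S^{0}\) (two arcs), a 2-dimensional 0-surgery removes \(S^{0}\times D^{2}\) (two discs) and glues back \(D^{1}\times S^{1}\) (a cylinder), and a 2-dimensional 1-surgery removes \(S^{1}\times D^{1}\) (an annulus) and glues back \(D^{2}\times S^{0}\) (two discs).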
from definition [ surgery] , it follows that a dual 2-dimensional 0-surgery is a 2-dimensional 1-surgery and vice versa .hence , figure [ 2d_formal ] ( a ) shows that a 2-dimensional 0-surgery on a sphere is the reverse process of a 2-dimensional 1-surgery on a torus . similarly , as illustrated in figure [ 2d_formal ] ( b ) , a 2-dimensional 1-surgery on a sphere is the reverse process of a 2-dimensional 0-surgery on two spheres . in the figurethe symbol depicts surgeries from left to right and their corresponding dual surgeries from right to left .1-dimensional 0-surgery happens in nature , in various scales , in phenomena where 1-dimensional splicing and reconnection occurs .for example , it happens on chromosomes during meiosis and produces new combinations of genes ( see figure [ chromcrossover ] ) , in site - specific dna recombination ( see figure [ dna_recomb ] ) whereby nature alters the genetic code of an organism , either by moving a block of dna to another position on the molecule or by integrating a block of alien dna into a host genome ( see ) , in magnetic reconnection , the phenomenon whereby cosmic magnetic field lines from different magnetic domains are spliced to one another , changing their patterns of connectivity with respect to the sources ( see figure [ magneticrecon ] from ) and in the reconnection of vortices in classical and quantum fluids ( see ) . in this sectionwe introduce dynamics which explains the process of 1-dimensional surgery , define the notion of solid 1-dimensional surgery and examine in more details the aforementioned natural phenomena .the formal definition of 1-dimensional 0-surgery gives a static description of the initial and the final stage whereas natural phenomena exhibiting 1-dimensional 0-surgery follow a continuous process . in order to address such phenomena or to understand how 1-dimensional 0-surgery happens, we need a non - static description .furthermore , in nature , 1-dimensional 0-surgery often happens locally , on arcs or segments .that is , the initial manifold is often bigger and we remove from its interior two segments .therefore , we also need dynamics that act locally . in figure [ 1d_local ], we introduce dynamics which explain the intermediate steps of the formal definition and extend surgery to a continuous process caused by local forces .the process starts with the two points specified on the manifold ( in red ) , on which attracting forces are applied ( in blue ) .we assume that these forces are created by an attracting center ( also in blue ) .then , the two segments , which are neighbourhoods of the two points , get close to one another . when the specified points ( or centers ) of two segments reach the attracting center , they touch and recoupling takes place giving rise to the two final segments , which split apart . as mentioned in previous section , we have two cases ( a ) and ( b ) , depending on the homemorphism .as mentioned in example [ 1d_formale ] , the dual case is also a 1-dimensional 0-surgery as it removes segments and replace them by segments .this is the reverse process which starts from the end and is illustrated in figure [ 1d_local ] as a result of the orange forces and attracting center which are applied on the ` complementary ' points .[ morse ] it is worth mentioning that the intermediate steps of surgery presented in figure [ 1d_local ] can also be viewed in the context of morse theory . 
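the local morse-theoretic picture invoked here is presumably the standard saddle; with our own choice of coordinates and signs it reads

\[
f(x,y)=x^{2}-y^{2},\qquad\text{with level sets } x^{2}-y^{2}=t,
\]

so that for \(t<0\) the level set is the hyperbola of the two approaching segments, for \(t=0\) it is the pair of crossing lines where reconnection takes place, and for \(t>0\) it is the hyperbola of the two final segments.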
by using the local form of a morse function, we can visualize the process of surgery by varying parameter of equation .for it is the hyperbola shown in the second instance of figure [ 1d_local ] where the two segments get close to one another .for it is the two straight lines where the reconnection takes place as shown in the third instance of figure [ 1d_local ] while for it represents the hyperbola of the two final segments shown in case ( a ) of the fourth instance of figure [ 1d_local ] .this sequence can be generalized for higher dimensional surgeries as well , however , in this paper we will not use this approach as we are focusing on the introduction of forces and of the attracting center .these local dynamics produce different manifolds depending on where the initial neighbourhoods are embedded . taking the known case of the standard embedding and , we obtain ( for both regular and dual surgery ) , see figure [ 1d_global ] ( a ) . furthermore , as shown in figure [ 1d_global ] ( b ) , we also obtain even if the attracting center is outside .note that these outcomes are not different than the ones shown in formal surgery ( figure [ 1d_formal ] ) but we can now see the intermediate instances .looking closer at the aforementioned phenomena , the described dynamics and attracting forces are present in all cases .namely , * magnetic reconnection * ( figure [ magneticrecon ] ) corresponds to a dual 1-dimensional 0-surgery ( see figure [ 1d_local ] ( b ) ) where is a dual embedding of the twisting homeomorphism defined in example [ 1d_formale ] of section [ definitions ] .the tubes are viewed as segments and correspond to an initial manifold ( or if they are connected ) on which the local dynamics act on two smaller segments .namely , the two magnetic flux tubes have a nonzero parallel net current through them , which leads to attraction of the tubes ( cf . ) . between them , a localized diffusion region develops where magnetic field lines may decouple .reconnection is accompanied with a sudden release of energy and the magnetic field lines break and rejoin in a lower energy state .in the case of * chromosomal crossover * ( figure [ chromcrossover ] ) , we have the same dual 1-dimensional 0-surgery as magnetic reconnection ( see figure [ 1d_local ] ( b ) ) . during this process , the homologous ( maternal and paternal ) chromosomes come together and pair , or synapse , during prophase .the pairing is remarkably precise and is caused by mutual attraction of the parts of the chromosomes that are similar or homologous .further , each paired chromosomes divide into two chromatids .the point where two homologous non - sister chromatids touch and exchange genetic material is called chiasma . at each chiasma , two of the chromatids have become broken and then rejoined ( cf . ) . in this process, we consider the initial manifold to be one chromatid from each chromosome , hence the initial manifold is on which the local dynamics act on two smaller segments .for * site - specific dna recombination * ( see figure [ dna_recomb ] ) , we have a 1-dimensional 0-surgery ( see figure [ 1d_local ] ( b ) ) with a twisted homeomorphism as defined in example [ 1d_formale ] of section [ definitions ] . here the initial manifold is a knot which is an embedding of in 3-space but this will be detailed in section [ es3sp ] . as mentioned in , enzymes break and rejoin the dna strands , hence in this case the seeming attraction of the two specified points is realized by the enzyme . 
note that , while both are genetic recombinations , there is a difference between chromosomal crossover and site - specific dna recombination .namely , chromosomal crossover involves the homologous recombination between two similar or identical molecules of dna and we view the process at the chromosome level regardless of the knotting of dna molecules . finally , * vortices reconnect * following the steps of 1-dimensional 0-surgery with a standard embedding shown in figure [ 1d_local ] ( a ) .the initial manifold is again .as mentioned in , the interaction of the anti - parallel vortices goes from attraction before reconnection , to repulsion after reconnection .there are phenomena which undergo the process of 1-dimensional 0-surgery but happen on surfaces , such as * tension on membranes or soap films * and the * merging of oil slicks*. in order to model topologically such phenomena we introduce the notion of solid 1-dimensional 0-surgery ._ solid 1-dimensional 0-surgery on the -disc _ is the topological procedure whereby a ribbon is being removed , such that the closure of the remaining manifold comprises two discs .see figure [ 1d_formal ] where the interior is now supposed to be filled in .this process is equivalent to performing 1-dimensional 0-surgeries on the whole continuum of concentric circles included in .more precisely , and introducing at the same time dynamics , we define : we start with the -disc of radius 1 with polar layering : where the radius of a circle and the limit point of the circles , that is , the center of the disc .we specify colinear pairs of antipodal points , with neighbourhoods of analogous lengths , on which the same colinear attracting forces act , see figure [ 1d_solid_type1 ] ( 1 ) where these forces and the corresponding attracting center are shown in blue .then , in ( 2 ) , antipodal segments get closer to one another or , equivalently , closer to the attracting center .note that here , the attracting center coincides with the limit point of all concentric circles , which is shown in green from instance ( 2 ) and on .then , as shown from ( 3 ) to ( 9 ) , we perform 1-dimensional 0-surgery on the whole continuum of concentric circles .the natural order of surgeries is as follows : first , the center of the segments that are closer to the center of attraction touch , see ( 4 ) .after all other points have also reached the center , see ( 5 ) , decoupling starts from the central or limit point .we define 1-dimensional 0-surgery on the limit point to be the two limit points of the resulting surgeries .that is , the effect of _ solid 1-dimensional 0-surgery on a point is the creation of two new points _ , see ( 6 ) .next , the other segments reconnect , from the inner , see ( 7 ) , to the outer ones , see ( 8) , until we have two copies of , see ( 9 ) and ( 10 ) .note that the proposed order of reconnection , from inner to outer , is the same as the one followed by skin healing , namely , the regeneration of the epidermis starts with the deepest part and then migrates upwards .the above process is the same as first removing the center from , doing the 1-dimensional 0-surgeries and then taking the closure of the resulting space .the resulting manifold is which comprises two copies of .we also have the reverse process of the above , namely ,_ solid 1-dimensional 0-surgery on two discs _ is the topological procedure whereby a ribbon joining the discs is added , such that the closure of the remaining manifold comprise one disc , as illustrated in figure [ 
1d_solid_type1 ] .this process is the result of the orange forces and attracting center which are applied on the ` complementary ' points .this operation is equivalent to performing 1-dimensional 0-surgery on the whole continuum of concentric circles in .we only need to define solid 1-dimensional 0-surgery on two limit points to be the limit point of the resulting surgeries .that is , the effect of _ solid 1-dimensional 0-surgery on two points is their merging into one point_. the above process is the same as first removing the centers from the , doing the 1-dimensional 0-surgeries and then taking the closure of the resulting space .the resulting manifold is which comprises one copy of .[ seifert ] in analogy to embedded 1-dimensional 0-surgery , we also have the notion of embedded solid 1-dimensional 0-surgery . as is the boundary of ,any knot is the boundary of a , so - called , seifert surface , so embedded solid 1-dimensional 0-surgery could be extended to a seifert surface .both types of 2-dimensional surgeries are present in nature , in various scales , in phenomena where 2-dimensional merging and recoupling occurs . natural processes undergoing _ 2-dimensional 0-surgery _ comprise , for example , drop coalescence , the formation of tornadoes and falaco solitons ( see figures [ tornado ] and [ falaco ] ) , gene transfer in bacteria ( see figure [ genetransfer ] ) and the formation of black holes ( see figure [ blackhole ] ) . on the other hand , phenomena undergoing _2-dimensional 1-surgery _ comprise soap bubble splitting ( see figure [ bubbles ] ) , the biological process of mitosis ( see figure [ mitosis ] ) and fracture as a result of tension on metal specimen ( see figure [ necking ] ) . in this sectionwe introduce dynamics which explains the process of 2-dimensional surgery , define the notions of solid 2-dimensional surgery and embedded solid 2-dimensional surgery and examine in more details the aforementioned natural phenomena . the key notion of this sectionis the embedded solid 2-dimensional surgery from which 2 and 1-dimensional surgeries can be derived .this will be discussed in section [ connecting ] .note that except for soap bubble splitting which is a phenomena happening on surfaces , the other mentioned phenomena involve all three dimensions and are , therefore , analyzed after the introduction of solid 2-dimensional surgery , in sections [ 2d0 ] and [ 2d1 ] . in order to model topologically phenomena exhibiting 2-dimensional surgery or to understand 2-dimensional surgery through continuity we need , also here , to introduce dynamics . in figure [ 2d_local_hl_vl ] ( a ) ,the 2-dimensional 0-surgery starts with two points , or poles , specified on the manifold ( in red ) on which attracting forces created by an attracting center are applied ( in blue ) .then , the two discs , neighbourhoods of the two poles , approach each other .when the centers of the two discs touch , recoupling takes place and the discs get transformed into the final cylinder . as mentioned in example[ 2d_formale ] , the dual case of 2-dimensional 0-surgery is the 2-dimensional 1-surgery and vice versa .this is also shown in figure [ 2d_local_hl_vl ] ( a ) where the reverse process is the _2-dimensional 1-surgery _ which starts with the cylinder and a specified cyclical region ( in red ) on which attracting forces created by an attracting center are applied ( in orange ) .a ` necking ' occurs in the middle which degenerates into a point and finally tears apart creating two discs . 
as also seen in figure [2d_local_hl_vl] (a), in the case of 2-dimensional 0-surgery, forces (in blue) are applied on two points, while in the case of the 2-dimensional 1-surgery, forces (in orange) are applied on a circle. in figure [2d_local_hl_vl] (b), we have an example of _twisted 2-dimensional 0-surgery_ where the two discs are embedded via a twisted homeomorphism while, in the dual case, the cylinder is embedded via a twisted homeomorphism. here the twisting homeomorphism rotates the two discs while, in the dual case, it rotates the top and bottom of the cylinder by given angles. more specifically, if we define a homeomorphism to be the corresponding rotations, then the twisted embedding is defined as its composition with the standard embedding; the homeomorphism for the dual case is defined analogously. these local dynamics produce different manifolds depending on the initial manifold where the neighbourhoods are embedded. taking the 2-sphere as initial manifold, the local dynamics of figure [2d_local_hl_vl] (a) are shown in figure [2d_global] (a) and (b), producing the same manifolds seen in formal 2-dimensional surgery (recall figure [2d_formal]). note that, as also seen in 1-dimensional surgery (figure [1d_global] (b)), if the blue attracting center in figure [2d_global] (a) were outside the sphere and the cylinder were attached externally, the result would still be a torus. (figure [2d_global]: (a) 2-dimensional 0-surgery and its dual 1-surgery; (b) 2-dimensional 1-surgery and its dual 0-surgery.) looking back at the natural phenomena happening on surfaces, an example is *soap bubble splitting* during which a soap bubble splits into two smaller bubbles. this process is the 2-dimensional 1-surgery shown in figure [2d_global] (b). the orange attracting force in this case is the surface tension of each bubble that pulls molecules into the tightest possible groupings. most natural phenomena undergoing 2-dimensional surgery do not happen on surfaces but are three-dimensional. therefore we introduce, also here, the notion of _solid 2-dimensional surgery_. there are two types of solid 2-dimensional surgery on the 3-ball, analogous to the two types of 2-dimensional surgery. the first one is the _solid 2-dimensional 0-surgery_, which is the topological procedure of removing a solid cylinder (positioned so that its two end discs lie in the boundary of the ball) and taking the closure of the remaining manifold, which is a regular (or twisted) solid torus. see figure [2d_formal] (a) where the interior is supposed to be filled in. the second type is the _solid 2-dimensional 1-surgery_, which is the topological procedure of removing a solid cylinder (positioned so that its lateral annulus lies in the boundary of the ball) and taking the closure of the remaining manifold, which is two copies of the 3-ball. see figure [2d_formal] (b) where the interior is supposed to be filled in. those processes are equivalent to performing 2-dimensional surgeries on the whole continuum of concentric spheres included in the 3-ball.
more precisely , and introducing at the same time dynamics , we define : [ continuum2d ] start with the -ball of radius 1 with polar layering : where the radius of a 2-sphere and the limit point of the spheres , that is , the center of the ball ._ solid 2-dimensional 0-surgery on _ is the topological procedure shown in figure [ 2d_solid_types1_2 ] ( a ) : on all spheres colinear pairs of antipodal points are specified , on which the same colinear attracting forces act .the poles have disc neighbourhoods of analogous areas .then , 2-dimensional 0-surgeries are performed on the whole continuum of the concentric spheres using the same homeomorphism . moreover , 2-dimensional 0-surgery on the limit point is defined to be the limit circle of the nested tori resulting from the continuum of 2-dimensional surgeries .that is , the effect of _2-dimensional 0-surgery on a point is the creation of a circle_. the process is characterized on one hand by the 1-dimensional core of the removed solid cylinder joining the antipodal points on the outer shell and intersecting each spherical layer in the two antipodal points and , on the other hand , by the homeomorphism , resulting in the whole continuum of layered tori .the process can be viewed as drilling out a tunnel along according to . for a twisted embedding , this agrees with our intuition that , for opening a hole , _ drilling with twisting _ seems to be the easiest way . on the other hand , _solid 2-dimensional 1-surgery on _ is the topological procedure where : on all spheres nested annular peels of the solid annulus of analogous areas are specified and the same coplanar attracting forces act on all spheres , see figure [ 2d_solid_types1_2 ] ( b ) . then , 2-dimensional 1-surgeries are performed on the whole continuum of the concentric spheres using the same homeomorphism .moreover , 2-dimensional 1-surgery on the limit point is defined to be the two limit points of the nested pairs of 2-spheres resulting from the continuum of 2-dimensional surgeries .that is , the effect of _2-dimensional 1-surgery on a point is the creation of two new points_. the process is characterized by the 2-dimensional central disc of the solid annulus and the homeomorphism , and it can be viewed as squeezing the central disc or , equivalently , as pulling apart the upper and lower hemispheres with possible twists if is a twisted embedding .this agrees with our intuition that for cutting a solid object apart , _ pulling with twisting _ seems to be the easiest way . for both types ,the above process is the same as : first removing the center from , performing the 2-dimensional surgeries and then taking the closure of the resulting space .namely we obtain : which is a solid torus in the case of solid 2-dimensional 0-surgery and two copies of in the case of solid 2-dimensional 1-surgery .as seen in figure [ 2d_solid_types1_2 ] , we also have the two dual solid 2-dimensional surgeries , which represent the reverse processes . 
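in symbols, the layering used in definition [continuum2d] and the outcome of the two solid surgeries can be summarised as follows (our notation; \(S^{2}_{r}\) denotes the sphere of radius \(r\) and \(\chi\) the surgery performed on each layer):

\[
D^{3}=\{0\}\cup\bigcup_{0<r\le 1}S^{2}_{r},\qquad
\overline{\bigcup_{0<r\le 1}\chi\bigl(S^{2}_{r}\bigr)}\;\cong\;
\begin{cases}
S^{1}\times D^{2} & \text{solid 2-dimensional 0-surgery,}\\
D^{3}\sqcup D^{3} & \text{solid 2-dimensional 1-surgery.}
\end{cases}
\]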
as already mentioned in example [ 2d_formale ] the dual case of 2-dimensional 0-surgeryis the 2-dimensional 1-surgery and vice versa .more precisely : the dual case of solid 2-dimensional 0-surgery on is the_ solid 2-dimensional 1-surgery on a solid torus _ whereby a solid cylinder filling the hole is added , such that the closure of the resulting manifold comprises one 3-ball .this is the reverse process shown in figure [ 2d_solid_types1_2 ] ( a ) which results from the orange forces and attracting center .it only remain to define the solid 2-dimensional 1-surgery on the limit circle to be the limit point of the resulting surgeries .that is , the effect of _ solid 2-dimensional 1-surgery on the core circle is that it collapses into one point_. the above process is the same as first removing the core circle from , doing the 2-dimensional 1-surgeries on the nested tori , with the same coplanar acting forces , and then taking the closure of the resulting space .given that the solid torus can be written as a union of nested tori together with the core circle : , the resulting manifold is which comprises one copy of .further , the dual case of solid 2-dimensional 1-surgery on is the _solid 2-dimensional 0-surgery on two -balls _ whereby a solid cylinder joining the balls is added , such that the closure of the resulting manifold comprise of one 3-ball .this is the reverse process shown in figure [ 2d_solid_types1_2 ] ( b ) which results from the blue forces and attracting center .we only need to define the solid 2-dimensional 0-surgery on two limit points to be the limit point of the resulting surgeries .that is , as in solid 1-dimensional surgery , the effect of _ solid 2-dimensional 0-surgery on two points is their merging into one point_. the above process is the same as first removing the centers from the , doing the 2-dimensional 0-surgeries on the nested spheres , with the same colinear forces , and then taking the closure of the resulting space .the resulting manifold is which comprises one copy of .the notions of 2-dimensional ( resp .solid 2-dimensional ) surgery , can be generalized from ( resp . ) to a surface ( resp . a handlebody ) of genus creating a surface ( resp . a handlebody ) of genus .solid 2-dimensional 0-surgery is often present in natural phenomena where attracting forces between two poles are present , such as the formation of tornadoes , the formation of falaco solitons , the formation of black holes , gene transfer in bacteria and drop coalescence .we shall discuss these phenomena in some detail pinning down their exhibiting of topological surgery .regarding * tornadoes * : except from their shape ( see figure [ tornado ] ) which fits the cylinder that gets attached in the definition of 2-dimensional 0-surgery , the process by which they are formed also follows the dynamics introduced in section [ solid2d ] .namely , if certain meteorological conditions are met , an attracting force between the cloud and the earth beneath is created and funnel - shaped clouds start descending toward the ground .once they reach it , they become tornadoes . 
in analogy to solid 2-dimensional 0-surgery , first the poles are chosen , one on the tip of the cloud and the other on the ground , and they seem to be joined through an invisible line .then , starting from the first point , the wind revolves in a helicoidal motion toward the second point , resembling ` hole drilling ' along the line until the hole is drilled .therefore , tornado formation undergoes the process of solid 2-dimensional 0-surgery with a twisted embedding , as in figure [ 2d_local_hl_vl ] ( b ) . the initial manifoldcan be considered as , that is , one 3-ball on the cloud and one on the ground .note that in this realization of solid 2-dimensional 0-surgery , the attracting center coincides with the ground and we only see helicoidal motion in one direction .another natural phenomenon exhibiting solid 2-dimensional 0-surgery is the formation of * falaco solitons * , see figure [ falaco ] ( for photos of pairs of falaco solitons in a swimming pool , see ) .note that the term ` falaco soliton ' appears in 2001 in .each falaco soliton consists of a pair of locally unstable but globally stabilized contra - rotating identations in the water - air discontinuity surface of a swimming pool .these pairs of singular surfaces ( poles ) are connected by means of a stabilizing thread .this thread corresponds to the ` invisible line ' mentioned in the process of tornado formation which is visible in this case .the two poles get connected and their rotation propagates below the water surface along the joining thread and the tubular neighborhood around it .this process is a solid 2-dimensional 0-surgery with a twisted embedding ( see figure [ 2d_local_hl_vl ] ( b ) ) where the initial manifold is the water contained in the volume of the pool where the process happens , which is homeomorphic to a 3-ball , that is .two differences compared to tornadoes are : here the helicoidal motion is present in both poles and the attracting center is not located on the ground but between the poles , on the topological thread joining them .it is also worth mentioning that the creation of falaco solitons is immediate and does not allow us to see whether the transitions of the 2-dimensional 0-surgery shown in figure [ 2d_local_hl_vl ] ( b ) are followed or not .however , these dynamics are certainly visible during the annihilation of falaco solitons .namely , when the topological thread joining the poles is cut , the tube tears apart and slowly degenerates to the poles until they both stops spinning and vanish .therefore , the continuity of our dynamic model is clearly present during the reverse process which corresponds to a solid 2-dimensional 1-surgery on a pair of falaco solitons , that is , a solid torus degenerating into a still swimming pool .note that it is conjectured in that the coherent topological features of the falaco solitons and , by extension , the process of solid 2-dimensional 0-surgery appear in both macroscopic level ( for example in the wheeler s wormholes ) and microscopic level ( for example in the spin pairing mechanism in the microscopic fermi surface ) . for more detailssee .another phenomenon undergoing solid 2-dimensional 0-surgery is the formation of a * black hole*. most black holes form from the remnants of a large star that dies in a supernova explosion and have a gravitational field so strong that not even light can escape . in the simulation of a black hole formation ( see ) , the density distribution at the core of a collapsing massive star is shown . 
in figure [ blackhole ]matter performs solid 2-dimensional 0-surgery as it collapses into a black hole .matter collapses at the center of attraction of the initial manifold creating the singularity , that is , the center of the black hole , which is surrounded by the toroidal accretion disc ( shown in white in figure [ blackhole ] ( c ) ) .solid 2-dimensional 0-surgery is also found in the mechanism of * gene transfer in bacteria*. see figure [ genetransfer ] ( also , for description and instructive illustrations see ) .the donor cell produces a connecting tube called `` pilus '' which attaches to the recipient cell , brings the two cells together and transfers the donor s dna .this process is similar to the one shown in figure [ 2d_solid_types1_2 ] ( b ) as two copies of merge into one , but here the attracting center is located on the recipient cell .this process is a solid 2-dimensional 0-surgery on two 3-balls .finally , * drop coalescence * is the merging of two dispersed drops into one .as gene transfer in bacteria , this process is also a solid 2-dimensional 0-surgery on two 3-balls , see figure [ 2d_solid_types1_2 ] ( b ) .the process of drop coalescence also exhibits the forces of our model .namely , the surfaces of two drops must be in contact for coalescence to occur .this surface contact is dependent on both the van der waals attraction and the surface repulsion forces between two drops .when the van der waals forces cause rupture of the film , the two surface films are able to fuse together , an event more likely to occur in areas where the surface film is weak . the liquid inside each drop is now in direct contact , and the two drops are able to merge into one . [ dsgenerality ] although in this section some natural processes were viewed as a solid 2-dimensional topological surgery on , we could also consider the initial manifold as being a 3-ball surrounding the phenomena and view it as a surgery on .concerning the process of tornado formation , this approach also has a physical meaning .namely , as the process is triggered by the difference in the conditions of the lower and upper atmosphere , the initial manifold can be considered as the 3-ball containing this air cycle .as already mentioned , the collapsing of the central disc of the sphere caused by the orange attracting forces in figure [ 2d_solid_types1_2 ] ( b ) can also be caused by pulling apart the upper and lower hemispheres of the 3-ball , that is , the causal forces can also be repelling .for example , during fracture of metal specimens under tensile forces , solid 2-dimensional 1-surgery is caused by forces that pull apart each end of the specimen .on the other hand , in the biological process of mitosis , both attracting and repelling forces forces are present . when the tension applied on metal specimens by tensile forces results in * necking * and then * fracture * , the process exhibits solid 2-dimensional 1-surgery .more precisely , in experiments in mechanics , tensile forces ( or loading ) are applied on a cylindrical specimen made of dactyle material ( steel , aluminium , etc . ) . up to some critical value of the forcethe deformation is homogeneous ( the cross - sections have the same area ) . at the critical valuethe deformation is localized within a very small area where the cross - section is reduced drastically , while the sections of the remaining portions increase slightly .this is the ` necking phenomenon ' .shortly after , the specimen is fractured ( view for details ) . 
in figure [ necking ]are the the basic steps of the process : void formation , void coalescence ( also known as crack formation ) , crack propagation , and failure . here , the process is not as smooth as our theoretical model and the tensile forces applied on the specimen are equivalent to repelling forces .the specimen is homeomorphic to the sphere shown in figure [ 2d_solid_types1_2 ] ( b ) hence the initial manifold is .solid 2-dimensional 1-surgery on also happens in the biological process of * mitosis * , where a cell splits into two new cells .see figure [ mitosis ] ( for description and instructive illustrations see for example ) .we will see that both aforementioned forces are present here . during mitosis ,the chromosomes , which have already duplicated , condense and attach to fibers that pull one copy of each chromosome to opposite sides of the cell ( this pulling is equivalent to repelling forces ) .the cell pinches in the middle and then divides by cytokinesis .the structure that accomplishes cytokinesis is the contractile ring , a dynamic assembly of filaments and proteins which assembles just beneath the plasma membrane and contracts to constrict the cell into two ( this contraction is equivalent to attracting forces ) . in the end , two genetically - identical daughter cells are produced .it is worth noting that the splitting of the cell into two coincide with the fact that 2-dimensional 1-surgery on a point is the creation of two new points ( see definition [ continuum2d ] ) .as shown in figure [ crossections12 ] , a 1-dimensional surgery is a cross - section of the corresponding 2-dimensional surgery which , in turn , is a crossection of the corresponding solid 2-dimensional surgery .this is true for both 1 or 0-surgeries ( see figure [ crossections12 ] ( a ) and ( b ) respectively ) . on the left - hand top and bottom pictures of figure [ crossections12 ] ( a ) and ( b ) we see the initial and final stage of solid 2-dimensional surgery .taking the intersection with the boundary of the 3-ball we pass to the middle pictures where we see the the initial and final pictures of 2-dimensional surgery .taking finally the intersection with a meridional plane gives rise to the initial and final stages of 1-dimensional surgery ( rightmost illustrations ) .the above concerns 0-surgeries in figure [ crossections12 ] ( a ) and 1-surgeries in figure [ crossections12 ] ( b ) .furthermore , as seen in figure [ crossections12_solid ] , we see the relation between solid surgeries in dimensions 2 and 1 .namely , solid 2-dimensional 0-surgery on the central point of the spherical nesting results in the central circle of the toroidal nesting .this circle has two intersecting points with the plane which are the result of solid 1-dimensional 0-surgery on the central point , see figure [ crossections12_solid ] ( a ) . on the other hand ,both solid 2-dimensional 1-surgery and solid 1-dimensional 0-surgery on the central point creates two points , see figure [ crossections12_solid ] ( b ) .all natural phenomena exhibiting surgery ( 1- or 2-dimensional , solid or usual ) take place in the ambient 3-space . as we will see in the next section , the ambient space can play an important role in the process of surgery . by _3-space _ we mean here the compactification of which is the 3-sphere .this choice , as opposed to , takes advantage of the duality of the descriptions of . 
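one compact way to see the duality alluded to here (a standard identity, not taken from the text) is to regard the 3-sphere as the boundary of the bidisc:

\[
S^{3}=\partial D^{4}=\partial\bigl(D^{2}\times D^{2}\bigr)=\bigl(S^{1}\times D^{2}\bigr)\cup_{S^{1}\times S^{1}}\bigl(D^{2}\times S^{1}\bigr),
\]

which exhibits the 3-sphere as two solid tori glued along their common boundary torus, the description used repeatedly in what follows.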
in this section we present the three most common descriptions of the 3-sphere in which this duality is apparent and which will set the ground for defining the notion of embedded surgery in it. beyond that, we also demonstrate how the descriptions are interrelated via solid 2-dimensional 0-surgery which, due to the duality of the dimensions, takes place in both the initial 3-ball and its complement. in dimension 3, the simplest c.c.o. 3-manifolds are the 3-sphere and the lens spaces. in this paper, however, we will focus on the 3-sphere. we start by recalling its three most common descriptions. first, the 3-sphere can be viewed as 3-space with all points at infinity compactified to one single point, see figure [layeredspheres] (b). 3-space, in turn, can be viewed as an unbounded continuum of nested 2-spheres centered at the origin, together with the point at the origin, see figure [layeredspheres] (a), and also as the de-compactification of the 3-sphere. so, the 3-sphere minus the point at the origin and the point at infinity can be viewed as a continuous nesting of 2-spheres. (figure [layeredspheres]: the 3-sphere as the compactification of 3-space.) secondly, the 3-sphere can be viewed as the union of two 3-balls, see figure [twoballs] (a). this second description is clearly related to the first one, since a (closed) neighbourhood of the point at infinity can stand for one of the two 3-balls. note that, when removing the point at infinity in figure [twoballs] (a), we can see the concentric spheres of the one 3-ball (in red) wrapping around the concentric spheres of the other 3-ball, see figure [twoballs] (b). this is another way of viewing the 3-sphere as the compactification of 3-space. this picture is the analogue of the stereographic projection of the 2-sphere on the plane, whereby the projections of the concentric circles of the south hemisphere together with the projections of the concentric circles of the north hemisphere form the well-known polar description of the plane with the unbounded continuum of concentric circles. (figure [twoballs]: the 3-sphere as the result of gluing two 3-balls.) the third well-known representation is as the union of two solid tori, glued via a torus homeomorphism along the common boundary. this homeomorphism maps a meridian of the one solid torus to a longitude of the other which has linking number zero with its core curve. the illustration in figure [splittingofs3] (a) gives an idea of this splitting; in the figure, the core curve of the first solid torus is in dashed green. so, the complement of a solid torus in the 3-sphere is another solid torus whose core curve (in dashed red) may be assumed to pass through the point at infinity. note that the 3-sphere minus the core curves of the two solid tori (the green and red curves in figure [splittingofs3]) can be viewed as a continuum of nested tori. when removing the point at infinity in the representation as a union of two solid tori, the core of the second solid torus becomes an infinite line and the nested tori of the second solid torus can now be seen wrapping around the nested tori of the first, see figure [splittingofs3] (b). therefore, 3-space can be viewed as an unbounded continuum of nested tori, together with the core curve of the first solid torus and the infinite line. this line joins pairs of antipodal points of all concentric spheres of the first description. note that in the nested spheres description (figure [layeredspheres]) the line pierces all spheres, while in the nested tori description the line is the `untouched' limit circle of all tori. (figure [splittingofs3]: (a) the 3-sphere as a union of two solid tori; (b) de-compactified view.) [glatz] it is worth observing the resemblance of figure [splittingofs3] (b) to the well-known representation of the *earth's magnetic field*.
a numerical simulation of the earth's magnetic field via the glatzmaier-roberts geodynamo model was made in, see figure [geodynamo]. the magnetic field lines lie on nested tori and comprise a visualization of the decompactified view of the 3-sphere as two tori. (figure [geodynamo]: the earth's magnetic field as two tori.) [hopf] it is also worth mentioning that another way to visualize the 3-sphere as two solid tori is the *hopf fibration*, which is a map of the 3-sphere onto the 2-sphere. the parallels of the 2-sphere correspond to the nested tori, the north pole corresponds to the core curve of the one solid torus while the south pole corresponds to the core curve of the other. an insightful animation of the hopf fibration can be found in. the connection between the first two descriptions of the 3-sphere was already discussed in the previous section. the third description is a bit harder to connect with the first two; we shall do this here. a way to see this connection is the following. consider the description of the 3-sphere as the union of two 3-balls (figure [twoballs]). combining this with the third description (figure [splittingofs3]), we notice that both 3-balls are pierced by the core curve of the second solid torus. therefore, the 3-sphere can be viewed as the first solid torus to which a solid cylinder is attached via the gluing homeomorphism: this solid cylinder is part of the second solid torus, a `cork' filling the hole of the first. its core curve is an arc, part of the core curve of the second solid torus. see figure [s3ballstotori]. the second ball (figure [twoballs]) can be viewed as what remains after removing the `cork': in other words, the second solid torus is cut into two solid cylinders, one comprising the `cork' of the first solid torus and the other comprising the second 3-ball. (figure [s3ballstotori]: from the 3-sphere as two tori to the 3-sphere as two balls.) [truncate] if we remove a whole neighbourhood of the point at infinity and focus on the remaining 3-ball, the line of the previous picture is truncated to an arc and the solid cylinder is truncated to the `cork'. we will now examine how we can pass from the two-ball description to the two-tori description via solid 2-dimensional 0-surgery. we start with two points that have a certain distance between them. let the solid ball having the arc joining them as a diameter be our initial 3-ball; we define this 3-ball as the `truncated' space on which we will focus. when the center of this ball becomes attracting, forces are induced on the two points and solid 2-dimensional 0-surgery is initiated. the complement space is the other solid ball containing the point at infinity, see figure [twoballs]. the joining arc is seen as part of a simple closed curve passing through the point at infinity. in figure [s3andsurgery_2d0] (1) this is shown in the 3-sphere, while the corresponding panel shows the decompactified view in 3-space. in figure [s3andsurgery_2d0] (2), we see the `drilling' along the arc as a result of the attracting forces. this is exactly the same process as in figure [2d_solid_types1_2] if we restrict it to the initial ball. but since we are in the 3-sphere, the complement space participates in the process and, in fact, it is also undergoing solid 2-dimensional 0-surgery. in figure [s3andsurgery_2d0] (3), we can see that, as surgery transforms the initial solid ball into a solid torus, its complement is transformed into the complementary solid torus. that is, the nesting of concentric spheres of each ball is transformed into the nesting of concentric tori in the interior of the corresponding solid torus. this is a double surgery with one attracting center which is inside the first 3-ball (in grey) and outside the second 3-ball (in red).
by definition [continuum2d], the point at the origin (in green) turns into the core curve of the resulting solid torus (in green). figure [s3andsurgery_2d0] (3) is exactly the decompactified view of the 3-sphere as two solid tori as shown in figure [splittingofs3] (b), while the corresponding panel of figure [s3andsurgery_2d0] (3) is the view in the 3-sphere as shown in figure [splittingofs3] (a). figure [s3andsurgery_2d0] shows that one can pass from the second description of the 3-sphere to the third by performing solid 2-dimensional 0-surgery (with the standard embedding homeomorphism) along the chosen arc. it is worth mentioning that this connection between the descriptions of the 3-sphere and solid 2-dimensional 0-surgery is a dynamic way to visualize the connection established in section [corking]. (figure [s3andsurgery_2d0]: passing between the descriptions of the 3-sphere via solid 2-dimensional 0-surgery.) in this section we define the notion of _embedded surgery in 3-space_. as we will see, when embedded surgery occurs, depending on the dimension of the manifold, the ambient space either leaves `room' for the initial manifold to assume a more complicated configuration or it participates more actively in the process. we will now concretely define the notion of embedded m-dimensional n-surgery in some ambient sphere and we will then focus on the 3-dimensional ambient case. _an embedded m-dimensional n-surgery_ is an m-dimensional n-surgery where the initial manifold is embedded in the ambient sphere; namely, according to definition [surgery], the surgery is performed on the image of this embedding. from now on we fix the ambient space to be the 3-sphere. embedding surgery allows us to view it as a process happening in 3-space instead of abstractly. in the case of embedded 1-dimensional 0-surgery on a circle, the ambient space gives enough `room' for the initial 1-manifold to become any type of knot. hence, embedding _allows the initial manifold to assume a more complicated homeomorphic configuration_. this will be analyzed further in section [embedded1d]. passing now to 2-dimensional surgeries, let us first note that embedded 2-dimensional surgery is often used as a theoretical tool in various proofs in low-dimensional topology. further, an embedding of a sphere in the 3-sphere presents no knotting because knots require embeddings of codimension 2. however, in this case the ambient space plays a different role. namely, embedding 2-dimensional surgeries _allows the complementary space of the initial manifold to participate actively in the process_. indeed, while some natural phenomena undergoing surgery can be viewed as `local', in the sense that they can be considered independently from the surrounding space, some others are intrinsically related to the surrounding space. this relation can be both _causal_, in the sense that the ambient space is involved in the triggering of the forces causing surgery, and _consequential_, in the sense that the forces causing surgery can have an impact on the ambient space in which they take place. this will be analyzed in sections [embedded2d0] and [embedded2d1]. we will now get back to site-specific *dna recombination* (see section [1dphenomena]), in order to better define this type of surgery.
as seen in this process ( recall figure [ dna_recomb ] )the initial manifold of 1-dimensional 0-surgery can be a knot , in other words , an embedding of the circle in 3-space .we therefore introduce the notion of _ embedded 1-dimensional 0-surgery _ whereby the initial manifold is embedded in the 3-space .this notion allows the topological modeling of phenomena with more complicated initial 1-manifolds .as mentioned , for our purposes , we will consider as our standard 3-space .for details on the descriptions of , see section [ decsr ] .since a knot is by definition an embedding of in or , in this case embedded 1-dimensional surgery is the so - called _ knot surgery_. it is worth mentioning that there are infinitely many knot types and that 1-dimensional surgery on a knot may change the knot type or even result in a two - component link .a good introductory book on knot theory is among many other .looking back to the process of dna recombination which exhibits embedded 1-dimensional 0-surgery , a dna knot is the self - entanglement of a single circular dna molecule . with the help of certain enzymes ,site - specific recombination can transform supercoiled circular dna into a knot or link .the first electron microscope picture of knotted dna was presented in . in this experimental study , we see how genetically engineered circular dna molecules can form dna knots and links through the action of a certain recombination enzyme . a similar picture is presented in figure [ dna_recomb ] , where site - specific recombination of a dna molecule produces the hopf link .another theoretical example of knot surgery comprises the knot or link diagrams involved in the _ skein relations _satisfied by * knot polynomials * , such as the jones polynomial and the kauffman bracket polynomial .for example , the illustration in figure [ skeinrelations ] represents a so - called ` conway triple ' , that is , three knot or link diagrams , and which are identical everywhere except in the region of a crossing and the polynomials of these three links satisfy a given linear relation .in section [ consurg ] we showed how we can pass from the two - ball description to the two - tori description of . although we had not yet defined it at that point , the process we described is , of course , an embedded solid 2-dimensional 0-surgery in on an initial manifold .it is worth mentioning that all natural processes undergoing embedded solid 2-dimensional 0-surgery on an initial manifold can be also viewed in this context .for example , if one looks at the formation of black holes and examines it as an independent event in space , this process shows a decompactified view of the passage from a two 3-ball description of , that is , the core of the star and the surrounding space , to a two torus description , that is , the accretion disc surrounding the black hole ( shown in white in the third instance of figure [ blackhole ] ) and the surrounding space . 
in this section , we will see how some natural phenomena undergoing solid 2-dimensional 0-surgery exhibit the causal or consequential relation to the ambient space mentioned in section [ es3sp2 ] and are therefore better described by considering them as embedded in .for example , during the formation of * tornados * , recall figure [ tornado ] ( a ) , the process of solid 2-dimensional 0-surgery is triggered by the difference in the conditions of the lower and upper atmosphere .although the air cycle lies in the complement space of the initial manifold , it is involved in the creation of funnel - shaped clouds that will join the two spherical neighborhoods ( one in the cloud and one in the ground ) .therefore _ the cause of the phenomenon extends beyond its initial manifold and surgery is the outcome of global changes_. we will now discuss phenomena where _ the outcome of the surgery process propagates beyond the final manifold_. a first example are * waterspouts*. after their formation , the tornado s cylindrical ` cork ' , that is , the solid cylinder homeomorphic to the product set , has altered the whole surface of the sea ( recall figure [ tornado ] ( b ) ) . in other words ,the spiral pattern on the water surface extends beyond the initial spherical neighborhood of the sea , which is represented by one of the two 3-balls of the initial manifold . as another example , during the formation of * black holes *, the strong gravitational forces have altered the space surrounding the initial star and the singularity is created outside the final solid torus . in all these phenomena ,the process of surgery alters matter outside the manifold in which it occurs .in other words , the effect of the forces causing surgery propagates to the complement space , thus causing a more global change in 3-space .[ duality2d0 ] looking back at figure [ s3andsurgery_2d0 ] , it is worth pinning down the following duality of embedded solid 2-dimensional 0-surgery for : the attraction of two points lying on the boundary of segment by the center of can be equivalently viewed in the complement space as the repulsion of these points by the center of ( that is , the point at infinity ) on the boundary of curve ( or line , if viewed in ) . we will now discuss the process of embedded solid 2-dimensional 1-surgery in in the same way we did for the embedded solid 2-dimensional 0-surgery in , recall figure [ s3andsurgery_2d0 ] .taking again as the initial manifold , embedded solid 2-dimensional 1-surgery is illustrated in figure [ s3andsurgery_2d1 ] .the process begins with disc in the 3-ball on which colinear attracting forces act , see instances ( 1 ) and ( 1 ) for the decompactified view . in ( 3 ) , the initial 3-ball is split in two new 3-balls and . by definition [ continuum2d ], the point at the origin ( in green ) evolves into the two centers of and ( in green ) .this is exactly the same process as in figure [ 2d_solid_types1_2 ] if we restrict it to , but since we are in , the complement space is also undergoing , by symmetry , solid 2-dimensional 1-surgery .again , this is a double surgery with one attracting center which is inside the first 3-ball ( in yellow ) and outside the second 3-ball ( in red ) .this process squeezes the central disc of while the central disc of engulfs disc and becomes the separating plane .as seen in instance ( 3 ) of figure [ s3andsurgery_2d1 ] , the process alters the existing complement space to and creates a new space which can be considered as the void between and . 
by viewing the process in this way , we pass from a two 3-balls description of to another one , that is , from to .[ duality2d1 ] the duality described in 2-dimensional 0-surgery is also present in 2-dimensional 1-surgery .namely , the attracting forces from the circular boundary of the central disc to the center of can be equivalently viewed in the complement space as repelling forces from the center of ( that is , the point at infinity ) to the boundary of the central disc , which coincides with the boundary of .all natural phenomena undergoing embedded solid 2-dimensional 1-surgery take place in the ambient 3-space .however , we do not have many examples of such phenomena which demonstrate the causal or consequential effects discussed in section [ es3sp2 ] . yet one could , for example , imagine taking a solid material specimen that has started necking and immerse it in some liquid until its pressure causes fracture to the specimen . in this casethe complement space is the liquid and it triggers the process of surgery . finally , the annihilation of falaco solitons is also a case of embedded solid 2-dimensional 1-surgery .the topological thread can be cut by many factors but in all cases these are related to the complement space .so far , inspired by natural processes undergoing surgery , we have extended the formal definition of topological surgery by introducing new notions such as forces , solid surgery and embedded surgery . however , in our schematic models , time and dynamics were not introduced by equations . in this sectionwe connect topological surgery , enhanced with these notions , with a dynamical system .we will see that , with a small change in parameters , the trajectories of its solutions are performing embedded solid 2-dimensional 0-surgery .therefore , this dynamical system constitutes a specific set of equations modeling natural phenomena undergoing embedded solid 2-dimensional 0-surgery .more specifically , we will see that the change of parameters affects the eigenvectors and induces a flow along a segment joining two steady state points .this segment corresponds to the segment introduced in section [ s3 ] and the induced flow represents the attracting forces shown in figure [ 2d_solid_types1_2 ] ( a ) .finally , we will see how our topological definition of solid 2-dimensional 0-surgery presented in section [ solid2d ] is verified by our numerical simulations , and we will see , in particular , that surgery on a steady point becomes a limit cycle . in ,n.samardzija and l.greller study the behavior of the following dynamical system ( ) that generalizes the classical lotka volterra problem into three dimensions : in subsequent work , the authors present a slightly different model , provide additional numerical simulations and deepen the qualitative analysis done in .since both models coincide in the parametric region we are interested in , we will use the original model and notation and will briefly present some key features of the analyses done in and .the system ( ) is a two - predator and one - prey model , where the predators do not interact directly with one another but compete for prey . 
as are populations , only the positive solutions are considered in this analysis .it is worth mentioning that , apart from a population model , ( ) may also serve as a biological model and a chemical model , for more details see .the parameters are analyzed in order to determine the bifurcation properties of the system , that is , to study the changes in the qualitative or topological structure of the family of differential equations ( ) .as parameters affect the dynamics of constituents , the authors were able to determine conditions for which the ecosystem of the three species results in steady , periodic or chaotic behavior .more precisely , the authors derive five steady state solutions for the system but only the three positive ones are taken into consideration .these points are : it is worth reminding here that a steady state ( or singular ) point of a dynamical system is a solution that does not change with time .let , now , be the jacobian of ( ) evaluated at for and let the sets and to be , respectively , the eigenvalues and the corresponding associated eigenvectors of .these are as follows : , \left[\begin{array}{c } 0 \\ 1 \\ 0 \end{array}\right ] , \left[\begin{array}{c } 0 \\ 0 \\ 1 \end{array}\right ] \right\}\ ] ] , \left[\begin{array}{c } 1 \\ \frac{c-\sqrt{(c-2)^2 - 8}}{2 }\\ 0 \end{array}\right ] , \left[\begin{array}{c } 1 \\ \frac{c+\sqrt{(c-2)^2 - 8}}{2 } \\ 0 \end{array}\right ] \right\}\ ] ] , \left[\begin{array}{c } 1 \\ 0 \\ \frac{-1-\sqrt{1 - 8b(1+c\sqrt{b / a})}}{2b } \end{array}\right ] , \left[\begin{array}{c } 1 \\ 0 \\ \frac{-1+\sqrt{1 - 8b(1+c\sqrt{b / a})}}{2b } \end{array}\right ] \right\}\ ] ] using the sets of eigenvalues and eigenvectors presented above , the authors characterize in , the local behavior of the dynamical system around these three points using the hartman - grobman ( or linearization ) theorem . since and , is a saddle point for all values of parameters .however , the behavior around and changes as parameters are varied .the authors show that the various stability conditions can be determined by only two parameters : and .it is also shown in that stable solutions are generated left of and including the line while chaotic / periodic regions appear on the right of the line .we are interested in the behavior of ( ) as it passes from stable to chaotic / periodic regions . therefore we will focus and analyze the local behavior around and and present numerical simulations for : stable region ( a ) where and and chaotic / periodic region ( b ) where and . * region ( a ) * setting and equating the right side of ( ) to zero , one finds as solution the one - dimensional singular manifold : that passes through the points and .since all points on are steady state points , there is no motion along it . for , _ is an unstable center _ while _ is a stable center _( for a complete analysis of all parametric regions see ) .this means that if denote the eigenvalues of either or with and , then and for while and for .moreover , the point is the center of .the line segment , and supports attracting type singularities ( and includes ) while the line segment defined by , and supports unstable singularities ( and includes ) , for details see .more precisely , each attracting point corresponds to an antipodal repelling point , the only exception being the center of which can be viewed as the spheroid of 0-diameter . the local behavior of ( ) around and in this region together with line shown in figure [ eigenvectors2 ] ( a ) . 
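since the equations of system ( ) did not survive the text extraction , the short python sketch below is offered only as an illustration . it assumes the three - dimensional lotka - volterra form reported by samardzija and greller , dx / dt = x - xy + cx^2 - azx^2 , dy / dt = - y + xy , dz / dt = - bz + azx^2 , which is consistent with the eigenvalue expressions quoted above , and it prints the eigenvalues of the jacobian at the three non - negative steady states , so that the centre / vortex behaviour described in the text can be checked numerically . the parameter values are illustrative guesses , not the ones used in the original simulations .

import numpy as np

def jacobian(p, a, b, c):
    """Jacobian of the assumed vector field
    f = (x - x*y + c*x**2 - a*z*x**2, -y + x*y, -b*z + a*z*x**2)."""
    x, y, z = p
    return np.array([
        [1.0 - y + 2.0 * c * x - 2.0 * a * z * x, -x, -a * x ** 2],
        [y, x - 1.0, 0.0],
        [2.0 * a * z * x, 0.0, a * x ** 2 - b],
    ])

def steady_states(a, b, c):
    """The three non-negative steady states of the assumed system."""
    x2 = np.sqrt(b / a)
    return (np.array([0.0, 0.0, 0.0]),
            np.array([1.0, 1.0 + c, 0.0]),
            np.array([x2, 0.0, (1.0 + c * x2) / (a * x2)]))

for label, (a, b, c) in [("stable region (a), a = b", (3.0, 3.0, 2.0)),
                         ("chaotic/periodic region (b), a < b", (2.9851, 3.0, 2.0))]:
    print(label)
    for name, p in zip(("P0", "P1", "P2"), steady_states(a, b, c)):
        eigenvalues = np.linalg.eigvals(jacobian(p, a, b, c))
        print(f"  {name} = {np.round(p, 4)}   eigenvalues: {np.round(eigenvalues, 4)}")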
a trajectory ( or solution ) initiated near in the repelling segmentexpands until it gets trapped by the attracting segment , forming the upper and lower hemisphere of a distinct sphere .hence , a nest of spherical shells surrounding line is formed , see figure [ lv_surgery ] ( a ) . moreover , the nest fills the entire positive space with stable solutions .by changing parameter space from ( a ) to ( b ) .indices 1,2 and 3 indicate the first , second and third component in and .,width=529 ] * region ( b ) * for and , _ is an inward unstable vortex _ and _ is an outward stable vortex_. this means that in both cases they must satisfy the conditions and with , the conjugate of .the eigenvalues of must further satisfy and , while the eigenvalues of must further satisfy and .the local behaviors around and for this parametric region are shown in figure [ eigenvectors2 ] ( b ) .it is worth mentioning that figure [ eigenvectors2 ] ( b ) reproduces figure 1 of with a change of the axes so that the local behaviors of and visually correspond to the local behaviors of the trajectories in figure [ lv_surgery ] ( b ) around the north and the south pole .note now that the point as well as the eigenvectors corresponding to its two complex eigenvalues , all lie in the . on the other hand , the point and also the eigenvectors corresponding toits two complex eigenvalues all lie in the .the flow along line produced by the actions of these eigenvectors forces trajectories initiated near to wrap around and move toward in a motion reminiscent of hole drilling .the connecting manifold is also called the ` slow manifold ' in due to the fact that trajectories move slower when passing near it . as trajectories reach , the eigenvector corresponding to the real eigenvalue of breaks out of the and redirects the flow toward . as shown in figure [ lv_surgery ] ( a ) and ( b ) , as moves to , this process transforms each spherical shell to a toroidal shell .the solutions scroll down the toroidal surfaces until a limit cycle ( shown in green in figure [ lv_surgery ] ( b ) ) is reached .it is worth pointing out that this limit cycle is a torus of 0-diameter and corresponds to the sphere of 0-diameter , namely , the central steady point of also shown in green in figure [ lv_surgery ] ( a ) .to ( b ) .*,width=642 ] however , as the authors elaborate in , while for the entire positive space is filled with nested spheres , when , only spheres up to a certain volume become tori .more specifically , quoting the authors : to preserve uniqueness of solutions , the connections through the slow manifold are made in a way that higher volume shells require slower , or higher resolution , trajectories within the bundle " . as they further explain , to connect all shells through , ( )would need to possess an infinite resolution .as this is never the case , the solutions evolving on shells of higher volume are ` choked ' by the slow manifold .this generates solution indetermination , which forces higher volume shells to rapidly collapse or dissipate .the behavior stabilizes when trajectories enter the region where the choking becomes weak and weak chaos appears . as shown in both and , the outermost shell of the toroidal nesting is a fractal torus . 
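the passage from spherical to toroidal shells can also be reproduced by direct integration . the sketch below ( same assumed samardzija - greller equations as before ; the parameter values , the initial point and the integration time are our own choices , not those of the original simulations ) integrates one trajectory in each parameter region ; plotting the resulting points with any 3d plotting tool reproduces qualitatively the nested - sphere and torus pictures described above .

import numpy as np
from scipy.integrate import solve_ivp

def lv3(t, u, a, b, c):
    # assumed three-dimensional Lotka-Volterra field: one prey x, two predators y and z
    x, y, z = u
    return [x - x * y + c * x ** 2 - a * z * x ** 2,
            -y + x * y,
            -b * z + a * z * x ** 2]

cases = {
    "region (a), spherical shells": (3.0, 3.0, 2.0),
    "region (b), toroidal shells": (2.9851, 3.0, 2.0),
}
u0 = [1.0, 1.2, 0.3]          # illustrative starting point in the positive octant
for label, pars in cases.items():
    sol = solve_ivp(lv3, (0.0, 200.0), u0, args=pars,
                    max_step=0.02, rtol=1e-9, atol=1e-12)
    x, y, z = sol.y
    print(f"{label}:  x in [{x.min():.3f}, {x.max():.3f}],"
          f"  y in [{y.min():.3f}, {y.max():.3f}],  z in [{z.min():.3f}, {z.max():.3f}]")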
note that in figure [ lv_surgery ] ( b ) we do not show the fractal torus because we are interested in the interior of the fractal torus which supports a topology stratified with toroidal surfaces .hence , all trajectories are deliberately initiated in its interior where no chaos is present .it is worth pointing out that figure [ lv_surgery ] reproduces the numerical simulations done in .more precisely , figure [ lv_surgery ] ( a ) represents solutions of ( ) for and trajectories initiated at points ] , ] and ] , ] and $ ] . as already mentioned , as changes to , changes from an unstable center to an inward unstable vortex and changes from a stable center to an outward stable vortex .it is worth reminding that this change in local behavior is true not only for the specific parametrical region simulated in figure [ lv_surgery ] , but applies to all cases satisfying . for detailswe refer the reader to tables ii and iii in that recapitulate the extensive diagrammatic analysis done therein .finally , it is worth observing the changing of the local behavior around and in our numerical simulations . in figure [ lv_surgery ] ( a ) ,for we have : while in figure [ lv_surgery ] ( b ) , for , both centers change to vortices ( inward unstable and outward stable ) through the birth of the first eigenvalue shown in bold ( negative and positive respectively ) : [ nummethods ] the use of different numerical methods may affect the shape of the attractor .for example , as mentioned in , higher resolution produces a larger fractal torus and a finer connecting manifold .however , the ` hole drilling ' process and the creation of a toroidal nesting is always a common feature . in this section, we will focus on the process of embedded solid 2-dimensional 0-surgery on a 3-ball viewed as a continuum of concentric spheres together with their common center : .recall from section [ solid2d ] that the process is defined as the union of 2-dimensional 0-surgeries on the whole continuum of concentric spheres and on the limit point . for each spherical layer ,the process starts with attracting forces acting between , i.e two points , or poles , centers of two discs . in natural phenomena undergoing solid 2-dimensional 0-surgery , such as tornadoes ( recall figure [ tornado ] ) or falaco solitons ( recall figure [ falaco ] ) , these forces often induce a helicoidal motion from one pole to the other along the line joining them .having presented the dynamical system ( ) in section [ ds ] and its local behavior in section [ lbandsimu ] , its connection with embedded solid 2-dimensional 0-surgery on a 3-ball is now straightforward . to be precise, surgery is performed on the manifold formed by the trajectories of ( ) . 
indeed , as seen in figure [ lv_surgery ] ( a ) and ( b ) , with a slight perturbation of parameters , trajectories pass from spherical to toroidal shape through a ` hole drilling ' process along a slow manifold which pierces all concentric spheres .the spherical and toroidal nestings in figures [ 2d_solid_types1_2 ] ( a ) and [ lv_surgery ] are analogous .the attracting forces acting between the two poles shown in blue in the first instance of figures [ 2d_solid_types1_2 ] ( a ) are realized by the flow along ( also shown in blue in figure [ eigenvectors2 ] ( b ) ) .when , the action of the eigenvectors is an attracting force between and acting along , which drills each spherical shell and transforms it to a toroidal shell .furthermore , in order to introduce solid 2-dimensional 0-surgery on as a new topological notion , we had to define that 2-dimensional 0-surgery on a point is the creation of a circle .the same behavior is seen in ( ) .namely , surgery on the limit point , which is a steady state point , creates the limit cycle which is the limit of the tori nesting . as mentioned in ,this type of bifurcation is a ` hopf bifurcation ' , so we can say that we see surgery creating a hopf bifurcation .hence , instead of viewing surgery as an abstract topological process , we may now view it as a property of a dynamical system .moreover , natural phenomena exhibiting 2-dimensional topological surgery through a ` hole - drilling ' process , such as the creation of falaco solitons , the formation of tornadoes , of whirls , of wormholes , etc , may be modeled mathematically by a one - parameter family of the dynamical system ( ) .[ dsands3 ] it is worth pointing out that ( ) is also connected with the 3-sphere .we can view the spherical nesting of figure [ lv_surgery ] ( a ) as the 3-ball shown in figure [ s3andsurgery_2d0 ] ( 1 ) and ( 1 ) .surgery on its central point creates the limit cycle which is the core curve of shown in figure [ s3andsurgery_2d0 ] ( 3 ) and ( 3 ) .if we extend the spherical shells of figure [ lv_surgery ] to all of and assume that the entire nest resolves to a toroidal nest , then the slow manifold becomes the infinite line .in the two - ball description of , pierces all spheres , recall figure [ s3andsurgery_2d0 ] ( 1 ) , while in the two - tori description , it is the core curve of or the ` untouched ' limit circle of all tori , recall figure [ s3andsurgery_2d0 ] ( 3 ) and ( 3 ) .[ kiehn ] it is also worth mentioning that in r.m .kiehn studies how the navier - stokes equations admit bifurcations to falaco solitons . in other words ,the author looks at another dynamical system modeling this natural phenomenon which , as we showed in section [ 2d0 ] , exhibits solid 2-dimensional 0-surgery . 
to quote the author : " it is a system that is globally stabilized by the presence of the connecting 1-dimensional string " and " the result is extraordinary for it demonstrates that global stabilization is possible for a system with one contracting direction and two expanding directions coupled with rotation " . it is also worth quoting langford , who states that " computer simulations indicate the trajectories can be confined internally to a sphere - like surface , and that falaco soliton minimal surfaces are visually formed at the north and south pole " . one possible future research direction would be to investigate the similarities between this system and ( ) in relation to surgery . topological surgery occurs in numerous natural phenomena of various scales where a sphere of dimension 0 or 1 is selected and attracting ( or repelling ) forces are applied . examples of such phenomena comprise chromosomal crossover , magnetic reconnection , mitosis , gene transfer , the creation of falaco solitons , the formation of whirls and tornadoes , magnetic fields and the formation of black holes . in this paper we explained these natural processes via topological surgery . to do this we first enhanced the static description of topological surgery of dimensions 1 and 2 by introducing dynamics , by means of attracting forces . we then filled in the interior spaces in 1- and 2-dimensional surgery , introducing the notions of solid 1- and 2-dimensional surgery . in this way more natural phenomena fit within these topological notions . further , we introduced the notion of embedded surgery , which leaves room for the initial manifold to assume a more complicated configuration and allows the complementary space of the initial manifold to participate actively in the process . _ thus , instead of considering surgery as a formal and static topological process , it can now be viewed as an intrinsic and dynamic property of many natural phenomena . _ equally important , all these new notions resulted in pinning down the connection of solid 2-dimensional 0-surgery with a dynamical system . this connection gives us , on the one hand , _ a mathematical model for 2-dimensional surgery _ and , on the other hand , _ a dynamical system modeling natural phenomena exhibiting 2-dimensional topological surgery through a ` hole - drilling ' process _ . we are currently working with louis h. kauffman on generalizing the notions presented in this paper to 3-dimensional surgery and higher dimensional natural processes . another future research direction is to use the proposed dynamical system as a base for establishing a more general and theoretical connection between topological surgery and bifurcation theory . we hope that through this study , the topology and dynamics of natural phenomena , as well as topological surgery itself , will be better understood and that our connections will serve as ground for many more insightful observations . we are grateful to louis h. kauffman and cameron mca. gordon for many fruitful conversations on morse theory and 3-dimensional surgery . we would also like to acknowledge preliminary discussions with nick samardzija on topological aspects of dynamical systems . finally , we would like to acknowledge a comment by tim cochran pointing out the connection of our new notions with morse theory . * manifolds * 1 .
a hausdorff space with countable base is said to be an _n - dimensional topological manifold _if any point has a neighborhood homeomorphic to or to , where .for example , a surface is a 2-dimensional manifold .the set of all points that have no neigbourhoods homeomorphic to is called the _ boundary _ of the manifold and is denoted by .when , we say that is a _ manifold without boundary_. it is easy to verify that if the boundary of a manifold is nonempty , then it is an -dimensional manifold . 1 .if is a topological space , a _ base _ of the space is a subfamily such that any element of can be represented as the union of elements of . in other words, is a family of open sets such that any open set of can be represented as the union of sets from this family . in the casewhen at least one base of is countable , we say that is a space with _countable base_. 2 . to define the topology , it suffices to indicate a base of the space .for example , in the space , the standard topology is given by the base , where and .we can additionally require that all the coordinates of the point , as well as the number , be rational ; in this case we obtain a countable base .3 . to the set let us add the element and introduce in the topology whose base is the base of to which we have added the family of sets .the topological space thus obtained is called the _ one - point compactification _ of ; it can be shown that this space is homeomorphic to the sphere . 1 .a _ topological space _ is a set with a distinguished family of subsets possessing the following properties : * the empty set and the whole set belong to * the intersection of a finite number of elements of belongs to * the union of any subfamily of elements of belongs to + the family is said to be the _ topology _ on .any set belonging to is called _open_. a _ neighborhood _ of a point is any open set containing .any set whose complement is open is called _closed_. the minimal closed set ( with respect to inclusion ) containing a given set is called the of and is denoted by .the maximal open set contained in a given set is called the _ interior _ of and is denoted by .the map of one topological space into another is called _ continuous _ if the preimage of any open set is open .a map is said to be a _ homeomorphism _if it is bijective and both and are continuous ; the spaces and are then called _ homeomorphic _ or _ topologically equivalent_. 3 .a topological space is said to be a _ hausdorff space _ if any two distinct points of the space have nonintersecting neighborhoods .suppose and are topological spaces without common elements , is a subset of , and is a continuous map . in the set , let us introduce the relation .the resulting quotient space is denoted by ; the procedure of constructing this space is called _ gluing _ or _ attaching _ to along the map .5 . if is the cartesian product of the topological spaces and ( regarded as sets ) , then becomes a topological space ( called the _ product _ of the spaces and ) if we declare open all the products of open sets in and in and all possible unions of these products . 6 .an injective continuous map between topological spaces is called an _ embedding _ if is an homeomorphism between and .s. 
antoniou . the chaotic attractor of a 3-dimensional lotka - volterra dynamical system and its relation to the process of topological surgery . diploma thesis , national technical university of athens , 2005 . s. lambropoulou , n. samardzija , i. diamantis , s. antoniou . topological surgery and dynamics . mathematisches forschungsinstitut oberwolfach , report no . 26/2014 , workshop : algebraic structures in low - dimensional topology , 2014 . d. kondrashov , j. feynman , p.c . liewer , a. ruzmaikin . three - dimensional magnetohydrodynamic simulations of the interaction of magnetic flux tubes . the astrophysical journal 519 ( 1999 ) , 884 - 898 . doi : 10.1086/307383 .
topological surgery is a mathematical technique used for creating new manifolds out of known ones . we observe that it occurs in natural phenomena where a sphere of dimension 0 or 1 is selected , forces are applied and the manifold in which they occur changes type . for example , 1-dimensional surgery happens during chromosomal crossover , dna recombination and when cosmic magnetic lines reconnect , while 2-dimensional surgery happens in the formation of tornadoes , in falaco solitons , in drop coalescence , in cell mitosis and in the formation of black holes . inspired by such phenomena , we introduce new theoretical concepts which enhance topological surgery with the observed forces and dynamics . to do this , we first extend the formal definition to a continuous process caused by local forces . next , for modeling phenomena which do not happen on arcs or surfaces but on 2-dimensional , respectively 3-dimensional , regions , we fill in the interior space by defining the notion of solid topological surgery . we also introduce the notion of embedded surgery in for modeling phenomena which involve the ambient space more intrinsically , such as the appearance of knotting , and phenomena where the causes and effects of the process lie beyond the initial manifold . finally , we connect these new theoretical concepts with a dynamical system and present it as a model for both 2-dimensional 0-surgery and natural phenomena exhibiting it . we hope that through this study , the topology and dynamics of many natural phenomena , as well as topological surgery itself , will be better understood .
the random flights have been introduced for describing the real motions with finite speed .the original formulation of the random flight problem is due to pearson , which considers a random walk with fixed and constant steps .indeed , the pearson s model deals with a random walker moving in the plane in straight lines with fixed length and turning through any angle whatever .the main object of interest is the position reached by the random walker after a fixed number of the steps . over the yearsmany researchers have proposed generalizations of the previous pearson s walk randomizing the spatial displacements . in particular , the random flights have been analyzed independently by several authors starting from the same two main assumptions .the first one concerns the directions which are supposed uniformly distributed on the sphere .furthermore , the length of the intervals between two consecutive changes of direction is an exponential random variable .therefore , there exists an underlying homogeneous poisson process governing the changes of direction .the reader can consult , for instance , stadje ( 1987 ) , masoliver _et al_. ( 1993 ) , franceschetti ( 2007 ) , orsingher and de gregorio ( 2007 ) , garcia - pelayo ( 2008 ) .a planar random flight with random time has also been studied in beghin and orsingher ( 2009 ) .exponential times are not the best choice for many important applications in physics , biology , and engineering , since they assign high probability mass to short intervals .for this reason , recently , the random flight problem has been tackled modifying the assumption on the exponential intertimes .for example , beghin and orsingher ( 2010 ) introduced a random motion which changes direction at even - valued poisson events .in other words this model assumes that the time between successive deviations is a gamma random variable .it can also be interpreted as the motion of particles that can hazardously collide with obstacles of different size , some of which are capable of deviating the motion . very recently , multidimensional random walks with gamma intertimes have been also taken into account by le car ( 2011 ) and pogorui and rodriguez - dagnino ( 2011 ) . le car ( 2010 ) and de gregorio and orsingher ( 2011 ) , considered the joint distribution of the time displacements as dirichlet random variables with parameters depending on the space in which the walker performs its motion . by means of the dirichlet law we are able to assign higher probability to time displacement with intermediate length .the dirichlet density function in somehow generalizes the uniform law and permits to explicit for each space the exact distribution of the position reached by the motion at time . in the cited papers, it is assumed that the directions are uniformly distributed on the sphere .this last assumption is not completely appropriate .indeed , the real motions are persistent and tend to move along the same direction .therefore , it seems not exactly realistic to have the same probability to move along each direction of the space when the motion chooses a new orientation .the aim of this paper is to analyze a spherically asymmetric random motion in , that is with non - uniformly distributed directions .essentially , the literature is devoted to analyze random walks with uniform distribution , while on the random flights with non - uniform distributed angles few references exist : see for instance grosjean ( 1953 ) and barber ( 1993 ) . 
for this reason we believe that this topic is interesting and merits further investigation . for the sake of completeness ,we point out that other multidimensional random models with finite velocity , different from the random flights , have been proposed in literature : see , for example , orsingher ( 2000 ) , samoilenko ( 2001 ) , di crescenzo ( 2002 ) , lachal ( 2006 ) and lachal _et al_. ( 2006 ) .we describe the random flight model analyzed in this paper .let us consider a particle which starts from the origin of , and performs its motion with a constant velocity .hereafter , we assume that in the time interval ] . on the rightit is depicted the density with ].,scaledwidth=100.0% ] [ plotpath ] with ,j=1,2, ... ,d-2, ] and .hence , the particle chooses the direction not uniformly on the sphere , but following an asymmetric density law .furthermore , the particle chooses a new direction independently from the previous one . in figure [ plotdens2 ]we have depicted the behavior of for and . from the picture on the left we can observe that the function is bimodal and assigns high probability mass near the points and .similar considerations hold for the picture related to .clearly when increases the function tends to concentrate around the points and .for we obtain the uniform distribution over the -dimensional hypersphere , that is another important assumption is the following : the vector has distribution given by which represents a rescaled dirichlet random variable , with parameters .let us denote with the process representing the position reached , at time , by the particle following the random rules above described . in the rest of the paper we are going to analyzethe random flights defined by and their probabilistic characteristics represent the main object of our analysis .recalling that the motion develops at constant velocity , and exploiting the hyperspherical coordinates , the -dimensional random flight has components equal to the probability distribution of random flight is concentrated inside the -dimensional hypersphere with radius , which we will indicate by where and represents the euclidean norm .the density law leads to a drift effect in the random motion .indeed , since the directions are not chosen with the same probability , the particle tends to move in some portions of the space with higher probability .clearly the parameter defines the degree of drift of the motion .therefore , if the value of increases , the particle will tend to move along the same part of the space .this behavior of the random motion seems to be consistent with the persistence of the real motions .the sample paths described by the moving particle having random position appear as straight lines with sharp turns and each steps are randomly oriented and with random length ( see figure 2 ) .the drawbacks emerging in the study of the random flights defined in this paper are discussed in section 2 .therefore instead of studying directly we analyze its projections onto the lower space .this approach leads to another random motions for which we obtain the probability distributions .for , we provide the characteristic function for ( section 3 ) .furthermore , we infer a closed - form expression for the density function of the random flight with . 
for itis possible to explicit completely the above density function .the appendix contains results on the integrals involving bessel functions , which assume a crucial role in this paper .in order to study the process introduced in the previous section , we take into account the same approach developed in orsingher and de gregorio ( 2007 ) and de gregorio and orsingher ( 2011 ) .the first step consists in the calculation of the characteristic function of which plays a key role in our analysis .let us indicate by the scalar product , then we have that where and the integrals with respect to the angles are of the form which seems to have an explicit solution only for particular values of ( see appendix ) .thus the choice of the non - uniform family of distributions for the directions of the random motion , makes the analysis of the process , at least in its general setting , very complicated .clearly for we get the uniform case studied in the paper mentioned in the introduction . for this reason , instead of studying directly , we deal with the random process namely the projection of onto the lower space , with , having components equal to the vector ( with ) appearing in , has distribution given by the marginal density of , that is therefore , can be interpreted as the shadow of in the lower spaces ( see for instance figure 2 ) . in other words , if we observe in the particle moving in according to the random rules of , we perceive a random flight with components having vector velocity with random intensity . then , in what follows we analyze the random motion with orientations distributed according to .our first result concerns the characteristic function of .[ teo1 ] the characteristic function of is equal to where is the bessel function . the characteristic function of becomes with since we observe that , after the integrations with respect to becomes we are able to perform all the integrations with respect to the angles by applying successively the formula in the appendix .the integration with respect to yields in the last step we applied formula for and also considered that the integration with respect to the variables follows similarly by applying again and yields continuing in the same way we obtain that then we can write that in order to work out this -fold integral , the result assumes a crucial role .indeed , we apply recursively the formula to calculate each integral with respect to the variables .therefore , in the first step we have that the second integral is given by then , the last integral becomes therefore , plugging the result into the expression , and by observing that and some manipulations lead to .let us denote by and .now , by means of , we derive the explicit probability distribution of the random motion which is concentrated in the hypersphere .[ th2 ] the probability law of is equal to with and . by inverting the characteristic function , we are able to show that the density law of the process is given by . indeed , by passing to the hyperspherical coordinates, we have that in the first step above we have performed calculations similar to those leading to and then while in the last step we have used formula for , , and . 
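before moving to the remarks below , a direct monte carlo experiment may help to visualise the modelling assumptions of section 1 . the exact angular density and the dirichlet parameters did not survive the extraction , so the following planar sketch is only a qualitative stand - in : the displacement times are drawn from a symmetric dirichlet law rescaled to [ 0 , t ] , and each orientation is drawn from a density proportional to | sin theta |^( 2 nu ) , which shares the two properties stated above ( it is bimodal for nu > 0 and reduces to the uniform law for nu = 0 ) but is not claimed to be the density used in the paper . the exponent , the dirichlet parameter and the rejection sampler are all our own choices ; the printed spreads show the loss of spherical symmetry produced by nu > 0 .

import numpy as np

rng = np.random.default_rng(0)

def sample_angles(nu, size):
    """Rejection sampling from the density proportional to |sin(theta)|**(2*nu)
    on [0, 2*pi); for nu = 0 every candidate is accepted, i.e. the uniform law."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        cand = rng.uniform(0.0, 2.0 * np.pi, size)
        keep = cand[rng.uniform(0.0, 1.0, size) < np.abs(np.sin(cand)) ** (2 * nu)]
        take = min(size - filled, keep.size)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

def planar_flight(t, c, n, nu, alpha):
    """Position at time t of a planar flight with n changes of direction:
    displacement times (tau_1, ..., tau_{n+1}) ~ t * Dirichlet(alpha, ..., alpha)
    and orientations drawn independently from the non-uniform density above."""
    tau = t * rng.dirichlet(np.full(n + 1, alpha))
    theta = sample_angles(nu, n + 1)
    steps = c * tau[:, None] * np.column_stack((np.cos(theta), np.sin(theta)))
    return steps.sum(axis=0)

for nu in (0.0, 2.0):
    samples = np.array([planar_flight(t=1.0, c=1.0, n=5, nu=nu, alpha=1.0)
                        for _ in range(20000)])
    spread = samples.std(axis=0)
    print(f"nu = {nu}:  standard deviation of the (x, y) components = {spread.round(3)}")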
[ rem ] by taking into account, we observe that represents an isotropic random walk ; that is its probability distribution depends on the distance , from the origin , of the position of the random walker .furthermore , by setting in , we reobtain the result ( 2.26 ) in de gregorio and orsingher ( 2011 ) .the cumulative distribution function for the process is equal to with . if , we can write down in an alternative form involving a finite sumthen , one has that by assuming that the number of steps is a random value , it is possible to get the unconditional distribution for . in other words , we suppose that the number of changes of direction are governed by a fractional poisson process introduced by orsingher and beghin ( 2009 ) .therefore , we have that where is the mittag - leffler function .furthermore is supposed independent from and .then , in order to obtain the unconditional distribution of we can average with the distribution .let us consider the radial process .we observe that with .therefore , let be the density function of , the result implies that furthermore , by taking into account , we derive the moments of that is with .as observed in the previous section for a general value of , we are not able to work out the integral . nevertheless , it is possible to derive a closed - form expression for the characteristic function of for some values of .the next theorem provides the explicit characteristic function for the random flight with .it is worth to mention that this result is remarkable since in this context it is hard to obtain explicit results .[ teo : cfnu1 ] for , the process admits the following characteristic function with and . in order to prove the result, we will use the same approach and tools exploited in the proof of theorem [ teo1 ] .nevertheless the proof of is not a simple adjustment of the proof of theorem [ teo1 ] and it requires a particular care as we will show in the next steps .we start by taking into account the expression for .thus , we have to handle the following integral for the integrals with respect to , we get that where in the last step we have used the result . 
therefore the integral becomes \bigg\}.\end{aligned}\ ] ] the integrals with respect to are equal to \bigg\}\\ & = \prod_{k=1}^{n+1}\bigg\{2\int_0^{\pi/2 } d\theta_{d-2,k}\cos(c\tau_k\alpha_{d-2}\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\cos\theta_{d-2,k})\sin^3\theta_{d-2,k}\\ & \quad\bigg[\frac{j_1(c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-2,k}\sqrt{\alpha_d^2+\alpha_{d-1}^2})}{c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-2,k}\sqrt{\alpha_d^2+\alpha_{d-1}^2}}\\ & \quad-\frac{\alpha_{d}^2}{\alpha_{d}^2+\alpha_{d-1}^2}j_2(c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-2,k}\sqrt{\alpha_d^2+\alpha_{d-1}^2})\bigg]\bigg\}\\ & = \prod_{k=1}^{n+1}\bigg\{2\sqrt{\frac \pi2}\bigg[\frac{j_{3/2}(c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-3,k}\sqrt{\alpha_d^2+\alpha_{d-1}^2+\alpha_{d-2}^2})}{(c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-3,k}\sqrt{\alpha_d^2+\alpha_{d-1}^2+\alpha_{d-2}^2})^{3/2}}\\ & \quad-\frac{\alpha_{d}^2j_{5/2}(c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-3,k}\sqrt{\alpha_d^2+\alpha_{d-1}^2+\alpha_{d-2}^2})}{(c\tau_k\sin\theta_{1,k}\sin\theta_{2,k}\cdot\cdot\cdot\sin\theta_{d-3,k})^{1/2}(\sqrt{\alpha_d^2+\alpha_{d-1}^2+\alpha_{d-2}^2})^{5/2}}\bigg]\bigg\},\end{aligned}\ ] ] where in the last step we have used formula for and and also considered that by means of the same arguments , we can calculate the further integrals with respect to and then we obtain that the characteristic function for becomes by developing the product appearing in , the characteristic function reduces to a sum , over to elements , with generic terms given by with , and .therefore , we focus our attention on the calculation of. we deal with the following -fold integral by applying formula . indeed , the integral with respect to given by where in the last step we have used formula . the integral with respect to becomes where in the last step we have used the formula again .we can continue at the same way and then we have that now , we perform the calculations concerning the -fold integral by means of the same approach adopted above , we can write down that then , carrying on the same calculations for the integrals with respect to the variables , we get that where in the last step we have also exploited the formula . now , bearing in mind the same steps used to work out , we are able to explicit , omitting the calculations , the following integral analogously to , we get that )}\\ & \quad(c(t-\sum_{k=1}^{n_{s-3}}\tau_k)||\underline{\alpha}_d||)^{\frac{(n+1-n_{s-3})(d+3)}{2}-[(n_s - n_{s-1})+(n_{s-2}-n_{s-3})+1/2]}\\ & \quad\times j_{\frac{(n+1-n_{s-3})(d+3)}{2}-[(n_s - n_{s-1})+(n_{s-2}-n_{s-3})+1/2]}(c(t-\sum_{k=1}^{n_{s-3}}\tau_k)||\underline{\alpha}_d||).\end{aligned}\ ] ] finally , by applying recursively the same arguments exploited so far , we get that }\gamma^{n+1-n_{2}}((d+1)/2)}{(c||\underline{\alpha}_d||)^{(n+1-n_{2})(d+1)-1}(\sqrt{2\pi})^{n - n_{2}}\gamma(\frac{(n+1-n_{2})(d+3)}{2}-[(n_s - n_{s-1})+(n_{s-2}-n_{s-3})+ ... +(n_{3}-n_{2})])}\\ & \quad(c(t-\sum_{k=1}^{n_{2}}\tau_k)||\underline{\alpha}_d||)^{\frac{(n+1-n_{2})(d+3)}{2}-[(n_s - n_{s-1})+(n_{s-2}-n_{s-3})+ ... 
+(n_{3}-n_{2})+1/2]}\\ & \quad j_{\frac{(n+1-n_{2})(d+3)}{2}-[(n_s - n_{s-1})+(n_{s-2}-n_{s-3})+ ...+(n_{3}-n_{2})+1/2]}(c(t-\sum_{k=1}^{n_{2}}\tau_k)||\underline{\alpha}_d||)\\ & = \left\{\frac{2^{d/2}\gamma(1+d/2)}{\gamma(d+1)}\right\}^{n+1}\frac{\gamma((n+1)(d+1))}{t^{(n+1)(d+1)-1}}\left(-\frac{\alpha_{d}^2}{||\underline{\alpha}_d||^2}\right)^{n+1-j}\\ & \quad\frac{((d+1)/2)^{n+1-j}\gamma^{n+1}((d+1)/2)}{(c||\underline{\alpha}_d||)^{(n+1)(d+1)-1}(\sqrt{2\pi})^n\gamma(\frac{(n+1)(d+3)}{2}-j)}(ct||\underline{\alpha}_d||)^{\frac{(n+1)(d+3)}{2}-j-\frac12}j_{\frac{(n+1)(d+3)}{2}-(j+\frac12)}(ct||\underline{\alpha}_d||)\\ & = \frac{\sqrt{\pi}\gamma((n+1)(d+1))}{2^{\frac{(d+1)(n+1)-1}{2}}\gamma(\frac{(n+1)(d+3)}{2}-j)}\frac{\left(-\frac{\alpha_{d}^2}{||\underline{\alpha}_d||^2}((d+1)/2)\right)^{n+1-j}}{(ct||\underline{\alpha}_d||)^{\frac{(n+1)(d-1)+2j-1}{2}}}j_{\frac{(n+1)(d+3)-(2j+1)}{2}}(ct||\underline{\alpha}_d|| ) .\end{aligned}\ ] ] the integrals have different configurations involving the bessel functions and , according to the values .nevertheless , from the previous calculations , we observed that the value of is the same if the does not change for different combinations of .in other words , in order to calculate the values of the different terms of the sum appearing in , we have to consider the number of bessel functions with index ( or ) appearing in .then , we can conclude that and the proof is completed .the analytic form of the characteristic function is more complicated than its counterpart in the uniform case ( ) ( see formula ( 2.1 ) in de gregorio and orsingher , 2011 ) which is only given by a bessel function ( with a suitable constant ) .this is because the random motions considered in this paper have drift .indeed , the choice of directions non - uniformly distributed , for each step performed by the motion , implies the loss of spherical symmetry . by means of and, the characteristic function of could be calculated for . clearly , in these cases the necessary calculations for completing the prooffollow the same steps of the proof of theorem [ teo : cfnu1 ] are very long and cumbersome .indeed , the expression involves the product of linear combinations of bessel functions .theorem [ teo : cfnu1 ] allows to show our next result .we are able to provide the density law of ( up to some positive constants ) .the density function of for assumes the following form where and are positive constants defined as in .we invert the characteristic function as follows by taking into account the formula , the integral with respect to becomes then by means of the same approach used to get , the -fold integral with respect to the angles becomes therefore , the formula implies that it is worth to point out again that the result stated in the above theorem does not provide the whole analytic form of the density law of the random flight with .indeed the constants appearing in the expression are not determined for an arbitrary value of , but only for some values of and .for instance and ( see formula in appendix ) .furthermore , in a few cases we can explicit . by taking into account , for and , after some calculations , we obtain that and let be the density function of with . 
as done for obtaining , from and we get that and the expressions and represent proper density functions .indeed , it is possible to verify after some calculations that .furthermore , and , or equivalently and , are non negative functions ( see figure [ radial ] ) .( on the left ) and ( on the right ) for and .,scaledwidth=100.0% ] it is useful to compare the density functions and with the probability distributions obtained in the uniform case ( ) by orsingher and de gregorio ( 2007 ) . as emerges from table[ tab : first]-[tab : second ] , the uniform random flight and the random motion with drift lead to quite different distributions . from the results , and emerge that the probability distributions of the random flight with are a linear combination of functions with .furthermore , permits to claim that , as expected , the random flight with drift is not isotropic .it is interesting to point out that the observer in , perceives the random motion as an isotropic walk ( see remark [ rem ] ) , while the original motion is non - isotropic .in the random flight problem the integrals involving bessel functions have a crucial role . for this reason in this sectionwe summarize some important results which are often used in the manuscript .if , we get that as proved in de gregorio ( 2010 ) .the previous result has been obtained by expanding the exponential function inside the integral .hence , one observes that splitting the integral into two parts , i.e. , after a change of variable , the sum is not equal to zero if the index values are even . therefore , by taking into account similar manipulations appearing in , and, we can infer that the following equality holds where are positive constants .we are not able to explicit for all values of and because for the above constants seem that a recursive rule does not hold .nevertheless , it is easy to see that important results are also the following ones with and ( see gradshteyn - ryzhik , 1980 , formula 6.581(3 ) ) , with ( see gradshteyn and ryzhik , 1980 , formula 6.533(2 ) ) , and
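since the appendix formulas themselves did not survive the extraction , it may be useful to recall the kind of identity involved and to verify one representative instance numerically . the classical poisson integral representation of the bessel function states that the integral over [ 0 , pi ] of exp ( i z cos theta ) sin^( 2 nu ) theta d theta equals sqrt ( pi ) gamma ( nu + 1/2 ) ( 2 / z )^nu j_nu ( z ) for nu > - 1/2 ; this is the prototype of the angular integrals used throughout the paper , although the specific formulas quoted from gradshteyn and ryzhik are more elaborate . the python sketch below checks the identity with scipy for a few arbitrary values of z and nu .

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, jv

def poisson_integral(z, nu):
    """Numerical value of the integral of exp(i*z*cos(theta)) * sin(theta)**(2*nu)
    over [0, pi]; the imaginary part vanishes by symmetry about theta = pi/2."""
    value, _ = quad(lambda th: np.cos(z * np.cos(th)) * np.sin(th) ** (2 * nu), 0.0, np.pi)
    return value

def closed_form(z, nu):
    """Right-hand side of the identity: sqrt(pi) * Gamma(nu + 1/2) * (2/z)**nu * J_nu(z)."""
    return np.sqrt(np.pi) * gamma(nu + 0.5) * (2.0 / z) ** nu * jv(nu, z)

for z in (0.7, 3.0, 11.5):
    for nu in (0.5, 1.0, 2.5):
        print(f"z = {z:5.2f}, nu = {nu:3.1f}:  integral = {poisson_integral(z, nu):+.10f},"
              f"  formula = {closed_form(z, nu):+.10f}")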
this paper deals with a new class of random flights defined in the real space , characterized by non - uniform probability distributions on the multidimensional sphere . these random motions differ from similar models that have appeared in the literature , which take directions according to the uniform law . the family of angular probability distributions introduced in this paper depends on a parameter which gives the level of drift of the motion . furthermore , we assume that the number of changes of direction performed by the random flight is fixed . the time lengths between two consecutive changes of orientation have a joint probability distribution given by a dirichlet density function . the analysis of is not an easy task , because it involves the calculation of integrals which are not always solvable . therefore , we analyze the random flight obtained as the projection onto the lower spaces of the original random motion in . we then obtain the probability distribution of . although , in its general framework , the analysis of is very complicated , for some values of we can provide some results on the process . indeed , for , we obtain the characteristic function of the random flight moving in . furthermore , by inverting the characteristic function , we are able to give the analytic form ( up to some constants ) of the probability distribution of . _ keywords _ : bessel functions , dirichlet distributions , hyperspherical coordinates , isotropic random motions , non - uniform distributions on the sphere .
one of the unanswered questions in modern physics is whether the photon has a tiny rest mass or not . from a classical point of view , and without the notion of the theory of relativity , it is totally unacceptable to assume that a particle like the photon has a zero rest mass . such a suggestion would be rejected for the following reasons : 1 . how come a particle that carries energy like any other particle does not have something which the rest of the particles have in common , that is , a rest mass ? 2 . from the newtonian mechanical point of view , zero mass means that a particle will be accelerated to infinite velocity by any force , no matter how small it is . if a particle reaches a certain velocity it must be accelerated from rest to this velocity ; it is meaningless to say that a particle has no mass at rest , or , in other words , has no existence at rest , and that this nonexistent entity comes into being once it gains momentum . again , this may be acceptable from a quantum mechanical point of view , but not from a classical point of view . even after special relativity there were scientists who thought that the photon may have a very small rest mass . to shed light on this point i will quote from a conversation between feynman and a french professor [ 1 ] .
hereit must be noted that , if there is really a photon with a rest mass no matter how small it is , then this will impose a change on the basic assumptions of the theory of special relativity because the second postulate stated that the velocity of light is always constant and it is equal to no matter what is the velocity of the source [ 2 ] .clearly that is not the case for a photon with a rest mass , because the velocity of light will depend on the velocity of the source as well as the light s wave length .the second postulate only applied to photons with infinity energy since only then the velocity of light will be independent of the velocity of the source and it will be exactly , and practically there is no such photons . the purpose of this work is to introduce a new scheme under which the relativistic energy momentum relation ( einstein dispersion relation ) can be derived without using lorentz transformations and without putting any limit on the maximum velocity with which particles can propagate , and to investigate the possibility of rest mass photons .the treatment is based on the assumption that the exchange of momentum between particles is not a continuous process , but it happens in the form of discrete pulses .the derived dispersion relation is an approximation to einstein dispersion relation ( edr ) ; an approximation that is becoming very high with the increase of energies .it will be shown that under such an assumption it is possible for particles with nonzero mass to travel at the velocity and faster , which means that it is possible for the photon to have a nonzero rest mass .it will also be shown that the new understanding will explain the results of the michelson - morley experiment .the constancy of the velocity of light will be a result and not a postulate like in the case of sr .however it will be shown that the principal of the constancy of the velocity of light can be violated for low energy photons .the second section will discuss the basic postulates of the work , the third and fourth will show the steps to find the total energy of a particle and the correction terms to edr as a power series .section four will show a way to find the correction as functions of momentum .section five will examine the convergence of the solution , section six will discuss the relation between velocity and momentum and between velocity and energy , and faster than light transition , the last section will be for conclusions .in this work there are two postulates and one preposition . particles in nature are interacting through exchanging momentum . we are use to deal with momentum in classical mechanics and in most of the cases of quantum mechanics as a continuous quantity .the core idea of this work is to deal with linear momentum as a sum of vectors of small values each of which has a universal magnitude denoted by , this make linear momentum in some sense a discrete vector entity .the first postulate stated that : _ i-the change of the linear momentum of a body or a particle with time has a minimal vector quantity of , with as a universal constant_. to understand what exactly this postulate means let us consider the change of momentum , say from zero value to a value during a time .the interval can be divided in to smallest possible subintervals such that in each of them the momentum is changing by the smallest possible value . 
since according to the first postulate the smallest possible change is , the momentum of a particle can be written as , where is the change of momentum during , is the change of momentum during , ... and so forth , and it is assumed here that the particle was initially at rest . since the discrete nature of linear momentum is so far undetectable experimentally , this suggests that the value of is extremely small . it is so small that the momentum of the particles seems to be continuous . the equivalence between energy and mass can be reached without using relativity . in fact , a work by poincar [ 3 ] had led to such a relation by studying the momentum of radiation . another work by hasenhrl [ 4],[5 ] can lead to the relation by studying the pressure inside a system composed of a hollow enclosure filled with radiation . braunbeck [ 6 ] had argued in 1937 that the mass - energy equivalence relation , verified by experiment , should not be regarded as a theorem that can be derived from other principles of less direct and less empirical evidence , but should be taken as a fundamental principle . this point of view will be adopted here . the mass - energy relation in this work is a postulate ( or a principle ) , but i stress that , in the mass - energy relation , the constant will not be defined as the velocity of light ; it will be considered only as a constant that has the units of velocity . it will be one of the results of this work that particles with will travel with a velocity approaching . the second postulate is : _ mass and energy are equivalent . _ the third statement is supposed to specify the relation between the energy change of a particle due to a vector change of one momentum pulse and the momentum vector . by knowing this relation , the dispersion relation can be found after summing the energies gained in a change of momentum by -pulses . to show how this phenomenological relation has been suggested , consider a particle that has a change of linear momentum equal to . according to newton 's mechanics the change of energy due to this change of momentum will be , where is the initial momentum vector of the particle , i.e. the momentum vector before the change of momentum by , and is the rest mass of the particle . in this work it will be proposed that newton 's relation is applicable but with a difference : according to the second postulate mass and energy are equivalent , therefore the kinetic mass must be used instead of the rest mass . more about this proposition will be discussed in another work under preparation [ 7 ] . accordingly , the change of the energy of a particle due to a change of its momentum by one momentum pulse is , where is the total energy of the particle . relation ( 5 ) will be used for calculating the energy that is gained by the particle due to the momentum pulse . according to the previous discussion the statement of the proposition will be : _ if a particle with momentum changed its momentum by one momentum pulse , then the change of energy due to that will be according to relation ( 5)_. in this work only a particle accelerated from rest will be studied ; it will be assumed that the acceleration process does not cause any change in the inner state of the particle , and therefore the rest mass does not change . if a force acting on a particle has a fixed direction , then it is reasonable to believe that all the momentum pulses will point in the same direction as the accelerating force . accordingly , for all momentum pulses , in relation ( 5 ) .
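before turning to the formal series expansion , a minimal numerical sketch may help to fix ideas . the exact form of relation ( 5 ) can not be recovered from this text , so the sketch assumes one reading that is consistent with the later discussion of a quadratic equation with a spurious negative - energy root : each pulse of universal magnitude eps raises the momentum from ( i - 1 ) eps to i eps , and the energy gain obeys newton 's relation with the kinetic mass e / c^2 evaluated at the updated total energy . the rest mass , the pulse size and the units below are arbitrary choices for illustration .

import math

c, eps, m = 1.0, 1.0, 0.05     # units with c = 1 ; pulse eps and rest mass m are assumed

def total_energy(n_pulses):
    e = m * c**2               # the particle starts at rest
    for i in range(1, n_pulses + 1):
        dp2 = (i * eps)**2 - ((i - 1) * eps)**2
        # positive root of the quadratic e_i^2 - e_{i-1} e_i - dp2 c^2 / 2 = 0 ;
        # the negative root is the discarded negative - energy solution discussed in the text
        e = 0.5 * (e + math.sqrt(e * e + 2.0 * dp2 * c**2))
    return e

for n in (1, 2, 5, 20, 1000):
    p = n * eps
    e_n = total_energy(n)
    e_edr = math.sqrt((m * c**2)**2 + (p * c)**2)   # einstein dispersion relation
    print(n, e_n / e_edr, p * c**2 / (e_n * c))     # ratio to edr , and v / c from p = (e/c^2) v

under these assumptions the ratio to the einstein value tends to one as the number of pulses grows , while for the first few pulses the velocity p c^2 / e exceeds c when the rest mass is small compared with eps / c , in line with the faster - than - light discussion later in the text .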
the total linear momentum of the particle after the end of the action of the accelerating force will be : , and therefore the energy change due to the momentum pulse can be written in the following form , where is the total energy of the particle due to momentum pulses , or here it must be noted that quantities like , , ... etc .are measured relative to one fixed observer .the total kinetic energy of a particle can be calculated by adding the kinetic energy gained from the first pulse plus the energy gained from the second pulse ... and so forth .therefore can be expressed as , where the delta was dropped from for short hand .expressing will become more problematic with the increasing of .this can be shown by calculating and for a particle with rest mass , for example , ( 8) will give the negative energy change solution will lead to a jump of the total energy to negative value due to a small change in momentum equal to . the aspect of such solution will not be discussed in this work .here it is important to note that the positive solution of ( 11 ) is deviating clearly from the sr solution , but this will not contradict the experiment since there is no measurement to energies of particles with low value of momentum such as one or a few . on the other hand , it will be shown that will approach the one of sr as the momentum increases .the expression of will be found by using the expression of , because .therefore ( 8) and ( 11 ) give \end{aligned}\ ] ] it is clear that calculating in this way will be unpractical .the fact that is very small will help in finding a series solution so that the expression of , , ... , can be written as a power series in .for example , can be expanded as , and the expansion of will be the kinetic energy for this case , where , is for large this will also be an unpractical way of calculation . a rule must be found for the coefficients of the power series for for arbitrary values of . to reach to this rule will be written as so the next step is to find which represents the coefficients of the expansion of for arbitrary . to dothat , ( 8) and ( 9 ) will be used to write the change in kinetic energy due to the the pulse in terms of the total energy due to pulses , expanding as a power series in gives the following expression ^{j_0}c(1/2,j_0)}{(m_oc^2+\sum_{k=1}^{i-1}e_{k})^{2j_0 - 1}}\right]\ ] ] , where the binomial theorem has been used to get the above equation , and are the binomial coefficients .using the binomial theorem again for the term in the denominator of ( 18 ) gives ^{j_0}c(1/2,j_0)\sum_{r=0}^{r=\infty}c(-2j_0 + 1,r)\left(\frac{\sum_{k=1}^{i-1}e_{k}}{m_o}\right)^r\right]\ ] ] from ( 16 ) it is easy to prove that after substituting with , equation ( 20 ) can be written as , where is defined by the following relation and later will not be named , they are just a substitution to make the equations more compact .these functions have nothing to do with tensors , and in general there is no use of any tensors in this work ] substituting ( 22 ) into ( 19 ) will give ^{j_0}2^{j_0 } \\\nonumber c(1/2,j_0 ) c(-2j_0 + 1,r)g_{jr}(i-1)\bigg]\end{aligned}\ ] ] , where means that the summation will begin from if and begins from for . 
by substituting into ( 24 ) , and using ( 16 )the expression of will be ^{j - j}2^{j - j-1}g_{jr}(l-1)\end{aligned}\ ] ] equating the coefficients of in both sides of ( 26 ) gives ^{j - j}2^{j - j-1}g_{jr}(l-1)\ ] ] it is easy to see that any that appears on the right hand side of ( 27 ) with order , therefore will be expressed in terms of with lower orders .the calculations will involve the values of the binomial coefficients , and it also involves summations like , that demand using rules concerning the values of a series of integers in which bernoulli numbers are used .no details will be mentioned here about these things since they are available in many calculus books , like [ 8 ] , [ 9 ] .equation ( 27 ) gives to see what these calculation may lead to , a digression would be useful at this point .let us consider a sufficiently high value of such that in all the equations from ( 28 ) to ( 35 ) only the terms with higher power of will be dominating and the other terms can be neglected , then from ( 16 ) and knowing that , the expression of will be the generating function for the above series is this is nothing but the kinetic energy that was expressed from sr by using lorentz transformations .to put things in more specific form , it is useful to write as a power series of functions , where by comparing ( 38 ) with ( 16 ) and by using the equations from ( 28 ) to ( 35 ) , the functions can be expressed as power series , which are generating function must be found in order to explore the dispersion relation corrected to the order , that is because one of the aims of this work is to calculate corrections to edr and find the effect of these corrections on the velocity .finding the form of the power series of will lead to guess only which represents the kinetic energy term in edr , and , which is the first order correction .they have the following expressions the other series of are too complicated to give any clue about their generating functions .therefore it is important to find another technique that can express the functions directly .the following section will discuss such a technique that involves solving first order differential equations .the equations will become increasingly long and complicated with the increase of the order of correction . 
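as a small aside to the remark above about the sums of powers of integers that enter these coefficients , such sums have closed forms ( faulhaber 's formulas , which involve the bernoulli numbers cited from [ 8 ] , [ 9 ] ) , and a computer algebra system reproduces them directly . this is only a convenience for checking the coefficients , not part of the derivation itself .

import sympy as sp

k, n = sp.symbols('k n', positive=True, integer=True)
for r in range(1, 5):
    closed_form = sp.factor(sp.simplify(sp.summation(k**r, (k, 1, n - 1))))
    print(r, closed_form)   # e.g. r = 2 gives n*(n - 1)*(2*n - 1)/6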
according to my experience the series solution is vital for checking the results .the change of energy of a particle due to pulse can be written in terms of the total energy of the particle when .this can be done by using ( 8) , and ( 9 ) that give by using a taylor expansion it is easy to see that where the operator is defined by the following equation from ( 47 ) , ( 48 ) , ( 49 ) , and ( 38 ) , and after some elaboration , a useful expression can be reached , that is = \frac{2xm_oc+p_s}{2m_oc}\frac{p_s}{m_oc}\end{aligned}\ ] ] , where .taking the coefficients of on both sides of ( 50 ) will give a first order differential equation for , that is solving the above equation after applying the condition gives exactly the same expression of ( 45 ) for .to express the coefficients of must be equated on both sides of ( 50 ) to get substituting for from ( 45 ) and after that solving the resultant first order differential equation under the condition will give exactly the expression of in ( 46 ) .the process can be continued in similar way to find .because of the increasing complexity , there will be no mention here of the details of calculating .the results can be summarized as follows to check whether these results are correct or not , the maclaurin expansion should be found for each of the functions written in the above equation . if the expansion coincide with the corresponding one in ( 40 ) to ( 44 ) then this will certify that the results are correct .the series of can be written as , where the third term on the left hand side is a series of functions with in this section , the conditions for the convergence of the series will be discussed . the expression given for the functions in equations from ( 54 ) to ( 57 )give no clue about a general rule that can be reached by mathematical induction to express for any value of .the weierstrass theorem will be useful to verify the convergence for a case when the general term of the series is unknown [ 10],[11 ] .the state of the theorem is : suppose is a sequence of functions defined on , and suppose then converges uniformly on if converges .for this case is the closed interval ] .the graphs give }|f_1(x)|=0.5\\\sup_{x \in [ 0 , \infty ] } |f_2(x)|=0.0565\\\sup_{x \in [ 0 , \infty ] } |f_3(x)|=0.0159\\\sup_{x \in [ 0 , \infty ] } |f_4(x)|=0.0073 \end{aligned}\ ] ] equations from ( 61 ) to ( 64 ) together with ( 59 ) and ( 60 ) suggest more than one form of , such as : the -test for both two expressions of ( 65 ) and ( 66 ) will give convergence under the condition , which means that the suggested mathematical treatment by series will give the correct result only if .one might say that the expression also satisfies ( 60 ) , and gives a more flexible result since it leads to the condition , but this would nt be safe . adopting such a result demands the knowledge of at least and in order to conform that ( 60 ) will be satisfied . finding these functions demand doing a very long and tedious calculations . 
to calculate the dispersion relation for ,the mathematical approach must be modified , that will not be discussed in this work , only for special case when the momentum is few s , that is when the velocity of the particle will be studied next section .the relation between the velocity and momentum is important , it will be used to explain why the velocity of light seems to be independent on the velocity of the source , and under what conditions this will be changed .the relation between the velocity and total energy of a particle will allow us to check whether the present treatment will lead to the same einstein relation between kinetic mass , rest mass and velocity .it will be shown that this relation will be expressed as a zeroth approximation . by definition, a velocity of a particle is related to it s linear momentum by the following relation or where from ( 70 ) and ( 71 ) , ( 38 ) can be written as applying the binomial theorem to the term at the denominator in ( 72 ) gives after substituting , ( 73 ) can be written as , where to get an iterated value for the velocity , will be written as , where is the velocity corrected to the zero order , is the first order correction to the velocity as a function of momentum , is the second order correction , ... and so forth . from ( 76 ) , ( 74 ) , the expression for can be found , which is to see what will be the velocity for a particle with , the functions must be calculated , and then find for . using ( 75 ) , ( 53),(54 ) and ( 55 ) give for the case ( or ) it is obvious from ( 78 ) that all the functions , except the function . therefore ( 77 ) will give for and , and directly ( 74 ) will give that coincides with experiment . to prove that the velocity of light is independent of the velocity of the source a simple fact will be used . if the source moved away or approached the observer , the momentum of the photon will change , that is decrease in the first case and increase in the second case .this fact is applied not only to the emission of photons but to the emission of any particle . performing any experiment to measure the effect of the velocity of the source on the velocity of lightwill not give any indication that the velocity of light is changing as long as the condition does not change . to see why, the derivative of with respect to will be found , it is from ( 78 ) and by induction it can be proved that which means that there will be no change in the velocity as a result of a change in momentum as long as .more precisely the change will be so small that it is beyond detection .this is a result and not a postulate as in the case of sr . in this workthe difference is that , in principle , there could be an experiment that measures a change in velocity of photons , but this experiment must involve a source moving away from the observer with extremely high velocity to produce a high doppler shift for photons that have originally a very long wave length , such that will not be big enough to make the right hand side of ( 80 ) approaching zero . then the change of the velocity can be detected .no further specifications can be given about such an experiment since the rest mass of the photon is still unknown . to find the relation between total energy and velocity , ( 9 )will be written as the above equation can be reached directly by substituting the expansion of the velocity as a function of total energy is , where , and is the j - correction of the velocity as a function of energy . 
writing the equations in terms of will give the equations in a more compact form , as we will see later . to find a series solution ,the following substitution will be made , where applying the chain rule and using ( 86 ) gives here it must be noted that for the only solution for ( 86 ) and ( 87 ) is accordingly the taylor series of ( 82 ) about for the several variables will be \end{aligned}\ ] ] where the above equation was reached by arranging term with equal power of . taking the coefficients of in ( 89 )gives the above relation gives exactly the einstein relation between kinetic mass , rest mass , and velocity : equation ( 91 ) gives equating the coefficients of equal on both sides of ( 89 ) for gives ^{-1}\bigg(f_j(\xi_0)+ \sum_{j=0}^{j = j-1}\sum_{l=1}^{'l = j - j}\frac{1}{l!}\left(\frac{\partial l f_j}{\partial x^l}\right)_{x=\xi_0}\\\nonumber\left(\frac{e^{(t)}}{m_oc^2}\right)^l\left[\sum_{k_1+ ...+k_l = j - j } \beta_{k_1 } ... \beta_{k_l}\right ] \bigg)\end{aligned}\ ] ] the prime on the second integral means that the term with and is not included in the summation .the first correction to the velocity as a function of total energy is ^{-1}\left(\sqrt{\frac { e^{(t)2}}{m_o^2c^4}-1}-tan^{-1}(\sqrt{\frac { e^{(t)2}}{m_o^2c^4}-1})\right)\ ] ] the plot of the above function against has a maximum value of at and the function rapidly approach zero as .in any event , for particles with like electrons and protons the effect of this term on the value of the velocity will be very small , while for particles with small masses such that the correction has larger impact when .one of the important result of this work is the possibility of having particles that can travel faster than light , but as will be shown later , these particles must have a small mass in order to reach such velocities .moreover the momentum of these particles must be very low , so that the particle can exceed the velocity of light in a measurable value .the calculations in this section will be done by using exact relations and not approximate relations as it was done in the previous sections , that is because the energy can be calculated exactly by using ( 8) and ( 9 ) when the calculation involves only momenta with few s .it will be shown next how the velocity changes with the change of mass for a certain value of the momentum .1.the case : this gives , where .it is easy to prove that the above relation will give for which means that a particle with momentum and mass will have a velocity equal to .the velocity will increase with the mass decrease further until it reaches , which represent the upper bound of the velocity for any mass with momentum no matter how small it is .2.the case : ^{-1}\ ] ] the plot of against for this case shows that the velocity of a particle will increase with the decrease of the mass until it will reach at .it will increase further with the decrease of ( increase of ) until it will reach an upper bound when , then 3.the case : the plot(will notbe shpwn here ) of against for this case shows that the velocity of a particle increases with the decrease of the mass until it reaches at .it increases further with the decrease of ( increase of ) until it reaches an upper bound when .then it is obvious that the value decreases with the increase of , and it will be equal to for , which means that the value that a particle can exceed the velocity of light is getting smaller and smaller with the increase of .therefore the faster than light transition will be discernable only at very low values of , and 
when the particles involved have very small masses . for the first case the mass must be smaller than , for the second case it must be smaller than , and for the third case it must be smaller than . this work has proved that it is possible to get the einstein dispersion relation without using lorentz transformations . the edr appears as a zeroth approximation . the plot of the functions against shows that , and while for . this means that the edr will be more and more accurate with the increase of momentum . the derived dispersion relation agrees with the one of sr at high energies , but for low energies and masses this will not be the case , because the correction terms may then have a considerable value . one of the difficulties here is the unknown value of the universal constant ; because of that it is not possible to predict at what low energies the corrections are needed . the first assumption of sr , that the velocity of light is independent of the velocity of the source , is here a result that could be concluded . this new understanding leads to experimental conditions in which the first assumption of sr can be violated . the violation can occur when the wave length of the photons is very long and the source is moving away from the observer with very high velocity . another important result is that it is permissible for nonzero mass particles to travel with a velocity of and faster . this would solve the conceptual difficulty of the zero - mass photon , since all particles , including photons , are permitted here to have a rest mass . the faster - than - light transition is permitted here only under the condition that the masses of the particles have a certain degree of smallness compared to . if there are no such particles in nature ( including photons ) then there will be no ftl . on the other hand , if ftl is detected , then this is proof of the existence of such particles with such small rest masses . the results of the previous section show that there will be no hope of detecting such a transition for photons with short wave length ( high momentum ) , because from ( 77 ) such a photon will travel with a velocity very close to ; ftl could be detected if the experiment is designed to measure the velocity of photons with extremely long wave length . it has been shown that the largest possible velocity of a particle is , that is , when and the mass approaches zero . here i must admit that there is no usefulness of the new approach in applied physics unless a faster - than - light transition is detected and measured experimentally ; it would be hard to justify the replacement of one postulate of sr with three postulates if ftl is not detected . [ 1 ] r. p. feynman , f. b. morinigo , w. g. wagner , b. hatfield , j. preskill , & k. s. thorne , feynman lectures on gravitation , ( addison - wesley publishing company , 1995 ) . [ 2 ] h. p. schwartz , introduction to special relativity , ( mcgraw - hill , 1968 ) . [ 3 ] h. poincar , arch . sci . vol . 2 , 5 , p252 ( 1900 ) . [ 4 ] f. hasenhrl , ann . physik vol . 16 , 4 , p593 ( 1905 ) . [ 5 ] cunningham , the principle of relativity , ( cambridge university press , 1914 ) . [ 6 ] m. jammer , concepts of mass in classical and modern physics , ( dover publications , inc . , 1997 ) . [ 7 ] al - hashimi , m. h. k. , discrete momentum mechanics and collision of particles , under preparation . [ 8 ] k. f. riley , m. p. hobson , & s. j. bence , mathematical methods for physics and engineering , ( cambridge university press , 2002 ) . [ 9 ] j. spanier & k. b. oldham , an atlas of functions , ( washington a.o . : hemisphere publishing corporation ; berlin a.o .
: springer , 1987 ) . [ 10 ] w. rudin , principles of mathematical analysis , ( mcgraw - hill , 1976 ) . [ 11 ] y. s. bugrov , s. m. nikolsky , higher mathematics , ( mir publishers , 1982 ) .
in this work a new mechanics is studied , based on the hypothesis that the change of the linear momentum of a particle happens in discrete pulses . by using this hypothesis , and by taking newton 's relation between energy and momentum and the law of mass and energy conservation as a priori , the einstein dispersion relation can be derived as a zeroth approximation without using lorentz transformations . further terms are derived as corrections to this relation , and it is shown that the effect of these corrections becomes smaller and smaller as the momentum increases . the work offers an explanation of why the velocity of light seems to be constant regardless of the velocity of the source , and of the conditions under which this would change . a prediction is also made that faster - than - light transitions could happen theoretically under certain conditions , and that a nonzero - mass photon can exist in nature . the work is purely classical in the sense that it does not involve any uncertainty relations .
trying to make inferences based on incomplete , uncertain knowledge is a common everyday problem .computer scientists have found that this problem can be handled admirably well using bayesian networks ( a.k.a .causal probabilistic networks) .bayesian nets allow one to pose and solve the inference problem in a graphical fashion that possesses a high degree of intuitiveness , naturalness , consistency , reusability , modularity , generality and simplicity .this paper was motivated by a series of papers written by me , in which i define some nets that describe quantum phenomena .i call them quantum bayesian nets"(qb nets ) .they are a counterpart to the conventional classical bayesian nets " ( cb nets ) that describe classical phenomena .in particular , this paper gives an example of a general technique , first proposed in ref. , of embedding cb nets within qb nets .the reader can understand this paper easily without having to read ref. first .he might consult ref. if he wants to understand better the motivation behind the constructs used in this paper and how they can be generalized .the quick medical reference ( qmr ) is a compendium of statistical knowledge connecting diseases to symptoms .the original version of qmr was compiled by miller et al .shwe et al designed a cb net based on the information of ref. .the qmr cb net of shwe et al is of the form shown in fig.[fig : cbnet - general ] .it contains two layers : a top layer of parent nodes corresponding to distinct diseases , and a bottom layer of ,000 children nodes corresponding to distinct findings. the inference problem ( or , in more medical language , giving a diagnosis ) for the qmr is to , given some findings , find the probability of each disease , or at least the more likely diseases . making an inference with a cb netusually requires summing over the states of a subset of the set of nodes of the graph .if each node in contains just 2 states , a sum over all the states of is a sum over terms .these sums of exponential size are the bane of the bayesian network formalism .it has been shown that making exact ( or even approximate ) inferences with a general cb net is np - hard . in 1988 ,lauritzen and spiegelhalter(ls ) devised a technique for making inferences with cb nets for which the subset is relatively small ( for them , the maximal clique of the moralized graph ) .this led to a resurgence in the use of cb nets , as it allowed the use of nets that hitherto had been prohibitively expensive computationally . according to ref. , for the qmr cb net , ,so the ls technique does not help in this case .researchers have found(ref. gives a nice review of their work ) many exact and approximate algorithms for making inferences from the qmr cb net .still , all currently known algorithms require performing an exponential number of operations .rejection sampling and likelihood weighted sampling ( a.k.a . likelihood weighting ) are two simple algorithms for making approximate inferences from an arbitrary cb net ( and from the qmr cb net in particular ) .heretofore , the samples for these two algorithms have been obtained with a conventional classical computer " . 
in this paper, we will show that two analogous algorithms exist for the qmr cb net , where the samples are obtained with a quantum computer .we will show that obtaining each sample , for these two algorithms , for the qmr cb net , on a quantum computer , requires only a polynomial number of steps .we expect that these two algorithms , implemented on a quantum computer , can also be used to make inferences ( and predictions ) with other cb nets .in this section , we will define some notation that is used throughout this paper . for additional information about our notation, we recommend that the reader consult ref. .ref. is a review article , written by the author of this paper , which uses the same notation as this paper .let . as usual, let represent the set of integers ( negative and non - negative ) , real numbers , and complex numbers , respectively . for integers , such that , let .for any set , let be the number of elements in .the power set of , i.e. , the set of all subsets of ( including the empty and full sets ) , will be denoted by .note that .we will use to represent the truth function " ; equals 1 if statement is true and 0 if is false . for example , the kronecker delta function is defined by .random variables will be represented by underlined letters . for any random variable , will denote the set of values that can assume .samples of will be denoted by for .consider an n - tuple , and a set .by we will mean ; that is , the -tuple that one creates from , by keeping only the components listed in .if , then we will use the statement to indicate that all components of are 0 .likewise , will mean all its components are 1 . for any matrix , will stand for its complex conjugate , for its transpose , and for its hermitian conjugate .when we write a matrix , and leave some of its entries blank , those blank entries should be interpreted as zeros . for any set and any function , we will use to mean .this notation is convenient when is a long expression that we do not wish to write twice .next we explain our notation for quantum circuit diagrams .we label single qubits ( or qubit positions ) by a greek letter or by an integer .when we use integers , the topmost qubit wire is 0 , the next one down is 1 , then 2 , etc ._ note that in our quantum circuit diagrams , time flows from the right to the left of the diagram ._ careful : many workers in quantum computing draw their diagrams so that time flows in the opposite direction .we eschew their convention because it forces one to reverse the order of the operators every time one wishes to convert between a circuit diagram and its algebraic equivalent in dirac notation .in this section , we describe the qmr cb net . before describing the qmr cb net ,let us describe the noisy - or cb net ( invented by pearl in ref. ) .consider a cb net of the form of fig.[fig : cbnet - nd-1f](a ) , consisting of parent nodes ( diseases " ) , with , all pointing into a single child node ( finding " ) , .the cb net of fig.[fig : cbnet - nd-1f](a ) represents a probability distribution that satisfies : p(f , ) = p(f| ) _j=1^n p(d_j ) .[ eq : noisy - or-1 ] we say the probability distribution of eq.([eq : noisy - or-1 ] ) and fig.[fig : cbnet - nd-1f](a ) is a * noisy - or * if it also satisfies : [ eq : noisy - or-2 ] p(f , ) = \ { _ p(f| ) _ j p(d_j|d_j ) } _ jp(d_j ) , with p(f| ) = ( f , d_1d_2 d_n ) . 
for example , when , p(f| ) & = & \ { r d_1,d_2 + llllll & & 00 & 01&10&11 + f&0&1&0&0&0 + & 1&0&1&1&1 + .+ & = & ( f , d_1d_2 ) .[ eq : or - of-2 ] eqs.([eq : noisy - or-2 ] ) are represented by fig.[fig : cbnet - nd-1f](b ) .sometimes , one also restricts the distributions to have the special form : p(d_j|d_j)&=&\ { r d_j + llll & & 0 & 1 + d_j&0&1 & 1-q_1j + & 1&0 & q_1j + .+ & = & ( 1-q_1j)^d_j_d_j^0 + ( q_1jd_j)_d_j^1 , [ eq : pdd ] where ] and is the set of parents of node .let , and constitute a disjoint partition of .( unk " stands for unknown . )the inference problem for the qmr cb net consists in calculating ] of the previous section will be replaced in this section by a sine squared .let q_ij = ^2_ij = s^2_ij , 1-q_ij = ^2_ij = c^2_ij , for some real number .we will also abbreviate by and by .we begin by considering the simple case of a cb net consisting of two diseases pointing to one finding , as displayed in fig.[fig : qbnet-2d-1f](a ) .we will next show that fig.[fig : qbnet-2d-1f](b ) is a quantum circuit that can generate some of the same probability distributions as the cb net fig.[fig : qbnet-2d-1f](a ) .the state vectors , and the unitary transformations that appear in the quantum circuit of fig.[fig : qbnet-2d-1f](b ) are defined as follows . for ,define by = u_j , where u_j = , = . for ,let a_j(d_j , _j|_j , d_j ) & = & \ { r _ j , d_j + llllll & & 00 & 01&10&11 + & 00&1 & 0&0&0 + d_j,_j&01&0 & c_j&0&-s_j + & 10&0 & 0&1&0 + & 11&0 & s_j&0&c_j + .[ eq:2-bit - q - embed ] + & = & ^0__j + ^1__j .for those familiar with ref. , note that the probability amplitude is a q - embedding of the probability distribution defined in eq.([eq : pdd ] ) .note also that source and sink nodes are denoted by letters with tildes over them .the matrix given by eq.([eq:2-bit - q - embed ] ) is a 2 qubit unitary transformation .such transformations can be decomposed ( compiled ) into an expression containing at most 3 cnots , using a method due to vidal and dawson ( for software that performs this decomposition , see ref. ) .let a_or(f , _ 1 , _ 2| ,d_1 , d_2 ) & = & \ { r , d_1,d_2 + llllllllll & & 000 & 001&010&011&100 & 101&110&111 + & 000&1&&&&0 & & & + f,_1,_2&001&&0&&&&i & & + & 010&&&0&&&&i & + & 011&&&&0&&&&i + & 100&0&&&&1 & & & + & 101&&i&&&&0 & & + & 110&&&i&&&&0 & + & 111&&&&i&&&&0 + .[ eq : quan - or - matrix ] + & = & [ i^f_f^d_1d_2 _ d_1,d_2^_1,_2 ] _ ^0 + [ ] _ ^1 . for those familiar with ref. ,note that the probability amplitude is a q - embedding of the probability distribution defined in eq.([eq : or - of-2 ] ) .the matrix given by eq.([eq : quan - or - matrix ] ) can be compiled as follows : & = & e^i_(b , b)bool^2-(0,0)p_b , b + & = & e^ii_4 e^-ip_00 + & = & i(2)[-i(2)]^n(0)n(1 ) .the probability for the quantum circuit fig.[fig : qbnet-2d-1f](b ) is given by : + & = & | _ d_1,d_2,d_1,d_2 a_or(f,_1,_2|=0,d_1,d_2 ) _j=1,2 \{a_j(d_j,_j|_j=0,d_j ) } |^2 + & = & | _ d_1,d_2,d_1,d_2 i^f _f^d_1d_2 _ d_1,d_2^_1,_2 _j=1,2 \{[(c_j^d_j ) _ _ j^d_j _ d_j^0 + ( s_jd_j ) _ _ j^d_j _ d_j^1 ] } |^2 .in particular , when , p(f=0 , _ 1,_2 , _ 1,_2)= _ j=1,2 c_j^2_j p(_j ) _ _ j^0 . if and are not observed , we may sum over them to get p(f=0 , _ 1,_2)= _ j=1,2 c_j^2_j p(_j ) .if we replace by , for the quantum circuit fig.[fig : qbnet-2d-1f](b ) is identical to for the cb net fig.[fig : qbnet-2d-1f](a ) .this is no coincidence .the quantum circuit was designed from the cb net to make this true . in a sense defined in ref. 
,the cb net is embedded in the quantum circuit .one can easily generalize this example with and to arbitrary and .fig.[fig : qbnet-2d-3f ] gives an example with and .in the example with , , we set : = i(2)[-i(2)]^n(0)n(1 ) . for arbitrary , this equation can be generalized to : = i(_i)[-i(_i)]^ _ k_in ( ) , for , where is the qubit label of qubit , and is the set of qubit labels for the parents of qubit . for arbitrary , we can generalize this construction to obtain a quantum circuit that yields probabilities .if the external outputs are not observed , then we measure . if we replace by , the probability for the quantum circuit is identical to the probability for the cb net that was embedded in that quantum circuit . as discussed previously , the inference problem for the cb netis to find ] divided by ] can be calculated exactly numerically on a conventional classical computer .not so the denominator ] .the empirical distribution converges to the exact one .the two modes that we are referring to are rejection sampling and likelihood weighted sampling .we describe each of these separately in the next two sections .assume that we are given the number of samples that we intend to collect , and the sets which are a disjoint partition of .then the rejection sampling algorithm goes as follows ( expressed in pseudo - code , pidgin c language ) : a convergence proof of this algorithm goes as follows . for any function , as , the sample average tends to : = _ k g ( , ) _,p(,)g(, ) . therefore , & = & + & & + & & p[|()_i_0=0,()_i_1=1 ] . for likelihood weighted sampling ,the quantum circuit must be modified as follows : we assume that all gates in the quantum circuit are elementary ; that is , either single - qubit transformations or controlled elementary gates ( like cnots or multiply - controlled nots or multiply - controlled phases ) . 1 . for any qubit with , initialize the qubit to state .( for any qubit with , initialize the qubit to state , same as before . ) 2 . for any qubit with ,remove those elementary gates that can change the state of .in particular , remove any single - qubit gates acting on , and any controlled elementary gates that use as a target . do not remove controlled elementary gates that use as a control only .assume that we are given the number of samples that we intend to collect , and the sets which are a disjoint partition of .then the likelihood weighted sampling algorithm goes as follows ( expressed in pseudo - code , pidgin c language ) : a convergence proof of this algorithm goes as follows .define the likelihood functions and by ( evi " stands for evidence and unk " for unknown ) : l_evi ( ) = _ ii_0p[f_i=0|()_pa(_i ) ] _ii_1p[f_i=1|()_pa(_i ) ] , and l_unk ( , ) = _ ii_unkp[f_i|()_pa(_i ) ] . clearly , p ( , ) = l_evi()l_unk(,)p ( ) . for any function , as , the sample average tends to : = _ k g ( , ) _ , _ ( )_i_0 ^ 0_()_i_1 ^ 1 l_unk(,)p( ) g(, ) .therefore , & = & + & & + & & p[|()_i_0=0,()_i_1=1 ] .in this appendix , we will sum ] for all and , assuming large but arbitrarily .we label the rows and columns of in order of increasing set size .the top - left corner entry is and the bottom - right corner entry is .note that for all , .the shaded top part ( corresponding to small or moderate ) of this matrix can be calculated numerically with a classical computer . 
but not the unshaded bottom part ( corresponding to large ) .an empirical approximation of the bottom part can be obtained with a quantum computer .consider any set and any functions .the mobius inversion theorem states that g(j ) = _ jj(-1)^|j - j| f(j ) f(j ) = _ jjg(j ) . using the fact that when , , and replacing by in the previous equation , we get g(j ) = _ jj(-1)^|j|f(j ) f(j ) = _ jj ( -1)^|j| g(j ) .[ eq : mobius2 ] we showed in appendix [ app : d - sum ] that p_i_1,i_0 = _ s_1i_1 ( -1)^|s_1|t_s_1,i_0 .[ eq : t - alt - sum ] thus , by virtue of eq.([eq : mobius2 ] ) , t_i_1,i_0 = _ s_1i_1 ( -1)^|s_1|p_s_1,i_0 .[ eq : p - alt - sum ] more generally , if , , and m_s_1 , s_1 = ( -1)^s_1 , then p_s_1,s_0 = _ s_1s_1 m_s_1 , s_1t_s_1,s_0 , and t_s_1,s_0 = _ s_1s_1 m_s_1 , s_1p_s_1,s_0 .eq.([eq : p - alt - sum ] ) implies p_i_1,i_0 = ( -1)^|i_1| \ { t_i_1,i_0 -_s_1i_1 ( -1)^|s_1|p_s_1,i_0 } . to approximate , one can estimate the right hand side of the last equation . , and for small and moderate , can be calculated exactly numerically on a classical computer . for large can be obtained empirically on a quantum computer .in this appendix , we review the rejection sampling algorithm for arbitrary cb nets on a classical computer . consider a cb net whose nodes are labeled in topological order by .assume that ( evidence set ) and ( hypotheses set ) are disjoint subsets of , with not necessarily empty .assume that we are given the number of samples that we intend to collect , and the prior evidence .then the rejection sampling algorithm goes as follows ( expressed in pseudo - code , pidgin c language ) : a convergence proof of this algorithm goes as follows . for any function , as , the sample average tends to : _ k g ( ) _ xp(x)g(x ) .therefore , & = & + & & + & & .in this appendix , we review the likelihood weighted sampling algorithm for arbitrary cb nets on a classical computer . consider a cb net whose nodes are labeled in topological order by .assume that ( evidence set ) and ( hypotheses set ) are disjoint subsets of , with not necessarily empty .let for any .assume that we are given the number of samples that we intend to collect , and the prior evidence .then the likelihood weighted sampling algorithm goes as follows ( expressed in pseudo - code , pidgin c language ) :
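the pidgin - c pseudo - code block announced here does not survive in this copy of the text . purely as an illustration of the algorithm whose convergence was argued above , the following is a minimal python sketch of likelihood weighted sampling , specialized for concreteness to a small two - layer noisy - or net of the qmr type ; the number of diseases , the priors , the q parameters and the evidence below are made - up values , not taken from the qmr .

import random

n_diseases = 4
priors = [0.05, 0.10, 0.02, 0.20]        # p(d_j = 1), assumed
# q[i][j] = probability that disease j, if present, switches finding i on
q = [[0.9, 0.0, 0.0, 0.3],
     [0.0, 0.8, 0.5, 0.0],
     [0.2, 0.0, 0.0, 0.7]]
evidence = {0: 1, 2: 0}                  # observed findings: f_0 = 1, f_2 = 0

def p_finding_on(i, d):
    # noisy-or: p(f_i = 1 | d) = 1 - prod_j (1 - q_ij)^{d_j}
    prod = 1.0
    for j, dj in enumerate(d):
        if dj:
            prod *= (1.0 - q[i][j])
    return 1.0 - prod

def likelihood_weighted_posterior(n_samples, target_disease):
    num = den = 0.0
    for _ in range(n_samples):
        d = [1 if random.random() < priors[j] else 0 for j in range(n_diseases)]
        w = 1.0                          # weight = likelihood of the observed evidence
        for i, f_obs in evidence.items():
            p1 = p_finding_on(i, d)
            w *= p1 if f_obs == 1 else (1.0 - p1)
        num += w * d[target_disease]
        den += w
    return num / den if den > 0 else float('nan')

print(likelihood_weighted_posterior(100000, target_disease=0))

rejection sampling differs only in that each sample of the findings is drawn explicitly and discarded unless it matches the evidence , instead of being kept with a weight .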
the quick medical reference ( qmr ) is a compendium of statistical knowledge connecting diseases to findings ( symptoms ) . the information in qmr can be represented as a bayesian network . the inference problem ( or , in more medical language , giving a diagnosis ) for the qmr is to , given some findings , find the probability of each disease . rejection sampling and likelihood weighted sampling ( a.k.a . likelihood weighting ) are two simple algorithms for making approximate inferences from an arbitrary bayesian net ( and from the qmr bayesian net in particular ) . heretofore , the samples for these two algorithms have been obtained with a conventional classical computer " . in this paper , we will show that two analogous algorithms exist for the qmr bayesian net , where the samples are obtained with a quantum computer . we expect that these two algorithms , implemented on a quantum computer , can also be used to make inferences ( and predictions ) with other bayesian nets .
there is a long debate about essential differences between social and natural sciences , and in particular between sociology and physics . in these discussions ,the term physics is used with its most deterministic and methodologically clear part in mind , i.e. the newton equations . however , the proud claim of newton : `` hypotheses non fingo '' can hardly be repeated , when talking about any branch of modern physics . in particular ,statistical mechanics is known to be a phenomenological science and the validity of its results should always be confirmed by experiment .statistical mechanics is maybe a branch which is most comparable to sociology , for the great role of the law of large numbers in both these sciences . on the other hand , in sociologythere is no unique and commonly accepted method , but rather a rich set of schools , grown within different intellectual traditions .its most mathematical part , the game theory , was initialized by janos von neumann in 40 s . however , most of the game theory is oriented rationally , whereas the human mind is not .it is of interest to develop a quantitative theory of society , which could include this aspect of ours .such a theory was initialized ( again in 40 s ) by fritz heider .the heider balance ( hb ) is a concept in social psychology . for the purposes of this work, the following description will be sufficient .a group of people is initially connected with relations which are randomly hostile or friendly .these random relations cause a cognitive dissonance : which can be of two kinds .first is when the relations of three people are mutually hostile .each member of this triad is willing to decide whom he dislikes less , and in turn to improve relations with him .as this kind of relations usually tends to be symmetric , the dissonance is removed by a friendship between two persons , third one being excluded .second case of the dissonance within a triad is when two persons dislike each other , but both like a third one .a mutual hostility between good friends of someone seems strange : if both are ok , why they are as stubborn as not to recognize it ?usually either we look for someone guilty and we like him less , or we try to invite them to a consent . then again, the stable situation in a triad is either two hostile links , or a friendship . on the contrary ,an unbalanced triad is of one or three hostile relations .this recalls the concept of frustration in spin glasses ; indeed , the product of three links is negative for an unbalanced triad , positive for a balanced one .as it was early recognized , a gradual modification of the unbalanced triads leads the whole system towards a separation of two subsets , with friendly relations within each set and hostile relations between the sets .this is the so - called first structure theorem . for more detailed discussion of this stage of work see ref .removing of the cognitive dissonance in the sense explained above leads then to a kind of social mitosis .algorithm of repairing the unbalanced triads was used in to investigate the role of initial distribution of relations , described as and eventually zero .the approach was generalized recently by the present authors with using real numbers instead of integers . in this model ,each relation between group members is modelled by a matrix element .such a relation is equivalent to a kind of social distance between and , and it can be measured with polls in the bogardus scale . 
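to make the triad criterion recalled above concrete , a tiny enumeration of the four sign patterns of a triad ( friendly = +1 , hostile = -1 ) shows that the product of the three links is positive exactly for the balanced cases , i.e. zero or two hostile links .

from itertools import product

# friendly = +1 , hostile = -1 ; a triad is heider - balanced when the product
# of its three links is positive ( the spin - glass frustration analogy above )
for triad in product((+1, -1), repeat=3):
    sign = triad[0] * triad[1] * triad[2]
    print(triad, "hostile links:", triad.count(-1),
          "balanced" if sign > 0 else "unbalanced")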
for brevity , we denote from now on . the range is a limitation of the relations from below and from above , and its use is motivated by the fact that our feelings , when affirmed openly , are usually moderate , so as not to insult the public opinion . in ref . , was used , and the initial distribution of was for . the difference between the initial and ultimate limit , i.e. between and , was found to be large enough not to influence the time and dynamics of achieving the heider balance , which were the same as for . however , it is of interest to investigate how the dynamics changes when decreases . tightening of the allowed range of the relations is interpreted here as a _ gedankenexperiment _ , where the public opinion becomes more restrictive . to observe possible consequences of such a tightening is one of the aims of this work . another aim is to check the influence of the initial probability distribution of on the process dynamics . the same distribution is used for all matrix elements . in the subsequent section we recapitulate our formulation of the problem of the heider balance , including the equations of motion of the matrix elements . section iii contains new numerical results , which are discussed in the last section . in the simplest picture , the group members can be visualised as nodes of a fully connected graph , and the relations between them as links between the nodes . these relations are described by the matrix elements , . the proposed equations of time evolution of the relations are dr(i , j)/dt = g ( r(i , j ) ) \sum_k r(i , k ) r(k , j ) , with g(r ) = 1 - ( r / R )^2 , where R marks the allowed range of the relations . the particular form of g is of minor importance here , as well as the assumed values of and . we guess that the results should scale with . for computational reasons , the function g is chosen to be as elementary as possible . diagonal elements are zero . for the sociological interpretation , the summation over the nodes in eq . 1 is crucial . it means that the velocity of change of the relation between and is determined by the relations of and with all other group members . if both the relation between and and the relation between and are friendly , or both of them are hostile , the relation between and is improved . on the contrary , if are friends and are enemies , the relation between and gets worse . the ruling principle is : my friend 's friend is my friend , my friend 's enemy is my enemy , my enemy 's friend is my enemy , my enemy 's enemy is my friend . unlike the classic formulation in terms of integers , here the above velocity depends also on the intensity of the relations , and not only on their sign . in ref . we have demonstrated that for large systems the time dependence of the number of unbalanced triads is similar to the theta function : initially flat , and after some time it abruptly goes to zero . this kind of dynamics of a social system can be considered unwelcome , if we remember that the heider balance in a fully connected graph is accompanied by the division of the group into mutually hostile camps .
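a minimal numerical sketch of eq . 1 , as reconstructed above , is given below . the group size , the limit R , the time step and the initial distribution are illustrative assumptions only , not the parameters used for the figures of this paper ; the sketch merely shows the typical course : the number of unbalanced triads stays roughly constant for a while and then drops to zero , after which the signs of any row of the matrix split the group into two camps .

import numpy as np

rng = np.random.default_rng(0)
n, big_r, dt = 20, 5.0, 1e-3             # group size, allowed range R, time step (assumed)
r = rng.uniform(-1.0, 1.0, size=(n, n))  # initial relations: symmetric, zero diagonal
r = (r + r.T) / 2.0
np.fill_diagonal(r, 0.0)

def n_unbalanced(r):
    # number of triads i < j < k whose product of links is negative
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                if r[i, j] * r[j, k] * r[i, k] < 0.0:
                    count += 1
    return count

for step in range(200001):
    if step % 500 == 0 and n_unbalanced(r) == 0:
        print("heider balance reached near step", step)
        break
    growth = r @ r                        # (r @ r)[i, j] = sum_k r[i, k] r[k, j]
    np.fill_diagonal(growth, 0.0)
    r += dt * (1.0 - (r / big_r) ** 2) * growth
    r = np.clip(r, -big_r, big_r)         # guard against discretization overshoot
    np.fill_diagonal(r, 0.0)

camp = r[0] >= 0.0                        # in the balanced state, row signs split the group
print("sizes of the two camps:", int(camp.sum()), int(n - camp.sum()))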
also , as a byproduct of this kind of removal of the cognitive dissonance , we get the relations polarized . this means that members of each group strongly believe that their group is right , while the other is wrong . we are interested in the influence of the narrowing of the range on the time of reaching the heider balance and on the character of the process . the interest is motivated by the following question : to what extent can the group unity be preserved if the dynamics of the relations is bound ? it is obvious that in the trivial limit of , the relations do not evolve at all . however , as soon as , the fixed point is not stable . in fig . 1 a , we show the time of getting the heider balance as dependent on the system size , for various . here , the results are obtained for systems somewhat larger than those shown in ref . . the results point out that decreases with , i.e. the whole process takes more time if the relations are more limited . the same rule is demonstrated to be true in fig . 1 b , where we show an example of the dependence for a given system size . it seems that this time can go to infinity if is small enough . we have also analyzed the system dynamics as dependent on the value of . as was remarked in ref . , for and higher the time dependence of the number of unbalanced triads decreases abruptly just before the balance is achieved . some examples of such a course are given in fig . 2 a for . under the same conditions but for , the time is much longer . moreover , in the last stage the number changes very slowly , as happens for one of the examples shown in fig . 2 b . in figs . 2 a , b the vertical axis is in units , where is the number of unbalanced triads . this scale is selected so as to obtain unity when the sign of only one link is different than it should be in the balanced state . in this scale we can easily notice that the number of unbalanced triads varies quickly indeed in fig . 2 a , but quite slowly in fig . 2 b . all the results of ref . and those reported above are obtained for the uniform initial distribution of the matrix elements with zero average . within our sociological interpretation , this assumption can be interpreted as a kind of symmetry of the relations , with equal numbers of positive and negative values and intensities . this symmetry can be broken if the initial values of are randomly selected from the range . if , all the relations are improved ; if , all are deteriorated . if is large enough , the outcome of the dynamics is that there is no division of the group . instead , all relations become positive , i.e. . a glance at eq . 1 ensures that in this case all the matrix elements increase in time ; then this unity will continue forever . this is a kind of phase transition , with as a control parameter . the role of the order parameter can be assigned to the size of the smaller part of the group in the final state , when all triads are balanced . when the division does not take place , is zero . in fig . 3 a , we show as dependent on . the plot is expected to be sharper if is larger , but the time of calculation increases with at least as , and therefore the precision of determining the transition point is limited . as can be seen in fig . 3 a , this transition point is close to . this means that accordance is reached when less than 4 percent of the matrix elements change their initial sign from negative to positive . we note that in the vicinity of the transition , does not show any discontinuity . this result is shown in fig . 3 b .
we checked that displays a maximum at . in particular , consider the case when , i.e. all relations are initially hostile . the absolute values of some of them are large , and the time derivatives of some others are large as well . then , the heider balance is reached rather quickly . one could say that a social state in which all relations are hostile is rather unstable ; soon we decide to collaborate with those who are less abominable than the others . the main goal of this work is to develop the mathematical refinement of the problem of the heider balance on networks . already in previous works , the problem remained somewhat aside from the main stream of sociology as taught in student textbooks . if the concept of the balance is taken for granted , the next step should be to include it into more established subjects , such as group dynamics and game theory . however , such a task exceeds the scope of this paper . it is tempting to interpret the results directly , taking literally all the associations suggested above . there , the limitation can be seen as a variable which reflects the amount of freedom allowed by the public opinion . if is large , strong feelings can be expressed freely and they influence the emotions of others . if is small , we speak mainly about the weather , and emotions remain hidden . by an appropriate shift of the average initial social distance , we could manipulate the public opinion , at least temporarily preventing social instabilities . these analogies or metaphors can be continued , but we should ask to what extent all of that can be realistic . this question is particularly vivid when mathematical models are laid out for a humanistically oriented audience , when equations and plots bear some magical force of ultimate truth . doing sociophysics , it is fair to make a short warning . a deeper discussion of the relations between sociology and natural sciences can be found in refs . . our experience with mathematical models is that their internal logic is rarely false , and errors are easy to detect and eliminate . often , the problem is rather with the model assumptions and interpretation - how do they fit reality ? this point can be easily attacked by people without mathematical education , and this kind of discussion can be equally instructive for both sides . indeed , a mathematical realization of a model is in many cases correct , but its assumptions and interpretation - input and output - are no more true when expressed with mathematical symbols than with words . bearing this in mind , we see the validity of our results mainly in improving the relation between the heider idea and its mathematical realization . we mean that , in order to symbolize interpersonal relations , real numbers can be used with more sociological reality than integers . we also mean that differential equations are more appropriate to describe the time dependence of human opinions than just flipping units from positive to negative . we believe that in the present version the model is improved . the internal logic of its equations allows one to draw the results reported above . they may be true or not - it depends on how the model is used . we hope it can be useful in sociological applications . r. k. merton , _ social theory and social structure _ , free press , 1949 ( polish edition : _ teoria socjologiczna i struktura spoeczna _ , wyd . pwn , warszawa 2002 ) . j. s.
coleman , _ introduction to mathematical sociology _ , free press , new york 1964 ( polish edition : _wstep do socjologii matematycznej _ , pwe , warszawa 1968 ) .m. toda , r. kubo and n. sait , _ statistical physics i _ , springer - verlag , berlin 1983 ( polish edition : _ fizyka statystyczna i _ , pwn , warszawa 1991 , p.15 ) .j. szacki , _ history of sociological thought _ , greenwood press , westport , conn .1979 ( polish 1st edition : _ historia mysli socjologicznej _ , pwn , warszawa 1981 ;2nd edition : wyd .pwn , warszawa 2002 ) .j. von neumann and o. morgenstern , _ theory of games and economic behavior _ , wiley 1967 ( orig . ed .1944 ) ph .d. straffin , _ game theory and strategy _ , math .association of america , washington , d. c. 1993 ( polish edition : _ teoria gier _ , scholar , warszawa 2004 ) .z. bauman , _ thinking sociologically _ , basil blackwell ltd , oxford 1990 ( polish edition : _ socjologia _ , zysk i s - ka , warszawa 2002 ). f. heider , j. of psychology * 21 * ( 1946 ) 107 .f. heider , _ the psychology of interpersonal relations _ , j.wiley and sons , new york 1958 . c. kadushin ,_ introduction to social network theory _, ( http://home.earthlink.net/ ckadushin / texts/ ) t. m. newcomb , r. h. turner and p. e. converse , _ social psychology _ , holt , rinehart and winston , inc . new york 1965 ( polish edition : _ psychologia spoeczna _ ,pwn , warszawa 1970 ) .k. h. fischer and j. a. hertz , _ spin glasses _ , cambridge up , cambridge 1991 .f. harary , r. z. norman and d. cartwright , _ structural models : an introduction to the theory of directed graphs _ , john wiley and sons , new york 1965 .n. p. hummon and p. doreian , social networks * 25 * ( 2003 ) 17 .z. wang and w. thorngate , j.artificial societies and social simulation , vol . 6 , no 3 ( 2003 ) ( http://jasss.soc.surrey.ac.uk/6/3/2.html ) k. kuakowski , p. gawroski and p. gronek , int . j. mod .c ( 2005 ) , in print ( physics/0501073 ) e. s. bogardus , j. appl .sociology * 9 * ( 1925 ) 299 .d. khanafiah and h. situngkir , _ social balance theory _ , ( http://econpapers.hhs.se/paper/wpawuwpio/0405004.htm )j. h. turner , _ sociology : concepts and uses _ , mcgraw - hill , new york 1994 ( polish edition : _ socjologia : koncepcje i ich zastosowanie _ , zysk i s - ka , pozna 2001 ) .p. sztompka , _ socjologia _ ( in polish ) , znak , krakw 2003 .
the heider balance is a state of a group of people with established mutual relations between them . these relations , friendly or hostile , can be measured on the bogardus scale of social distance . in previous works on the heider balance , these relations have been described with integers and . recently we have proposed real numbers instead . also , differential equations have been used to simulate the time evolution of the relations , which were allowed to vary within a given range . in this work , we investigate the influence of this allowed range on the system dynamics . as a result , we have found that a narrowing of the range of the relations leads to a large delay in achieving the heider balance . another point is that a slight shift of the initial distribution of the social distance towards friendship can lead to a total elimination of hostility .
scaling relations between observable properties and total gravitating mass are a critical ingredient for cosmological tests based on galaxy clusters ( for a review , see ) . for tests using the abundance , clustering and growth of clusters ,scaling relations provide essential mass proxies and are fundamentally important in accounting for selection biases .our knowledge of these relations and the systematics that affect them currently limits the achievable constraints on some cosmological parameters ( e.g. , and references therein ) .measurements of cluster gas mass fractions , which constrain the mean matter density cosmic expansion history ( e.g. ) , can also be expressed in terms of a scaling relation , namely gas mass as a function of total mass .in addition , the scaling relations are of considerable astrophysical interest , reflecting the complex response of the baryonic components of these systems to their overall gravitational potentials , environments and formation histories . for example , departures from the self - similar form introduced by provide clues to non - gravitational processes at work in clusters ( e.g. and references therein ) . in previous work ,we have emphasized the need to model covariance between measured quantities in the analysis of cluster scaling relations .here , we distinguish further between various contributing factors to such covariance : 1 . covariance that is intrinsic to the measurement process . for example, when multiple quantities are measured from an x - ray observation , poisson uncertainties due to photon counting affect these quantities in a coherent rather than independent way .covariance due to explicit use of one measurement to inform another .for example , if the compton signal from a sunyaev - zeldovich observation is measured within a radius determined from an x - ray observation , the statistical error in the radius determination coherently affects the errors on and on any x - ray quantities measured within that radius .covariance that is introduced by models that are fitted to the cluster data .the first concern above is straightforwardly addressed by jointly fitting or measuring all quantities of interest from the observations , and propagating the measurement errors using monte carlo sampling .the distribution of the resulting samples automatically contains all the information about the measurement covariance . unlike the first , the second and third issues results from decisions on the part of the observer . forthe second issue noted above , the use of monte carlo sampling to handle the error propagation can again allow the measurement covariance to be straightforwardly understood . in this paper , we are concerned with the third issue identified above , in particular as it applies to mass measurements of galaxy clusters based on the assumption of hydrostatic equilibrium ( hse ) , and scaling relations that are formed with such masses .we argue below that some widely employed procedures used to model cluster data introduce strong priors that can influence the resulting scaling relation constraints , and thus hamper our ability to perform robust astrophysical and cosmological measurements .fortunately , there are simple , alternative approaches that do not suffer from these problems , which are also discussed here .the paper is organized as follows .section [ sec : background ] provides a brief introduction to galaxy cluster mass measurements , scaling relations and the self - similar model . 
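A toy numerical illustration of the second covariance source listed above may be helpful: when two quantities are both measured within the same aperture r_Delta, the statistical error on that aperture propagates coherently into both, and Monte Carlo sampling of the aperture followed by re-measurement keeps the covariance automatically. The profile shapes and numbers below are invented purely for illustration.

```python
# Toy sketch: covariance induced by measuring two quantities within a shared,
# uncertain aperture.  Profiles and numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# toy enclosed profiles (arbitrary units), both increasing with radius
m_gas = lambda r: r ** 1.8
y_sz  = lambda r: r ** 2.1

r_true, r_err = 1.0, 0.08                 # measured aperture and its 1-sigma error
r_samples = rng.normal(r_true, r_err, 5000)

mg = m_gas(r_samples)                     # gas mass within the sampled aperture
y  = y_sz(r_samples)                      # SZ-like signal within the same aperture
print("correlation induced by the shared aperture:",
      round(np.corrcoef(mg, y)[0, 1], 3))
```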
in section [ sec : methods ] , we discuss the various methods for estimating hydrostatic masses that have been proposed , with particular emphasis on the priors that each methods imposes on both masses and the resulting scaling relations . in section [ sec : meta ] , we review and discuss results on the mass temperature relation slope from the literature , with attention to the impact of these modeling priors .our conclusions are summarized in section [ sec : summary ] .many galaxy cluster mass estimates in the literature are based on x - ray observations .x - ray data provide two observables that scale physically with total mass , namely the luminosity in the observed energy band and the temperature of the x - ray emitting , hot intracluster medium ( icm ) . under the assumption of spherical symmetry , spectral and surface brightness data measured in projectioncan be de - projected , yielding three - dimensional profiles of emissivity and temperature .these two can be combined to infer the icm density profile , ) , low - redshift clusters , and luminosity measured in the soft x - ray band ( e.g. 0.52.0 ) , this conversion is essentially independent of temperature . ] and thus the gas mass , as well as the bolometric luminosity ( e.g. * ? ? ?* ) . for clusters that are approximately spherical and close to hse, such data can also be used to constrain the total mass profile . specifically , hse implies a relationship between the density and temperature profiles of the icm , and , and the total mass , where is boltzmann s constant , is newton s constant , and is the mean molecular weight .conventionally , scaling relations are formed by relating the observables of interest to the total mass , , within a particular radius , , jointly defined by where is the critical density of the universe at the cluster s redshift .typical choices for range from 2500 ( intermediate radius ) to 200 ( approximately the virial radius ) . combining these equations ,one can immediately write ^{-1 } \left[\frac{k}{\mu{{\ensuremath{m_{\mathrm{p}}}}}g}\right ] t(r_\delta ) \ , \mathcal{f}(r_\delta ) , \\m_\delta & = & \left[\frac{4\pi}{3}\delta{{\ensuremath{\rho_{\mathrm{c}}}}}(z)\right]^{-1/2 } \left[\frac{k}{\mu{{\ensuremath{m_{\mathrm{p}}}}}g}\right]^{3/2 } t(r_\delta)^{3/2 } \ , \mathcal{f}(r_\delta)^{3/2 } , \nonumber\end{aligned}\ ] ] where clusters forming from idealized , spherical gravitational collapse with no additional heating or cooling are expected to have self - similar ( i.e. described by the same function of for every cluster ) gas and dark matter density profiles . assuming hse , the temperature profiles of clusters will also have a self - similar shape ( though not a common normalization ) .this case was studied by , who derived power - law predictions for scaling relations using masses defined by equation [ eq : radius ] : are written in terms of , the normalized hubble parameter . ]^{2/3 } , \nonumber \\{ { \ensuremath{\rho_{\mathrm{c}}}}}(z)^{1/2}\,y_\delta & \propto & \left [ { { \ensuremath{\rho_{\mathrm{c}}}}}(z)^{1/2}m_\delta \right]^{5/3 } , \nonumber \\ \frac{l_\delta}{{{\ensuremath{\rho_{\mathrm{c}}}}}(z)^{1/2 } } & \propto & \left [ { { \ensuremath{\rho_{\mathrm{c}}}}}(z)^{1/2}m_\delta \right]^{4/3}.\end{aligned}\ ] ] a constant gas mass fraction ( the first line ) is a direct consequence of the self - similar hypothesis , while the second follows from equation [ eq : rmsolution ] , since self - similarity implies that is a constant . 
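The following sketch evaluates a hydrostatic mass and the corresponding r_Delta and M_Delta numerically. The hydrostatic relation M(<r) = -(k T r)/(G mu m_p) * (dln rho_gas/dln r + dln T/dln r) and the overdensity condition M(r_Delta) = (4 pi / 3) Delta rho_c r_Delta^3 are the standard forms consistent with the equations quoted above; the particular density and temperature profiles, and all numerical values, are illustrative assumptions.

```python
# Sketch of a hydrostatic mass estimate and the corresponding r_Delta, M_Delta.
# Profile shapes and numbers are illustrative; only standard HSE relations are used.
import numpy as np
from scipy.optimize import brentq

k_B   = 1.380649e-16        # erg / K
G     = 6.674e-8            # cm^3 g^-1 s^-2
m_p   = 1.6726e-24          # g
mu    = 0.61                # mean molecular weight (assumed)
Mpc   = 3.0857e24           # cm
Msun  = 1.989e33            # g
rho_c = 9.2e-30             # g / cm^3, approximate critical density at z = 0

# Illustrative beta-model gas density (normalization irrelevant: only the
# logarithmic slope enters) and a mildly declining temperature profile.
r_c, beta, T0_keV = 0.15 * Mpc, 0.7, 7.0
rho_gas = lambda r: (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)
T       = lambda r: (T0_keV * 1.16e7) * (1.0 + r / (2.0 * Mpc)) ** (-0.3)  # Kelvin

def hse_mass(r, eps=1e-4):
    """Hydrostatic mass inside radius r via numerical logarithmic derivatives."""
    dlnrho = (np.log(rho_gas(r * (1 + eps))) - np.log(rho_gas(r * (1 - eps)))) / (2 * eps)
    dlnT   = (np.log(T(r * (1 + eps))) - np.log(T(r * (1 - eps)))) / (2 * eps)
    return -(k_B * T(r) * r) / (G * mu * m_p) * (dlnrho + dlnT)

def r_delta(delta=500.0):
    """Radius at which the mean enclosed density equals delta * rho_c."""
    f = lambda r: hse_mass(r) - (4.0 * np.pi / 3.0) * delta * rho_c * r ** 3
    return brentq(f, 0.05 * Mpc, 10.0 * Mpc)

if __name__ == "__main__":
    r500 = r_delta(500.0)
    print(f"r_500 ~ {r500 / Mpc:.2f} Mpc")
    print(f"M_500 ~ {hse_mass(r500) / Msun:.2e} Msun")
```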
here is the integrated , intrinsic sunyaev - zeldovich signal ( i.e. the thermal energy of the gas ) , and refers to the bremsstrahlung luminosity of the plasma ( ) . may be the temperature at radius or some weighted average of within , since the scalings are identical given self similarity [ is constant ] . for simplicity, we will henceforth eliminate most of the constants , setting in practice , the redshift dependence represented by must be properly accounted for ; however , it is incidental to the focus of this work . by eliminating these terms ,we effectively consider the simplified case of scaling relations at a single redshift and fixed density contrast . as an example, we consider the isothermal model .this case is deliberately simplistic ( indeed , the results below are well known ) , but it serves to illustrate some features of hse mass estimation using parametrized models that are relevant to our discussion in section [ sec : methods ] . in this model, the three - dimensional gas density and temperature profiles are parametrized by the gas mass is given by where is the gauss hypergeometric function .applying the hydrostatic equation , the mass profile is which yields a solution for the characteristic radius , from this , we can write the relationship between temperature and mass for clusters described by this model , in the self - similar case , all clusters have the same values of and , and so the self - similar scaling follows directly from equation [ eq : betatm ] . furthermore , the hypergeometric function in ( equation [ eq : betamgas ] ) assumes a constant value , leading to a constant gas mass fraction , and the other scaling laws in equation [ eq : selfsimilar ] follow straightforwardly : conversely , departures from self - similarity result in changes to these scaling laws . for example , consider the case in which remains constant , but varies from cluster to cluster .the expectation value of the characteristic mass at fixed temperature can be written as ^{3/2 } , \nonumber\\ & = & { \ensuremath{\left\langle \left.\beta^{3/2 } \right| t_0 \right\rangle } } \left[\frac{3t_0}{1+\left({{\ensuremath{r_{\mathrm{c}}}}}/r_\delta\right)^2}\right]^{3/2},\end{aligned}\ ] ] where is the distribution of values for clusters with temperature .thus , if varies systematically with temperature as , then the slope implied by this model is modified to ( figure [ fig : simple ] ) . the generalization when both and vary is straightforward , and the effect on the other scaling lawscan be derived similarly .when deriving hydrostatic cluster masses , a common practice is to fit parametric functions for the three - dimensional gas density and temperature profiles to the observed surface brightness and temperature data , and then to derive the total cluster mass profile using equation [ eq : hse ] . for comparison with the other methods described below, it should be noted that selecting parametrized models for and is completely equivalent ( via equation [ eq : hse ] ) to choosing parametrizations for and , and thus implicitly imposes a prior on the form of . 
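The slope modification just described is easy to verify numerically. In the simplified units of the text, the isothermal beta model gives r_Delta^2 = 3 beta T - r_c^2 and hence M_Delta = (3 beta T - r_c^2)^(3/2), so a systematic trend beta ~ T^eps steepens the fitted mass-temperature slope from 3/2 towards 3(1+eps)/2; this value is our reading of the garbled expression in the source, and the parameter values below are illustrative.

```python
# Numerical sketch of the isothermal beta-model M-T relation in the simplified
# units of the text (constants in the hydrostatic and overdensity equations = 1).
# With r_c held fixed and beta ~ T^eps, the fitted slope approaches 3*(1+eps)/2.
import numpy as np

def m_delta(T, beta, r_c):
    """M(r) = 3*beta*T*r^3/(r^2+r_c^2) and M(r_Delta) = r_Delta^3 imply
    r_Delta^2 = 3*beta*T - r_c^2, so M_Delta = (3*beta*T - r_c^2)**1.5."""
    return (3.0 * beta * T - r_c ** 2) ** 1.5

T = np.logspace(0.5, 1.5, 50)             # temperatures spanning a decade
r_c = 0.5
for eps in (0.0, 0.2):
    beta = 0.7 * (T / T[0]) ** eps        # systematic beta-T trend (eps=0: self-similar)
    M = m_delta(T, beta, r_c)
    slope = np.polyfit(np.log(T), np.log(M), 1)[0]
    print(f"eps = {eps:.1f}:  fitted M-T slope = {slope:.2f}  "
          f"(expected ~ {1.5 * (1 + eps):.2f})")
```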
because these functions share parameters , varying model parameters produces covariance in and .that is , the choice of parametrized models also constitutes an implicit prior on the scaling relations , as described below .generalizing equation [ eq : betamtexpect ] , we can write the mean mass temperature relation resulting from fits to parametrized and models as ( equation [ eq : rmsolution ] ) where represents the full set of parameters describing and , and is the measured gas temperature used to form the scaling relation .typically , is an emission - weighted average , dominated by the signal from relatively small radii , in which case is a measure of the overall shape of the temperature profile . for self - similar clusters ,both and are constant , to _ measured _ vary , even for self - similar clusters .we do not consider such effects here . ] and the self - similar slope of is trivially recovered . however ,if either of these quantities varies systematically with mass , the slope may be perturbed from the self - similar value .the explicit appearance of the exponent in equation [ eq : parammt ] makes clear that , in practice , our ability to detect departures from self - similarity using this approach depends on measuring the shape of the temperature and density profiles at . in the case of temperature , this is a challenging task for current x - ray observatories at even intermediate cluster radii ( e.g. , a common choice ) .furthermore , and not incidentally , the priors on the forms of the temperature and density ( or mass , equivalently ) profiles must be flexible enough to admit departures from self - similarity . in practice ,the parametrizations employed are generally motivated by observable features of the surface brightness and temperature profiles , raising the possibility that the relatively low signal - to - noise at intermediate radii , and subsequent assumption of regular behavior in the profiles [ i.e. similar values of and ] , produces a bias favoring self - similarity .conversely , if the parametrizations provide too much flexibility near to be effectively constrained by the data , then departures from self - similarity can not be constrained either . in the extreme case where the data provide no constraint at all on the profiles near , the bracketed expression in equation [ eq : parammt ] simply samples the prior ( the allowed region in model parameter space ) .if that prior is independent of , as common practice would dictate , the resulting scaling relation must have the self - similar slope on average , with the prior simply determining the size of the scatter about the relation .this behavior is explicitly demonstrated in right panel of figure [ fig : simple ] , which shows a mass temperature relation resulting from random realizations of a non - isothermal , parametrized cluster model ( see appendix [ sec : sims ] ) .the randomized density and temperature profile models vary widely in shape and normalization , but because these variations have no mass dependence , the structure of equation [ eq : parammt ] results in a self - similar mean scaling relation .thus , the inability of current observatories to constrain high - resolution temperature profiles at the radii of interest poses a dilemma for the fully parametric approach to mass estimation . allowing too little freedom in the adopted forms of the and profiles risks assuming implicitly that clusters are self - similar . 
on the other hand , allowing too much freedomcan result in the profiles at being so poorly constrained that departures from self - similarity in individual clusters can not be constrained either ; based on the argument above , this case may well also result in an apparently self - similar scaling relation on average .apart from temperature , the fully parametric mass estimate depends on the shape of at .both and have a dependence on this quantity , being integrals of weighted towards large radii. however , the surface brightness profile can be determined at much higher resolution than temperature from x - ray data , meaning that priors on the shape of need not be as influential .provided that the choice of density parametrization is not overly restrictive , we would thus expect biases towards self similarity to be less of a concern for the relation compared to or . the x - ray luminosity mass relation should be essentially free of this bias , since it is dominated by emission from the dense gas at cluster centers , at radii typically .it is therefore interesting to note that the relation is the only one of these scalings for which the fully parametric approach to mass estimation has consistently measured strong departures from self - similarity in the slope ( e.g. ; see also section [ sec : meta ] ) .an alternative method for determining cluster hydrostatic mass profiles was developed by ( * ? ? ?* see also ) . in this approach , a functional form for the total mass profileis explicitly adopted , and used in conjunction with a non - parametric description of the surface brightness to predict the temperature in concentric shells .temperature measurements from spectral data then provide the means to constrain the parameters of the mass model .in contrast to fully parametric methods , the semi - parametric approach does not restrict the forms of the icm density or temperature profiles .apart from the regularization imposed by the size of the annular regions analyzed ( a factor in all of the approaches discussed here ) , these profiles are not constrained a priori .whereas the fully parametric approach implies an implicit prior on the form of , the semi - parametric method explicitly adopts a prior on the form of this function , typically motivated by numerical simulations of cluster formation ( e.g. the model of , hereafter ) . as a consequence ,the semi - parametric approach does not impose a prior on the form of the mass temperature relation ( or the other scaling relations ) in the sense of equation [ eq : parammt ] .that is not to say that the procedure imposes no priors at all ; the choice of a particular form for the mass profile explicitly does so . in this case , however , the effect of the prior on the mass reconstruction is completely transparent , and the goodness of fit furthermore provides a means to evaluate the mass model , a significant advantage .the most general possibility for hydrostatic mass analysis is a fully non - parametric de - projection .examples include the methods introduced by and , in which numerical derivatives of non - parametric icm density and temperature profiles are directly used to reconstruct the enclosed mass , subject to the constraint that mass increase with radius ; and by , in which the total densities in concentric spherical shells are free model parameters . 
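A minimal sketch of the semi-parametric idea may be useful at this point: adopt an explicit parametric total-mass profile (an NFW profile is used below as an assumed example), keep the gas density non-parametric as a table of shell values, and predict the shell temperatures from hydrostatic equilibrium for comparison with the spectral data. The profiles, the outer boundary condition and all numbers are illustrative only.

```python
# Sketch of a semi-parametric temperature prediction: parametric mass profile
# (NFW, assumed) + tabulated ("non-parametric") gas density + hydrostatic
# equilibrium.  All profiles, the boundary condition and numbers are illustrative.
import numpy as np

G, m_p, mu, k_B = 6.674e-8, 1.6726e-24, 0.61, 1.380649e-16   # cgs
Mpc, keV = 3.0857e24, 1.602e-9

def m_nfw(r, rs=0.4 * Mpc, rho_s=1.0e-25):
    """NFW enclosed mass: 4*pi*rho_s*rs^3*[ln(1+x) - x/(1+x)], x = r/rs."""
    x = r / rs
    return 4.0 * np.pi * rho_s * rs ** 3 * (np.log1p(x) - x / (1.0 + x))

# 'Non-parametric' gas density: just a table of (radius, n_gas) values.
r_shells = np.linspace(0.05, 1.5, 60) * Mpc
n_gas = 1.0e-3 * (1.0 + (r_shells / (0.15 * Mpc)) ** 2) ** (-1.05)   # cm^-3

def predict_kT(r, n, mass_func, kT_out=4.0 * keV):
    """Integrate d(n*kT)/dr = -n*mu*m_p*G*M(<r)/r^2 inward from the outer shell."""
    integrand = n * mu * m_p * G * mass_func(r) / r ** 2
    # cumulative integral from each shell out to the boundary (trapezoidal rule)
    tail = np.array([np.trapz(integrand[i:], r[i:]) for i in range(len(r))])
    return (n[-1] * kT_out + tail) / n

if __name__ == "__main__":
    kT = predict_kT(r_shells, n_gas, m_nfw)
    for i in (0, len(r_shells) // 2, -1):
        print(f"r = {r_shells[i]/Mpc:4.2f} Mpc   kT = {kT[i]/keV:4.1f} keV")
```

The predicted shell temperatures would then be compared with the spectrally measured ones, so that the goodness of fit directly tests the adopted mass model.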
in a sense , these approaches are , respectively , logical extensions of the fully parametric methods , which use the derivatives of and to derive the mass , and the semi - parametric methods , which model the mass profile directly .however , these non - parametric methods require very high quality data compared to methods which impose some kind of prior on the mass distribution ; as has already been mentioned , x - ray data typically can not resolve the temperature gradient near .the use of these approaches has thus been relatively limited .finally , the explicit assumption of hse can be bypassed by estimating mass using a proxy ( e.g. or ) from an external scaling relation .this approach clearly carries its own prior , namely the validity of the mass proxy , which must be verified and calibrated using true mass determinations .there are also restrictions on what scaling relations can sensibly be investigated using this technique ; for example , given its definition , -derived masses should not be used to investigate scalings with gas mass or temperature . on the other hand , for hot clusters ( ) , is a good mass proxy whose determination is essentially independent of temperature , so masses estimated from can reasonably be used to study the relation in this mass range ( e.g. ) .the appropriate use of mass proxies can thus potentially increase the available sample size and redshift range for studying some scaling relations .the comparison of scaling relations derived in different works is complicated by a variety of potential systematics , including ( potentially redshift - dependent ) selection effects , instrument cross - calibration , and the use of different regression methods over the years , in addition to the issues discussed in this paper ( see also appendix [ sec : comments ] ) . nevertheless , it is interesting to test whether there is any trend in scaling relation results from the literature with the mass modeling technique employed .here we focus on mass temperature relations measured from x - ray data , which have a particularly long history .a sampling of slopes and reported uncertainties from the literature over the past 12 years is shown in figure [ fig : mtslopes ] . in some cases, we have included multiple results from the same authors , where different data sets or mass models produced noticeably different results . in the figure , red circles are results obtained by fitting fully parametric and models .blue triangles indicate results using a non - parametric description of the icm along with an explicit prior on the form of the total mass profile ( semi - parametric methods ) .the green square reflects a study of massive clusters where gas mass was used as a proxy for total mass .temperature measurements used in the displayed results are all emission - weighted averages , and masses in a given study are estimated at a constant value of ( see below ) .and models ; blue triangles show results using a non - parametric description of the icm along with an explicit prior on the form of the total mass profile ; and the green square uses gas mass as a proxy for total mass .the dashed line shows the self - similar value of the slope . ] among the results based on fits to very simple and models are : 1 .two results from where masses were estimated from x - ray observations .the first slope , , is from a heterogeneous sample with measured temperature profiles .for the second , the isothermal model was fitted to the data of , resulting in a slope of . 
in both cases ( as well as their third result , below ) ,masses were rescaled to .2 . the first two results from use a compilation of clusters for which resolved temperature profiles were available , with masses estimated at by fitting a model density profile and assuming a polytropic relationship between gas density and temperature .the first slope , , uses the entire sample , while the second , , was obtained by excluding 4 clusters with measured .the third result shown from and the slope from were derived by fitting different subsets of the data of , respectively finding slopes of and .the analysis provides masses at from isothermal model fits .more recent works using and fits generally have used more complicated models , which are detailed in the respective papers : 1 . used masses measured by , who fitted functions for and to x - ray data .the figure shows their slope of for clusters with at , the largest radius for which no extrapolation was required .their results at other radii are very similar .2 . fitted parametrized models to 13 clusters , obtaining mass temperature relations at and 500 with both emission- and mass - weighted temperatures . in the figure , we show the emission - weighted slope for , .the best fitting values in the other cases ranged from 1.51 to 1.64 .the analysis was extended to 17 clusters in , resulting in a slope of .3 . fitted a sample of 23 groups and 14 clusters , spanning , obtaining a slope of at .4 . the models of to 28 clusters with , finding a slope of at .the mass temperature slopes that rely on fully parametric fits to and tend to cluster in the 1.501.65 range .the exceptions are one result from ( * ? ? ?* using the isothermal model ) and one from ( * ? ? ?* using a polytropic model ) . in the former case , commentthat there exists a clear correlation between measured values of and in the data , . based on section [ sec : isothermalbeta ], one might expect such a correlation to result in a steeper slope when the isothermal model is used .the simple formula from section [ sec : isothermalbeta ] over - predicts the size of the effect : implies a yet steeper slope than was observed .the full explanation likely involves the effect of the third fit parameter , , as well as the measurement errors and the method used to fit the scaling relation . in , eliminating the clusters with the smallest measurements ( which also happen to be at the low - temperature end of the data set ) reduces both the empirical correlation and the mass temperature slope ( compare their first and second results in the figure ) , supporting the qualitative notion that model parameter correlations contribute to steepening of the slope .similarly , the data set used by ( * ? ? ?* their third result above ) and has an empirically smaller correlation between and , and fits to a correspondingly shallower slope .the works using more complicated and models generally show less strong departures from the self - similar value .relatively fewer authors have used an explicit prior on the form of the mass profile , along with a non - parametric description of the icm : 1 .the third result from employs masses from , obtaining a slope of .the mass profiles were constrained by a combination of galaxy velocity dispersion and x - ray temperature data .2 . fitted a mass profile motivated by to x - ray data for 24 hot ( ) clusters , obtaining a slope of at . 
however , when they allow the normalization of the scaling relation to evolve with redshift , the measured mass temperature slope is steeper , .3 . fitted an mass profile to non - parametric surface brightness and temperature data for 42 massive , dynamically relaxed clusters , obtaining hse masses at .the profile provides an acceptable fit to the data ( see also ) . combining these mass measurements with temperatures from an extension of the work in , we obtain a mass temperature slope of ( see appendix [ sec : a08mt ] ) . the final result shown in the figureis from , who used gas mass as a proxy to estimate total masses at for a sample of 94 hot , massive clusters , obtaining a mass temperature slope of . because the gas mass fraction was calibrated using the data of , the two results are not entirely independent . on the other hand ,relatively few of the clusters are in the data set , so this dependence should largely be limited to the normalization of the scaling relation .apart from the slope , which has a large uncertainty , the results that use explicit priors on the form of the mass profile or employ gas mass as a proxy appear to prefer a relatively steep slope compared with the other works , . given that mass models such as the profile are well motivated by numerical simulations and provide an acceptable fit to cluster data ( e.g. ) , the segregation apparent in figure [ fig : mtslopes ] suggests that the implicit priors in fully parametric and models bias the resulting slopes towards the self - similar value .in this paper , we have discussed the influence of priors on hydrostatic mass estimates of clusters and on the resulting mass observable scaling relations .the use of fully parametric gas density ( or x - ray brightness ) and temperature profiles , similar to those commonly used in the literature , introduces an implicit prior on the form of the mass profile via the hydrostatic equation .furthermore , the structure of the prior thus imposed results in an implicit prior on the cluster scaling relations .if the parametrized models employed are insufficiently flexible , or conversely if they are too general to be constrained at the radii of interest , then constraints on the scaling relations will be biased towards having self - similar slopes .alternative techniques for hydrostatic mass measurement exist which , by construction , do not suffer from this bias .the most common of these is a semi - parametric approach , in which a parametric prior on the form for the mass profile is explicitly adopted , with the icm described independently and non - parametrically .typically , the priors used here are motivated by the results of numerical simulations .an advantage of the semi - parametric approach is that it requires no a priori assumptions about the potentially complex form of the icm density and temperature profiles , and that the applicability of the mass profile model can be straightforwardly evaluated through the goodness of fit .we comment further on the relative merits of various methods , and offer general recommendations , in appendix [ sec : comments ] . 
in the literature ,results for the mass temperature slope obtained by fitting parametric and profiles tend to cluster relatively near the self - similar value .semi - parametric analyses appear to prefer a significantly steeper mass temperature relation , although there are relatively few such works to consider .while a variety of systematic effects can potentially affect the scaling relations , this segregation of values for the slope suggests that the priors imposed during mass estimation have a significant influence that needs to be considered carefully . as cluster surveys at all wavelengthsare expanded to higher and higher redshifts , and are used to investigate more complex cosmological questions , accurate calibration of the relevant scaling relations will only become more important .gravitational lensing will make a unique contribution to these efforts , particularly in assessing the residual bias in icm - based mass estimates due to the hse assumption .nevertheless , icm - based mass measurements for relaxed systems will remain an important ingredient in cluster cosmology due to the higher precision and lower systematic scatter of individual estimates compared to lensing .it is therefore critical , going forward , that the priors employed in these measurements be minimal , straightforwardly testable , and well understood .the authors are grateful to mark voit for interesting and insightful comments .am was supported by an appointment to the nasa postdoctoral program at the goddard space flight center , administered by oak ridge associated universities through a contract with nasa .swa acknowledges supported from the u.s .department of energy under contract number de - ac02 - 76sf00515 .s. w. , evrard a. e. , mantz a. b. , 2011 , , in press , arxiv:1103.4829 s. w. , rapetti d. a. , schmidt r. w. , ebeling h. , morris r. g. , fabian a. c. , 2008 , , 383 , 879 s. w. , schmidt r. w. , ebeling h. , fabian a. c. , van speybroeck l. , 2004 , , 353 , 457 s. w. , schmidt r. w. , fabian a. c. , 2001 , , 328 , l37 s. , borgani s. , pierpaoli e. , dolag k. , ettori s. , morandi a. , 2009 , , 394 , 479 j. s. , bautz m. w. , arabadjis g. , 2004 , , 617 , 303 m. , pointecouteau e. , pratt g. w. , 2005 , , 441 , 893 a. c. , hu e. m. , cowie l. l. , grindlay j. , 1981 , , 248 , 47 a. c. , sanders j. s. , allen s. w. , crawford c. s. , iwasawa k. , johnstone r. m. , schmidt r. w. , taylor g. b. , 2003 , , 344 , l43 a. , reiprich t. h. , bhringer h. , 2001 , , 368 , 749 w. et al . , 2005 , , 635 , 894 y. , 1997 , ph.d .thesis , univ .tokyo , ( 1997 ) a. , carlin j. b. , stern h. s. , rubin d. b. , 2004 , bayesian data analysis .chapman & hall / crc d. j. , mushotzky r. f. , scharf c. a. , 1999 , , 520 , 78 a. m. , davis d. s. , mushotzky r. , 2010 , , 709 , l103n. , 1986 , , 222 , 323 b. c. , 2007 , , 665 , 1489 a. , hoekstra h. , babul a. , henry j. p. , 2008, , 384 , 1567 a. , allen s. w. , ebeling h. , rapetti d. , 2008 , , 387 , 1179 a. , allen s. w. , ebeling h. , rapetti d. , drlica - wagner a. , 2010a , mnras , 406 , 1773 a. , allen s. w. , rapetti d. , ebeling h. , 2010b , mnras , 406 , 1759 b. r. , nulsen p. e. j. , 2007 , , 45 , 117 a. , ettori s. , moscardini l. , 2007 , , 379 , 518 d. , vikhlinin a. , kravtsov a. v. , 2007 , , 655 , 98 j. f. , frenk c. s. , white s. d. m. , 1997 , , 490 , 493 p. e. j. , powell s. l. , vikhlinin a. , 2010 , , 722 , 55 c. , enlin t. a. , springel v. , jubelgas m. , dolag k. , 2007 , , 378 , 385 e. , arnaud m. , pratt g. w. , 2005 , , 435 , 1 p. 
, biviano a. , bhringer h. , romaniello m. , voges w. , 2005 , , 433 , 431 e. , tormen g. , moscardini l. , 2004 , , 351 , 237 t. h. , bhringer h. , 2002 , , 567 , 716 e. et al . , 2010 , , 708 , 645 c. l. , 1988 , x - ray emission from clusters of galaxies s. , 1996 , , 48 , l119 r. w. , allen s. w. , 2007 , , 379 , 209 r. w. , allen s. w. , fabian a. c. , 2001 , , 327 , 1057 a. , burke d. j. , aldcroft t. l. , worrall d. m. , allen s. , bechtold j. , clarke t. , cheung c. c. , 2010 , , 722 , 102 a. et al ., 2011 , , 331 , 1576 m. , voit g. m. , donahue m. , jones c. , forman w. , vikhlinin a. , 2009 , , 693 , 1142 a. et al . , 2009a , , 692 , 1033 a. , kravtsov a. , forman w. , jones c. , markevitch m. , murray s. s. , van speybroeck l. , 2006 , , 640 , 691 a. et al ., 2009b , , 692 , 1060 g. m. , 2005 , reviews of modern physics , 77 , 207 d. a. , jones c. , forman w. , 1997 , , 292 , 419 h. , rozo e. , wechsler r. h. , 2010 , , 713 , 1207as discussed in section [ sec : fullparametric ] , when the model parameters that determine the density and temperature profiles at are unconstrained or poorly constrained , the fully parametric approach can be biased towards self - similar scaling relations . as an explicit illustration of this ,consider a model description of the gas density in conjunction with a simple , non - isothermal temperature profile , ^{c / b}}.\end{aligned}\ ] ] this function is a simplification of the form used by , namely eliminating the ` cool core ' term , which is intended to describe the profile at small radii .to illustrate the case where these models are effectively unconstrained , we generated random realizations by sampling independent , uniform values of the model parameters within the ranges given in table [ tab : nonisobetarand ] .the radial scales in the density and temperature models , and , were allowed to take values between zero and .this is the maximum value of for which the isothermal model has a real solution for ( equation [ eq : isothbetardelta ] ) ; while the same is not true of this non - isothermal model , allowing larger values does not change the resulting picture qualitatively . was allowed to vary over a range somewhat wider than that seen in observations , while the temperature exponents , and , were varied over approximately the range allowed by . to provide an adequate baseline to observe the resulting scaling behavior , the temperature normalization , ,was sampled uniformly in the logarithm between 1 and 1000 ..allowed ranges for uniformly distributed parameters of the non - isothermal model , with defined as .the gas density normalization is not shown , since it has no effect on the mass temperature relation .[ cols="^,^,^",options="header " , ]
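A compact version of this randomized-profile experiment is sketched below. Each realization draws beta-model density and non-isothermal temperature shape parameters independently of the temperature normalization T0, with the scale radii expressed relative to r_Delta (an assumption consistent with the bounded radial scales described above), and evaluates the characteristic hydrostatic mass in the simplified units of the text. Because the shape factors carry no mass dependence, the fitted mean mass-temperature slope comes out self-similar; the parameter ranges are illustrative stand-ins for the table, which did not survive extraction.

```python
# Sketch of the appendix's randomized-profile experiment.  In simplified units,
#   M_Delta = [ T(r_Delta) * F(r_Delta) ]^(3/2),   F = -(dln n/dln r + dln T/dln r),
# and the shape parameters below are drawn independently of T0.
import numpy as np

rng = np.random.default_rng(1)
n_real = 5000

log_T0 = rng.uniform(0.0, 3.0, n_real)            # T0 log-uniform in [1, 1000]
T0     = 10.0 ** log_T0
beta   = rng.uniform(0.5, 1.0, n_real)
u_c    = rng.uniform(0.05, 1.0, n_real)           # r_c / r_Delta (assumed range)
u_t    = rng.uniform(0.05, 1.0, n_real)           # r_t / r_Delta (assumed range)
b      = rng.uniform(1.0, 3.0, n_real)
c      = rng.uniform(0.1, 1.0, n_real)

T_at_rdelta = T0 * (1.0 + u_t ** (-b)) ** (-c / b)            # temperature at r_Delta
F = 3.0 * beta / (1.0 + u_c ** 2) + c / (1.0 + u_t ** b)      # -(dln n + dln T)/dln r
M_delta = (T_at_rdelta * F) ** 1.5

slope = np.polyfit(log_T0, np.log10(M_delta), 1)[0]
print(f"mean M-T slope over {n_real} random realizations: {slope:.3f} (self-similar: 1.5)")
```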
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
transportation processes are common in complex natural and engineering systems , examples of which include transmission of data packets on the internet , public transportation systems , migration of carbon in biosystems , and virus propagation in social and ecosystems . in the past decade , transportation dynamics have been studied extensively in the framework of complex networks , where a phenomenon of main interest is the transition from free flow to traffic congestion .for example , it is of both basic and practical interest to understand the effect of network structure and routing protocols on the emergence of congestion . despite these works ,relatively little attention has been paid to the role of _ individual mobility_. the purpose of this paper is to address how this mobility affects the emergence of congestion in transportation dynamics .the issue of individual mobility has become increasingly fundamental due to the widespread use of ad - hoc wireless communication networks .the issue is also important in other contexts such as the emergence of cooperation among individuals and species coexistence in cyclic competing games .recently , some empirical data of human movements have been collected and analyzed . from the standpoint of complex networks ,when individuals ( nodes , agents ) are mobile , the edges in the network are no longer fixed , requiring different strategies to investigate the dynamics on such networks than those for networks with fixed topology . in this paper , we shall introduce an intuitive but physically reasonable model to deal with transportation dynamics on such mobile / non - stationary networks .in particular , we assume in our model that communication between two agents is possible only when their geographical distance is less than a pre - defined value , such as the case in wireless communication .information packets are transmitted from their sources to destinations through this scheme . to be concrete , we assume the physical region of interest is a square in the plane , and we focus on how the communication radius and moving speed may affect the transportation dynamics in terms of the emergence of congestion .our main results are the following .firstly , we find that congestion can occur for small communication range , limited forwarding capability and low mobile velocity of agents .secondly , the transportation throughput exhibits a hierarchical structure with respect to the moving speed and there is in fact an algebraic power law between the throughput and the communication radius , where the power exponent tends to assume a smaller value for higher moving speed . to explain these phenomena in a quantitative manner, we develop a physical theory based on solutions to the fokker - planck equation under initial and boundary conditions that are specifically suited with the transportation dynamics on mobile - agent networks .besides providing insights into issues in complex dynamical systems such as contact process , random - walk theory , and self - organized dynamics , our results will have direct applications in systems of tremendous importance such as ad - hoc communication networks . in sec .[ sec : model ] , we describe our model of mobile agents in terms of the transportation rule and the network structure . in sec .[ sec : numerics ] , we present numerical results on the order parameter , the critical transition point and the average hopping time . 
in sec .[ sec : theory ] , a physical theory is presented to explain the numerical results .a brief conclusion is presented in sec .[ sec : conclusion ] .in our model , agents move on a square - shaped cell of size with periodic boundary conditions .agents change their directions of motion as time evolves , but the moving speed is the same for all agents .initially , agents are randomly distributed on the cell . after each time step, the position and moving direction of an arbitrary agent are updated according to where and are the coordinates of the agent at time , and is an -independent random variable uniformly distributed in the interval $ ] .each agent has the same communication radius .two agents can communicate with each other if the distance between them is less than . at each time step , there are packets generated in the system , with randomly chosen source and destination agents , and each agent can deliver at most packet ( we set in this paper ) toward its destination . to transport a packet , an agent performs a local search within a circle of radius .if the packet s destination is found within the searched area , it will be delivered directly to the destination and the packet will be removed immediately .otherwise , the packet is forwarded to a randomly chosen agent in the searched area .the queue length of each agent is assumed to be unlimited and the first - in - first - out principle holds for the queue . the transportation process is schematically illustrated in fig .[ fig : schemeic ] . the communication network among the mobile agents can be extracted as follows .every agent is regarded as a node of the network and a wireless link is established between two agents if their geographical distance is less than the communication radius . due to the movement of agents ,the network s structure evolves from time to time .the network evolution as a result of local mobility of agents is analogous to a locally rewiring process . as shown in fig .[ fig : schemeic ] , nodes 1 and 2 are disconnected while node 3 and node 4 are connected at time . at time , nodes 1 and 2 depart from each other and become disconnected while nodes 3 and 4 approach each other and establish a communication link .note that the mobile process does not hold the same number of links at different time , which is different from the standard rewiring process where the number of links is usually fixed .we define an agent s degree at a specific time step as the number of links at that moment .figure [ fig : degree ] shows that the degree distribution of networks of mobile agents exhibits the poisson distribution : where is the degree , is the proportion of nodes with degree and is the average degree of network . as shown in fig .[ fig : degree ] , the average degree increases as the communication radius increases and the peak value of decreases as increases .we also investigate the relative size of the largest connected component and the clustering properties of the network in terms of the clustering coefficient .the relative size of the largest connected component is defined as where and is the size of the largest connected component and the total network respectively .the clustering coefficient for node is defined as the ratio between the number of edges among the neighbors of node and its maximum possible value , , i.e. , the average clustering coefficient is the average of over all nodes in the network . the insert of fig .[ fig : degree ] shows that and increase as the communication radius increases . 
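The mobility and network-extraction rules just described are straightforward to reproduce. The sketch below moves N agents at fixed speed in randomly re-drawn directions on a periodic L x L cell, links any two agents closer than the communication radius r, and compares the measured mean degree with N pi r^2 / L^2, the value expected for uniformly distributed agents (this expected value is our assumption; the text only states that the degree distribution is Poisson). All parameter values are illustrative.

```python
# Minimal sketch of the mobile-agent model: fixed speed, random directions,
# periodic boundaries, links within the communication radius r.
import numpy as np

def step(pos, v, L, rng):
    theta = rng.uniform(-np.pi, np.pi, size=len(pos))
    pos = pos + v * np.column_stack((np.cos(theta), np.sin(theta)))
    return np.mod(pos, L)                      # periodic boundary conditions

def degrees(pos, r, L):
    d = np.abs(pos[:, None, :] - pos[None, :, :])
    d = np.minimum(d, L - d)                   # minimum-image distance
    adj = np.hypot(d[..., 0], d[..., 1]) < r
    np.fill_diagonal(adj, False)
    return adj.sum(axis=1)

if __name__ == "__main__":
    N, L, r, v = 1000, 10.0, 0.6, 0.05
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, L, size=(N, 2))
    k_samples = []
    for t in range(200):
        pos = step(pos, v, L, rng)
        k_samples.append(degrees(pos, r, L))
    k = np.concatenate(k_samples)
    print(f"measured <k> = {k.mean():.2f},  expected N*pi*r^2/L^2 = {N*np.pi*r*r/L**2:.2f}")
```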
in particular , when the value of exceeds a certain value .e.g. , , high values of and is attained .we also note that the motion speed does not influence the statistical properties of the communication network . in general, the communication network caused by limited searching area and mobile behavior is of geographically local connections associated with poisson distribution of node degrees and dense clustering structures .to characterize the throughput of a network , we exploit the order parameter introduced in ref . : where , indicates the average over a time window of width , and represents the total number of data packets in the whole network at time . as the packet - generation rate is increased through a critical value of , a transition occurs from free flow to congestion . for , due to the absence of congestion , there is a balance between the number of generated and that of removed packets so that , leading to .in contrast , for , congestion occurs and packets will accumulate at some agents , resulting in a positive value of .the traffic throughput of the system can thus be characterized by the critical value which is on average the largest number of generated packets that can be handled at each time without congestion .figure [ fig : order](a ) exemplifies the transition in the order parameter from free flow to congestion state at some critical value .we find that depends on both the moving speed and the communication radius .figure [ fig : order](b ) shows the dependence of on for different values of .we observe a hierarchical structure in the dependence .specifically , when is less or larger than some values , remains unchanged at a low and a high value , respectively , regardless of the values of .the transition between these two values of is continuous .the hierarchical structure can in fact be predicted theoretically in a quantitative manner ( to be described ) .figure [ fig : rct](a ) shows the dependence of on for different values of , which indicates an algebraic power law : , where is the power - law exponent .we find that the power law holds for a wide range of and the exponent is inversely correlated with . for example , for , but for large values of , say , we have . when reaches the size of the square cell , is close to as every agent always stays in the searching range of all others and almost all packets can arrive at their destinations in a single time step . to gain additional insights into the dependence of on the parameters and so as to facilitate the development of a physical theory , we explore an alternative quantity , the average hopping time in the free flow state which , for a data packet , is defined as the number of hops from its source to destination .as we will see , can not only be calculated numerically , it is also amenable to theoretical analysis , providing key insights into the theory for .representative numerical results for are shown in fig .[ fig : rct](b ) .we see that for large , scales with as and , as both and are increased , decreases .we now present a physical theory to explain the power law behaviors associated with and then .a starting point is to examine the limiting case of , where can be estimated analytically . 
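The order parameter and the congestion transition can be probed with a direct, if crude, simulation of the routing rule: deliver a packet if its destination lies within radius r of the current holder, otherwise forward it to a randomly chosen neighbour, with each agent handling at most C packets per step. The quantity reported below, eta ~ <Delta N_p>/(R Delta t), is one common normalization of this order parameter; since the exact expression is garbled in the source, treat it, and the handling of isolated agents (who simply keep their packets), as assumptions. eta stays near zero in free flow and becomes positive once R exceeds R_c.

```python
# Crude simulation of the packet-routing model and of the order parameter eta.
# All parameter values are illustrative; eta ~ <dN_p/dt> / R is an assumed form.
import numpy as np
from collections import deque

def run(R, N=300, L=5.0, r=0.5, v=0.05, C=1, T=2000, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, L, (N, 2))
    queues = [deque() for _ in range(N)]
    n_packets = []
    for t in range(T):
        theta = rng.uniform(-np.pi, np.pi, N)
        pos = np.mod(pos + v * np.column_stack((np.cos(theta), np.sin(theta))), L)
        for _ in range(R):                                   # R new packets per step
            s, d = rng.choice(N, 2, replace=False)
            queues[s].append(d)
        diff = np.abs(pos[:, None] - pos[None, :])
        diff = np.minimum(diff, L - diff)
        adj = np.hypot(diff[..., 0], diff[..., 1]) < r
        np.fill_diagonal(adj, False)
        for i in range(N):                                   # each agent handles <= C packets
            for _ in range(min(C, len(queues[i]))):
                dest = queues[i].popleft()
                nbrs = np.flatnonzero(adj[i])
                if len(nbrs) == 0:
                    queues[i].append(dest)                   # isolated: keep the packet
                elif adj[i, dest]:
                    pass                                     # destination in range: delivered
                else:
                    queues[rng.choice(nbrs)].append(dest)    # random forwarding
        n_packets.append(sum(len(q) for q in queues))
    n = np.array(n_packets[T // 2:])                         # discard transient
    return (n[-1] - n[0]) / (R * len(n))                     # eta ~ <dN_p/dt> / R

if __name__ == "__main__":
    for R in (1, 3, 6, 10):
        print(f"R = {R:2d}   eta = {run(R):.3f}")
```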
in particular , assume that a particle walks randomly on an infinite plane .there are many holes of radius on the plane .holes form a phalanx and the distance between two nearby holes is .the particle will stop walking when it falls in a hole .the underlying fokker - planck equation is (\textbf{r},t),\ ] ] where is the probability density function of a particle at location at time , is the diffusion coefficient , is the potential energy , inside holes and outside holes , and inside holes and outside holes . making use of solutions to the eigenvalue problem : \phi_{n}(\textbf{r})=-\lambda_{n}\phi_{n}(\textbf{r}),\ ] ] where isthe normalized eigenfunction and is the corresponding eigenvalue , we can expand as where and the initial probability density is distributed over a region of a typical size . the probability that a particle still walks at time : where .since the term is dominant , we have which gives the average hopping time as since the infinite - plane problem can be transformed into a problem on torus : where and and are the first - kind and the second - kind bessel function , respectively , and .the quantity can be obtained by using ,\ ] ] and combining eqs .( [ 4.2 ] ) and ( [ 4.5 ] ) , we get . for , , we have and .finally , we obtain as for , becomes zero in the area where a hole moves and decays with time under two mechanisms : diffusion at rate and motion of holes at the rate .thus , we have where is the weighting factor ( ) that decreases as increases .specifically , for small and for large .the quantity is given by where or , ( ii ) is valid for and ( iii ) is for . for large values of , agents are approximately well - mixed so that we can intuitively expect the average time to be determined by the inverse of the ratio of the agent s searching area and the area of the cell : the validity of this equation is supported by the fact that the ratio of the two areas is equivalent to the ratio of the total number of agents to the number of agents within the searching area .the estimation of for large is consistent with the theoretical prediction from eqs .( [ eq : v2 ] ) and ( [ eq : lambda])(iii ) by inserting .the theoretical prediction is in good agreement with simulation results , as shown in fig .[ fig : rct](b ) . with the aid of eqs .( [ 4.9 ] ) and ( [ eq : v3 ] ) for , we can derive a power law for . in a free - flow state ,the number of disposed packets is the same as that of generated packets in a time interval . for , the number of packets passing through an agentis proportional to its degree .this yields where is the degree of , the sum runs over all agents in the network , and is the average degree of the network . during steps , an agent can deliver at most packets . to avoid congestionrequires .if all agents have the same delivering capacity , the transportation dynamics is dominated by the agent with the largest number of neighbors and the transition point can be estimated by where is the largest degree of the network .thus , for , we have where .since the degree distribution follows the poisson distribution : , the quantity can thus be estimated by inserting , and into eqs .( [ 7 ] ) , we can calculate for low moving speed . 
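The threshold estimate just outlined can be illustrated numerically in the well-mixed regime, where the average hopping time is approximately L^2/(pi r^2) as argued above. The congestion condition is implemented here as R_c ~ C N <k> / (<T> k_max), which is our reconstruction of the garbled expressions (packets make <T> hops on average, hops are shared in proportion to degree, and the busiest agent is limited to C deliveries per step), with k_max estimated as the typical maximum of N draws from a Poisson distribution with the uniform-density mean degree. Treat both the formula and the numbers as assumptions of this rough sketch, not as the paper's calculation.

```python
# Rough numerical sketch of the congestion-threshold estimate in the
# well-mixed regime.  The formula and all numbers are assumptions.
import numpy as np
from scipy.stats import poisson

def r_c_estimate(N=1000, L=10.0, r=0.6, C=1):
    k_mean = N * np.pi * r ** 2 / L ** 2          # mean degree for uniform agents
    T_hop = L ** 2 / (np.pi * r ** 2)             # average hopping time, large v
    # typical maximum of N Poisson(k_mean) samples: smallest k with N*P(K>k) < 1
    k = int(k_mean)
    while N * poisson.sf(k, k_mean) >= 1.0:
        k += 1
    k_max = k
    return C * N * k_mean / (T_hop * k_max)

if __name__ == "__main__":
    for r in (0.3, 0.6, 1.2):
        print(f"r = {r:3.1f}   R_c estimate ~ {r_c_estimate(r=r):.2f}")
```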
for large , eq .( [ 7 ] ) can also be applied but and .hence , for large is given by equation ( [ 10 ] ) indicates that scales with , which is in good agreement with simulation results shown in fig .[ fig : rct](a ) .in conclusion , we have introduced a physical model to study the transportation dynamics on networks of mobile agents , where communication among agents is confined in a circular area of radius and agents move with fix speed but in random directions . in general , the critical packet - generating rate at which a transition in the transportation dynamics from free flow to congestion occurs depends on both and , and we have provided a theory to explain the dependence . our results yield physical insights into critical technological systems such as ad - hoc wireless communication networks .for example , the power laws for the network throughput uncovered in this paper can guide the design of better routing protocols for such communication networks . from the standpoint of basic physics , our findings are relevant to general dynamics in complex systems consisting of mobile agents , in contrast to many existing works where no such mobility is assumed .this work is funded by the national basic research program of china ( 973 program no.2006cb705500 ) , the national important research project:(study on emergency management for non - conventional happened thunderbolts , grant no .91024026 ) , the national natural science foundation of china ( grant nos . 10975126 and 10635040 ) , and the specialized research fund for the doctoral program of higher education of china ( grant no .20093402110032 ) .wxw and ycl are supported by afosr under grant no .fa9550 - 10 - 1 - 0083 .
Most existing works on transportation dynamics focus on networks with a fixed structure, but networks whose nodes are mobile, such as cell-phone networks, have become widespread. We introduce a model to explore the basic physics of transportation on mobile networks. Of particular interest is the dependence of the throughput on the speed of agent movement and on the communication range. Our computations reveal a hierarchical dependence on the former while, for the latter, we find an algebraic power law between the throughput and the communication range, with an exponent determined by the speed. We develop a physical theory based on the Fokker-Planck equation to explain these phenomena. Our findings provide insights into complex transportation dynamics arising commonly in natural and engineering systems.
in the last years much effort has been spent to identify the weak points of low density parity check ( ldpc ) code graphs , responsible for error floors of iterative decoders .after the introduction of the seminal concept of _ pseudocodewords _ , it is now ascertained that these errors are caused by small subsets of nodes of the tanner graph that act as attractors for iterative decoders , even if they are not the support of valid codewords .these structures have been named _ trapping sets _ ,, or _ absorbing sets _ ,, or _ absorption sets _ , defined in slightly different ways . in this paperwe build mainly on and ,, .the first merit of , has been to define absorbing sets ( ass ) from a purely topological point of view .moreover , the authors have analyzed the effects of ass on finite precision iterative decoders , on the basis of hardware and importance sampling simulations , .ass behavior depends on the decoder quantization and in they are classified as _ weak _ or_ strong _ depending on whether they can be resolved or not by properly tuning the decoder dynamics . in same research group proposes a postprocessing method to resolve ass , once the iterative decoder is trapped . in author defines _ absorption sets _( equivalent to ass ) and identifies a variety of ass for the ldpc code used in the ieee 802.3an standard .the linear model of , suitable for min - sum ( ms ) decoding , is refined to meet the behavior of belief propagation decoders . under some hypotheses ,the error probability level can be computed assuming an unsaturated ldpc decoder .loosely speaking , in this model an as is solved if messages coming from the rest of the graph tend to infinity with a rate higher than the wrong messages inside the as . in practical implementations , messages can not get arbitrarily large .besides , hypotheses on the growth rate of the messages entering the as are needed .in density evolution ( de ) is used , but this is accurate only for ldpc codes with infinite ( in practice , very large ) block lengths . in saturation is taken into account and the input growth rate is evaluated via discretized de or empirically via simulation . in andsuccessive works , the authors rate the trapping set dangerousness with the _ critical number _ , that is valid for hard decoders but fails to discriminate between the soft entries of the iterative decoder . in this paper, we look for a concise , quantitative way to rate the ass dangerousness with soft decoding .we focus on min - sum ( ms ) soft decoding that is the basis for any ldpc decoder implementation , leaving aside more theoretical algorithms such as sum - product ( spa ) or linear programming ( lp ) .we study the evolution of the messages propagating inside the as , when the all - zero codeword is transmitted . unlike , we assume a limited dynamic of the log likelihood ratios ( llrs ) as in a practical decoder implementation . the as dangerousness can be characterized by a _ threshold _we show that , under certain hypotheses , the decoder convergence towards the right codeword can fail only if there exist channel llrs smaller than or equal to . when all channel llrs are larger than , successful decoding is assured .we also show with examples that ass with greater are more harmful than ass with smaller .finally , we provide an efficient algorithm to find . 
for many ass , .in these cases _ we can deactivate ass simply setting two saturation levels _ , one for extrinsic messages( in our system model , this level is normalized to ) , and another level , smaller than , for channel llrs . this way the code designer can concentrate all efforts on avoiding only the most dangerous ass , letting the receiver automatically deactivate the other ones with extrinsic messages strong enough to unlock them .the article is organized as follows .section [ sec - system model ] settles the system model .section [ sec - equilibria ] introduces the notion of equilibria and thresholds .section [ sec - generalized equilibria ] deals with generalized equilibria , a tool to study ass with arbitrary structure .section [ sec - limit cycles ] deals with limit cycles .section [ sec - chaotic ] studies the message passing behavior above threshold , and provides a method to deactivate many ass .section [ sec - examples ] shows practical examples of ass that behave as predicted by our model during ms decoding on real complete ldpc graphs .section [ sec - search algorithm ] proposes an efficient algorithm to compute .section [ sec - other properties ] highlights other interesting properties .finally , section [ sec - conclusions ] concludes the paper .we recall that a subset of variable nodes ( vns ) in a tanner graph is an absorbing set if * every vn in has strictly more boundary check nodes ( cns ) in than in , being and the set of boundary cns connected to an _ even _ or _ odd _ number of times , respectively ; * the cardinality of and are and , respectively . besides , is a _ fully absorbing set _ if also all vns outside have strictly less neighbors in than outside . in it is observed that a pattern of all - ones for the vns in is a stable concurrent of the all - zeros pattern for the iterative bit - flipping decoder , notwithstanding a set of _ unsatisfied boundary cns _ ( dark cns in fig .[ fig - ass topology ] ) .ass behave in a similar manner also under iterative soft decoding , as shown and discussed in , .( a ) , a maximal as ; in fig . [ fig - ass topology](b ) , a as ; in fig .[ fig - ass topology](c ) , a as , that is also the support of a codeword.,title="fig : " ] + if all cns are connected to no more than twice , the as is _elementary_. elementary ass are usually the most dangerous .given the code girth , elementary absorbing sets can have smaller values of and than non elementary ones .if ass are the support of _ near - codewords _ , the smaller is , the higher the probability of error . besides , the smaller the ratio , the more dangerous the as is . in this paperwe focus on elementary ass only , as those in fig .[ fig - ass topology ] .an as is _ maximal _ if , as in fig .[ fig - ass topology](a ). intuitively , maximal ass are the mildest ones , since they have a large number of unsatisfied cns . on the opposite , an as with ( as in fig .[ fig - ass topology](c ) ) is the support of a codeword . 
for our analysis we assume an ms decoder, which is insensitive to scale factors. thus we can normalize the maximum extrinsic message amplitude. we recall that, apart from saturation, the evolution of the messages inside the as is linear (, ), since the cns in simply forward the input messages. the relation among the internal extrinsic messages generated by vns can be tracked during the iterations by a _routing_ matrix. basically, iff there exists an (oriented) path from message to message, going across one vn. for instance, fig. [fig - messages] depicts the llr exchange within the as of fig. [fig - ass topology](b). the corresponding first row of is . [fig - messages: llr exchange within the as of fig. [fig - ass topology](b); for simplicity, cns in the middle of edges are not shown.] to account for saturation we define the scalar function and we say that is _saturated_ if , _unsaturated_ otherwise. for vectors, is the element-wise saturation. for the time being, we consider a _parallel_ message passing decoder, where all vns are simultaneously activated first, then all cns are simultaneously activated, in turn. the system evolution at the -th iteration reads , where are the extrinsic messages within the as, is the vector of extrinsic messages entering the as through , and is a _repetition_ matrix with size that constrains the channel llrs to be the same for all messages emanating from the same vn. referring to fig. [fig - ass topology](b), . also note that the row weight of is unitary, i.e. . as to the extrinsic messages entering the as from outside, we bypass the tricky problem of modeling the dynamical behavior of the decoder in the whole graph by assuming that each message entering the as has saturated to the maximal correct llr (i.e. , since we transmit the all-zero codeword). this is a reasonable hypothesis after a sufficient number of iterations, as observed in , where the authors base their postprocessing technique on this assumption. in section [sec - examples] we show that the decoding of a large graph is in good agreement with the predictions of this model. under this hypothesis, we can write , where is the vector of the vn degrees. from now on, we will consider only left-regular ldpc codes, with . most of the theorems presented in the following sections can be extended to a generic vn degree vector. luckily, among regular ldpc codes this is also the case with the most favorable waterfall region. if we set , then , and becomes . this equation is more expressive than , as is the gap between the current state and the values that the extrinsic messages should eventually achieve once the as is unlocked. besides, we will show that . therefore, and . the two competing forces are now clearly visible: the former always helps convergence, and the latter can amplify negative terms (if the as is not maximal, some rows of have weight larger than ). the rest of the paper is devoted to unveiling the hidden properties of , finding sufficient conditions for correct decoding, i.e. when . we will assume a conservative condition to decouple the as behavior from the rest of the code: we do not start with . _we take into account any configuration of extrinsic messages that may result in a convergence failure_.
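a minimal numerical sketch of the saturated evolution just described. the exact update equation is elided in the extracted text, so the sketch assumes the form x(l+1) = sat(A x(l) + R lambda + u), where A is the routing matrix, R the repetition matrix, lambda the channel llrs of the as vns, and u counts, for each message, the saturated (+1) inputs arriving from the unsatisfied boundary cns of its vn; the toy topology and all names are illustrative.

```python
import numpy as np

def sat(x, level=1.0):
    """Element-wise saturation of the extrinsic messages to [-level, +level]."""
    return np.clip(x, -level, level)

def build_as_model(sat_edges, unsat_counts):
    """Routing matrix A, repetition matrix R and external-input vector u for an
    elementary AS. sat_edges[k] = (v, w): the k-th degree-2 cn joins vns v and w;
    unsat_counts[v] = number of unsatisfied (degree-1) cns at vn v."""
    msgs = [(v, k) for k, e in enumerate(sat_edges) for v in e]  # one message per (vn, satisfied cn)
    other = {}
    for k, (v, w) in enumerate(sat_edges):
        other[(v, k)], other[(w, k)] = w, v
    n, a = len(msgs), len(unsat_counts)
    A, R, u = np.zeros((n, n)), np.zeros((n, a)), np.zeros(n)
    for i, (v, k) in enumerate(msgs):
        R[i, v] = 1.0
        u[i] = unsat_counts[v]  # each unsatisfied cn injects one saturated +1 message
        for j, (vv, kk) in enumerate(msgs):
            # message j reaches vn v through cn kk and is re-used in every
            # outgoing message of v except the one sent back toward kk
            if kk != k and other[(vv, kk)] == v:
                A[i, j] = 1.0
    return A, R, u

def run(A, R, u, lam, x0, iters=50):
    x = x0.copy()
    for _ in range(iters):
        x = sat(A @ x + R @ lam + u)
    return x

# illustrative (4,4) elementary AS: vns 0-1-2-3 on a cycle of degree-2 cns,
# plus one unsatisfied cn per vn (two "good" and one "bad" cn per vn)
A, R, u = build_as_model(sat_edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
                         unsat_counts=[1, 1, 1, 1])
lam = np.array([-0.6, 0.2, 0.2, 0.2])   # one weak, wrong channel llr
x0 = -np.ones(A.shape[0])               # adversarial all-wrong initial state
print(run(A, R, u, lam, x0))            # converges to +1 for this configuration
```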
we start from an iteration with the rest of the decoder messages saturated to . the configuration inside the as, which is the result of the message evolution up to that iteration, is unknown. the drawback of this approach is that we give up predicting the probability of message configurations inside the as leading to decoding errors. on the other hand, if no can lock the decoder, this is true independently of the evolution of messages inside the as. we will study equilibria, limit cycles and chaotic behaviors (i.e., aperiodic trajectories) of , depending on the channel llrs and _any_ initial state. in this section, we study equilibria for the non-linear system . [def - equlibrium] a pair is an _equilibrium_ iff . equilibria with are harmful: they behave as attractors for the evolution of the extrinsic messages, and can lead to incorrect decisions. with the aim of finding the most critical ass, those that can lead to convergence failure even with large values of , we would like to solve the following problem: [prb - original problem] . the constraint restricts the search to bad equilibria, having at least one extrinsic message smaller than . we call the _threshold_, since the as has no bad equilibria with above that value. in section [sec - chaotic] we will show that the notion of threshold does not pertain only to bad equilibria, but also to any other bad trajectory of not achieving . in the above optimization problem, for simplicity we did not assign upper and lower bounds to the channel llrs. in practice, we can restrict our search to the range . [thm - equilibrium -1] the pair is always an equilibrium. substituting in the equilibrium equation, we obtain . [thm - equilibrium +1] the only equilibrium for a system having and is in . since , we can define a strictly positive quantity . consider parallel message passing. focusing on the evolution of , if , then , and we stop. otherwise, and we go on. for the generic step, assuming and proceeding by recursion, we have . the same inequalities hold also in the case of sequential message passing, activating cns in arbitrary order (once per iteration). as soon as , the recursion ends. we conclude that the message passing algorithm will eventually achieve . since the threshold is the result of a maximization, a direct consequence of the above two theorems is [thm - alpha range]: as for problem [prb - original problem], . the two boundary values and are the thresholds of maximal ass and codewords, respectively: any support of a codeword has , and maximal absorbing sets have . we start from codewords. is a valid equilibrium for problem [prb - original problem]. indeed: . by corollary [thm - alpha range] we conclude that . referring to maximal ass, for any and , we can define a strictly positive quantity . focusing on the evolution of , if , then , and we stop. otherwise, and we go on.
for the generic step , assuming and proceeding by recursion , we obtain as soon as , the recursion ends .the message passing algorithm will eventually achieve , that is not a valid equilibrium for problem [ prb - original problem ] .we conclude that at least one element in must be equal to , therefore .most of the effort of this section is in the reformulation of problem [ prb - original problem ] , to make it manageable .first , in place of equilibria , we consider a slightly more general case , removing the repetition matrix and assuming unconstrained channel llrs .[ def - generalized equilibrium ] a pair is a _ generalized equilibrium _iff accordingly , we write the following optimization problem . [ prb - generalized problem ] the following theorem holds .[ thm - tau=tau * ] as for problems [ prb - original problem ] and [ prb - generalized problem ] , .we show that and .every equilibrium is also a generalized equilibrium .given a solution of problem [ prb - original problem ] with , the solution with and satisfies the constraints of problem [ prb - original problem ] .being the result of the maximization in problem [ prb - generalized problem ] , we conclude that . on the converse, generalized equilibria may not be equilibria .indeed , could not be compatible with the repetition forced by matrix . notwithstanding this ,if a generalized equilibrium exists , then also an equilibrium exists , with and .consider channel llrs .we explicitly provide an initialization for that makes the extrinsic messages achieve an equilibrium , with . first , note that if we set , we obtain the inequality proceeding by induction , since .the above equation states that the sequence is monotonically decreasing .yet , it can not assume arbitrarily small values , since extrinsic messages have a lower saturation to .we conclude that must achieve a new equilibrium . the equilibrium satisfies all the constraints of problem [ prb - original problem ] . being the result of a maximization , .the above statements do not claim that the two problems are equivalent .indeed , they can be maximized by _ different _ pairs . anyway , as long as we are interested in the as threshold , we can deal with problem [ prb - generalized problem ] instead of problem [ prb - original problem ] and with generalized equilibria instead of equilibria .in this section , we focus on limit cycles , i.e. on extrinsic messages that periodically take the same values . we show that they have thresholds smaller than or equal to equilibria .therefore , we will neglect them .[ def - limit cycle ] the sequence is a _limit cycle _ with period iff , limit cycles can be interpreted as equilibria of the _ augmented _ as , described by an augmented matrix of size .while the vn and cn activation order does not matter in case of equilibria ( at the equilibrium , extrinsic messages do not change if we update them all together , or one by one in arbitrary order ) , this is not true in case of limit cycles .indeed , the associated set of equations depends on the decoding order . 
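the fixed-point conditions defining equilibria and limit cycles can be verified numerically. a small sketch, assuming that, once the elided equations are restored, a generalized equilibrium means x = sat(A x + lambda) with the per-message llr vector lambda unconstrained (the repetition matrix removed), and that a period-T limit cycle is a state that returns to itself after T saturated updates; the toy system is illustrative.

```python
import numpy as np

def sat(x, level=1.0):
    return np.clip(x, -level, level)

def is_generalized_equilibrium(A, lam, x, tol=1e-9):
    """Fixed-point check: x must reproduce itself under one saturated update."""
    return np.allclose(x, sat(A @ x + lam), atol=tol)

def is_limit_cycle(A, lam, x, period, tol=1e-9):
    """A period-T cycle returns to its starting point after T saturated updates."""
    y = x.copy()
    for _ in range(period):
        y = sat(A @ y + lam)
    return np.allclose(x, y, atol=tol)

def is_bad(x, tol=1e-9):
    """A bad state has at least one extrinsic message strictly below +1."""
    return bool(np.any(x < 1.0 - tol))

# sanity checks on a 2-message toy system
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(is_generalized_equilibrium(A, np.array([0.5, 0.5]), np.ones(2)))     # the all-(+1) fixed point
print(is_generalized_equilibrium(A, np.array([-1.0, -1.0]), -np.ones(2)))  # the all-(-1) fixed point
print(is_limit_cycle(A, np.array([0.5, 0.5]), np.ones(2), period=2))       # a fixed point is trivially a cycle
print(is_bad(-np.ones(2)))
```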
in case of parallel message passing ,one can write a system of equations with rows , where the -th horizontal stripe of equations represents the evolution of extrinsic messages from state to instead , in sequential ( or serial - c ) decoding cns are activated one by one , in turn , immediately updating the a - posteriori llrs of the vns connected thereto .the augmented matrix changes , since only the first cns use extrinsic messages produced at the previous iteration , while all others exploit messages generated during the same iteration . we can represent this behavior defining two matrices and , binary partitions of , and writing an augmented matrix as 1 and have upper and lower triangular shapes , due to the sequential update order .note that is valid not only for sequential cn message passing decoding , but also for any arbitrary order , as long as all extrinsic messages are activated in turn , once per decoding iteration .parallel message passing is a special case of , with and .therefore we provide the following theorem only for the most general case .[ thm - sequential cycles do not matter ] if there exists a limit cycle , a generalized equilibrium with exists , too .consider any partition of the identity matrix in binary matrices , with size : then where in the second equation enters into the function since and .choose a vector of extrinsic messages as as a consequence , we have and with , being . finally , choose the partition that implements the function , i.e. , thus achieving a generalized equilibrium with .a straight consequence of theorem [ thm - sequential cycles do not matter ] is that limit cycles can be neglected , when we compute the as threshold .we also have to take into account potential chaotic behaviors of the extrinsic messages in . in principle, could even evolve without achieving any equilibrium or limit cycle . yet , above the threshold the extrinsic messages achieve .[ thm - chaotic behaviors ] let be the solution of problem [ prb - original problem ] or [ prb - generalized problem ] . if , for any starting with , for a sufficiently large for the time being , consider channel messages that can assume only quantized values between and , with uniform step , .assume that also extrinsic messages are quantized numbers , with the same step .therefore , can only assume different values . letting the system evolve , it is clear that extrinsic messages at every time must belong to the same set of values .when , the analysis presented in previous sections assures that the only remaining equilibrium is .indeed , other equilibria can not exist since they would need . by theorem[ thm - sequential cycles do not matter ] , also limit cycles do not exist , both in case of parallel and sequential decoding .therefore , the only value that can assume more than once is ( otherwise we would incur in equilibria or cycles ) .we can conclude that will be reached in at most iterations .after that , the extrinsic messages will remain constant and the absorbing set will be defused .if and are not quantized , we can always identify a sufficiently small quantization step and a quantized pair s.t . 
since is dense in . finally, writing the inequality and recalling that achieves in at most iterations, we conclude that also must achieve in at most iterations, by the squeeze theorem applied to . theorem [thm - chaotic behaviors] states that there cannot exist bad equilibria, limit cycles or chaotic behaviors (in short, _bad trajectories_) if the minimum channel llr exceeds the solution of problem [prb - original problem] or [prb - generalized problem]. this reinforces the name _threshold_ assigned to (we no longer distinguish between and ), which is not limited to equilibria but pertains to all bad trajectories. in fig. [fig - as line](a) we represent bad trajectories, ordering them w.r.t. their minimum channel llr, in the range . by theorems [thm - sequential cycles do not matter] and [thm - chaotic behaviors], the rightmost bad trajectory is an equilibrium. the results found so far can be exploited to deactivate many ass during the decoding process, using two different saturation levels. without loss of generality we set the saturation level of extrinsic messages equal to , and the saturation level of channel llrs equal to , with . the latter saturation level defines the range of admissible channel llrs, depicted in figs. [fig - as line](a) and [fig - as line](b) as a gray box. the decoding trajectories within ass can be very different in the case of positive or negative thresholds: * if , the saturation of channel llrs to does not destroy bad trajectories. this is graphically represented in fig. [fig - as line](a); * if , we can set . with this choice, _channel llrs can never lead to bad trajectories_, as depicted in fig. [fig - as line](b). therefore, by theorem [thm - chaotic behaviors] the as is defused. the behavior of bad structures under iterative decoding in a large code graph is in good agreement with the theory developed so far. for instance, consider the as (5,3) with the topology shown in fig. [fig - messages]. for this as, (a method to compute thresholds will be presented in the next section). in fig. [fig - as real behaviour](a) we plot its contribution to the error floor of an ldpc code having block size and rate . [fig - as real behaviour: error floors of the ass of figs. [fig - messages] and [fig - as (7,3) topology], respectively, obtained by applying importance sampling to a real ldpc code under ms sequential decoding.] the simulations are run using importance sampling over a gaussian channel, with snr around 2.5 db. we always let the quantized channel llrs vary in the range , and . decisions are taken after 20 iterations of ms sequential decoding. from fig. [fig - as real behaviour](a) it is apparent that the probability that the ms decoder is locked by the as is lowered when is reduced from to . however, this is larger than and an error floor still appears. in agreement with the predictions of our theory, if we set the as is always unlocked and the error probability is zero. in fig. [fig - as real behaviour](b) we plot the same curves for another as embedded in the same ldpc code. this as is shown in fig. [fig - as (7,3) topology] and its . once again, reducing decreases the error probability, but now does not guarantee error-free performance. fig. [fig - ass contribution] refers to a different code, with the same blocklength and rate. here we have 48 different as topologies of size (see fig. [fig - ass contribution](a)), , whose thresholds are shown in fig. [fig - ass contribution](b).
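the two-level saturation mechanism described above can be emulated directly in the toy model: clip the channel llrs to a second, smaller level before running the saturated iteration. a minimal sketch, reusing the update form x(l+1) = sat(A x(l) + R lambda + u) assumed earlier, on a toy maximal as whose threshold sits at the lower boundary of the llr range (i.e. -1 in the normalization assumed here); the structure, levels and values below are illustrative.

```python
import numpy as np

def sat(x, level=1.0):
    return np.clip(x, -level, level)

def run(A, R, u, lam, x0, iters=100, lam_sat=None):
    """Iterate x <- sat(A x + R lam + u), optionally clipping channel llrs to +/- lam_sat."""
    if lam_sat is not None:
        lam = np.clip(lam, -lam_sat, lam_sat)
    x = x0.copy()
    for _ in range(iters):
        x = sat(A @ x + R @ lam + u)
    return x

# toy maximal AS: two vns sharing two satisfied (degree-2) cns, one unsatisfied
# cn each; four internal messages, routing matrix A, repetition matrix R
A = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
R = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
u = np.ones(4)

lam = np.array([-1.0, -1.0])  # worst-case channel llrs at the extrinsic saturation level
x0 = -np.ones(4)              # adversarial initial state inside the AS
print(run(A, R, u, lam, x0))               # stays locked at -1: a bad trajectory
print(run(A, R, u, lam, x0, lam_sat=0.9))  # channel clipped above the threshold: converges to +1
```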
in fig .[ fig - ass contribution](c ) we plot the ber contribution of each topology , with various channel saturation levels .the results agree with the predictions of our model . if , all ass contribute to the error floor . if all the ( 6,4 ) ass , whose threshold , are deactivated . with all ( 8,4 )ass below threshold are deactivated .also some ass with threshold just above gave no errors .besides , fig .[ fig - ass contribution](c ) shows a good correlation between the thresholds and the dangerousness of the ass .with the aim of deriving an efficient algorithm to compute the as threshold , we further simplify problem [ prb - generalized problem ] , introducing [ prb - inequality generalized problem ] where .with respect to problem [ prb - generalized problem ] , only the upper saturation is still present in : extrinsic messages can now assume any negative value .besides , the constraint imposed by the equilibrium equality has been relaxed , and substituted by an inequality containing only a scalar value . notwithstanding these modifications ,the following theorems hold : [ thm - tau*=tau tilde ] as for problems [ prb - generalized problem ] and [ prb - inequality generalized problem ] , .we show that and .assume we are given a solution of problem [ prb - generalized problem ] , i.e. with .we can exhibit a pair that satisfies the constraints of problem [ prb - inequality generalized problem ] .indeed therefore , the pair fulfills the constraints of problem [ prb - inequality generalized problem ] , because also . being the result of a maximization , .focusing on the converse , assume we are given a solution of problem [ prb - inequality generalized problem ] , with .no matter whether extrinsic messages are saturated or not , we can always add a positive vector to : this way , we turn inequality into the equality the constraints of problem [ prb - inequality generalized problem ] set , thus we conclude that and finally to conclude , if a solution of problem [ prb - inequality generalized problem ] with exists , we can exhibit a generalized equilibrium solution of problem [ prb - generalized problem ] , with .being the result of a maximization , we conclude that .once again , problems [ prb - generalized problem ] and [ prb - inequality generalized problem ] are not equivalent , as the solutions are different ( the second one is not even a generalized equilibrium ) .anyway , the two thresholds are the same .problem [ prb - inequality generalized problem ] is still non - linear and multimodal .besides , equations are still not differentiable .we further elaborate , rewriting problem [ prb - inequality generalized problem ] in another form that does not rely on the function : we define a partition of in the two subsets and of unsaturated and saturated messages , respectively and is redundant , since , but sometimes we use both of them for compactness . 
we also introduce a permutation matrix that reorganizes the extrinsic messages, putting the unsaturated ones on top: . accordingly, we permute the routing matrix and divide it into four submatrices, having inputs/outputs saturated or not: . we are now ready to introduce [prb - linearized generalized problem]. note that the use of in the above problem is slightly misleading: even if the outer (leftmost) maximization sets , thanks to the inner maximization could achieve its maximum even in , and not in . we shall show that this relaxation does not impair the threshold computation. the following theorem holds. [thm - alpha_tilde <= alpha_dot] as for problems [prb - inequality generalized problem] and [prb - linearized generalized problem], . we only give a sketch of the proof, since it is simple but quite long. we show that problem [prb - inequality generalized problem] implies problem [prb - linearized generalized problem], and vice versa. first, means that . rewriting in the modified order, we obtain , since in the first block of inequalities the operator is useless. therefore must be true. as for the second block of inequalities, they hold only if the argument of the exceeds , i.e., , which immediately leads to . analogous arguments hold for the converse. the only tricky point is the following. let be a maximizer for problem [prb - linearized generalized problem], with , and the set corresponding to this solution. as already highlighted, could contain indices referring to saturated variables. if at least one element of is not saturated, this does not impair the outer maximization, since the same solution is a maximizer with another pattern of saturations. in this case, is satisfied. on the contrary, if _all_ messages in were saturated, we could achieve the maximum without respecting , obtaining . we now prove that this cannot happen. indeed, if we set , becomes . yet, similarly to theorem [thm - equilibrium -1], there are always other legitimate solutions having that do not violate the constraints of problem [prb - linearized generalized problem], e.g. , for which has no meaning and becomes , which is true because . we conclude that and that substituting with does not harm the threshold computation. once again, we no longer distinguish between , , and , since they match, and simply use . in principle, we could solve the inner maximization of problem [prb - linearized generalized problem] by repeatedly running an optimization algorithm suited to linear equality and inequality constraints (e.g., the simplex algorithm), retaining only the largest value of among all possible configurations of saturated messages. this is practically infeasible for two reasons. first, optimization algorithms are time-consuming and we should resort to them with caution. besides, the number of configurations to test grows exponentially with : solving problem [prb - linearized generalized problem] with a brute-force search becomes impracticable even for moderate values of . in the following, we develop methods to discard most configurations. test 1 exploits the following two theorems. [thm - w0 sufficient condition] if contains at least one row with all-zero elements, there are no solutions satisfying the constraints of problem [prb - linearized generalized problem]. the proof is a _reductio ad absurdum_. consider any row of with null weight, say the one corresponding to the -th in . then, by , where the second inequality holds since .
the above result contradicts the hypothesis .[ thm - w1 sufficient condition ] if contains at least one row with exactly one element equal to , say , and if the column vector has weight larger than , then there are no solutions satisfying the constraints of problem [ prb - linearized generalized problem ] .the proof is a _ reductio ad absurdum_. consider any non null element of , say .note that the maximum weight of any row and column of is 2 , being the vn degree .thus , either is the only non - null element of the row , or at most another element exists in . if has weight 1 , by if has weight 2 , by where the second inequality holds since . in either case , where the first inequality comes from .the above result contradicts the hypothesis .theorems [ thm - w0 sufficient condition ] and [ thm - w1 sufficient condition ] suggest sufficient conditions to discard configurations of .the advantage of test 1 is simplicity .the weakness of test 1 is that it does not take advantage of previous maximizations , with other configurations of .test 2 exploits the threshold discovered up to that time .it starts initializing lower and upper bounds ( and , respectively ) for the minimum channel llr and for : where and . in the most general case , .yet we are mainly interested in the negative semi - axis , since in section [ sec - chaotic ] we have shown that we can deactivate an as only if the threshold is negative . therefore , in the threshold computation algorithm we can lower ( of course , we must keep ) , exchanging some information loss ( we return when ) , for an increased capability to discard saturation patterns , resulting in an execution speedup . test 2 analyzes the inequality constraints of problem [ prb - linearized generalized problem ] in turn , rewritten as with for every variable involved , the test tries to tighten the gap between the corresponding lower and upper bounds , exploiting bounds ( upper or lower , depending on the coefficient signs ) on the other variables .the process can terminate in two ways : 1 .bounds and can not be further improved , and : and are compatible with the existence of other equilibria , having thresholds larger than ; 2 . for at least one index , we achieve : equilibria having thresholds larger than the currently discovered can not exist , for that .the initialization in influences the algorithm effectiveness : the more the discovered threshold gets large , the more test 2 will effectively detect impossible configurations of , speeding up the solution of problem [ prb - linearized generalized problem ] .test 2 is typically more effective than test 1 , as it can detect a large number of configurations of not improving the threshold . yet, it can be applied only when is formed . on the contrary ,test 1 can also be applied during the construction of : let be the set of indices satisfying theorems [ thm - w0 sufficient condition ] or [ thm - w1 sufficient condition ] . 
erasing only a subset of from , the other elements in still satisfy the conditions of theorems [thm - w0 sufficient condition] or [thm - w1 sufficient condition]. assume that only one element involved in some violation, say the -th, is erased by . the proof for a generic subset of violations, erased all together, can be obtained by repeating the following argument several times, discarding one element after the other. when element passes from to , not only must the row be erased, but the column must be canceled as well. looking at any other row of leading to a violation, say the one corresponding to element , three events can happen (see fig. [fig - row column deletion]): 1. had weight 0: after column deletion, it still has weight 0 and the hypothesis of theorem [thm - w0 sufficient condition] is still valid; 2. had weight 1, and its element equal to 1 was exactly in the -th column (the erased one): after deletion, the row assumes weight 0, therefore satisfying the hypothesis of theorem [thm - w0 sufficient condition]; 3. had weight 1, and the element equal to 1, say , did not lie in the -th column: after deletion, the row of still has weight 1. since the weight of the corresponding column is still 1 (it had weight 1 by hypothesis), theorem [thm - w1 sufficient condition] still holds for the -th element. in all cases, either theorem [thm - w0 sufficient condition] or [thm - w1 sufficient condition] is still valid, and the other violations do not disappear. the above theorem gives us the freedom to erase the elements in all together from , and simultaneously add them to . therefore, we can organize a _tree search_ among all possible configurations of saturated messages. at the root node, . at successive steps, some extrinsic messages are marked as already visited ("fixed", from here on). in addition, fixed messages are labeled as saturated or not. extrinsic messages that are not fixed (say "free") are always unsaturated. this implicitly defines and . for the current configuration, test 1 is performed. three things can happen: * *case 1*: test 1 claims that problem [prb - linearized generalized problem] may have solutions for that ; * *case 2*: test 1 claims that is incompatible with any solution of problem [prb - linearized generalized problem], and all the elements generating a test violation are free; * *case 3*: test 1 claims that is incompatible with any solution of problem [prb - linearized generalized problem], but some elements generating a test violation have been previously fixed (and marked as unsaturated). depending on the answer of test 1, we expand the tree in different ways: * in case 1, we fix one of the free messages and branch the tree, labeling the last element as either saturated or not, calling the algorithm recursively; * in case 2, we fix and mark as saturated all elements of that generate violations, and call the algorithm recursively; * in case 3, or if all variables have already been fixed, we take no action. after test 1, before the tree branching, we either perform optimization or not: * in case 1 or 2, test 2 is executed. in case of a negative result, we return ; otherwise, the simplex algorithm is eventually performed to solve problem [prb - linearized generalized problem] for the current ; * in case 3, we return the partial result.
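a structural sketch of the recursion just described. test 1 is implemented by reading theorems [thm - w0 sufficient condition] and [thm - w1 sufficient condition] as row/column-weight conditions on the block of the routing matrix restricted to unsaturated messages (one plausible reading of the elided statements); the per-configuration inner maximization is left as a stub where a linear-programming call (e.g. scipy.optimize.linprog, in place of the simplex step) would go, since the exact constraint set of problem [prb - linearized generalized problem] is elided above, and test 2 is omitted. all names and the toy matrix are illustrative.

```python
import numpy as np

def test1_violations(A, unsat):
    """Indices of unsaturated messages whose rows, in the unsaturated block of A,
    violate the weight conditions of theorems [thm - w0 ...] / [thm - w1 ...]."""
    unsat = list(unsat)
    Auu = A[np.ix_(unsat, unsat)]
    bad = []
    for r, i in enumerate(unsat):
        w = Auu[r].sum()
        if w == 0:
            bad.append(i)
        elif w == 1:
            c = int(np.flatnonzero(Auu[r])[0])
            if Auu[:, c].sum() > 1:     # the matching column is "heavy"
                bad.append(i)
    return bad

def solve_pattern(A, saturated):
    """Placeholder for the inner maximization over the current saturation pattern
    (a linear program in a full implementation); returns None if infeasible."""
    return None

def search(A, fixed=None, saturated=None, best=-1.0):
    """Simplified skeleton of the tree search over saturation patterns."""
    n = A.shape[0]
    fixed = set() if fixed is None else set(fixed)
    saturated = set() if saturated is None else set(saturated)
    unsat = [i for i in range(n) if i not in saturated]
    bad = test1_violations(A, unsat)
    if bad:
        if any(i in fixed for i in bad):          # case 3: contradiction with earlier choices
            return best
        # case 2: all violating messages are still free -> force them to be saturated
        return search(A, fixed | set(bad), saturated | set(bad), best)
    # case 1: the pattern is not ruled out -> (test 2 would go here) solve, then branch
    alpha = solve_pattern(A, saturated)
    if alpha is not None:
        best = max(best, alpha)
    free = [i for i in range(n) if i not in fixed]
    if not free:
        return best
    i = free[0]
    best = search(A, fixed | {i}, saturated, best)         # branch: i stays unsaturated
    best = search(A, fixed | {i}, saturated | {i}, best)   # branch: i becomes saturated
    return best

# skeleton usage on a tiny routing matrix; with the stub solver the search
# merely enumerates the surviving saturation patterns
A = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(search(A))
```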
this way , test 1 speeds up the construction of and prunes many branches .test 2 avoids the execution of the simplex algorithm for many useless configurations , not detected by test 1 .puncturing is a popular means to adapt the code rate or even achieve rate compatibility .an interesting extension of our theory is that harmless ass having are deactivated even in case of puncturing .[ thm - puncturing ] puncturing at most vns of an as does not increase the threshold .assume that an as of threshold is punctured in less than vns .let be the set of channel llrs , with null messages for the punctured vns .first , consider the case .assume that after puncturing , a bad trajectory exists , with .this is an absurdum , since is a legitimate solution without puncturing , and the definition of threshold given e.g. in problem [ prb - original problem ] is contradicted .this holds as long as at least one variable is not punctured , otherwise nothing is left to optimize and .consider now the case .since at least one entry of is equal to , we have .thus and the threshold of the same as without puncturing is not exceeded .therefore , ass having can not become harmful .a final , not trivial property of thresholds is the following .[ thm - rational thresholds ] thresholds .we focus on problem [ prb - linearized generalized problem ] .we will prove the theorem for any constrained saturation pattern .therefore , the result will hold for the maximum across all possible .the proof is slightly cumbersome , and involves standard concepts of linear programming theory .first , note that constraints bound the feasible space of extrinsic messages . by theorem [ thm - alpha range ], the constraint can be added without modifying the result .therefore , the above constraints and the others in problem [ prb - linearized generalized problem ] define a _ in dimensions where geometrically , the above inequality constraints reported in canonical form represent half - spaces that shave " the polytope .the polytope is convex , since it results from the intersection of half - spaces , that are affine and therefore convex . to conclude , our optimization problem can be re - stated as with .our feasible region can not be empty , since we already know that a solution , i.e. always exists . from linear programming theory , we know that the number of independent constraints at any vertex is and that at least one vertex is an optimizer in linear programming problems ( the latter part is the enunciation of the fundamental theorem of linear programming ). focus on a vertex that is also a maximizer , and on the linearly independent constraints satisfied with equality in that point .let be the set of these constraints .we can write .being full - rank , we can achieve a full qr - like decomposition , being orthogonal , and lower triangular .note that , being the entries of rational ( actually , integer ) , we can always keep the elements of and rational , e.g. 
performing a gram - schmidt decomposition .multiplying both sides of the above equation by , we obtain focusing on the first line of the above system , we achieve , since also .in this paper we defined a simplified model for the evolution inside an absorbing set of the messages of an ldpc min - sum decoder , when saturation is applied .based on this model we identified a parameter for each as topology , namely a _ threshold _ , that is the result of a _ max - min _ non linear problem , for which we proposed an efficient search algorithm .we have shown that based on this threshold it is possible to classify the as dangerousness . if all the channel llrs inside the as are above this threshold , after a sufficient number of iterations the ms decoder can not be trapped by the as .r. koetter and p. .vontobel , `` graph covers and iterative decoding of finite - length codes , '' in _ international symposium on turbo codes & related topics _ , brest , france , sept .2003 , pp . 7582 .s. chilappagari , s. sankaranarayanan , and b. vasic , `` error floors of ldpc codes on the binary symmetric channel , '' in _ ieee international conference on communications ( icc06 ) _ , istanbul , turkey , june 2006 , pp .10891094 .s. chilappagari , m. chertkov , m. stepanov , and b. vasic , `` instanton - based techniques for analysis and reduction of error floors of ldpc codes , '' _ ieee j. select .areas commun ._ , vol . 27 , no . 6 , pp .855865 , june 2009 .l. dolecek , z. zhang , m. wainwright , v. anantharam , and b. nikolic , `` analysis of absorbing sets for array - based ldpc codes , '' in _ ieee international conference on communications ( icc07 ) _ , glasgow , scotland , u.k . ,june 2007 , pp .62616268 .l. dolecek , p. lee , z. zhang , v. anantharam , b. nikolic , and m. wainwright , `` predicting error floors of structured ldpc codes : deterministic bounds and estimates , '' _ ieee j. select .areas commun ._ , vol . 27 , no . 6 , pp .908917 , aug .z. zhang , l. dolecek , b. nikolic , v. anantharam , and m. wainwright , `` design of ldpc decoders for improved low error rate performance : quantization and algorithm choices , '' _ ieee trans .wireless commun ._ , vol .57 , no . 11 , pp . 32583268 , nov .2009 .l. dolecek , z. zhang , v. anantharam , m. wainwright , and b. nikolic , `` analysis of absorbing sets and fully absorbing sets of array - based ldpc codes , '' _ ieee trans .inf . theory _56 , no . 1 ,181201 , jan . 2010
in this paper, we investigate absorbing sets, which are responsible for error floors in low density parity check codes. we look for a concise, quantitative way to rate the dangerousness of absorbing sets. based on a simplified model of the iterative decoding evolution, we show that absorbing sets exhibit a threshold behavior. an absorbing set with at least one channel log-likelihood-ratio below the threshold can stop the convergence towards the right codeword; otherwise, convergence is guaranteed. we show that absorbing sets with negative thresholds can be deactivated simply by using proper saturation levels. we propose an efficient algorithm to compute thresholds. low density parity check codes, error floor, absorbing sets.
a major obstacle to phylogenetic inference is the heterogeneity of genomic data .for example , mutation rates vary widely between genes , resulting in different branch lengths in the phylogenetic tree for each gene . in many cases ,even the topology of the tree differs between genes . within a single long gene , we are also likely to see variations in the mutation rate , see for a current study on regional mutation rate variation .our focus is on phylogenetic inference based on single nucleotide substitutions . in this paperwe study the effect of mutation rate variation on phylogenetic inference. the exact mechanisms of single nucleotide substitutions are still being studied , hence the causes of variations in the rate of these mutations are unresolved , see . in this paperwe study phylogenetic inference in the presence of heterogeneous data . for homogenous data , i.e. , data generated from a single phylogenetic tree ,there is considerable work on consistency of various methods , such as likelihood and distance methods , and inconsistency of other methods , such as parsimony .consistency means that the methods converge to the correct tree with sufficiently large amounts of data .we refer the interested reader to felsenstein for an introduction to these phylogenetic approaches .there are several works showing the pitfalls of popular phylogenetic methods when data is generated from a mixture of trees , as opposed to a single tree .we review these works in detail shortly .the effect of mixture distributions has been of marked interest recently in the biological community , for instance , see the recent publications of kolczkowski and thornton , and mossel and vigoda . in our setting the datais generated from a mixture of trees which have a common tree topology , but can vary arbitrarily in their mutation rates .we address whether it is possible to infer the tree topology .we introduce the notion of a _linear test_. for any mutational model whose transition probabilities can be parameterized by an open set ( see the following subsection for a precise definition ) , we prove that the topology can be reconstructed by linear tests , or it is impossible in general due to a non - identifiable mixture distribution . for several of the popular mutational models we determine which of the two scenarios ( reconstruction or non - identifiability ) hold .the notion of a linear test is closely related to the notion of linear invariants .in fact , lake s invariants are a linear test .there are simple examples where linear tests exist and linear invariants do not ( in these examples , the mutation rates are restricted to some range ) . however , for the popular mutation models , such as jukes - cantor and kimura s 2 parameter model ( both of which are closed under multiplication ) we have no such examples . 
for the jukes - cantor and kimura s 2 parameter model , we prove the linear tests are essentially unique ( up to certain symmetries ) .in contrast to the study of invariants , which is natural from an algebraic geometry perspective , our work is based on convex programming duality .we present the background material before formally stating our new results .we then give a detailed comparison of our results with related previous work .an announcement of the main results of this paper , along with some applications of the technical tools presented here , are presented in .a phylogenetic tree is an unrooted tree on leaves ( called taxa , corresponding to species ) where internal vertices have degree three .let denote the edges of and denote the vertices .the mutations along edges of occur according to a continuous - time markov chain .let denote the states of the model .the case is biologically important , whereas is mathematically convenient .the model is defined by a phylogenetic tree and a distribution on .every edge has an associated rate matrix , which is reversible with respect to , and a time .note , since is reversible with respect to , then is the stationary vector for ( i.e. , ) .the rate matrix defines a continuous time markov chain .then , and define a transition matrix .the matrix is a stochastic matrix of size , and thus defines a discrete - time markov chain , which is time - reversible , with stationary distribution ( i.e. , ) .given we then define the following distribution on labellings of the vertices of .we first orient the edges of away from an arbitrarily chosen root of the tree .( we can choose the root arbitrarily since each is reversible with respect to . ) then , the probability of a labeling is let be the marginal distribution of on the labelings of the leaves of ( is a distribution on where is the number of leaves of ) . the goal of phylogeny reconstruction is to reconstruct ( and possibly ) from ( more precisely , from independent samples from ) .the simplest four - state model has a single parameter for the off - diagonal entries of the rate matrix .this model is known as the jukes - cantor model , which we denote as jc .allowing 2 parameters in the rate matrix is kimura s 2 parameter model which we denote as k2 , see section [ sec : k2 ] for a formal definition .the k2 model accounts for the higher mutation rate of transitions ( mutations within purines or pyrimidines ) compared to transversions ( mutations between a purine and a pyrimidine ) .kimura s 3 parameter model , which we refer to as k3 accounts for the number of hydrogen bonds altered by the mutation .see section [ sec : k3 ] for a formal definition of the k3 model . for ,the model is binary and the rate matrix has a single parameter .this model is known as the cfn ( cavender - farris - neyman ) model . for any examples in this paper involving the cfn , jc , k2 or k3 models ,we restrict the model to rate matrices where all the entries are positive , and times which are positive and finite . we will use to denote the set of transition matrices obtainable by the model under consideration , i.e. , the above setup allows additional restrictions in the model , such as requiring which is commonly required . in our framework, a model is specified by a set , and then each edge is allowed any transition matrix .we refer to this framework as the _ unrestricted framework _ , since we are not imposing any dependencies on the choice of transition matrices between edges . 
this set - up is convenient since it gives a natural algebraic framework for the model as we will see in some later proofs .a similar set - up was required in the work of allman and rhodes , also to utilize the algebraic framework .an alternative framework ( which is typical in practical works ) requires a common rate matrix for all edges , specifically for all .note we can not impose such a restriction in our unrestricted framework , since each edge is allowed any matrix in .we will refer to this framework as the _ common rate framework_. note , for the jukes - cantor and cfn models , the unrestricted and common rate frameworks are identical , since there is only a single parameter for each edge in these models .we will discuss how our results apply to the common rate framework when relevant , but the default setting of our results is the unrestricted model . returning to our setting of the unrestricted framework , recall under the condition the set is not a compact set ( and is parameterized by an open set as described shortly ) .this will be important for our work since our main result will only apply to models where is an open set .moreover we will require that consists of multi - linear polynomials .more precisely , a polynomial ] such that where is a set of stochastic matrices which are reversible with respect to ( thus is their stationary distribution ) .typically the polynomials are defined by an appropriate change of variables from the variables defining the rate matrices .some examples of models that are paraemeterized by an open set are the general markov model considered by allman and rhodes ; jukes - cantor , kimura s 2-parameter and 3-parameter , and tamura - nei models . for the tamura - nei model ( which is a generalization of jukes - cantor and kimura s models )we show in how the model can be re - parameterized in a straightforward manner so that it consists of multi - linear polynomials , and thus fits the parameterized by an open set condition ( assuming the additional restriction ) . in our setting, we will generate assignments from a mixture distribution .we will have a single tree topology , a collection of sets of transition matrices where and a set of non - negative reals where .we then consider the mixture distribution : thus , with probability we generate a sample according to .note the tree topology is the same for all the distributions in the mixture ( thus there is a notion of a generating topology ) .in several of our simple examples we will set and , thus we will be looking at a uniform mixture of two trees .we begin by showing a simple class of mixture distributions where popular phylogenetic algorithms fail . in particular, we consider maximum likelihood methods , and markov chain monte carlo ( mcmc ) algorithms for sampling from the posterior distribution . in the following ,for a mixture distribution , we consider the likelihood of a tree as , the maximum over assignments of transition matrices to the edges of , of the probability that the tree generated .thus , we are considering the likelihood of a pure ( non - mixture ) distribution having generated the mixture distribution .more formally , the maximum expected log - likelihood of tree for distribution is defined by where recall for the cfn , jc , k2 and k3 models , is restricted to transition matrices obtainable from positive rate matrices and positive times .chang constructed a mixture example where likelihood ( maximized over the best single tree ) was maximized on the wrong topology ( i.e. 
, different from the generating topology ) . in chang sexamples one tree had all edge weights sufficiently small ( corresponding to invariant sites ) .we consider examples with less variation within the mixture and fewer parameters required to be sufficiently small .we consider ( arguably more natural ) examples of the same flavor as those studied by kolaczkowski and thornton , who showed experimentally that in the jc model , likelihood appears to perform poorly on these examples .figure [ fig : cfnbadmle ] shows the form of our examples where and are parameters of the example .we consider a uniform mixture of the two trees in the figure .for each edge , the figure shows the mutation probability , i.e. , it is the off - diagonal entry for the transition matrix .we consider the cfn , jc , k2 and k3 models . and sufficiently small , maximum likelihood is inconsistent on a mixture of the above trees.,height=96 ] we prove that in this mixture model , maximum likelihood is not robust in the following sense : when likelihood is maximized over the best single tree , the maximum likelihood topology is different from the generating topology. in our example all of the off - diagonal entries of the transition matrices are identical .hence for each edge we specify a single parameter and thus we define a set of transition matrices for a 4-leaf tree by a 5-dimensional vector where the -th coordinate is the parameter for the edge incident leaf , and the last coordinate is the parameter for the internal edge . here is the statement of our result on the robustness of likelihood .[ thm : mlebad ] let .let and .consider the following mixture distribution on : 1 .in the cfn model , for all , there exists such that for all the maximum - likelihood tree for is .[ thm : cfnbad ] 2 .in the jc , k2 and k3 models , for , there exists such that for all the maximum - likelihood tree for is .[ thm : jcbad ] recall , likelihood is maximized over the best pure ( i.e. , non - mixture ) distribution .note , for the above theorem , we are maximizing the likelihood over assignments of valid transition matrices for the model . for the above models , valid transition matrices are those obtainable with finite and positive times , and rate matrices where all the entries are positive .a key observation for our proof approach of theorem [ thm : mlebad ] is that the two trees in the mixture example are the same in the limit .the case is used in the proof for the case .we expect the above theorem holds for more a general class of examples ( such as arbitrary , and any sufficiently small function on the internal edge ) , but our proof approach requires sufficiently small .our proof approach builds upon the work of mossel and vigoda .our results also extend to show , for the cfn and jc models , mcmc methods using nni transitions converge exponentially slowly to the posterior distribution .this result requires the 5-leaf version of mixture example from figure [ fig : cfnbadmle ] .we state our mcmc result formally in theorem [ thm : mcmc - main ] in section [ sec : mcmc ] after presenting the background material .previously , mossel and vigoda showed a mixture distribution where mcmc methods converge exponentially slowly to the posterior distribution .however , in their example , the tree topology varies between the two trees in the mixture . 
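the inconsistency phenomenon can be probed numerically. the sketch below builds a uniform two-component cfn mixture on the quartet split 12|34 whose components swap long and short pendant branches, in the spirit of the kolaczkowski-thornton examples referenced above (the actual parameter pattern of fig. [fig : cfnbadmle] is not recoverable here, so the values are purely illustrative), and then numerically maximizes the expected single-tree log-likelihood on each of the three quartet topologies. theorem [thm : mlebad] says the generating topology loses once the internal edge is short enough; the numbers below are only a starting point for experimenting with that regime.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

TOPOLOGIES = {"12|34": (0, 1, 2, 3), "13|24": (0, 2, 1, 3), "14|23": (0, 3, 1, 2)}

def cfn_quartet(p_leaf, p_int, topo):
    """16-entry leaf-pattern distribution of a 4-leaf CFN tree with split ab|cd.
    p_leaf[i] = flip probability on the pendant edge of leaf i, p_int on the internal edge."""
    a, b, c, d = topo
    flip = lambda p, s, t: p if s != t else 1.0 - p
    dist = np.zeros(16)
    for x in itertools.product((0, 1), repeat=4):
        prob = 0.0
        for s, t in itertools.product((0, 1), repeat=2):   # the two internal nodes
            prob += 0.5 * flip(p_leaf[a], s, x[a]) * flip(p_leaf[b], s, x[b]) * \
                    flip(p_int, s, t) * flip(p_leaf[c], t, x[c]) * flip(p_leaf[d], t, x[d])
        dist[x[0] * 8 + x[1] * 4 + x[2] * 2 + x[3]] = prob
    return dist

def expected_loglik(mix, topo, n_starts=8, seed=0):
    """max over branch flip probabilities of sum_x mix(x) * log P_topo(x)."""
    def neg(theta):
        return -np.sum(mix * np.log(cfn_quartet(theta[:4], theta[4], topo) + 1e-300))
    rng = np.random.default_rng(seed)
    best = min(minimize(neg, rng.uniform(0.02, 0.45, 5), method="L-BFGS-B",
                        bounds=[(1e-4, 0.499)] * 5).fun for _ in range(n_starts))
    return -best

# illustrative uniform mixture on 12|34: components swap long (p) and short (q) branches
p, q, eps = 0.35, 0.03, 0.01
mix = 0.5 * cfn_quartet([p, q, p, q], eps, TOPOLOGIES["12|34"]) + \
      0.5 * cfn_quartet([q, p, q, p], eps, TOPOLOGIES["12|34"])
for name, topo in TOPOLOGIES.items():
    print(name, round(expected_loglik(mix, topo), 6))
```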
based on the above results on the robustness of likelihood, we consider whether there are any methods which are guaranteed to determine the common topology for mixture examples .we first found that in the cfn model there is a simple mixture example of size , where the mixture distribution is non - identifiable .in particular , there is a mixture on topology and also a mixture on which generate identical distributions .hence , it is impossible to determine the correct topology in the worst case .it turns out that this example does not extend to models such as jc and k2 .in fact , all mixtures in jc and k2 models are identifiable .this follows from our following duality theorem which distinguishes which models have non - identifiable mixture distributions , or have an easy method to determine the common topology in the mixture .we prove , that for any model which is parameterized by an open set , either there exists a linear test ( which is a strictly separating hyperplane as defined shortly ) , or the model has _non - identifiable _ mixture distributions in the following sense .does there exist a tree , a collection , and distribution , such that there is another tree , a collection and a distribution where : thus in this case it is impossible to distinguish these two distributions .hence , we can not even infer which of the topologies or is correct .if the above holds , we say the model has _non - identifiable mixture distributions_. in contrast , when there is no non - identifiable mixture distribution we can use the following notion of a linear test to reconstruct the topology .a _ linear test _ is a hyperplane strictly separating distributions arising from two different 4 leaf trees ( by symmetry the test can be used to distinguish between the 3 possible 4 leaf trees ) . it suffices to consider trees with 4 leaves , since the full topology can be inferred from all 4 leaf subtrees ( bandelt and dress ) .our duality theorem uses a geometric viewpoint ( see kim for a nice introduction to a geometric approach ) .every mixture distribution on a 4-leaf tree defines a point where .for example , for the cfn model , we have and .let denote the set of points corresponding to distributions for the 4-leaf tree , .a linear test is a hyperplane which strictly separates the sets for a pair of trees .consider the 4-leaf trees and .a _ linear test _ is a vector such that for any mixture distribution arising from and for any mixture distribution arising from . there is nothing special about and - we can distinguish between mixtures arising from any two leaf trees , e.g. , if is a test then distinguishes the mixtures from and the mixtures from , where swaps the labels for leaves and .more precisely , for all , [ thm : dual ] for any model whose set of transition matrices is parameterized by an open set ( of multilinear polynomials ) , exactly one of the following holds : * there exist non - identifiable mixture distributions , or * there exists a linear test . for the jc and k2 models , the existence of a linear test follows immediately from lake s linear invariants .hence , our duality theorem implies that there are no non - identifiable mixture distributions in this model .in contrast for the k3 model , we prove there is no linear test , hence there is an non - identifiable mixture distribution .we also prove that in the k3 model in the common rate matrix framework , there is a non - identifiable mixture distribution . 
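the separating-hyperplane picture can also be explored numerically: sample quartet distributions from two topologies (here under the jc model) and ask a linear program for a vector whose inner product is positive on one family and negative on the other. this is only a finite-sample sanity check, not a proof, and the vector it returns is not lake's invariant; the jc parameterization and all parameter ranges below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def jc_matrix(p):
    """Jukes-Cantor transition matrix with total substitution probability p (0 < p < 3/4)."""
    return (1.0 - p) * np.eye(4) + (p / 3.0) * (np.ones((4, 4)) - np.eye(4))

def jc_quartet(ps, topo):
    """256-entry leaf distribution of a 4-leaf JC tree; ps holds the four pendant
    edge probabilities (in the order given by topo) followed by the internal edge."""
    a, b, c, d = topo
    M = [jc_matrix(p) for p in ps]
    t = 0.25 * np.einsum("ua,ub,uv,vc,vd->abcd", M[0], M[1], M[4], M[2], M[3])
    # permute the axes so that axis i always corresponds to leaf i
    return np.transpose(t, np.argsort([a, b, c, d])).reshape(-1)

rng = np.random.default_rng(1)
T1, T2 = (0, 1, 2, 3), (0, 2, 1, 3)   # the splits 12|34 and 13|24
mu1 = [jc_quartet(rng.uniform(0.05, 0.6, 5), T1) for _ in range(60)]
mu2 = [jc_quartet(rng.uniform(0.05, 0.6, 5), T2) for _ in range(60)]

# feasibility LP: find f with <f, mu> >= 1 on the T1 samples and <= -1 on the T2 samples
A_ub = np.vstack([-np.array(mu1), np.array(mu2)])
res = linprog(c=np.zeros(256), A_ub=A_ub, b_ub=-np.ones(len(A_ub)),
              bounds=[(None, None)] * 256, method="highs")
print("separating vector found on the samples:", res.status == 0)
```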
to summarize, we show the following : 1 .[ thm : main222 ] in the cfn model , there is an ambiguous mixture of size 2 . 2 .in the jc and k2 model , there are no ambiguous mixtures .3 . in the k3 modelthere exists a non - identifiable mixture distribution ( even in the common rate matrix framework ) .steel , szkely and hendy previously proved the existence of a non - identifiable mixture distribution in the cfn model , but their proof was non - constructive and gave no bound on the size of the mixture .their result had the more appealing feature that the trees in the mixture were scalings of each other .allman and rhodes recently proved identifiability of the topology for certain classes of mixture distributions using invariants ( not necessarily linear ) .rogers proved that the topology is identifiable in the general time - reversible model when the rates vary according to what is known as the invariable sites plus gamma distribution model .much of the current work on invariants uses ideas from algebraic geometry , whereas our notion of a linear test is natural from the perspective of convex programming duality .note , that even in models that do not have non - identifability between different topologies , there is non - identifiability within the topology .an interesting example was shown by evans and warnow .we prove , in section [ sec : duality ] , theorem [ thm : dual ] that a phylogenetic model has a non - identifiable mixture distribution or a linear test . we then detail lake s linear invariants in section [ sec : tests ] , and conclude the existence of a linear test for the jc and k2 models . in sections[ sec : ambiguity ] and [ sec : k3 ] we prove that there are non - identifiable mixtures in the cfn and k3 models , respectively .we also present a linear test for a restricted version of the cfn model in section [ sec : cfntest ] .we prove the maximum likelihood results stated in theorem [ thm : mlebad ] in section [ sec : maxlik ] .the maximum likelihood results require several technical tools which are also proved in section [ sec : maxlik ] .the mcmc results are then stated formally and proved in section [ sec : mcmc ] .let the permutation group act on the 4-leaf trees by renaming the leaves .for example swaps and , and fixes .for , we let denote tree permuted by .it is easily checked that the following group ( klein group ) fixes every : note that , i.e. , is a normal subgroup of .for weighted trees we let act on by changing the labels of the leaves but leaving the weights of the edges untouched .let and let .note that the distribution is just a permutation of the distribution : the actions on weighted trees and on distributions are compatible : [ sec : duality ] in this section we prove the duality theorem ( i.e. , theorem [ thm : dual ] ) . our assumption that the transition matrices of the model are parameterized by an open set implies the following observation .[ obs : multisecond ] for models parameterized by an open set , the coordinates of are multi - linear polynomials in the parameters .we now state a classical result that allows one to reduce the reconstruction problem to trees with 4 leaves .note there are three distinct leaf - labeled binary trees with leaves. we will call them see figure [ trees ] . leaves . ] for a tree and a set of leaves , let denote the induced subgraph of on where internal vertices of degree are removed . 
for distinct leaf - labeled binary trees and there exist a set of leaves where .hence , the set of induced subgraphs on all 4-tuples of leaves determines a tree .the above theorem also simplifies the search for non - identifiable mixture distributions .if there exists a non - identifiable mixture distribution then there exists a non - identifiable mixture distribution on trees with leaves .recall from the introduction , the mixtures arising from form a convex set in the space of joint distributions on leaf labelings .a test is a hyperplane _ strictly _ separating the mixtures arising from and the mixtures arising from . for general disjointconvex sets a strictly separating hyperplane need not exist ( e.g. , take and ) .the sets of mixtures are special - they are convex hulls of images of open sets under a multi - linear polynomial map .[ lem : dua ] let be multi - linear polynomials in .let .let be an open set .let be the convex hull of .assume that .there exists such that for all .suppose the polynomial is a linear combination of the other polynomials , i.e. , .let .let be the convex hull of .then is a bijection between points in and .note , .there exists a strictly separating hyperplane between and ( i.e. , there exists such that for all ) if and only if there exists a strictly separating hyperplane between and ( i.e. , there exists such that for all ) . hence , without loss of generality , we can assume that the polynomials are linearly independent .since is convex and , by the separating hyperplane theorem , there exists such that for all . if for all we are done .now suppose for some .if , then by translating by and changing the polynomials appropriately ( namely , ) , without loss of generality , we can assume .let .note that is multi - linear and because are linearly independent . since , we have and hence has no constant monomial .let be the monomial of lowest total degree which has a non - zero coefficient in .consider where for which occur in and for all other .then , since there are no monomials of smaller degree , and any other monomials contain some which is .hence by choosing sufficiently close to , we have ( since is open ) and ( by choosing an appropriate direction for ) .this contradicts the assumption that is a separating hyperplane .hence which is a contradiction with the linear independence of the polynomials .we now prove our duality theorem .clearly there can not exist both a non - identifiable mixture and a linear test .let be the convex set of mixtures arising from ( for ) .assume that , i.e. , there is no non - identifiable mixture in .let .note that is convex , , and is the convex hull of an image of open sets under multi - linear polynomial maps ( by observation [ obs : multisecond ] ) . 
by lemma [ lem : dua ]there exists such that for all .let ( where is defined as in ) .let and let where is defined analogously to .then similarly for we have and hence is a test .the cfn , jc , k2 and k3 models all have a natural group - theoretic structure .we show some key properties of linear tests utilizing this structure .these properties will simplify proofs of the existence of linear tests in jc and k2 ( and restricted cfn ) models and will also be used in the proof of the non - existence of linear tests in k3 model .our main objective is to use symmetry inherent in the phylogeny setting to drastically reduce the dimension of the space of linear tests .symmetric phylogeny models have a group of symmetries ( is the intersection of the automorphism groups of the weighted graphs corresponding to the matrices in ) .the probability of a vertex labeling of does not change if the labels of the vertices are permuted by an element of .thus the elements of which are in the same orbit of the action of on have the same probability in any distribution arising from the model .let be the orbits of under the action of .let be the orbits of under the action of .note that the action of on is well defined ( because is a normal subgroup of ) .for each pair that are swapped by let [ lem : tes ] suppose that has a linear test .then has a linear test which is a linear combination of the .let be a linear test .let let be a mixture arising from . for any the mixture arises from and hence similarly for arising from and hence is a linear test .now we show that is a linear test as well .let arise from .note that arises from and hence similarly for arising from and hence is a linear test .note that is zero on orbits fixed by .on orbits swapped by we have that has opposite value ( i.e. , on , and on for some ) .hence is a linear combination of the . for the later proofs it will be convenient to label the edges by matrices which are not allowed by the phylogenetic models .for example the identity matrix ( which corresponds to zero length edge ) is an invalid transition matrix , i.e. , , for the models considered in this paper .the definition ( [ eq : defp ] ) is continuous in the entries of the matrices and hence for a weighting by matrices in ( the closure of ) the generated distribution is arbitrarily close to a distribution generated from the model .[ closure ] a linear test for ( which is a strictly separating hyperplane for ) is a separating hyperplane for .the above observation follows from the fact that if a continuous function is positive on some set then it is non - negative on .suppose that the identity matrix .let arise from with weights such that the internal edge has weight .then arises also from with the same weights .a linear test has to be positive for mixtures form and negative for mixtures from .hence we have : [ obs : zero ] let arise from with weights such that the internal edge has transition matrix .let be a linear test . then .in this section we show a linear test for jc and k2 models .in fact we show that the linear invariants introduced by lake are linear tests .we expect that this fact is already known , but we include the proof for completeness and since it almost elementary given the preliminaries from the previous section . to simplify many of the upcoming expressions throughout the following section , we center the transition matrix for the jukes - cantor ( jc ) model around its stationary distribution in the following manner . 
recall the jc model has and its semigroup consists of matrices where is the all ones matrix ( i.e. , for all ) and .we refer to as the _ centered edge weight_. thus , a centered edge weight of ( which is not valid ) means both endpoints have the same assignment . whereas ( also not valid ) means the endpoints are independent .the group of symmetries of is .there are orbits in under the action of ( each orbit has a representative in which appears before for any ) .the action of further decreases the number of orbits to .here we list the orbits and indicate which orbits are swapped by : by lemma [ lem : tes ] every linear test in the jc model is a linear combination of we will show that is a linear test and that there exist no other linear tests ( i.e. , all linear tests are multiples of ) .[ lcj ] let be a single - tree mixture arising from a tree on leaves .let be defined by let arise from , for .we have in particular is a linear test .label the 4 leaves as , and let denote the centered edge weight of the edge incident to the respective leaf .let denote the centered edge weight of the internal edge .let arise from with centered edge weights , .let be the multi - linear polynomial .if then does not depend on the label of and hence , for all , thus divides .the are invariant under the action of ( which is transitive on ) and hence is invariant under the action of .hence divides for .we have where is a linear polynomial in .let arise from with . in leaf - labelings with non - zero probability in the labels of agree and the labels of agree .none of the leaf - labelings in ( [ ephi ] ) satisfy this requirement and hence if .hence is the zero polynomial and is the zero polynomial as well .now we consider .if then , by observation [ obs : zero ] , .thus is a root of and hence .we plug in and to determine .let be the distribution generated by these weights .the leaf - labelings for which is non - zero must have the same label for and the same label for .thus and hence .we have note that is always positive .the action of switches the signs of the and hence .thus is always negative .we now show uniqueness of the above linear test , i.e. , any other linear test is a multiple of .any linear test in the jc model is a multiple of ( [ ephi ] ) .let be a linear test .let be the distribution generated by centered weights and on .by observation [ obs : zero ] we must have .note that hence thus and hence is a multiple of ( [ ephi ] ) .mutations between two purines ( and ) or between two pyrimidines ( or ) are more likely than mutations between a purine and a pyrimidine .kimura s 2-parameter model ( k2 ) tries to model this fact .we once again center the transition matrices to simplify the calculations .the k2 model has and its semigroup consists of matrices with and .see felsenstein for closed form of the transition matrices of the model in terms of the times and rate matrices .one can then derive the equivalence of the conditions there with the conditions .note , can be negative , and hence certain transitions can have probability but are always .observe that , i.e. 
, the jc model is a special case of the k2 model .the group of symmetries is ( it has elements ) .there are orbits in under the action of ( each orbit has a representative in which appears first and appears before ) .the action of further decreases the number of orbits to .the following orbits are fixed by : the following orbits are swapped by : by lemma [ lem : tes ] any linear test for the k2 model is a linear combination of [ lk2 ] let be a single - tree mixture arising from a tree on leaves .let be defined by let arise from , for .we have in particular is a linear test .let for some .let the transition matrix of the edge incident to leaf be , and the internal edge has .let be the generated distribution , and let be the multi - linear polynomial . if then the matrix on the edge incident to leaf has the last two columns the same .hence roughly speaking this edge forgets the distinction between labels and , and therefore , in , we can do the following replacements : and we obtain , thus divides .since is invariant under the action of we have that divides for and hence where is linear in and .now let for .the label of the internal vertices for must agree with the labels of neighboring leaves and hence now plugging into and , for this setting of , we have by plugging into and we have note that and and hence ( [ emu2 ] ) is always positive .linearity of the test implies that is positive for any mixture generated from and negative for any mixture generated from .[ k2unique ] any linear test in the k2 model is a multiple of ( [ ephi2 ] ) .let be a linear test .a linear test in the k2 model must work for jc model as well .applying symmetries we obtain comparing ( [ k2jkfuj ] ) with ( [ ephi ] ) we obtain let arise from with centered weights , , and . from observation [ obs :zero ] it follows that .the leaf - labelings with non - zero probability must give the same label to and , and the labels of and must either be both in or both in .the only such leaf - labelings involved in are .thus thus and from ( [ k2jc ] ) we get and .let be generated from with centered weights , , and . in leaf - labelings with non - zero probability the labels of and are either both in or both in .the only such leaf labelings in are 0202,0213,0212 , and 0203 .hence thus and all the are determined .hence the linear test is unique ( up to scalar multiplication ) .in this section we consider the cfn model .we first prove there is no linear test , and then we present a non - identifiable mixture distribution .we then show that there is a linear test for the cfn model when the edge probabilities are restricted to some interval . again , when considering linear tests we look at the model with its transition matrix centered around its stationary distribution .the cfn model has and its semigroup consists of matrices with . in the cfn model , note that the roles of and are symmetric , i.e. 
, hence the group of symmetries of is .there are orbits of the action of on ( one can choose a representative for each orbit to have the first coordinate ) .the action of further reduces the number of orbits to .the action of swaps two of the orbits and keeps of the orbits fixed : by lemma [ lem : tes ] , if there exists a linear test for cfn then ( a multiple of ) is a linear test .let arise from with the edge incident to leaf labeled by , for , and the internal edge labeled by .a short calculation yields note that ( [ ecf ] ) is negative if is much smaller than the other ; and ( [ ecf ] ) is positive if are much smaller than the other .thus is not a linear test and hence there does not exist a linear test in the cfn model . by theorem [ thm : dual ]there exists a non - identifiable mixture .the next result gives an explicit family of non - identifiable mixtures .for each edge we will give the edge probability , which is the probability the endpoints receive different assignments ( i.e. , it is the off - diagonal entry in the transition matrix . for a 4-leaf tree , we specify a set of transition matrices for the edges by a 5-dimensional vector where , for , is the edge probability for the edge incident to leaf labeled , and is the edge probability for the internal edge . for and , set where and let the distribution is invariant under .hence , is also generated by a mixture from , a leaf - labeled tree different from .in particular , the following holds : hence , whenever and satisfy then is in fact a distribution and there is non - identifiability .note , for every , there exist and which define a non - identifiable mixture distribution .note that fixes leaf labels and swaps with and with .a short calculation yields which are both zero for our choice of and .this implies that is invariant under the action of , and hence non - identifiable .let .if the centered edge weight for the cfn model is restricted to the interval then there is a linear test .we will show that ( [ ecf ] ) is positive if the are in the interval .let .note that . since ( [ ecf ] ) is multi - linear ,its extrema occur when the are from the set ( we call such a setting of the extremal ) .note that the are positive and occurs only in terms with negative sign .thus a minimum occurs for .the only extremal settings of the which have are and . for the other extremal settings ( [ ecf ] ) is positive , since .for the value of ( [ ecf ] ) is .in contrast to the above lemma , it is known that there is no linear invariant for the cfn model .this implies that there is also no linear invariant for the restricted cfn model considered above , since such an invariant would then extend to the general model .this shows that the notion of linear test is more useful in some settings than linear invariants .[ sec : k3 ] in this section we prove there exists a non - identifiable mixture distribution in the k3 model . our result holds even when the rate matrix is the same for all edges in the tree ( the edges differ only by their associated time ) , i.e. , the common rate matrix framework .morevoer , we will show that for most rate matrices in the k3 model there exists a non - identifiable mixture in which all transition matrices are generated from .the kimura s 3-parameter model ( k3 ) has and its semigroup consists of matrices of the following form ( which we have centered around their stationary distribution ) : with , , and .note that , i.e. 
, the k2 model is a special case of the k3 model .the group of symmetries is ( which is again the klein group ) .there are orbits in under the action of ( each orbit has a representative in which appears first ) .the action of further decreases the number of orbits to .the following orbits are fixed by : the following orbits switch as indicated under the action of : and by lemma [ lem : tes ] any test is a linear combination of we first present a non - constructive proof of non - identifiability by proving that there does not exist a linear test , and then theorem [ thm : dual ] implies there exists a non - identifiable mixture .we then prove the stronger result where the rate matrix is fixed .[ lem : k3-ambiguity ] there does not exist a linear test for the k3 model .there exists a non - identifiable mixture in the k3 model .suppose that is a test .let , . for , denotes the centered parameters for the edge incident to leaf , and are the centered parameters for the internal edge . in the definitions of and belowwe will set .this ensures that in labelings with non - zero probability , leaves and both internal vertices all have the same label .moreover , by observation [ obs : zero ] , .let be generated from with , and . in labelings with non - zero probability ,the labels of and have to both be in or both in .the only labels in with this property are .thus , any ] .since the examples are negative they work immediately for the above weaker constraint .recall , the k2 model is a submodel of the k3 model : the rate matrices of the k2 model are precisely the rate matrices of k3 model with . by lemma [ lk2 ] thereexists a test in the k2 model and hence there are no non - identifiable mixtures .we will show that the existence of a test in the k2 model is a rather singular event : for almost all rate matrices in the k3 model there exist non - identifiable mixtures and hence no test .we show the following result . [ rara ]let be chosen independently from the uniform distribution on ] . then with probability ( over the choice of ) we have .we will proceed by induction on .the base case follows from lemma [ lem : bnd ] since the probability of a finite set is zero . for the induction step consider the polynomial .since is a non - zero polynomial there are only finitely many choices of \lim_{i\ra\infty } is a weight for which the optimum of is attained . by the optimality of the have we assumed and hence the entries of are finite since we assumed has positive entries .thus take a subsequence of the which converges to some .note , is in the closure of the model .let be the smallest entry of .for all sufficiently large , has all entries . hence , because of , for all sufficiently large , ( recall that the log - likelihoods are negative ) .combining with , we have that the entries of are bounded from below by .thus , both and are bounded . 
for bounded and convergent sequences , therefore , from ( [ e:1 ] ) , ( [ e:2 ] ) , and ( [ e:3 ] ) we have .since we get a contradiction with the uniqueness of .we now bound the difference of and when the previous lemma applies .this will then imply that for sufficiently small , is close to .here is the formal statement of the lemma .[ lem : mai ] [ lem : sv ] let be a probability distribution on such that every element has non - zero probability .let be a leaf - labeled binary tree on vertices .suppose that there exists in the closure of the model such that and that is the unique such weight .let be such that , and is continuous around in the following sense : as .let , and .let be the hessian of at and be the jacobian of at .assume that has full rank. then moreover , if for all such that then the inequality in ( [ e : es ] ) can be replaced by equality .[ rem : nonzero ] when for all such that then the likelihood is maximized at non - trivial branch lengths . in particular , for the cfn model , the branch lengths are in the interval , that is , there are no branches of length or .similarly for the jukes - cantor model the lengths are in . for notational convenience ,let .thus , .note that the function maps assignments of weights for the edges of to the logarithm of the distribution induced on the leaves .hence , the domain of is the closure of the model , which is a subspace of , where is the dimension of the parameterization of the model , e.g. , in the cfn and jc models and in the k2 model .we denote the closure of the model as .note , the range of is ^{|\omega|^n} ] .let note that and for . for have =\frac{1}{\beta}\cdot 128c^2(16c^6 - 24c^4 + 17c^2 + 1),\ ] ] and the last coordinate of is it is easily checked that is positive for and hence we have equality in lemma [ lem : mai ] . for we have =\frac{1}{\beta}\cdot 128c^4(16c^4 - 40c^2 - 7),\ ] ] for we have =\frac{1}{\beta}\cdot 256c^4(16c^4 - 8c^2 - 3),\ ] ] it remains to show that and .we know that for the inequalities hold .thus we only need to check that and do not have roots for .this is easily done using sturm sequences , which is a standard approach for counting the number of roots of a polynomial in an interval .our technique requires little additional work to extend the result to jc , k2 , and k3 models .let jc - likelihood of tree on distribution be the maximal likelihood of over all labelings of , in the jc model .similarly we define k2-likelihood and k3-likelihood .note that k3-likelihood of a tree is greater or equal to its k2-likelihood which is greater or equal to its jc - likelihood . in the followingwe will consider a mixture distribution generated from the jc model , and look at the likelihood ( under a non - mixture ) for jc , k2 and k3 models .for the k3 model , the transition matrices are of the form with , , and .the k2 model is the case , the jc model is the case .[ mlebadother ] let and .let denote the following mixture distribution on generated from the jc model : there exists such that for all the jc - likelihood of on is higher than the k3-likelihood of and on .note , part [ thm : jcbad ] of theorem [ thm : mlebad ] for the jc , k2 , and k3 models is immediately implied by theorem [ mlebadother ] .first we argue that in the jc model is the most likely tree . as in the case for the cfn model , because of symmetry , we have the same hessian for all trees . again the jacobians differ only in the last coordinate . 
for : then , for : then , for : note , again the last coordinate is positive as required for the application of lemma [ lem : mai ] .finally , now we bound the k3-likelihood of and .the hessian matrix is now .it is the same for all the -leaf trees and has a lot of symmetry .there are only different entries in . for distinct $ ] we have for , its jacobian is a vector of 15 coordinates .it turns out that is the concatenation of 3 copies of the jacobian for the jc model which is stated in .finally , we obtain for we again obtain that for its jacobian , is the concatenation of copies of .then , finally , for , for its jacobian , is the concatenation of copies of ( [ j1 ] ) and is the concatenation of copies of .then , note the quantities are the same in the k3 model are the same as the corresponding quantities in the jc model .it appears that even though the optimization is over the k3 parameters , the optimum assignment is a valid setting for the jc model .observe that for in the jc model ( see ) is larger than for and in the model ( see and ) .applying lemma [ lem : mai ] , this completes the proof of theorem [ mlebadother ] .the following section has a distinct perspective from the earlier sections . herewe are generating samples from the distribution and looking at the complexity of reconstructing the phylogeny .the earlier sections analyzed properties of the generating distribution , as opposed to samples from the distribution .in addition , instead of finding the maximum likelihood tree , we are looking at sampling from the posterior distribution over trees . to do this, we consider a markov chain whose stationary distribution is the posterior distribution , and analyze the chain s mixing time .for a set of data where , its likelihood on tree with transition matrices is let denote a prior density on the space of trees where our results extend to priors that are lower bounded by some as in mossel and vigoda . in particular , for all , we require .we refer to these priors as -regular priors . applying bayes law we get the posterior distribution : that for uniform priors the posterior probability of a tree given is proportional to .each tree then has a posterior weight we look at markov chains on the space of trees where the stationary distribution of a tree is its posterior probability .we consider markov chains using nearest - neighbor interchanges ( nni ) .the transitions modify the topology in the following manner which is illustrated in figure [ fig : nni ] .let denote the tree at time .the transition of the markov chain is defined as follows .choose a random internal edge in .internal vertices have degree three , thus let denote the other neighbors of and denote the other neighbors of .there are three possible assignments for these 4 subtrees to the edge ( namely , we need to define a pairing between ) .choose one of these assignments at random , denote the new tree as .we then set with probability with the remaining probability , we set .the acceptance probability in is known as the metropolis filter and implies that the unique stationary distribution of the markov chain statisfies , for all trees : we refer readers to felsenstein and mossel and vigoda for a more detailed introduction to this markov chain .we are interested in the mixing time , defined as the number of steps until the chain is within variation distance of the stationary distribution .the constant is somewhat arbitrary , and can be reduced to any with steps . 
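to make the transition rule concrete , here is a toy metropolis sampler ( our own sketch ; it is not the chain analysed below , which runs on 5-leaf trees ) . on four leaves there is a single internal edge , so an nni move simply proposes one of the other two quartet topologies ; the weights at the bottom are placeholders standing in for prior times likelihood .

```python
import random

TOPOLOGIES = ("12|34", "13|24", "14|23")

def nni_neighbors(t):
    # A quartet has one internal edge, so an NNI move reaches the other two.
    return [s for s in TOPOLOGIES if s != t]

def metropolis_nni(posterior_weight, steps, start="12|34", rng=random):
    """Metropolis chain over quartet topologies with uniform NNI proposals.

    posterior_weight(t) returns an unnormalised posterior weight of t,
    e.g. prior(t) times the likelihood of the observed characters on t.
    """
    t = start
    visits = {s: 0 for s in TOPOLOGIES}
    for _ in range(steps):
        proposal = rng.choice(nni_neighbors(t))
        if rng.random() < min(1.0, posterior_weight(proposal) / posterior_weight(t)):
            t = proposal              # Metropolis filter: accept or stay put
        visits[t] += 1
    return visits

# Toy weights; in the long run the chain spends about 70% of its time on
# the topology with weight 7, as the stationary distribution dictates.
weights = {"12|34": 7.0, "13|24": 2.0, "14|23": 1.0}
print(metropolis_nni(weights.get, steps=100_000))
```

the chains studied in this section use the same acceptance rule , but propose one of the three subtree pairings around a randomly chosen internal edge of a larger tree .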
for the mcmc resultwe consider trees on 5 taxa .thus trees in this section will have leaves numbered and internal vertices numbered .let denote the tree .thus , has edges , , , , , , .we will list the transition probabilities for the edges of in this order .for the cfn model , we consider the following vector of transition probabilities .let where for the jc model , let where let and .we are interested in the mixture distribution : let denote the tree .the key lemma for our markov chain result states that under , the likelihood has local maximum , with respect to nni connectivity , on and .[ lem : mcmc - mle ] for the cfn and jc models , there exists such that for all then for all trees that are one nni transition from or , we have this then implies the following corollary .[ thm : mcmc - main ] there exist a constant such that for all the following holds .consider a data set with characters , i.e. , , chosen independently from the distribution .consider the markov chains on tree topologies defined by nearest - neighbor interchanges ( nni ) . then with probability over the data generated ,the mixing time of the markov chains , with priors which are -regular , satisfies the novel aspect of this section is lemma [ lem : mcmc - mle ] .the proof of theorem [ thm : mcmc - main ] using lemma [ lem : mcmc - mle ] is straightforward .the proof follows the same lines as the proof of theorems [ thm : mlebad ] and [ mlebadother ] .thus our main task is to compute the hessian and jacobians , for which we utilize maple .we begin with the cfn model .the hessian is the same for all trees on 5-leaves : we begin with the two trees of interest : + ( 80,60)(0,0 ) ( 10,9)(0,15)3 ( 10,23)(1,1)10 ( 70,43)(-1,-1)10 ( 70,23)(-1,1)10 ( 10,43)(1,-1)10 ( 20,33)(1,0)40 ( 40,33)(0,-1)10 ( 40,9)(0,15)5 ( 70,9)(0,15)4 ( 10,41)(0,15)2 ( 70,41)(0,15)1 and ( 80,60)(0,0 ) ( 10,9)(0,15)2 ( 10,23)(1,1)10 ( 70,43)(-1,-1)10 ( 70,23)(-1,1)10 ( 10,43)(1,-1)10 ( 20,33)(1,0)40 ( 40,33)(0,-1)10 ( 40,9)(0,15)5 ( 70,9)(0,15)4 ( 10,41)(0,15)1 ( 70,41)(0,15)3 their jacobian is thus , note the last two coordinates are positive , hence we get equality in the conclusion of lemma [ lem : mai ] .finally , we now consider those trees connected to and by one nni transition . 
since there are 2 internal edges , each tree has 4 nni neighbors . the neighbors of are : [ four 5-leaf tree diagrams omitted ] the neighbors of are : [ four 5-leaf tree diagrams omitted ] the jacobian for all 8 of these trees is finally , note the quantities are larger for the two trees and . this completes the proof for the cfn model . we now consider the jc model . again the hessian is the same for all the trees : beginning with tree [ 5-leaf tree diagram omitted ] we have : the last two coordinates of are since they are positive we get equality in the conclusion of lemma [ lem : mai ] . finally , for the neighbors of : [ four 5-leaf tree diagrams omitted ] we have : hence , considering : [ 5-leaf tree diagram omitted ] the last two coordinates of are which are positive .
finally , considering the neighbors of : [ four 5-leaf tree diagrams omitted ] we have : hence , in fact and have larger likelihood than any of the 13 other 5-leaf trees . however , analyzing the likelihood for the 5 trees not considered in the proof of lemma [ lem : mcmc - mle ] requires more technical work since is maximized at invalid weights for these 5 trees . we now show how the main theorem of this section follows from the above lemma . the proof follows the same basic line of argument as in mossel and vigoda ; we point out only the minor differences . for a set of characters , define the maximum log - likelihood of tree as where consider where each is independently sampled from the mixture distribution . let be or , and let be a tree that is one nni transition from . our main task is to show that . let denote the assignment which attains the maximum for , and let for , let . by chernoff s inequality ( e.g. , remark 2.5 in ) , and a union bound over , we have for all . assuming the above holds , we then have and , let note , lemma [ lem : mcmc - mle ] states that . set . note , . hence , this then implies that with probability by the same argument as the proof of lemma 21 in mossel and vigoda . then , the theorem follows from a conductance argument as in lemma 22 in . we thank elchanan mossel for useful discussions , and john rhodes for helpful comments on an early version of this paper . p. buneman . the recovery of trees from measures of dissimilarity , in : f. r. hodson , d. g. kendall , p. tautu ( eds . ) , _ mathematics in the archaeological and historical sciences _ , edinburgh university press , edinburgh , 387 - 395 , 1971 . i. i. hellmann , i. ebersberger , s. e. ptak , s. pääbo , and m. przeworski . a neutral explanation for the correlation of diversity with recombination rates in humans . 72:1527 - 1535 , 2003 . e. mossel and e. vigoda . limitations of markov chain monte carlo algorithms for bayesian inference of phylogeny . to appear in _ annals of applied probability _ . preprint available from arxiv at http://arxiv.org/abs/q-bio.pe/0505002 . j. s. rogers . maximum likelihood estimation of phylogenetic trees is consistent when substitution rates vary according to the invariable sites plus gamma distribution . _ systematic biology _ , 50(5):713 - 722 , 2001 .
we address phylogenetic reconstruction when the data is generated from a mixture distribution . such topics have gained considerable attention in the biological community with the clear evidence of heterogeneity of mutation rates . in our work we consider data coming from a mixture of trees which share a common topology but differ in their edge weights ( i.e. , branch lengths ) . we first show the pitfalls of popular methods , including maximum likelihood and markov chain monte carlo algorithms . we then determine in which evolutionary models reconstructing the tree topology under a mixture distribution is ( im)possible . we prove that every model whose transition matrices can be parameterized by an open set of multi - linear polynomials either has non - identifiable mixture distributions , in which case reconstruction is impossible in general , or admits linear tests which identify the topology . this duality theorem relies on our notion of linear tests and uses ideas from convex programming duality . linear tests are closely related to linear invariants , which were first introduced by lake , and are natural from an algebraic geometry perspective .
in industrialized countries typical consumers are expected to become electricity producers due to the ongoing widespread deployment of distributed generation and energy storage elements , commonly called distributed energy resources ( der ) . a consumer that produces energyis called a prosumer .in addition to the introduction of ders , gradual replacement of conventional internal combustion engine vehicles with electrical vehicles ( evs ) is expected in the near future , causing a significant increase in the load on the power grid . by using only conventional grid management systemsthe electrical grid capacity is under question .reinforcing the grid all the way to the user is an option , though it is expensive , especially when other more convenient and cheaper alternatives are on the horizon .the shift from a mainly unidirectional power flows towards a fully bidirectional paradigm can be used as an advantage , allowing installation of additional ders within existing infrastructure . however, this requires a precise monitoring of the distribution grid that provides reliable and accurate information on its status to enable dynamic grid management of the future . while the benefits and the necessity of a smart distribution grid are clear , the communication solution supporting it is not straightforward . today, are increasingly using iec 61850 based communications for high - level monitoring , management and control on high - speed lan / optical fiber networks .however , when the scope is extended downwards to low voltage infrastructure , the availability of such high - end communication solutions is usually not anticipated . within the sunseed project ,a promising approach is considered where the already deployed cellular networks ( primarily lte ) are used to provide the smart distribution grid communication infrastructure . in this paperthe focus is on the security framework and network performance requirements to enable the incorporation of various measurement and control devices , which together allow for the establishment of grid management services based on time and privacy sensitive data .the specific contributions of this paper relate to the smart grid services introduced in the following section .thereafter , we present the requirements , design choices and proposed solutions for the smart grid communication and security architecture .next , we consider the performance of shared cellular lte networks as a part of a smart grid system .specifically , we study the achievable latency and reliability of the lte based smart grid communication .finally , we summarize our findings and outline the future steps of the sunseed project , namely with respect to the large scale field trial that will be deployed until 4th quarter of 2016 .to achieve reliable and accurate knowledge on the grid condition , the is of key importance .the benefit of the state estimation is that it can take into account all types of available measurements , thus reducing the investment costs into required measurement infrastructure .further , dsse provides estimation of grid state also on the grid nodes where measurement devices are not located . as the measurement locations are placed all the way down to the prosumer level ,the shared cellular networks seem to serve as an efficient and viable solution for communicating between measurement devices and the back - end system . 
in generalthe dsse performance depends on location density , type , accuracy , and reporting interval of the available measurement infrastructure in the grid , such as : : : are dedicated devices with common time reference provided by a very high precision clock , which allow for time - synchronized phasor ( that is , synchrophasor ) estimations at different locations . combining high precision and high sampling rate ( up to 50/60 hz ) measurements of voltage or current phasors on all 3 phases from multiplepmus allow for a comprehensive view on the state of the entire grid interconnection .[ fig : pmu - pmc ] depicts a fully embedded micro pmu ( / ) prototyped within the sunseed project that enables 3-phase voltage / current synchrophasor measurements at medium / low voltage of the distribution grid . besides dedicated measurement circuitry and signal processing it also features a linux - enabled application processor , lte , ethernet and low - power radio communication interfaces , secure element , and a gps - based reference clock .devices : : allow for 3-phase power quality measurements ( such as real / reactive / apparent power , frequency , voltage , current , total harmonic distortion ) and control of end connection points ( via on / off relays or serial line protocol ) . within the sunseed project the devices were designed in exactly the same form factor as the / , reusing the application and connectivity boards and introducing a measurement and control board .a 1 s reporting period was considered for devices deployed at major grid buses and important prosumer locations to support state estimation , while a request - response mechanism was considered to support demand - response services described in section [ sec : dem_resp ] .smart meters ( sms ) : : for standard billing measurement , assumed to be deployed at each prosumer . based on 1 min or 15 min reporting interval . in the future , the sms may be used for power measurements as well , however this requires lower reporting intervals , e.g. , down to 1 s like the pmc devices .regardless of the challenges that need to be addressed the main benefits of making the grid highly observable can be summarised as follows : * disturbances on lower voltage level can be locally detected , cleared , and eliminated before they affect other parts of the system .* will be able to identify grid model deficits , and allow them to construct accurate models suitable for detailed analysis and planning work .* will be able to analyse how installed and planned generation will affect the grid , enabling short , medium and long term planning .* continuous grid observation will pave the way for the real - time grid control .the smart distribution grid enables advanced features in demand monitoring , analysis and response .importantly , the monitoring and control activities will not only reside in operation centers but can also be distributed across the whole grid by enabling control of consumption and production flexibility in consumer and prosumer locations . 
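to indicate how such heterogeneous measurements enter the estimator , the following is a deliberately simplified , linearised weighted - least - squares sketch ( our own illustration ; a production dsse solves the nonlinear ac power - flow equations iteratively ) . accurate devices such as / units receive small standard deviations and therefore dominate the fit , which is why location density , accuracy and reporting interval of the measurement infrastructure drive the achievable estimation quality .

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted least squares: x_hat = argmin_x || W^(1/2) (z - H x) ||^2.

    H     : m x n linearised measurement model
    z     : m measured values (after preprocessing of PMU/PMC/SM readings)
    sigma : m standard deviations -- more accurate devices get more weight
    """
    W = np.diag(1.0 / np.asarray(sigma) ** 2)
    G = H.T @ W @ H                          # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    return x_hat, z - H @ x_hat              # estimate and residuals

# Toy 3-bus example: the state is the voltage angle at buses 2 and 3 (bus 1
# is the reference).  Rows: a direct angle measurement at bus 2, an angle
# difference across the line 2-3, and a high-accuracy uPMU angle at bus 3.
H = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, 1.0]])
z = np.array([0.052, -0.031, 0.021])
sigma = np.array([0.01, 0.01, 0.001])        # last entry: uPMU-grade accuracy
print(wls_state_estimate(H, z, sigma))
```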
having both sides of the distribution grid participating in demand - response can lead to a win - win outcome .the benefits from efficient network control and the consumers benefit from optimised use of energy .for example , consumers at the demand side will be able to manage their own consumption by changing the normal electricity consumption patterns over time .both centralised and decentralised approaches are seen in the literature .decentralised techniques usually have reduced computational complexity , however undesired communication overhead may be expected .we notice that the quality of service ( qos ) requirement for demand - response is relatively relaxed compared to dsse , with the measurements interarrival time in the order of minutes to hours , and the data transmission latency requirements around 1 s .the communication burden can be further reduced with the use of approximated information in the neighbourhood - wide consumption scheduling as proposed in . in this way, the scheduling is done by the individual consumers while their actual consumption is observed by the sms and the pmc units .the role of the security framework is to protect the smart grid assets against unfriendly attacks .an assessment of potential attacks scenarios has resulted in the identification of four high level security objectives : * insure availability of the services offered by the smart grid ( resilience to cyberattacks to insure functionality ) .* insure privacy of communications within the smart grid ( avoid spying ) . *prevent damage to equipment or infrastructures ( resilience to cyberattacks to insure equipment or infrastructures safety ) .* avoid fraud ( for specific equipment located in subscribers premises ) .the attacker profiles taken into account are typically : * a cyberterrorist trying to gain information , disrupt the functioning of the sunseed services or lead to a malfunction of the infrastructures by compromising either a device , a communication link or a cloud platform . * a subscriber trying to alter the functioning of the smart grid services to lower its costs or increase its revenues .the protection of data communications taking place in the smart grid system is key to meet the security objectives , and different levels of protection may coexist at different levels of the communication protocol stack : network access level security : : targets the protection of the access network .this includes for example the use of sim cards to authenticate with 3 g or 4 g wireless networks but also the deployment of vpn , firewalls , etc .transport level security : : primarily aims at protecting point - to - point data communication between two communicating nodes . with communications being ip based ,this type of data protection is largely independent from the communication channel ( 3 g , 4 g , wi - fi , plc , ... ) and is in most cases focused on protection of tcp communication .application level security : : addressing the protection of the payload carrying the applications data as described below . 
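as one possible realisation of the application - level layer listed above ( a sketch under our own assumptions , not the project implementation ) , the payload can be encrypted before it is handed to the publish / subscribe client with a symmetric key shared by the publishing devices and the authorised subscribers , so that transit nodes only ever see opaque ciphertext . the snippet uses aes - gcm from the python cryptography library ; the report fields and device name are hypothetical .

```python
import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical group key, assumed to have been distributed out of band to
# the publishing devices and to the authorised subscribers.
group_key = AESGCM.generate_key(bit_length=256)

def protect_payload(report: dict, key: bytes) -> dict:
    """Encrypt an application payload end to end with AES-GCM.

    Gateways and brokers forward only the opaque envelope; per-hop TLS
    still protects the transport, this adds a second, end-to-end layer.
    """
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(report).encode(), None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

def unprotect_payload(envelope: dict, key: bytes) -> dict:
    plaintext = AESGCM(key).decrypt(bytes.fromhex(envelope["nonce"]),
                                    bytes.fromhex(envelope["ciphertext"]), None)
    return json.loads(plaintext)

report = {"device": "pmc-17", "t": "2016-05-01T12:00:00Z", "p_kw": 3.2}
envelope = protect_payload(report, group_key)
assert unprotect_payload(envelope, group_key) == report
```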
from the security standpoint ,the primary high level goal of the architecture described here is to provide end - to - end data protection at the transport level for communication between the monitoring devices and the various smart grid applications and services that access this data .end - to - end security in its simplest form consists in implementing point - to - point security , typically at the transport level for every segment in the communication path from source to destination .every such segment is therefore protected using different credentials and a rekeying operation is typically needed at each transiting node , which should therefore be trusted .optionally , it should be possible to add an extra layer of security enabling the ciphering of application data all the way from source to destination ( using a group traffic encryption key ) . in this casethe data will remain opaque while moving through transit nodes , possibly reducing the trust level required for these nodes .[ fig : communication_architecture ] describes the proposed communication and security architecture .data originating from / , pmc or sm devices is published to a data sharing platform , possibly transiting through a gateway or aggregator using popular publish / subscribe protocols such as or . at the other end of the chain , applications subscribe to the published data to perform specific tasks upon receiving the data .another security requirement addressed by this architecture relates to the capability to manage the authorization rights defining how software entities may interact and exchange data . for optimal consistency and to minimize the risk of security holes resulting from human errors , this is best achieved using a centralized management interface avoiding the need to configure access rights in a disparate way in many different platforms . on that aspect, the choice was made to use a standalone authorization server supporting a delegated authorization scheme .the implementation is based on the profile of the widely used oauth protocol .the authorization server is , in this context , the place holding the description of the access control policies to all access controlled resources in the ecosystem .for example / devices may publish their data on specific information topics using the mqtt protocol , and it is certainly necessary to define which software entity may publish or subscribe on a specific topic .the information topic is then modeled as an access controlled resource .the delegated authorization scheme , insures a dynamic life cycle management of the access controlled resources in the authorization server , by enabling the resources servers ( for example the publish / subscribe brokers ) , in charge of enforcing access control , to register and delegate the management of access control for their resources to the authorization server via the uma rest api .this delegated authorization scheme is one of the two main structural security choices made in the proposed architecture .it opens the possibility to achieve a centralized access control management for resources located in many dispersed heterogeneous platforms . 
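a minimal sketch of the resulting client - side flow is given below , assuming the paho - mqtt 1.x client api and the python requests library . the token endpoint , topic name , scope naming and the convention of presenting the bearer token as the mqtt password are placeholders : the actual deployment negotiates access through the uma / oauth profile described above , and different brokers consume tokens in different ways .

```python
import json
import requests
import paho.mqtt.client as mqtt

AUTH_SERVER = "https://auth.example.org/oauth/token"   # hypothetical endpoint
BROKER_HOST = "broker.example.org"                     # hypothetical broker
TOPIC = "grid/feeder7/pmu42/synchrophasor"             # hypothetical topic

def get_access_token(client_id: str, client_secret: str, scope: str) -> str:
    """Client-credentials style request to the authorization server."""
    resp = requests.post(AUTH_SERVER,
                         data={"grant_type": "client_credentials",
                               "scope": scope},
                         auth=(client_id, client_secret), timeout=5)
    resp.raise_for_status()
    return resp.json()["access_token"]

def publish_measurement(token: str, report: dict) -> None:
    client = mqtt.Client(client_id="pmu42")
    # One common pattern: present the bearer token as the MQTT password so
    # the broker (acting as resource server) can enforce the topic policy.
    client.username_pw_set(username="pmu42", password=token)
    client.tls_set()                       # server-authenticated TLS
    client.connect(BROKER_HOST, 8883)
    client.loop_start()                    # background network loop
    client.publish(TOPIC, json.dumps(report), qos=1).wait_for_publish()
    client.loop_stop()
    client.disconnect()

token = get_access_token("pmu42", "device-secret", scope="publish:" + TOPIC)
publish_measurement(token, {"t": "2016-05-01T12:00:00.020Z", "v_mag": 0.98})
```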
as a result ,client applications need only to authenticate with the authorization server to dynamically receive credentials granting them access to multiple heterogeneous platforms .this results not only in enhanced security , but also in a great simplification of authentication and authorization management .the authorization rights granted to requesting clients are negotiated using the oauth protocol and materialized as etokens possibly carrying credentials .the idea is to distribute dynamically to every client application ( located in the / or pmc devices or in remote cloud servers ) the set of credentials required to perform the tasks it should perform with a given work flow scenario .for example , a client application implementing a historical data archive may need to subscribe to the / information topics via the mqtt protocol and then store the received data in an archival database .this client application should then dynamically receive credentials enabling both the reception of information and its storage in the archival database .another structural security choice relates to the decision to protect the credentials stored in the / and pmc devices via the use of embedded secure elements , similar to the ones used for manufacturing embedded uiccs in cellular iot devices .commercially available secure elements provide a credible protection for credentials stored inside their memory , which are meant to be used in place and can not be read back , thereby significantly complicating the setup of attacks involving credential stealing and/or device cloning .the protection of communications relies upon the use of the protocol involving both clients and servers certificates .client certificates are dynamically generated in an initial security bootstrapping process between the / and pmc devices and the authorization server .on the device , the pki private and public keys are dynamically generated within the secure element . while the public key is sent out and serves as the starting point to generate the client certificate , the private key will remain securely stored inside the secure element which exposes an api , enabling clients to request the on chip execution of cryptographic primitive operations such as signing or ciphering . 
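the contract offered by the secure element can be pictured as follows ( a software stand - in written for illustration only ; the real chip exposes a vendor - specific api and the key material never exists outside it ) . callers may export the public key , for instance to build the certificate request during bootstrapping , and may ask for on - chip signatures , but can never read the private key .

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

class SecureElementStub:
    """Software stand-in mimicking the secure element's narrow interface."""

    def __init__(self):
        # On the real device this key pair is generated inside the chip.
        self._private_key = ec.generate_private_key(ec.SECP256R1())

    def public_key_pem(self) -> bytes:
        return self._private_key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo)

    def sign(self, message: bytes) -> bytes:
        # The private key never leaves this object / the chip.
        return self._private_key.sign(message, ec.ECDSA(hashes.SHA256()))

se = SecureElementStub()
report = b'{"device": "pmc-3", "p_kw": 1.7}'      # hypothetical payload
signature = se.sign(report)

# A verifier holding the device certificate (built from the exported public
# key during bootstrapping) can check the origin and integrity of the report.
public_key = serialization.load_pem_public_key(se.public_key_pem())
public_key.verify(signature, report, ec.ECDSA(hashes.SHA256()))   # raises if forged
```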
in order to simplify deployment ,secure elements are pre - personalized at manufacturing time .each of them comes with a unique identifier ( which is used also to identify the device ) , and these identifiers are initially provisioned in a secure element management platform along with root secrets .no software , or configuration operation needs to be performed when deploying the secure elements .they are to be considered as any other electronic component and just need to be soldered on the circuit board of the devices to secure .a bootstrap process will occur transparently upon the first connection of the device to an ip network , resulting in the possibility to remotely manage the credentials stored inside the secure element from a remote web interface .finally , particular attention was given to ease the use of the proposed security mechanisms by application developers that are often requesting simple to use security solutions .they want reasonable assurance that their application will provide robust data protection without having to dive into the details of the cryptographic operations .the use of tls to protect data communications is a good example .tls in its most common implementation involves the use of server certificates , enabling servers to be authenticated by clients .clients may also be authenticated using client certificates , but the complexity involved in generating and distributing those has greatly limited the use of such authentication methods . a very common demand from application developers is to simplify the process of obtaining the credentials they need , whatever their form . in many cases ,those credentials themselves are not even handled in the developer s code , but rather passed to third party libraries or modules that the developer may be using .in addition to ensuring the security of communicated information in the smart grid , another cornerstone is to ensure reliable and timely delivery of the information through the used cellular networks , which is considered in the following section .the existing lte cellular networks carry various types of traffic , e.g. mobile broadband traffic , and are expected to additionally serve the traffic originated by many iot applications including the smart grid applications such as dsse and demand - response . in this section the focus is on the uplink of lte cellular network that carries the reports from the installed measurement devices ( / , pmc , and sm ) towards the publish - subscriber servers . when the ( shared ) existing lte cellular networks are used to facilitate this measurement collection , from a communication performance point of view there are two possible bottlenecks that can have detrimental effects : 1 . the bottleneck in the random access phase , i.e. when a large number of smart grid devices would like to randomly or periodically transmit their measurement reports . here, each device needs to go through the steps in the . due to the large number of smart grid and non - smart grid devices within the lte cell , the random access attempts to set - up the individual connections might collide resulting in failed random access attempts .the bottleneck in the communication phase ( that is , after successfully finishing the random access phase ) when a large number of smart grid devices would like to push their measurement data towards smart grid applications . 
sincemany uplink messages are contending for the limited lte uplink resources the maximum delay that some measurement reports might experience ( for example due to the waiting time until a device is granted an ul transmission resource ) could exceed the requirements of smart grid applications such as dsse . in the following section we analyse the achievable lte uplink delay of sm , pmc and / devices .hereafter , we consider specifically the bottlenecks in the random access phase and describe a proposal for ensuring reliable random access .the analyses are based on simulation models described in references .in particular , we investigate and quantify the performance of lte for different possible deployment scenarios in terms of the number of devices , their type , and the amount of reserved lte uplink physical radio resources , as configured by the lte cellular network operator for supporting the smart grid data traffic . for the following studies , we consider the ratio r between the number of / devices over the number of pmc / sm devices .the pmc and sm devices are considered jointly , since their uplink traffic patterns are similar .we consider ratios of r=1/10 and r=1/3 , where the former represents a scenario with moderate der penetration where not too many / devices are needed , whereas the latter represents a heavy der penetration .the measurement report sizes from the pmc / sm and / devices are assumed to be 70 bytes and 560 bytes , respectively , as the / devices report more detailed power measurements .we assume that the reporting interval of all types of measurement reports is 1 s , as motivated in sec .[ sec : smart_grid_monitoring ] .the analysis in this section is focused on the radio part of the lte uplink transmission , i.e. , between the end - node and the lte base station .this is because it is assumed that this is the most critical part of the end - to - end path between the measurement device and the smart - grid publish - subscribe server and applications , which are typically connected through high - speed network infrastructure as indicated in fig .[ fig : communication_architecture ] . in lte the uplink radio resources are organized in time - frequency blocks , also called physical resource blocks ( prbs ) , with duration of 0.5 ms and 12 consecutive ofdma frequency sub - carriers .the shortest uplink radio transmission duration is 1 ms , also known as , which consists of two consecutive prbs in the time domain .the prb allocation per individual device and per tti ( or per block of ttis ) is done by the scheduler located in the enb .the assumed scheduling approach for this analysis is fair fixed assignment ( ffa ) where in every tti a device is randomly selected from a number of devices willing to transmit and it is allocated a fixed number of prbs per device . depending on the signal - to - interference - plus - noise ratio ( sinr ) as experienced by the device on the allocated prbs an appropriate modulation and coding schemeis selected for the transmission .this , in turn determines the amount of data ( in bits ) that can be transferred , and finally the number of ttis needed to transmit the measurement report by the devices . 
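a back - of - the - envelope version of this delay model is sketched below ( our own first - order approximation , not the system - level simulator used for the results that follow ) . the payload bits carried per scheduled tti stand in for the sinr - to - mcs mapping and are an assumed value , and the formula already folds in the waiting time for a scheduling grant , which the next paragraph defines more precisely ; the point is only to show how report size , the number of contending devices and the amount of reserved prbs combine .

```python
import math

def uplink_delay_ms(n_devices: int, report_bytes: int,
                    bits_per_device_per_tti: int,
                    total_prbs: int, prbs_per_device: int) -> int:
    """Rough worst-case uplink delay (ms) under fair fixed assignment.

    Each scheduled device is granted prbs_per_device PRBs and carries
    bits_per_device_per_tti payload bits in that TTI (an assumed constant
    standing in for the SINR/MCS mapping).  At most
    total_prbs // prbs_per_device devices fit into one 1 ms TTI, so a given
    device is scheduled roughly once every `rounds` TTIs and needs
    `ttis_needed` scheduled TTIs in total.
    """
    ttis_needed = math.ceil(report_bytes * 8 / bits_per_device_per_tti)
    rounds = math.ceil(n_devices / (total_prbs // prbs_per_device))
    return rounds * ttis_needed            # one TTI lasts 1 ms

# Assumed illustration values: a 560-byte uPMU report, 144 bits per
# scheduled TTI, 6 PRBs reserved for smart grid traffic, 2 PRBs per grant.
print(uplink_delay_ms(n_devices=400, report_bytes=560,
                      bits_per_device_per_tti=144,
                      total_prbs=6, prbs_per_device=2))
```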
additionally , as there is a limited number of reserved uplink lte radio resources ( such as a limited total number of prbs ) for the transmission of the measurement reports , not all active devices in the lte cell can begin their transmission within one tti .as a consequence , a number of active devices have to wait until they receive an uplink transmission grant , resulting in a certain amount of waiting time .then the maximum uplink lte delay is the sum of the transmission time and the waiting time .for more details regarding the analysis of the maximum uplink lte delay the reader is referred to reference . in order to quantify the maximum lte uplink delay ,monte - carlo system level simulations were performed for an urban environment with increased number of total smart grid devices per lte cell .0.49 0.49 in fig .[ fig : wamstosm110 ] and fig . [ fig : wamstosm13 ] the maximum lte delay results are presented for r=1/10 and r=1/3 , respectively , for fixed number of 2 prbs assigned per device .it can be seen that even for very small total number of devices the maximum uplink lte delay is about 20 ms and 200 ms , respectively .this is the intrinsic delay of transmitting the measurement report ( including any re - transmissions ) , since for low number of devices the waiting time is practically zero . as the number of devices increases , the maximum uplink delay remains constant with zero waiting time , only for when the whole bandwidth ( 10 mhz or 50 prbs in this case )is reserved for the smart grid data . as the number of available prbsis decreased to 20 or 6 prbs , which is more realistic in shared networks , the max delay rapidly increases due to the waiting time incurred at the devices until they get a scheduling grant for uplink transmission .the only exception here is the max delay for the / in fig .[ fig : wamstosm110 ] where the max delays for 50 prbs and 20 prbs are equal and stays constant up to 4000 devices per lte cell ( i.e. these two curves overlap ) .if the maximum delay requirement for the real - time application is for instance 1 s then in order to achieve this requirement for e.g. up to 4000 devices per lte cell , the operator is required to reserve 50 prbs ( or a whole 10 mhz lte carrier ) , which might not be an economically viable solution . for a more realistic amount of reserved resources ( for example 6 prbs )the achievable maximum delay is 6s or 3s for r=1/3 or r=1/10 , respectively .when a measurement device wants to transmit a report , it will need to change state from idle to connected in the lte network , through the .this procedure has several steps in which failures can occur , especially in case of preamble collisions .the collision probability increases with the number of active devices .this means that in a traditional lte network , a large number of accessing devices can cause unacceptable delays for the mission critical traffic of dsse and demand - response applications , where certain reliability requirements exist .since all random access requests are treated equally in legacy lte , there is no way to favorise certain types of traffic . for mission critical traffic, we propose an alternative approach to random access in lte , which allows to reserve sufficient to ensure a certain level of success probability ( reliability ) in the random access procedure . 
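before describing that approach , the following first - order estimate ( a standard approximation that ignores retransmissions , back - off and power ramping , and is not the exact collision model of the referenced work ) illustrates how quickly legacy contention degrades : a device succeeds in a random access opportunity only if no other contending device picks the same preamble .

```python
def singleton_success_probability(n_devices: int, n_preambles: int = 54) -> float:
    """P(a given device's preamble is chosen by nobody else) in one RAO.

    54 is a typical number of contention-based preambles per cell; treat it
    as an assumption, since the actual split is operator-configurable.
    """
    return (1.0 - 1.0 / n_preambles) ** (n_devices - 1)

for n in (10, 50, 100, 200, 400):
    print(n, round(singleton_success_probability(n), 3))
```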
instead of serving all accessing devices equally , as in the legacy lte arp ,the modified approach that is described in detail in reference allows to create prioritized traffic classes for which the probability of successfully accessing the network can be guaranteed .the principle of this approach is illustrated in fig .[ fig : contention_frame ] , where the prioritized traffic classes / , pmc / sm and best effort ( ordered by most important first ) that are relevant for the considered smart grid communication system are shown .each of the / and pmc / sm traffic classes have a dedicated estimation slot in the contention frame , in which the corresponding devices must activate a random preamble to access the network .this enables the enodeb to estimate the number of accessing devices and to dimension the following serving phases to satisfy the required reliability .next , the devices will activate a preamble randomly within the corresponding serving phase , to start the arp .any remaining raos in the contention frame will be available for best effort traffic .the duration of the contention frame is set as half of the shortest latency deadline , to ensure that the all deadlines can be fulfilled .0.49 0.49 the allocation of resources for increasing number of / and pmc / sm devices in an lte cell is shown in fig .[ fig : rach_fraction ] .the plot shows that as the number of active devices increases , first the raos available for best effort traffic run out .hereafter , the raos for pmc / sm traffic are sacrificed to favorise the more important / traffic .finally , the total number of devices becomes too large to also support / traffic . for comparison , in fig .[ fig : rach_reliability ] we show the achievable reliability of the legacy lte arp , calculated using the collision probability model in reference . assuming that the arrival of / and pmc / sm traffic occurs in a traditional best effort manner , the reliability of all traffic in legacy lte will drop below the required reliability of both / and pmc / sm , at an earlier point than with our proposed scheme .notice that the required reliability of / can be supported with the proposed scheme for 3 times as many active devices as legacy lte for the r=1/10 scenario .this does not hold for the r=1/3 scenario where a larger fraction of devices require 99.9% reliability , since it requires more raos to ensure 99.9% reliability than 95% reliability .unfortunately , the proposed scheme for guaranteed reliability random access can not be easily implemented in today s cellular networks , since it requires changes to the lte protocol in both devices and enodeb .the scheme may , however , inspire the development of protocols for the upcoming 5 g standards .with the increasing penetration of the smart grid needs more and deeper monitoring and control to maintain stable operation . in this paper , we have considered shared cellular lte networks as the underlying ict infrastructure to support the smart grid .specifically , we highlighted the security and communication requirements such as for example end - to - end security , dynamic credential distribution , and highly reliable low latency uplink communication .further , we outlined the solutions that were considered in the sunseed project for tackling the communication related challenges of ensuring successful operation of the future smart grids . 
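As a rough illustration of the dimensioning problem that the contention frame described above has to solve, the snippet below computes how many random access opportunities a traffic class would need if every contending device made a single uniformly random choice and the target were that a device collides with nobody else with a given probability. This single-shot model is a strong simplification of the scheme in the cited reference (which resolves collisions over the subsequent serving phases and therefore needs far fewer resources), but it shows why the required number of RAOs grows with both the device count and the reliability target.

```python
import math

def raos_needed(n_devices, reliability):
    """Smallest R such that a device picking one of R RAOs uniformly at random is not
    chosen by any of the other n_devices - 1 contenders with probability >= reliability.
    Single-shot contention only; schemes with collision resolution need far fewer RAOs."""
    if n_devices <= 1:
        return 1
    # (1 - 1/R)^(n-1) >= reliability  =>  R >= 1 / (1 - reliability**(1/(n-1)))
    return math.ceil(1.0 / (1.0 - reliability ** (1.0 / (n_devices - 1))))

# Reliability targets mentioned in the text: 99.9 % for the high-priority class, 95 % for PMC/SM.
print(raos_needed(100, 0.999), raos_needed(1000, 0.95))
```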
In the last year of the SUNSEED project (until Jan. 31st, 2017) the suitability of the LTE cellular network for facilitating the smart grid monitoring and control functions will be tested via a large field trial in Slovenia, consisting of four planned areas with a total of 42 / devices, 5 PMC devices, 22 PLC concentrators and 563 SMs, as follows:

Kromberk area: LTE coverage of 10 / devices, 1 PMC device, 6 PLC concentrators and 116 SMs.

Bonifika area: LTE coverage of 7 / devices, 2 PMC devices, 5 PLC concentrators and 535 SMs.

Razdrto area: UMTS coverage (due to lack of LTE coverage) of 17 / devices, 2 PMC devices, 7 PLC concentrators and 10 SMs.

Kneza area: as illustrated in fig. [fig:trial_kneza], LTE coverage of 3 / devices and 2 PLC concentrators, as well as satellite links (due to lack of coverage of any cellular network in this mountainous region) covering 5 / devices, 2 PLC concentrators and 2 SMs.

For more details, the reader is referred to the project deliverables available via the project web-page at www.sunseed-fp7.eu.

This work is partially funded by the EU under grant agreement no. 619437 SUNSEED. The SUNSEED project is a joint undertaking of 9 partner institutions and their contributions are fully acknowledged.

A. von Meier and R. Arghandeh, "Chapter 34 - Every moment counts: synchrophasors for distribution networks with variable resources," in _Renewable Energy Integration_, L. E. Jones, Ed. Boston: Academic Press, 2014, pp. 429-438. [Online]. Available: http://www.sciencedirect.com/science/article/pii/b978012407910600034x

Z. Zhu, S. Lambotharan, W. H. Chin, and Z. Fan, "A game theoretic optimization framework for home demand management incorporating local energy resources," _IEEE Transactions on Industrial Informatics_, vol. 11, no. 2, pp. 353-362, 2015.

Z. Zhu and Z. Fan, "An efficient consumption optimisation for dense neighbourhood area demand management," in _IEEE International Energy Conference (ENERGYCON)_. IEEE, Apr. 2016.

L. Jorguseski, H. Zhang, M. Chrysalos, M. Golinski, and Y. Toh, "LTE delay assessment for real-time management of future smart grids," in _1st EAI International Conference on Smart Grid Inspired Future Technologies (SmartGIFT)_. EAI, 2016.

G. C. Madueño, N. K. Pratas, Č. Stefanović, and P. Popovski, "Massive M2M access with reliability guarantees in LTE systems," in _2015 IEEE International Conference on Communications (ICC)_. IEEE, 2015, pp. 2997-3002.

G. C. Madueño, J. J. Nielsen, D. M. Kim, N. K. Pratas, Č. Stefanović, and P. Popovski, "Assessment of LTE wireless access for monitoring of energy distribution in the smart grid," _IEEE Journal on Selected Areas in Communications_, vol. 34, no. 3, pp. 675-688, March 2016.

D. C. Dimitrova, J. L. van den Berg, G. Heijenk, and R. Litjens, "LTE uplink scheduling - flow level analysis," in _Multiple Access Communications: 4th International Workshop, MACOM 2011_. Springer Berlin Heidelberg, 2011, pp. 181-192. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-23795-9_16
Electricity production and distribution are facing two major changes. First, production is shifting from classical energy sources such as coal and nuclear power towards renewable resources such as solar and wind. Secondly, consumption in the low-voltage grid is expected to grow significantly due to the expected introduction of electric vehicles. The first step towards more efficient operational capabilities is to introduce observability of the distribution system and to allow the flexibility of end connection points with manageable consumption, generation and storage capabilities to be leveraged. Thanks to the advanced measurement devices, management framework and secure communication infrastructure developed in the FP7 SUNSEED project, full observability of the energy flows at the medium/low voltage grid is now available. Furthermore, prosumers are able to participate pro-actively and to coordinate with the other stakeholders in the grid. The monitoring and management functionalities place strong requirements on communication latency, reliability and security. This paper presents novel solutions and analyses of these aspects for the SUNSEED scenario, where the smart grid ICT solutions are provided through shared cellular LTE networks. _Keywords: smart grid, real-time monitoring, security architecture, cellular networks, low latency, reliable communication._
perturbative calculations play an important role in field theoretical approach to understanding particle interactions .several techniques have been developed to tackle the ever increasing complexities for the task of evaluating multiloop feynman diagrams , mostly in the context of dimensional regularization or analytic regularization and among them we can mention the powerful mellin - barnes contour integration , the method of gegenbauer polynomials , the differential equations technique and others .the ndim developed by halliday and ricotta has shown itself as a reliable one when applied to the calculation of diagrams of one- , two- and multi - loops , with scalar and tensorial structures and in noncovariant gauges .one of the advantages of ndim is that it allows us to avoid the often cumbersome parametric integrals , transfering the problem into easier solving systems of linear equations instead .another advantage of ndim is that the exponents of propagators are taken to be arbitrary integers , so that one can solve the general case for each type of graph. a severe drawback of this method is that as the number of loops increases , the number of systems of linear equations that must be solved grows to staggering heights .one would like then to work out higher loops via loop - by - loop calculation with inserted simpler results .the analytic result for the one - loop massless triangle feynman diagram has been evaluated long ago by boos and davydychev and since then reproduced in many different contexts , e.g. , .it is written in terms of a linear combination of four appel hypergeometric functions of two variables , with and , where , and label the three external momenta that flow along the triangle s three external legs .this well - known result for the off - shell triangle , however , _ is not valid _ for every momentum ; those momenta must be such that , and .in other words , the series is defined inside some region of convergence and for this reason the well - known result of boos and davydychev can not be used in a loop by loop calculation . because a loop integration implies that the integrated momentum runs from minus infinity to plus infinity, we can easily see why the linear combination of four s with same variables will run into trouble within a loop by loop calculation . to solve this difficulty one needs to use the correct analytic expression for the triangle diagram which allows further integration on the momentum variables appearing within the appel s functions .fortunately , such suitable and shortened version for the triangle diagram result can be constructed , which is written in terms of only three appel s hypergeometric functions .this shortened and simplified form that is adequate for further momentum integration is obtained using the analytic continuation properties obeyed by the appel s functions that preserve momentum conservation in the three legs of the triangle diagram .the paper is outlined as follows : first we translate the -dimensional feynman integral for the master diagram of figure [ figure 1 : ] in the language of ndim , then we calculate it using the negative dimensions technique .once the ndim result is obtained , we perform an appropriate analytic continuation to positive -dimensionality to get our desired result .let us consider the two - loop `` master '' diagram as shown in figure [ figure 1 : ] .the feynman integral associated to such a diagram is where is the external momentum . 
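The displayed equation for the master integral appears to have been lost in extraction. Based on the propagator structure used in the NDIM counterpart below (five propagators, with the inner integral running over $k^{2}$, $(k-p)^{2}$ and $(k-q)^{2}$), a plausible reconstruction, offered here as an assumption rather than a quotation, is

$$ I(p^{2}) \;=\; \int d^{d}k\, d^{d}q\; \frac{1}{k^{2}\,(k-p)^{2}\,q^{2}\,(q-p)^{2}\,(k-q)^{2}}, $$

whose NDIM counterpart with generalized propagator exponents would read

$$ I_{\mathrm{NDIM}} \;=\; \int d^{d}q\, d^{d}k\;(q^{2})^{g}\,[(q-p)^{2}]^{h}\,(k^{2})^{i}\,[(k-p)^{2}]^{j}\,[(k-q)^{2}]^{l}, $$

where the label $g$ for the exponent of $q^{2}$ is our guess, while $h$, $i$, $j$ and $l$ are the labels that survive in the text below.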
in the ndim we could , at least in principle ,tackle this double integral in eq .( [ master ] ) at once .however , as it has been already mentioned , this leads to the computation of more than eight thousand systems of coupled linear equations whose solutions are expressed as multiple series of hypergeometric - type with unknown analytic properties , so that we have no idea as to how to construct distinct complete sets of linearly independent solutions from among all those eight thousand plus results .thus , the alternative is to evaluate it stepwise , order by order , performing one momentum integral at a time . to do this, we immediately note that there makes no difference at all whether we integrate first in or in since in both cases we have to integrate a one - loop triangle type integral first .we could write the above integral as , for example , and we readily recognize the integral in momentum as an off - shell triangle one , which has a well - known result that can be written in terms of a linear combination of four appel hypergeometric functions of two variables with the two momentum dependent variables given , for example , as and .using now the series representation for the appel functions of these specific two variables , we can therefore see that the remaining -integral is of a self - energy type with shifted exponents for the propagators , ^{1+\nu } } , \label{2nd_integral}\ ] ] where is a factor which depends on , the dimension as well as the exponents of propagators .the shifting exponents and also depend on the dimension and exponents of propagators in the former integration , as well as on the double sum indices , say , of the series . however , straightforward application of this does not yield the correct result .here comes an important point : to carry out the second integral eq .( [ 2nd_integral ] ) one has to perform the integral over the whole space , for this reason the result of the former one must hold on the whole range of momentum . the well - known result of the off - shell triangle , written as a sum of four appel s hypergeometric functions , _ is not valid _ for all momentum range ; these momenta must be such that , and .in other words , the series is defined inside some region of convergence and for this reason the well - known result of boos and davydychev can not be used in ( [ 1st_integral ] ) for the integration . as demonstrated in , the result for the triangle diagram that should be plugged into eq .( [ 1st_integral ] ) has only three s .this choice guarantees that the integration may be performed for the whole interval .the ndim is characterized among other things , by two features : one is the generalized exponents for the propagators , say , and the other is the polynomial nature of the integrands that represent the propagators .thus , in the spirit of the ndim technique for performing feynman integrals , we introduce the ndim counterpart of ( [ 1st_integral ] ) , namely , ^{h}\,\intd^{d}k \:(k^2)^{i}\,[(k - p)^2]^{j}\,[(k - q)^2]^{l}.\label{1st_integralndim}\ ] ] let us first concentrate our attention in the triangle part : ^j\,[(k - q)^2]^l}.\ ] ] since there is a recurrent appearance of a certain expression involving the exponents of propagators and dimension , we introduce for convenience , the following definition , , and also define . 
the standard solution for eq .( [ delta ] ) is a sum of four terms : where the four coefficients are written in terms of pochhammer s symbols of the several exponents of propagators and dimension : and the various parameters of the hypergeometric functions are listed in the table below : .parameters of appel s functions in eq .( [ triangle4 ] ) . [ cols="^,^,^,^,^",options="header " , ] in principle any one of these three sets ( unprimed , primed or double primed ) could be inserted into the remaining -integral and the integration carried out .however , in order to compare our ensuing result for the two - loop `` master '' self - energy diagram with already known result , it so happens that the most convenient sets are the primed and/or the double primed ones .the reason why the unprimed set is not convenient is due to the fact that is proportional to both and , which when inserted into the -loop integration will lead to more complicate structure for the parameters in the function part of the answer . in our present case ,this function part is expressed as a series of the form : where parameters depend on the exponents of and .when any one of the is zero , only the term in the sum survives and in this case may coalesce into simpler forms and thus be summed up using gauss summation formula for hypergeometric series after integration .this kind of sum simplification and coalescence of fails to occur in the case of .thus in the following we take the primed set of solutions for the triangle diagram to perform the calculations .the -momentum integral is of a one - loop self - energy type integral , so just for reference we quote here the general result valid for such an integral in ndim : ^f \nonumber \\ & = & ( -\pi)^{d/2}(p^2)^{\sigma_1 } \frac{(1+\sigma_1)_{-2\sigma_1-d/2}}{(1+e)_{-\sigma_1}(1+f)_{-\sigma_1}}\ , , \qquad \sigma_1 \equiv e + f + d/2.\end{aligned}\ ] ] application of this ndim formula will produce again recurring expressions which we define conveniently using short hand notations , such that , for example , and .thus , proper evaluation ( details are left to appendix ) of those relevant three terms gives respectively : we can now collect all the individual results and write our answer to eq .( [ 1st_integralndim ] ) as the linear combination where the three coefficients and three functions are respectively given by : and having obtained these results we now have to analytic continue to positive dimension and negative values of exponents .this is accomplished by operating on the pochhammer s symbols present in the coefficients , which is typical of the ndim technique.then analyticly continuing eq .( [ 1st_integralndim ] ) we get from eq .( [ result ] ) : the final result for the two - loop `` master '' self - energy diagram with generalized exponents of propagators is therefore given by eq .( [ ac ] ) . 
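For reference, the general NDIM result for the one-loop self-energy integral quoted above seems to have lost its left-hand side in extraction. Judging from the Pochhammer structure of the surviving right-hand side, it presumably reads (again a reconstruction, not a quotation)

$$ \int d^{d}k\;(k^{2})^{e}\,[(k-p)^{2}]^{f} \;=\; (-\pi)^{d/2}\,(p^{2})^{\sigma_{1}}\, \frac{(1+\sigma_{1})_{-2\sigma_{1}-d/2}}{(1+e)_{-\sigma_{1}}\,(1+f)_{-\sigma_{1}}}, \qquad \sigma_{1}\equiv e+f+\tfrac{d}{2}, $$

where $(a)_{n}$ denotes the Pochhammer symbol.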
in itthe analytic continuation of the exponents to negative values , , in the pochhammer s symbols are done using the well - known relation : using this analytic continuation relation for the various pochhammer s factors in the coefficients given in eqs .( [ c1])-([c3 ] ) we get in order for us to check whether this result is consistent with known results previously obtained via other methods , it is necessary to particularize the result in eq .( [ ac ] ) for the specific values , which is tantamount to evaluate the original feynman integral for the two - loop master integral given in eq .( [ master ] ) .moreover , as pointed out previously , since we are taking the special case , this implies that all sum terms where we meet vanishes except for .also as the numerator parameter in reduces the function to just a constant equal to 1.the other functions in eqs .( [ f1 ] ) and ( [ f2 ] ) will present the coincident numerator and denominator parameters so that they both coaslesce into two gauss hypergeometric functions with unity argument : .\end{aligned}\ ] ] the gauss hypergeometric function of unity can be summed up using the gauss summation formula , using we finally get .\end{aligned}\ ] ] we may rewrite these three terms in a more compact form using the well known identity and its variants .the second term within the square brackets may be written as while the first and third terms within square brackets may be written together as thus finally .\end{aligned}\ ] ] this is exactly the result obtained via gegenbauer polynomials method by chetyrkin s _ et al _ ( cf .( 2.14 ) , p. 351in the first reference in ) .after obtaining the two - loop `` master '' self - energy diagram result , it is not difficult to get the three loop two - point funtion for the diagram depicted in figure [ figure2 : ] .we explicitly draw in it the momentum flow in each line and use the convention , for convenience .then the corresponding integral reads : in the spirit of order - by - order integration , this can be written as ^f,\ ] ] where in the first -integration we have explicited .this integral is a one - loop self - energy integral given in eq .( [ se ] ) with in place of and . plugging this result into eq .( [ 3loops_dec ] ) we get ^h \int \! d^dk(k^2)^i [ ( k - p)^2]^j [ ( k - q)^2]^{\sigma_1}.\end{aligned}\ ] ] if we compare eq .( [ 3loops_sigma1 ] ) with eq .( [ 1st_integralndim ] ) we note that the former has exactly the same structure of integrand as the latter one ; the only difference being the exponent that appears in the factor instead of in the former , we have now in the latter expression. 
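For completeness, the two classical identities invoked above without being displayed are, first, the Gauss summation theorem used to evaluate the hypergeometric functions of unit argument,

$$ {}_{2}F_{1}(a,b;c;1) \;=\; \frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)}, \qquad \operatorname{Re}(c-a-b)>0, $$

and, second, what we take to be the "well known identity" for rearranging the gamma-function factors, namely the reflection formula

$$ \Gamma(z)\,\Gamma(1-z) \;=\; \frac{\pi}{\sin(\pi z)}. $$

The first attribution is certain from the wording of the text; the second is our reading of which identity is meant.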
then we can immediately write upon analytic continuation to positive dimension and negative values of exponents , we get it is now a simple matter of substituting the correct values of exponents for the case and careffully manipulating the various gamma functions that appear to obtain the final result for the three loop integral the final result for eq .( [ 3loops_result ] ) is then this is concordant with the result given in hathrell ( see eq .( 8.13 ) on page 176 ) .in this work we have demonstrated that in a loop by loop calculation of higher order feynman diagrams , the standard analytic solution for the one - loop triangle diagram expressed in terms of a linear combination of four appel s hypergeometric functions of two variables can not be used .the reason why such a solution for the triangle can not be used can be understood considering that those variables defining the hypergeometric functions are restricted to convergence constraints , and loop integrations require momentum running from minus infinity to plus infinity . also , our result hints that such an analytic expression for the triangle diagram also is not correct for further integration due to the fact that those variables are connected by a momentum conservation constraint , namely , , and therefore , not all the four appel s hypergeometric functions are linearly independent to each other .therefore , on the grounds of mathematical argumentation concerning constraints to lower the number of independent functions as well as the physical argumentation connected to the domain of momentum integration and variables convergence region , the analytic function for the triangle diagram that allows for further momentum integration must be a linear combination of three independent appel s function . which three of these functions should be can only be determined invoking another physical input beyond momentum conservation . using the modified triangle diagram integral expressed as a linear combination of _ three _ linearly independent solutions in terms of _ three _appel s functions of two variables , two of which have the same variables and and the third one having variables and , we were able to calculate the two - loop `` master '' self - energy diagram using ndim performing order - by - order calculation .of course , our calculation shows that the same care must be taken for order - by - order calculation done in the usual positive dimensional calculations involving embedded triangle diagrams .once the result for the two - loop `` master '' diagram is obtained , it is a matter of straighforward calculation to obtain the corresponding three - loop diagram as in figure [ figure 2 : ] since the feynman integral associated to such diagram can be reduced to the two - loop case once a convenient one - loop self - energy diagram integral is performed .the only novelty is that this ensuing two - loop `` master '' integral now bears a shifted exponent in one of the integrand factors .the remaining of the calculation is just manipulation of gamma function factors using the property and its related versions together with use of well - known properties of hypergeometric functions and of unity argument .i.g . halliday and r.m .ricotta , phys . lett.b* 193 * ( 1987 ) 241 .r.m.ricotta , _ topics in field theory _ , ( ph.d .thesis , imperial college , 1987 ) .dunne and i.g .halliday , phys .lett.b * 193 * ( 1987 ) 247 .
NDIM (negative dimensional integration method) is a technique for evaluating Feynman integrals based on the concept of analytic continuation. The method has been successfully applied to many diagrams in covariant and noncovariant gauge field interactions and has shown its utility as a powerful technique for handling Feynman loop integrations in quantum field theories. In principle NDIM can handle any loop calculation; in practical terms, however, the resulting multiple series in several variables generally cannot be summed conveniently and their analytic properties are generally unknown. The alternative is then to use order-by-order (loop-by-loop) integration, in which the first integral is of the triangle-diagram type. However, naive momentum integration of this integral leads to wrong results. Here we use the shortened version of the triangle in NDIM that is suitable for a loop-by-loop calculation and show that it leads (after appropriate analytic continuation to positive dimension) to agreement with the known result for the two-loop master diagram. From it, a three-loop diagram is then calculated and again shown to be consistent with the already published result for such a diagram. _Keywords: negative dimensional integration, higher-order diagrams, off-shell triangle diagram insertions._
cancer is not a single disease , but rather a highly complex and heterogeneous set of diseases .dynamic changes in the genome , epigenome , transcriptome and proteome that result in the gain of function of oncoproteins or the loss of function of tumor suppressor proteins underlie the development of all cancers .while some of the mechanisms that govern the transformation of normal cells into malignant ones are rather well understood , many mechanisms are either not fully understood or are unknown at the moment . even if all of the mechanisms could be identified and comprehended , it is not clear progress in understanding cancer could be made without knowledge of how these different mechanisms couple to one another .it has been observed that many complex interactions occur between tumor cells , and between a cancer and the host environment .multidirectional feedback loops occur between tumor cells and the stroma , immune cells , extracellular matrix and vasculature , which are not well understood synergistically .clearly , our current state of knowledge is insufficient to deduce clinical outcome , not to mention how to control cancer progression in the most malignant forms of cancer .this suggests that a more quantitative approach to understanding different cancers is necessary in order to control it and increase the lifetime of patients with these deadly diseases .theoretical / computational modeling of cancer when appropriately linked with experiments and data offers a promising avenue for such an understanding .such modeling of tumor growth using a variety of different approaches has been a very active area of research for the last two decades or so but clearly is in its infancy . a diverse number of mechanisms have been explored via such models , and a multitude of computational / mathematical techniques have been employed ; see ref . for a review .these models have the common aim of predicting certain features of tumor growth in the hope of finding new ways to control neoplastic progression . given a model which yields reproducible and accurate predictions ,the effects of different genetic , epigenetic and environmental changes , as well as the impact of therapeutically targeting different aspects of the tumor , can be probed . however , these models must be linked to data from experimental assays in a comprehensive and systematic fashion in order to develop of a quantitative understanding of cancer .the holy grail of computational tumor modeling is to develop a simulation tool that can be utilized in the clinic to predict neoplastic progression and response to treatment . 
not only must such a model incorporate the many feedback loops involved in neoplastic progression , the model must also account for the fact that cancer progression involves events occurring over a range of spatial and temporal scales .a successful model would enable one to broaden the conclusions drawn from existing medical data , suggest new experiments , test new hypotheses , predict behavior in experimentally unobservable situations , and be employed for early detection .there is overwhelming evidence that cancer of all types are _ emerging , opportunistic systems _ .success in treating various cancers as a self - organizing complex dynamical systems will require unconventional , innovative approaches and the combined effort of an interdisciplinary team of researchers .a lofty long - term goal of such an endeavor is not only to obtain a quantitative understanding of tumorigenesis but to _ limit _ and _ control _ the expansion of a solid tumor mass and the infiltration of cells from such masses into healthy tissue .figure 1 : picture of an ising model .because a comprehensive review of the vast literature concerning biophysical cancer modeling is beyond the scope of this article , we focus on reviewing the work that we have done toward the development of an `` ising model '' of cancer .the ising model is an idealized statistical - mechanical model of ferromagnetism that is based on simple local - interaction rules ( see figure 1 ) , but nonetheless leads to basic insights and features of real magnets , such as phase transitions with a critical point . toward the goal of developing an analogous ising model of cancer ,we have formulated a four - dimensional ( 4d ) ( three dimensions in space and one in time ) cellular automaton ( ca ) model for brain tumor growth dynamics and its treatment .like the ising model for magnets , we will see later that this involves local rules for how healthy cells transition into various types of cancer cells . before describing our computational models for tumor growth, we first briefly summarize several salient features of solid tumor growth as applied to glioblastoma multiforme ( gbm ) , the most malignant of brain cancers .the rest of paper is organized as follows : in section 2 , some background concerning gbms and solid tumors in general is presented . in section 3 , a minimalist 4d ca tumor growth model is described in which healthy cells transition between states ( proliferative , hypoxic , and necrotic ) according to simple local rules and their present states , and apply it to gbms .this is followed by a discussion of the extension of the model to study the effect on the tumor dynamics and geometry of a mutated subpopulation .how tumor growth is affected by chemotherapeutic treatment is also discussed , including induced resistance . in section 4 , the modification of the ca model to include explicitly the effects of vasculature evolution and angiogenesis on tumor growth are discussed . in section 5 , the effects of physical confinement and heterogeneous environment are described . in section 6 , a simulation tool for tumor growth that merges and improves individual ca models is presented . 
in section 7 , a descriptions if given of how one might characterize the invasive network organization around a solid tumor using spanning trees .section 8 discusses some open problems and promising avenues for future research .figure 2 : the picture of a tumor in brain .glioblastoma multiforme ( gbm ) ( see figure 2 ) , the most aggressive of the gliomas , is a collection of tumors arising from the glial cells or their precursors in the central nervous system .unfortunately , despite advances made in cancer biology , the median survival time for a patient diagnosed with gbm is only 12 - 15 months , a fact that has not changed significantly over the past several decades . as suggested by its name ( i.e. , `` multiforme '' ) , gbm is complex at many levels of organization .it exhibits diversity at the macroscopic level , having necrotic , hypoxic and proliferative regions . at the mesoscopic level , tumor cell interactions , microvascular remodeling and pseudopalisading necrosisare observed .further , the discovery that tumor stem cells may be the sole malignant cell type with the ability to proliferate , self - renew and differentiate introduces yet another level of mesoscopic complexity to gbm . at the microscopic level ,gbm cells exhibit point mutations , chromosomal deletions , and chromosomal amplifications .figure 3 : the picture of a mts .a substantial amount of research has been conducted to model macroscopic tumor growth either based on microscopic considerations ; or in a more phenomenological fashion .one of the early attempts to model empircally the volume of a solid tumor versus time is the gompertz model , i.e. , \right),\ ] ] where is the volume at time and and are parameters ; see ref . and references therein .qualitatively , this equation gives exponential growth at small times which then saturates at large times ( decelerating growth ) .in particular , this model considers the tumor as an oversized idealized multicellular tumor spheroid ( see figure 3 ) , which is stage of early tumor growth .we note that modeling an ideal tumor as an oversized spheroid is especially suited for gbm , since this tumor , like a large mts , comprises large areas of central necrosis surrounded by a rapidly expanding shell of viable cells ( figure 2 ) .however , we note that real tumors always possess much more complex morphology . more importantly ,gompertzian - growth models are very limited ; they only capture gross features of tumor growth and can not explain their underlying microscopic " mechanisms . one of the hallmarks of high - grade malignant neuroepithelial tumors , such as glioblastoma multiforme ( gbm ) , is the regional heterogeneity , i.e. , the relatively large number of clonal strains or subpopulations present within an individual tumor of monoclonal origin .each of these strains is characterized by specific properties , such as the rate of division or the level of susceptibility to treatment .therefore the growth dynamics of a single tumor are determined by the relative behaviors of each separate subpopulation .for example , the appearance of a rapidly dividing strain can substantially bias tumor growth in that direction .tumors supposedly harbor cells with an increased mutation rate , which indicates that these tumors are genetically unstable . 
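The displayed Gompertz expression above was garbled in extraction. A standard parametrization consistent with the surrounding description (volume V(t) at time t, two growth parameters A and B, exponential growth at small times that saturates at large times) is V(t) = V0 exp[(A/B)(1 - e^{-Bt})]; we state it here as an assumption rather than a quotation. A minimal numerical sketch:

```python
import math

def gompertz_volume(t, v0, a, b):
    """Assumed standard Gompertz form V(t) = V0 * exp[(A/B) * (1 - exp(-B*t))]:
    nearly exponential for B*t << 1, saturating at V0 * exp(A/B) for large t."""
    return v0 * math.exp((a / b) * (1.0 - math.exp(-b * t)))

# Purely illustrative parameters (not fitted to any data discussed in the text).
print([round(gompertz_volume(t, v0=0.1, a=0.05, b=0.005), 3) for t in (0, 100, 300, 1000)])
```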
genetic and epigenetic events throughout the tumormay occur randomly or be triggered by environmental and intrinsic stresses .the continuing existence of a subpopulation , however , depends primarily on the subpopulation s ability to compete with the dominant population in its immediate vicinity .clonal heterogeneity within a tumor has been shown to have very pronounced effects on treatment efficacy .treatment resistance is itself a complex phenomenon .there is no single cause of resistance , and many biochemical aspects of resistance are poorly understood .chemoresistant strains can either be resistant to a single drug or drug family ( individual resistance ) , or they can be resistant to an array of agents ( multidrug / pleotropic resistance ) .cellular mechanisms behind multidrug resistance include increased chemical efflux and/or decreased chemical influx , such as with p - glycoprotein - mediated ( p - gp ) drug resistance .complicating the situation further , resistance can arise at variable times during tumor development .some tumors are resistant to chemotherapy from the onset .this has been described as _inherent _ resistance , because it exists before chemotherapeutic drugs are ever introduced .in other cases , however , treatment initially proves successful , and only later does the tumor prove resistant .this is an example of acquired resistance , as it develops in response to treatment .there are at least two possible mechanisms for this type of tumor behavior .acquired resistance may result from a small number of resistant cells that are gradually selected for throughout the course of treatment . at the same time , there is also evidence suggesting that chemotherapeutic agents may induce genetic or epigenetic changes within tumor cells , leading to a resistant phenotype .other studies indicate that chemotherapy may increase cellular levels of p - gp mrna and protein in various forms of human cancer . a tumor s response to radiation therapycan also depend on underlying genetic factors . a cell s inherent radio - resistance may stem from the efficiency of dna repair mechanisms in sublethally damaged cells . while the properties of gbm cells are very important in understanding growth dynamics , just as important are the properties of the environment in which the tumor growsfor example , gbms grow in either the brain or spinal cord , and are therefore confined by the shape and size of these organs .another example of the importance of accounting for the host environment relates to the vascular structure of the brain .recent research evidence suggests that tumors arising in vascularized tissue such as the brain do not originate avascularly , as originally thought .instead , it is hypothesized that glioma growth is a process involving vessel co - option , regression and growth .three key proteins , vascular endothelial growth factor ( vegf ) and the angiopoietins , angiopoietin-1 ( ang-1 ) and angiopoietin-2 ( ang-2 ) , are required to mediate these processes .a picture of what likely occurs during the process of glioma vascularization has been summarized by gevertz and torquato .as a malignant mass grows , the tumor cells co - opt the mature vessels of the surrounding brain that express constant levels of bound ang-1 .vessel co - option leads to the upregulation in ang-2 and this shifts the ratio of bound ang-2 to bound ang-1 . 
in the absence of vegf, this shift destabilizes the co - opted vessels within the tumor center and marks them for regression .vessel regression in the absence of vessel growth leads to the formation of hypoxic regions in the tumor mass .hypoxia induces the expression of vegf , stimulating the growth of new blood vessels .this robust angiogenic response eventually rescues the suffocating tumor .glioma growth dynamics remain intricately tied to the continuing processes of vessel regression and growth .tumor cell invasion is a hallmark of gliomas .individual glioma cells have been observed to spread diffusely over long distances and can migrate into regions of the brain essential for the survival of the patient .while mri scans can recognize mass tumor lesions , these scans are not sensitive enough to identify malignant cells that have spread well beyond the tumor region . typically ,when a solid tumor is removed , these invasive cells are left behind and tumor recurrence is almost inevitable .numerous models have been developed to model certain tumor behavior or characteristics with a great deal of mathematical rigor ( e.g. , in the form of coupled differential equations ) .however , with such approaches , the sets of equations that govern tumor behavior often do not correspond to the characteristics of individual tumor cells .an important goal of studying tumor development is to illustrate how their macroscopic traits stem from their microscopic properties .in addition , most of the equations are problem - specific , which limits their utility as general tools for tumor study .another potential challenge is that tumor models should be appreciated by as diverse an audience as possible .ideally , the mathematical complexity that allows theoreticians to analyze subtle aspects of it should not be an obstacle for clinicians who treat gbm . a model that accounts for complex tumor behavior with relative mathematical ease could be valuable . 
to this end, we have developed what appears to be a powerful cellular automaton ( ca ) computational tool for tumor modeling .based on a few salient set of microscopic parameters , this ca model can realistically model the macroscopic tumor behavior , including growth dynamics , emergence of a subpopulation as well as the effects of tumor treatment and resistance .this model has been extended to study the effects of vasculature evolution on early tumor growth and to simulate tumor growth in confined heterogeneous environments .we have also developed mathematical models to characterize the invasive network organization around a solid tumor .in this section , we describe a four - dimensional ( 4d ) cellular automaton ( ca ) model that we have developed that describes tumor growth as a function of time , using the fewest number of microscopic parameters .we refer to this as a _ minimalist _ four - dimensional ( 4d ) model because it involves three spatial dimensions and one dimension in time with the goal of capturing the salient features of tumor growth with a minimal number of parameters .the algorithm takes into account that this growth starts out from a few cells , passes through a multicellular tumor spheroid ( mts ) stage ( figure 3 ) and proceeds to the macroscopic stages at clinically designated time - points for a virtual patient : detectable lesion , diagnosis and death .this 4d ca algorithm models macroscopic gbm tumors as an enormous idealized mts , mathematically described by a gompertz - function given by eq .( 1 ) , since this tumor , like a large mts , comprises large areas of central necrosis surrounded by a rapidly expanding shell of viable cells ( figure 2 ) . in accordance with experimental data, the algorithm also implicitly takes into account that invasive cells are continually shed from the tumor surface and implicitly assumes that the tumor mass is well - vascularized during the entire process of growth .the effects of vasculature evolution are considered explicitly in sections 5 and 7 .a ca model is a spatially and temporally discrete model that consists of a grid of cells , with each cell being in one of a number of predefined states .the state of a cell at a given point in time depends on the state of itself and its neighbors at the previous discrete time point .transitions between states are determined by a set of local rules .the simulation is designed to predict clinically important criteria such as the fraction of the tumor which is able to divide ( gf ) , the non - proliferative ( arrest ) and necrotic fractions , as well as the rate of growth ( volumetric doubling time ) at given radii .furthermore , this ca model enables one to study emergence of a subpopulation due to cell mutations as well as the effects of tumor treatment and resistance .the general ca model includes both a proliferation routine which models tumor growth by cell division and a treatment routine which models the cell response to treatment and cell mutations .it also incorporates a novel adaptive automaton cell generation procedure .in particular , the ca model is characterized by several biologically important features : * the model is able to grow the tumor from a very small size of roughly 1000 real cells through to a fully developed tumor with cells .this allows a tumor to be grown from almost any starting point , through to maturity .* the thickness of different tumor layers , i.e. 
the proliferative rim and the non - proliferative shell , are linked to the overall tumor radius by a 2/3 power relation .this reflects a surface area to volume ratio , which can be biologically interpreted as nutrients diffusing through a surface . *the discrete nature of the model and the variable density lattice allow us to control the inclusion of mutant `` hot spots '' in the tumor as well as variable cell sensitivity / resistance to treatment . the variable density lattice will allow us to look at such an area at a higher resolution . *our inclusion of mechanical confinement pressure enables us to simulate the physiological confinement by the skull at different locations within the brain differently .our ca algorithm can be broken into three parts : automaton cell generation , the proliferation routine and the treatment routine . in the ensuing discussions ,we first present the three parts of our algorithm .then we show that the our model reflects a test case derived from the medical literature very well , proving the hypothesis that macroscopic tumor growth behavior may be modeled with primarily microscopic data .the first step of the simulation is to generate the automaton cells .the underlying lattice for the algorithm is the delaunay triangulation , which is the dual lattice of the voronoi tessellation . in order to develop the automaton cells ,a prescribed number of random points are generated in the unit square using the process of random sequential addition ( rsa ) of hard circular disks . in the rsa procedure , as a random point is generated , it is checked if the point falls within some prescribed distance from any other point already placed in the system .points that fall too close to any other point are rejected , and all others are added to the system . each cell in the final voronoi lattice will contain exactly one of these accepted sites .the voronoi cell is defined by the region of space nearer to a particular site than any other site . in two - dimensions ,this results in a collection of polygons that fill the plane .figure 4 : voronoi cells because a real brain tumor grows over several orders of magnitude in volume , the lattice was designed to allowed continuous variation with the radius of the tumor .the density of lattice sites near the center was significantly higher than that at the edge .a higher site density corresponds to less real cells per automaton cell , and so to a higher resolution .the higher density at the center enables us to simulate the flat small - time behavior of the gompertz curve . 
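A minimal sketch of the random sequential addition step described above is given below. For simplicity it uses a single, constant exclusion distance, whereas the actual model lets the typical spacing between sites grow with distance from the tumor center (the explicit relation was lost in extraction); the numeric values are arbitrary.

```python
import random

def rsa_points(n_target, min_dist, max_attempts=100_000):
    """Random sequential addition in the unit square: a candidate point is accepted
    only if it lies at least min_dist from every previously accepted point."""
    points, attempts = [], 0
    while len(points) < n_target and attempts < max_attempts:
        attempts += 1
        x, y = random.random(), random.random()
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2 for px, py in points):
            points.append((x, y))
    return points

sites = rsa_points(n_target=500, min_dist=0.02)
print(len(sites), "lattice sites placed")  # each site becomes the centre of one Voronoi cell
```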
in the current model ,the innermost automaton cells represent roughly 100 real cells , while the outermost automaton cells represent roughly real cells .the average distance between lattice sites was described by the following relation : in which is the average distance between lattice sites and is the radial position at which the density is being measured .this relation restricts the increase in the number of proliferating cells as the tumor grows .note that when modeling the effects of vasculature evolution discussed in the following , a a uniform lattice is used for which each automaton cell includes approximately 10 real cancer cells .figure 5 : idealized tumor ll + & average overall tumor radius ( see appendix ) + & proliferative rim thickness ( determines growth fraction ) + & non - proliferative thickness ( determines necrotic fraction ) + & probability of division ( varies with time and position ) + + + & base probability of division , linked to cell - doubling time ( 0.192 ) + & base necrotic thickness , controlled by nutritional needs ( ) + & base proliferative thickness , controlled by nutritional needs ( ) + & maximum tumor extent , controlled by pressure response ( ) + [ paramtab ] the proliferation algorithm is designed to allow a tumor consisting of a few automaton cells , representing roughly 1000 real cells , to grow to a full macroscopic size .an idealized model of a macroscopic tumor is an almost spherical body consisting of concentric shells of necrotic , non - proliferative and proliferative regions ( see figure 5 ) .the four microscopic growth parameters of the algorithm are , , , and reflecting , respectively , the rate at which the proliferative cells divide , the nutritional needs of the non - proliferative and proliferative cells , and the response of the tumor to mechanical pressure within the skull .in addition , there are four key time - dependent quantities that determine the dynamics of the tumor , i.e. , , , , giving , respectively , the average overall tumor radius , proliferative rim thickness , non - proliferative thickness and probability of division .these quantities are based on the four parameters ( , , , ) and are calculated according to the following algorithm . * initial setup : the cells within a fixed initial radius of the center of the grid are designated proliferative .all other cells are designated as non - tumorous . * time is discretized and incremented , so that at each time step : * * each cell is checked for type : non - tumorous or ( apoptotic and ) necrotic , non - proliferative or proliferative tumorous cells . * * non - tumorous cells and tumorous necrotic cells are inert . * * non - proliferative ( growth - arrested ) cells more than a certain distance , , from the tumor s edge are turned necrotic .this is designed to model the effects of a nutritional gradient .the edge of the tumor is taken to be the nearest non - tumorous cell , i.e. , * * proliferative cells are checked to see if they will attempt to divide according to the probability of division , , which is influenced by the location of the dividing cell , reflecting the effects of mechanical confinement pressure .this effect requires the use of an additional parameter , the maximum tumor extent , . 
is given by * *if a cell attempts to divide , it will search for sufficient space for the new cell beginning with its nearest neighbors and expanding outwards until either an empty ( non - tumorous ) space is found or nothing is found within the proliferation radius , .the radius searched is calculated as : * * if a cell attempts to divide but can not find space it is turned into a non - proliferative cell .* after a predetermined amount of time has been stepped through , the volume and radius of the tumor are plotted as a function of time .* the type of cell contained in each grid are saved at given times .figure 6 : an illustration of the proliferation algorithm .the above simulation procedure is also illustrated in figure 6 .we note that the redefinition of the proliferative to non - proliferative transition implemented in the algorithm is one of the most important new features of the model .they allow a larger number of cells to divide , since cells no longer need to be on the outermost surface of the tumor to divide .in addition , it ensured that cells that can not divide are correctly labeled as such .table [ paramtab ] summarizes the important time - dependent functions calculated by the proliferation algorithm and the constant growth parameters used .the readers are referred to ref . for the detailed description of the algorithm and parameters .malignant brain tumors such as gbm generally consist of a number of distinct subclonal populations .each of these subpopulations , arising from the constant genetic and epigenetic alteration of existing cells in the rapidly growing tumor , may be characterized by its own behaviors and properties . however , since each single cell mutation only leads to a small number of offspring initially , very few newly arisen subpopulations survive more than a short time .et al _ have extended the ca to quantify `` emergence , '' i.e. the likelihood of an isolated subpopulation surviving for an extended period of time .only mutations affecting the rate of cellular division were considered in this rendition of the model .in addition , only competition between clones was taken into account ; there were no cooperative effects included , although such effects can easily be incorporated .the simulation procedure is as follows : an initial tumor composed entirely of cells of the primary clonal population is introduced , which is allowed to grow using the proliferation algorithm until it reaches a predetermined average overall radius .then , a single ( or a small number of ) automaton cell is changed from the _ primary _ strain to a _ secondary _ strain with an altered probability of division , which represents very small fractions of the total population of proliferative tumor cells and the tumor is allowed to continue to grow using the proliferation algorithm .it is important to note that this does not represent a single mutation event but rather a mutation event that results in a subpopulation reaching a size dictated by the limits of the lattice resolution employed ( i.e. , a specified number of cells ) .the behavior of the secondary strain was characterized in terms of two properties : the _ degree _ and the _ relative size _ of the initial population of mutated cells , i.e. , which represents the ratio between the base probability of division of the new clone , , and that of the original clone , ; and positive , negative and no competitive advantages are respectively conferred for , , and .the initial value , i.e. 
, , is a parameter of the model reflecting the size of the mutated region introduced . besides the four growth parameters in the minimalist 4d ca model , three additional parameters for treatment were subsequently introduced : , , and , the values of which reflect , respectively , the proliferative cells treatment sensitivity , the non - proliferative cells treatment sensitivity , and the mutational response of the tumor cells to treatment .furthermore , there are three additional time - dependent quantities , and , giving respectively fraction of proliferative cells that die upon treatment ( equivalent to ) , fraction of non - proliferative cells that die upon treatment ( equivalent to ) and volume fraction of mutated living cells .these parameters are summarized in table 2 and a detailed discussion is given in ref . .ll + & governs the proliferative cells response at each + & instance of treatment ( ) + & allows for different treatment responses between + & proliferative and non - proliferative cells ( ) + & fraction of surviving proliferative cells + & that mutate in response to treatment ( ) + + + & fraction of proliferative cells that die + & upon treatment ; equivalent to + & fraction of non - proliferative cells that die + & upon treatment ; equivalent to + & volumetric fraction of living cells ( proliferative and + & non - proliferative ) belonging to the secondary strain + [ treattab ] in the simulation , treatment was introduced as `` periodic impulse '' , i.e. , a small tumor mass is introduced which is intended to represent a gbm after successful surgical resection and allowed to grow using the proliferation algorithm ; then treatment is applied and considered effective at discrete time points . in particular , the simulation proceeds through the proliferative steps until every week time - point , at which time the treatment routine is introduced : * after the last round of cellular division , each proliferative cell is checked to see if it is killed by the treatment .the probability of death for a given proliferative cell is given by where is the proliferative treatment parameter .dead proliferative automaton cells are converted to healthy cells . * each non - proliferative cellis checked to see if it is killed .the probability of death for a given non - proliferative cell is given by where is the non - proliferative treatment parameter and is a fraction of .a non - proliferative cell is converted to a necrotic cell upon death . * each surviving non - proliferative cellis checked to see if it is within the proliferative thickness of a healthy cell ( i.e. the tumor surface ) .if so , the non - proliferative cell is converted back to a proliferative cell . *all proliferative cells ( including newly - designated ones ) are checked for mutations for treatment resistance with probability .a new is randomly generated for mutated cells while remains constant .clinically , gbm treatment consists of both radiation therapy and chemotherapy .however , in our model we do not distinguish between the separate effects of these two methods . 
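The following sketch condenses the treatment routine above into code. The exact kill probabilities and the mutation parameters were lost from the extracted text, so the expressions below (kill probability equal to the cell's treatment-sensitivity parameter, a non-proliferative sensitivity taken as half of the proliferative one, and an arbitrary per-treatment mutation probability) are assumptions meant only to make the control flow explicit, not the actual formulas of the model.

```python
import random

BETA = 0.01  # per-treatment mutation probability; placeholder, the actual value is not given above

def treatment_step(cells):
    """One four-week treatment impulse. Each cell is a dict with keys:
    'state' in {'proliferative', 'non_proliferative', 'necrotic', 'healthy'},
    'gamma' (treatment sensitivity of its strain), and 'on_surface' (True if the
    cell lies within the proliferative thickness of a healthy cell)."""
    for c in cells:
        if c["state"] == "proliferative":
            if random.random() < c["gamma"]:
                c["state"] = "healthy"              # killed proliferative cells revert to healthy tissue
        elif c["state"] == "non_proliferative":
            if random.random() < 0.5 * c["gamma"]:  # assumed: gamma_n is half of gamma_p
                c["state"] = "necrotic"
            elif c["on_surface"]:
                c["state"] = "proliferative"        # surviving surface cells re-enter the cycle
    for c in cells:                                 # induced resistance, applied after the kill step
        if c["state"] == "proliferative" and random.random() < BETA:
            c["gamma"] = random.random()            # mutated cells draw a new treatment sensitivity
    return cells
```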
the tumors response to all treatment is captured by the treatment algorithm .moreover , this response is assumed to be instantaneous at each four - week time point .the tumor growth data generated via the minimalist 4d ca proliferation algorithm was compared with available experimental data for an untreated gbm tumor from the medical literature .the parameters compared were cell number , growth fraction , necrotic fraction and volumetric doubling time , which are used to determine a tumor s level of malignancy and the prognosis for its future growth . because it is impossible to determine the exact time a tumor began growing ,the medical data are listed at fixed radii .the different cell fractions used were extrapolated from the spheroid level and compared to data published for cell fractions at macroscopic stages .[ comptab ] .comparison of simulated tumor growth and experimental data . for each quantity ,the simulation data is give on the first line and the experimental data is given on the second line . [ cols= "< , < " , ] summarized in table 3 is the comparison between simulation results and data ( experimental , as well as clinical ) taken from the medical literature ( see ref . for detailed references ) .the simulation data were created using a tumor which was grown from an initial radius of 0.1 mm . the following parameter set ( see table 1 ) was used : , , , this value of corresponds to a cell - doubling time of 4.0 days .the parameters and have been chosen to give a growth history that quantitatively fits the test case .the specification of these parameters corresponds to the specification of a clonal strain .the parameter was similarly chosen to match the test case history . in this case , however , the fit is relatively insensitive to the value of , as long as the parameter is somewhat larger than the fatal radius in the test case . on the whole ,the simulation data reproduces the test case very well .the virtual patient would die roughly 11 months after the tumor radius reached 5 mm and 3.5 months after the expected time of diagnosis .the fatal tumor volume is about .figure 7 : cross - sections of a growing mono - clonal tumor .figure 8 : a cut - away view of the simulated tumor .figure 9 : the volume and radius of the developing tumor .central cross - sections of the tumors are shown in figure 7 , in which the growth of the tumor can be followed graphically over time . herenecrotic cells are labeled with black , non - proliferative tumorous cells with light gray and proliferative tumor cells with dark gray .a cut - away view of the simulated tumor is shown in figure 8 . as expected in this idealized case ,the tumor is essentially spherical , within a small degree of randomness .the high degree of spherical symmetry ensures that the central cross - section is a representative view . the volume and radius of the developing tumorare shown versus time in figure 9 .note that the virtual patient dies while the untreated tumor is in the rapid growth phase .recall that the parameter reflects the degree of advantage of the mutated subpopulation over the primary clone ( positive , negative and no competitive advantages are respectively conferred for , , and ) and the initial value , i.e. , , is a parameter of the model reflecting the size of the mutated region introduced . a subpopulation is considered to have emerged once it comprises of the actively dividing cell population or if it remains in the actively dividing state once the tumor has reached a fully developed size . 
recall that the parameter reflects the degree of advantage of the mutated subpopulation over the primary clone ( positive , negative and no competitive advantages are conferred , respectively , for , , and ) and that the initial value , i.e. , , is a parameter of the model reflecting the size of the mutated region introduced . a subpopulation is considered to have emerged once it comprises of the actively dividing cell population or if it remains in the actively dividing state once the tumor has reached a fully developed size . numerous simulations ( at least 100 ) were run at each parameter set by kansal _ et al _ in order to calculate the expected probabilities of emergence , i.e. , , along with confidence intervals , , defined as , where represents the observed probability of emergence in trials . we note that the probability of emergence is actually a conditional probability : it is the probability that a subpopulation with a mutation of degree emerges given that a region of relative size has mutated .

figure 10 : p vs. alpha and cut-away view of simulated tumor with a subpopulation .

the results presented were run with a parameter set in which , , , , for the primary strain , in a simulation in which each time step represents one day . figure 10 depicts the observed probability of emergence , , for a subpopulation of initial size as a function of , which gives an approximation of the true , asymptotic probability of emergence . also shown in figure 10 is a cut-away view of the simulated tumor with a subpopulation . not surprisingly , is a monotonically increasing function that tends to 0 for and to 1 as becomes significantly greater than 1 . perhaps the most striking feature of these results is that there is a non-zero probability of emergence for a very small population with no growth-rate advantage , or even with a small disadvantage ( i.e. , ) . this suggests that a mutated subpopulation may arise even without any growth advantage . these populations could represent `` dormant '' clones carrying an advantage that is not being selected for at the time ; an example would be the appearance of hypoxia-tolerant or even treatment-resistant clones . it should be stressed that populations with smaller competitive advantages over other tumor strains can have a nonzero probability of emergence especially if they are _ localized _ in space , which leads to a minimum surface area between the two populations per unit tumor volume . in this way , the population with the smaller competitive advantage can compete more effectively . we will see in the next subsection that this same principle is at work when resistance is induced due to treatment . it was also found that the emergence probability is a monotonically increasing function in and has a logarithmic dependence on .
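the estimator and confidence interval described at the beginning of this discussion can be written compactly . the sketch below uses a standard normal-approximation ( wald ) interval for a binomial proportion , which is one common choice and is offered only as an assumption ; ` run_simulation ` is a placeholder for a full ca run .

```python
import math
import random

def estimate_emergence_probability(run_simulation, n_trials=100, z=1.96):
    """monte carlo estimate of the (conditional) probability of emergence.

    run_simulation -- callable returning True if the introduced subpopulation
                      emerged in that trial (placeholder for a full ca run)
    n_trials       -- number of independent simulations at a fixed parameter set
    z              -- normal quantile for the confidence interval (1.96 ~ 95%)
    """
    successes = sum(1 for _ in range(n_trials) if run_simulation())
    p_hat = successes / n_trials
    # normal-approximation interval; an assumption, not necessarily the paper's formula
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / n_trials)
    return p_hat, (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))

# toy usage with a dummy "simulation" that emerges 30% of the time
p, ci = estimate_emergence_probability(lambda: random.random() < 0.3)
print(p, ci)
```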
figure 11 : effects of the subpopulation on the tumor geometry . figure 12 : effects of the subpopulation on growth history .

figure 11 shows the effects of growth of the subpopulation on the tumor geometry . it can be seen clearly that the center of mass of the tumor is significantly shifted by the emergence of the subpopulation . another example of the importance of subpopulations is depicted in figure 12 . in this example , a diagnosis was made ( on day ) giving information about the macroscopic size and growth rate of the tumor . from this information , three possible growth histories of the tumor are plotted . one is the time history of the tumor with an emergent subpopulation . the others represent limiting cases , each with a monoclonal tumor of either the primary ( `` base '' ) or secondary ( `` high '' ) clonal strain . note that at the time of diagnosis all three scenarios have very similar dynamics , so any of the three histories is a reasonable prediction given only size and growth-rate information . however , estimating the fatal tumor volume to be 65 and defining the survival time to be the time required to reach this volume , the base case mis-predicts the survival time to be 90 days , which is 30 days more than the 60 days of the `` true '' course . it is noteworthy that from this perspective the overall future growth dynamics of the entire tumor closely follow those of the most aggressive case , indicating that the more aggressive clone dominates the overall outcome and should therefore also define the appropriate treatment . this finding supports the current practice in pathology of grading tumors according to the most malignant area ( i.e. , population ) found in any biopsy material . although of less clinical significance , the `` high '' case similarly mis-predicts the past history of the tumor . if the diagnosis had been made earlier , the base case would yield still worse future predictions ; similarly , the `` high '' case would yield worse past predictions for a diagnosis made at a later time . the predictive errors arising from the assumption of a monoclonal tumor indicate how important an accurate estimate of the clonal composition of a tumor is in establishing a complete history and prognosis . note that the numbers given here are intended to show the scale of the inaccuracy possible , not to reflect any data extracted from actual patients .

combining the proliferation algorithm and the treatment algorithm , the behavior of tumors that are able to develop resistance throughout the course of treatment was investigated . recall that additional parameters were introduced in the treatment routine : , , and , whose values reflect , respectively , the proliferative cells' treatment sensitivity , the non-proliferative cells' treatment sensitivity , and the mutational response of the tumor cells to treatment ( see table 2 ) . these investigations consisted of three individual case studies . in case 1 , the growth dynamics of monoclonal tumors were studied to determine how tumor behavior is affected by the treatment parameters and . case 2 builds upon this information , analyzing the behavior of two-strain tumors ; here , a secondary treatment-resistant strain exists alongside a primary treatment-sensitive strain . a secondary subpopulation was introduced at the onset of each simulation , initializing it in different spatial arrangements and at several ( small ) relative volumes . in both cases 1 and 2 , no additional subpopulations arise in the tumors once the simulation has begun ( i.e. , ) . in case 3 , however , tumors were studied that were capable of undergoing resistance mutations in response to each round of treatment ( ) . in these simulations , the growth and morphology of the tumors were analyzed in relation to the fraction of mutating cells . here we only report on the results of cases 2 and 3 . in case 2 , the smaller subpopulation of a secondary treatment-resistant strain was initially distributed in two different spatial ways on the surface of a tumor consisting primarily of the primary treatment-sensitive strain : a _ localized _ and a _ scattered _ scenario , reflecting , for example , possible effects of surgical resection ( see figure 13 ) . in the simulation , the tumors were initialized as a single strain , i.e. , monoclonally with and , and treatment was introduced every four weeks while the tumor grew from a small mass with a radius of 4 mm , corresponding to approximately of surgical volume resection ; a minimal sketch of the two initial arrangements is given below .
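the two initial arrangements of the resistant strain can be illustrated with a short sketch . the geometry below ( points on a sphere standing in for the tumor surface , with the resistant cells placed either in one contiguous patch or scattered uniformly ) is a simplified stand-in for the actual automaton lattice , and the function and its arguments are hypothetical .

```python
import numpy as np

def seed_resistant_cells(surface_sites, n_resistant, scenario, rng=None):
    """choose which tumor-surface sites start as the treatment-resistant strain.

    surface_sites -- (n, 3) array of coordinates of proliferative surface cells
    n_resistant   -- number of sites to convert to the secondary strain
    scenario      -- "localized": one contiguous patch around a random seed site
                     "scattered": sites drawn uniformly over the whole surface
    """
    rng = np.random.default_rng(rng)
    if scenario == "scattered":
        idx = rng.choice(len(surface_sites), size=n_resistant, replace=False)
    elif scenario == "localized":
        seed = surface_sites[rng.integers(len(surface_sites))]
        # take the n_resistant surface sites closest to the seed -> one patch
        d = np.linalg.norm(surface_sites - seed, axis=1)
        idx = np.argsort(d)[:n_resistant]
    else:
        raise ValueError("scenario must be 'localized' or 'scattered'")
    return surface_sites[idx]

# toy usage: points on a sphere of radius 4 mm standing in for the tumor surface
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
surface = 4.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
patch = seed_resistant_cells(surface, 50, "localized", rng=1)
spread = seed_resistant_cells(surface, 50, "scattered", rng=1)
```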
for the scattered resistance scenario , the resistant strain was found to compete more effectively with the sensitive strain , and it was shown that the initial number of resistant cells was not a significant indicator of prognosis .

figure 13 : spatial distributions of the resistant strain .

these conclusions may at first glance seem to contradict those reported by kansal _ et al _ . recall , however , that in that work the selection pressure was different ( growth-rate competition versus treatment effects ) . moreover , the roles of the primary and secondary strains are reversed in the case 2 example : the primary strain possessed a competitive advantage over the secondary strain . nevertheless , the conclusions of both papers follow precisely the same principle : the proliferative ability of a strain with a competitive advantage varies directly with its contact area , per cell , with the less competitive strain . unlike case 2 , the tumors in case 3 begin the simulations as a single strain . here , however , treatment can induce the appearance of mutant strains ( ) . in these simulations , the growth and morphology of the tumors were analyzed in relation to the fraction of mutating cells . the tumors in case 3 are all initialized monoclonally with and . with this initial value , nearly every mutant strain that arises from the initial population will possess a lower value . this is not to suggest that all induced mutations must confer increased resistance ; it merely stems from the initially sensitive tumor under consideration . at first , the tumors in case 3 develop like treatment-sensitive , monoclonal tumors ; growth then accelerates as resistant cells begin to dominate . this corresponds to a case of acquired resistance via induced ( genetic and epigenetic ) mutations . overall , the tumor dynamics here are more variable than in cases 1 and 2 . when a new strain appears , it begins as a single automaton cell . unlike case 2 , not all new strains are able to proliferate to an appreciable extent ; some are overwhelmed by the parent strain from which they arise . the mean survival time of the tumors was determined as a function of , and these data are summarized in figure 14 ; from to , the survival times vary nearly logarithmically with . when , the mean time is near 27 months , as most tumors remain monoclonal ( or nearly monoclonal ) with , . as increases , resistant strains appear more commonly and survival times fall .

figure 14 : survival times associated with continuously mutating tumors .

one of the more intriguing observations in this case involves the gross morphology of the mutating tumors : their three-dimensional geometries exhibit an interesting dependence on the value of . figure 15 presents representative images of the fully developed tumors for small , intermediate and large fractions of mutated proliferative cells after treatment . for small ( left panel of figure 15 ) , some tumors develop a secondary strain while others do not . the tumors that remain monoclonal maintain their spherical geometry ; when a resistant subpopulation does develop , it appears as a lobe on the parental tumor . for intermediate , resistant subpopulations consistently arise from the parental strain . the middle panel of figure 15 depicts a typical tumor ; these geometries consistently deviate from an ideal sphere . the tumors are multi-lobed in appearance , and the original strain is commonly overwhelmed . however , when is large , the geometric trend reverses , i.e.
, the tumors ( right panel of figure 15 ) again appear more spherical , despite the fact that they experience the greatest fraction of mutations per treatment event . these images suggest that extreme mutational responses can lead to similar macroscopic geometries , while non-spherical geometries result from intermediate values .

figure 15 : images of continuously mutating tumors .

as pointed out in the introduction , there are complex interactions occurring between a tumor and the host environment , which makes it very difficult to predict clinical outcome , even if the mutations responsible for oncogenesis that determine tumor growth are beginning to be understood . these interactions include the effects of vasculature evolution on tumor growth , the organ-imposed physical confinement , as well as the host heterogeneity . while the three studies described in the previous section were successful at analyzing and characterizing gbm growth both with and without treatment , in each case the ca model made the simplifying assumption that the tumor mass was well vascularized ( the vascular network and angiogenesis were implicitly accounted for ) and the effects of mechanical confinement were limited to one parameter ( ) , which allowed for growth of spherically symmetric tumors with a maximum radius . spherical-like growth is realistic provided that the environment is effectively homogeneous , but heterogeneous environments will produce aspherically shaped tumors . in order to incorporate a greater level of microscopic detail , a 3d ( two dimensions in space and one in time ) hybrid variant of the original ca model that allows one to study how changes in the tumor vasculature due to vessel co-option , regression and sprouting influence gbm growth was developed by gevertz and torquato . this computational algorithm is based on the co-option-regression-growth experimental model of tumor vasculature evolution . in this model , as a malignant mass grows , the tumor cells co-opt the mature vessels of the surrounding tissue that express constant levels of bound angiopoietin-1 ( ang-1 ) . vessel co-option leads to the upregulation of the antagonist of ang-1 , angiopoietin-2 ( ang-2 ) .
in the absence of the anti - apoptotic signal triggered by vascular endothelial growth factor ( vegf ) , this shift destabilizes the co - opted vessels within the tumor center and marks them for regression .vessel regression in the absence of vessel growth leads to the formation of hypoxic regions in the tumor mass .hypoxia induces the expression of vegf , stimulating the growth of new blood vessels .a system of reaction - diffusion equations was developed to track the spatial and temporal evolution of the aforementioned key factors involved in blood vessel growth and regression ( see section 6 for a detailed description ) .based on a set of algorithmic rules , the concentration of each protein and bound receptor at a blood vessel determines if a vessel will divide , regress or remain stagnant .the structure of the blood vessel network , in turn , is used to estimate the oxygen concentration at each cell site .oxygen levels determine the proliferative capacity of each automaton cell .the reader is referred to for the full details of this algorithm .the model proved to quantitatively agree with experimental observations on the growth of tumors when angiogenesis is successfully initiated and when angiogenesis is inhibited .further , due to the biological details incorporated into the model , the algorithm was used to explore tumor response to a variety of single and multimodal treatment strategies .an assumption made in both the original ca algorithm and the one that explicitly incorporates vascular evolution is that the tumor is growing in a spherically symmetric fashion . in a study performed by helmlinger _et al _ , it was shown that neoplastic growth is spherically symmetric only when the environment in which the tumor is developing imposes no physical boundaries on growth .in particular , it was demonstrated that human adenocarcinoma cells grown in a 0.7% gel that is placed in a cylindrical glass tube develop to take on an ellipsoidal shape , driven by the geometry of the capillary tube. however , when the same cells are grown in the same gel outside the capillary tube , a spherical mass develops .this experiment clearly highlights that the assumption of radially symmetric growth is only valid when a tumor grows in an unconfined or spherically symmetric environment . since many organs , including the brain and spinal cord ,impose non - radially symmetric physical confinement on tumor growth , the original ca algorithm was modified to incorporate boundary and heterogeneity effects on neoplastic progression .the first modification that was made to the original algorithm was simply to specify and account for the boundary that is confining tumor growth .several modifications were made to the original automaton rules to account for the impact of this boundary on neoplastic progression . the original ca algorithm imposed radial symmetry in order to determine whether a cancer cell is proliferative , hypoxic , or necrotic .the assumption of radially symmetric growth was also utilized in determining the probability a proliferative cell divides . in order to allow tumor growth in any confining environment ,all assumptions of radial symmetry from the automaton evolution rules were removed .it was demonstrated that models that do not account for the geometry of the confining boundary and the heterogeneity in tissue structure lead to inaccurate predictions on tumor size , shape and spread ( the distribution of cells throughout the growth - permitting region ) .the readers are referred to ref . 
for the details of this investigation , but an illustration of confinement effects is given in the next section . each of the previously discussed algorithms was designed to answer a particular set of questions and successfully served its purpose . hence , gevertz and torquato merged each algorithm into a single cancer simulation tool that would not only accomplish what each individual algorithm had accomplished , but also had the capacity to exhibit emergent properties not identifiable prior to model integration . in developing the merged algorithm , some modifications were made to the original automaton rules to more realistically mimic tumor progression . the merged simulation tool is summarized as follows :

1 . * automaton cell generation * : a voronoi tessellation of random points generated using the nonequilibrium procedure of random sequential addition of hard disks determines the underlying lattice for our algorithm . here a uniform-density lattice is used instead of the lattice with variable density . each automaton cell created via this procedure represents a cluster of a very small number of biological cells ( ) .

2 . * define confining boundary * : each automaton cell is assigned to one of two regimes : nonmalignant cells within the confining boundary and nonmalignant cells outside of the boundary .

3 . * healthy microvascular network * : the blood vessel network which supplies the cells in the tissue region of interest with oxygen and nutrients is generated using the random analog of the krogh cylinder model detailed in ref . ; one aspect of the merger involved limiting blood vessel development to the subset of space in which tumor growth occurs .

4 . * initialize tumor * : designate a chosen nonmalignant cell inside the growth-permitting environment as a proliferative cancer cell .

5 . * tumor growth algorithm * : time is then discretized into units that represent one real day . at each time step :

1 . _ solve pdes _ : a previously developed system of partial differential equations is numerically solved one day forward in time . the quantities that govern vasculature evolution , and hence are included in the equations , are the concentrations of vegf ( ) , unoccupied vegfr-2 receptors ( ) , the vegfr-2 receptor occupied with vegf ( ) , ang-1 ( ) , ang-2 ( ) , the unoccupied angiopoietin receptor tie-2 ( ) , the tie-2 receptor occupied with ang-1 ( ) and the tie-2 receptor occupied with ang-2 ( ) . the parameters in these equations include diffusion coefficients of protein ( ) , production rates and , carrying capacities , association and dissociation rates ( and ) and decay rates . any term with a subscript denotes an indicator function ; for example , is a proliferative cell indicator function : it equals 1 if a proliferative cell is present in a particular region of space , and it equals 0 otherwise . likewise , is the hypoxic cell indicator function , is the necrotic cell indicator function and is the endothelial cell indicator function . the reaction-diffusion equations solved at each step of the algorithm ( given explicitly in ref . ) couple these quantities ; in these equations , represents the concentration of hypoxic cells and represents the endothelial cell concentration per blood vessel . the system of differential equations contains 21 parameters , 13 of which were taken from experimental data ; parameters that could not be found in the literature were estimated . for more details on the parameter values , as well as information on the initial and boundary conditions and the numerical solver , the reader is referred to ref .
2 . _ vessel evolution _ : check whether each vessel meets the requirements for regression or growth . vessels with a concentration of bound ang-2 six times greater than that of bound ang-1 regress , provided that the concentration of bound vegf is below its critical value . vessel tips with a sufficient amount of bound vegf sprout along the vegf gradient .

3 . _ nonmalignant cells _ : healthy cells undergo apoptosis if vessel regression causes their oxygen concentration to drop below a critical threshold ( more particularly , if the distance of a healthy cell from a blood vessel exceeds the assumed diffusion length of oxygen , 250 μm ) . further , nonmalignant cells do not divide in the model . while nonmalignant cell division occurs in some organs , a hallmark of neoplastic growth is that tumor cells replicate significantly faster than the corresponding normal cells . hence , we work under the simplifying assumption that nonmalignant division rates are so small compared to neoplastic division rates that they become relatively unimportant on the time scales being considered . in the cases where this assumption does not hold , nonmalignant cellular division would have to be incorporated into the model .

4 . _ inert cells _ : tumorous necrotic cells are inert . this assumption is certainly valid for the tumor type that motivated this modeling work , glioblastoma multiforme ; in the case of glioblastoma , the presence of necrosis is an important diagnostic feature and , in fact , negatively correlates with patient prognosis .

5 . _ hypoxic cells _ : a hypoxic cell turns proliferative if its oxygen level exceeds a specified threshold and turns necrotic if the cell has survived under sustained hypoxia for a specified number of days . in the original algorithms , the transition from hypoxia to necrosis was based on an oxygen-concentration threshold . however , given that cells ( both tumorous and nonmalignant alike ) have been shown to have a limited lifespan under sustained hypoxic conditions , a temporal switch more accurately describes the hypoxic-to-necrotic transition ; thus , a novel aspect of the merged algorithm is a temporal hypoxic-to-necrotic transition . it has been measured that human tumor cells remain viable in hypoxic regions of a variety of xenografts for 4-10 days . in our simulations , we use the upper end of this measurement and assume that tumor cells can survive under sustained hypoxia for 10 days .

6 . _ proliferative cells _ : a proliferative cell turns hypoxic if its oxygen level drops below a specified threshold . however , if the oxygen level is sufficiently high , the cell attempts to divide into the space of a viable nonmalignant cell in the growth-permitting region . the probability of division is given by , where is the base probability of division , is the distance of the dividing cell from the geometric center of the tumor and is the distance between the closest boundary cell in the direction of tumor growth and the tumor's center ( a minimal sketch of this division rule is given after the list ) . in the original implementations of the algorithm , was fixed to be 0.192 , giving a cell-doubling time of 4 days .
in the merged algorithm proposed here , we wanted to account for the fact that tumor cells with a higher oxygen concentration likely have a larger probability of dividing than those with a lower oxygen concentration . for this reason , we have modified the algorithm so that depends on the distance to the closest blood vessel ( which is proportional to the oxygen concentration at a given cell site ) . the average value of was fixed to be 0.192 , and we have specified that takes on a minimum value of 0.1 and a maximum value of 0.284 . this means that a proliferative cell in the model can have a cell-doubling time anywhere in the range of three to seven days . the formula used to determine is , where is the diffusion length of oxygen , taken to be 250 μm . both and depend on the average probability of division ; if this average probability changes , so do and .

7 . _ tumor center and area _ : after each cell has evolved , recalculate the geometric center and area of the tumor .

the readers are referred to ref . for more details , including how cell-level phenotypic heterogeneity is also considered , in a similar fashion to the manner done in refs . and .
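the division rule of step 6 can be sketched as follows . the specific functional forms below ( a linear decrease of the base probability with distance to the nearest vessel , and a factor that suppresses division as the cell approaches the confining boundary ) are assumptions chosen to be consistent with the verbal description above , not the formulas of the original implementation .

```python
def base_division_probability(dist_to_vessel_um,
                              p_min=0.1, p_max=0.284, l_oxygen_um=250.0):
    """oxygen-dependent base probability of division p0.

    assumed form: p0 decreases linearly from p_max at a vessel to p_min at the
    oxygen diffusion length (the text gives only the bounds and the mean 0.192).
    """
    x = min(max(dist_to_vessel_um / l_oxygen_um, 0.0), 1.0)
    return p_max - (p_max - p_min) * x

def division_probability(p0, r, r_boundary):
    """probability that a proliferative cell divides in one time step.

    assumed form consistent with the verbal description: the base probability p0
    is reduced as the cell's distance r from the tumor center approaches the
    distance r_boundary to the closest confining-boundary cell in the growth
    direction, so division is suppressed near the boundary.
    """
    if r_boundary <= 0.0:
        return 0.0
    return max(0.0, p0 * (1.0 - r / r_boundary))

# example: a cell 100 um from the nearest vessel, halfway to the boundary
p0 = base_division_probability(100.0)
print(division_probability(p0, r=2.0, r_boundary=4.0))
```

with these choices the base probability averages near 0.192 when cells sit , on average , roughly halfway along the oxygen diffusion length from a vessel , which is in the spirit of the calibration described above .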
the 3d cancer simulation tool described here was employed to study tumor growth in a confined environment : a two-dimensional representation of the cranium in space as a function of time . the cranium is idealized as an elliptical growth-permitting environment with two growth-prohibiting circular obstacles representing the ventricular cavities . tumor growth is initiated in between a ventricular cavity and the cranium wall . in this setting , we find that the early-time characteristics of the tumor and the vasculature are not significantly different from those observed when radial symmetry is imposed on tumor growth . in particular , after 45 days of growth ( figure 16(a ) ) , vessels associated with the radially symmetric tumor begin to regress and hypoxia results in the tumor center . twenty days later ( figure 16(b ) ) , a strong , disordered angiogenic response has occurred in the still radially symmetric tumor . over the next 50 days of growth ( figure 16(c ) and ( d ) ) , the disorganized angiogenic blood vessel network continues to vascularize the growing tumor , but the tumor's shape begins to deviate from that of a circle due to the presence of the confining boundary . the patterns of vascularization observed are consistent with the patterns observed in the original vascular model , suggesting that the merged algorithm maintains the functionality of the original vascular algorithm .

figure 16 : tumor growing in a 2d representation of the cranium .

however , if the results of this simulation are compared with those of the environmentally constrained algorithm without the explicit incorporation of the vasculature , we find that the merged model responds to the environmental constraints in a way that is more physically intuitive . in the original environmentally constrained algorithm , the tumor responds quickly and drastically to the confining boundary and ventricular cavities . this occurs because the original evolution rules not only determine the probability of division based on the distance to the boundary , but also determine the state of a cell based on a measure of its distance to the boundary . in the merged model , which explicitly incorporates the vasculature , the state of each cell depends on the blood vessel network , and only the probability of division directly depends on the boundary . for this reason , the merged algorithm exhibits an emergent property in that it grows tumors that respond more gradually and naturally to environmental constraints than does the algorithm without the vasculature . tumor growth in a two-dimensional irregular region of space that truly allows the neoplasm to adapt its shape as it grows in time ( i.e. , a 3d model ) was also investigated by gevertz _ et al _ ; see also gevertz and torquato . as with the two-dimensional representation of the cranium in space , an emergent property of the merged algorithm is found : a more subtle and natural response to the effects of physical confinement . studies taking into account mutations responsible for phenotypic heterogeneity have been carried out by gevertz and torquato , to which the readers are referred for more details . we note that all the results presented in this section need to be validated experimentally .

it is well known that cancer cells can break off the main tumor mass and invade healthy tissue . for many cancers , this process can eventually result in metastases to other organs . tumor-cell invasion is a hallmark of glioblastomas , as individual tumor cells have been observed to spread diffusely over long distances and can migrate into regions of the brain essential for the survival of the patient . in certain cases , the invading tumor cells form branched chains ( see figure 3 ) , i.e. , tree structures . the brain offers these invading cells a variety of pathways along which they can invade ( such as blood vessels and white fiber tracts ) , which may be interpreted as the edges of an underlying graph , with the various `` resistance '' values along these pathways playing the role of edge weights . the underlying physics behind the formation of the observed patterns is only beginning to be understood .

figure 17 : examples of a weighted graph and the resulting minimal spanning tree .

the competition between local and global driving forces is a crucial factor in determining the structural organization of a wide variety of naturally occurring branched networks . as an attempt toward a model of the invasive network emanating from a solid tumor , kansal and torquato investigated the impact of a global minimization criterion versus a local one on the structure of spanning trees . a spanning tree is defined as a loopless , connected set of edges that connects all of the nodes in the underlying graph ( see figure 17 ) .
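the construction in figure 17 can be reproduced on a toy example . the sketch below builds an ordinary minimum spanning tree of a small , made-up weighted graph with scipy ; the generalized ( group-constrained ) variants discussed next require the dedicated algorithms of kansal and torquato and are not attempted here .

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# a small, made-up weighted graph on 5 nodes (0 means "no edge");
# only the upper triangle is passed so each undirected edge appears once
weights = np.array([
    [0, 2, 0, 6, 0],
    [2, 0, 3, 8, 5],
    [0, 3, 0, 0, 7],
    [6, 8, 0, 0, 9],
    [0, 5, 7, 9, 0],
], dtype=float)

# the minimum spanning tree: the loopless set of edges of least total weight
# that connects every node of the graph
mst = minimum_spanning_tree(csr_matrix(np.triu(weights)))
rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print(f"edge ({i}, {j}) with weight {mst[i, j]:.0f}")
print("total weight:", mst.sum())
```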
in particular , these authors considered the generalized minimal spanning tree ( gmst ) and the generalized invasive spanning tree ( gist ) , because they offer extremes of global ( gmst ) and local ( gist ) criteria . both the gmst and the gist are defined on graphs in which the nodes are partitioned into groups and each edge has an assigned weight . the gmst is refined ( relative to an ordinary spanning tree ) in that the requirement that every node of the graph be included in the tree is replaced by the inclusion of at least one node from each group , with the additional requirement that the total weight of the tree be minimized . the gist can be constructed by growing a connected cluster of edges , `` invading '' at each step the remaining edge with the minimal weight at the cluster boundary , with the requirement that at least one node from each group be included in the final tree . kansal and torquato have developed efficient algorithms to generate both gmst and gist structures , as well as a method to convert a gist structure incrementally into a more globally optimized gmst-like structure ( see figure 18 ) . the readers are referred to the original paper for more algorithmic details . these methods allow various structural features to be observed as a function of the degree to which either criterion is imposed , and the intermediate structures can then serve as benchmarks for comparison when a real image is analyzed .

figure 18 : examples of gmst and gist .

we note that a general procedure by which information extracted from a single , fixed network structure can be utilized to understand the physical processes that guided the formation of that structure is highly desirable in understanding the invasion network of tumor cells , since the temporal development of such a network is extremely difficult to observe . to this end , kansal and torquato examined a variety of structural characterizations and found that the occupied edge density ( i.e. , the fraction of edges in the graph that are included in the tree ) and the tortuosity of the arcs in the tree ( i.e. , the average of the ratio of the path length between two arbitrary nodes in the tree and the euclidean distance between them ) correlate well with the degree to which an intermediate structure resembles the gmst or the gist . since both characterizations are straightforward to determine from an image ( only the tree itself is required for the tortuosity , while additional information about the underlying graph is needed for the occupied edge density ) , they are potentially useful tools in the analysis of the formation of invasion network structures . once the distribution of the invasive cells in the brain is understood , a cellular automaton simulation tool for glioblastoma that is useful in a clinical setting could be developed . this of course would apply more generally to invasion networks around other solid tumors .
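the two descriptors just mentioned are simple to compute once a tree has been extracted from an image . the sketch below assumes the tree is given as an adjacency dictionary with node coordinates ; the data structures and function names are choices made here for illustration .

```python
import itertools
import math
from collections import deque

def occupied_edge_density(tree_edges, graph_edges):
    """fraction of the underlying graph's edges that are used by the tree
    (edges must be listed in a consistent orientation in both collections)."""
    return len(set(tree_edges)) / len(set(graph_edges))

def tortuosity(tree_adj, coords):
    """average ratio of along-tree path length to euclidean distance,
    taken over all pairs of nodes in the tree.

    tree_adj -- dict {node: set(neighbor nodes)} describing the (loopless) tree
    coords   -- dict {node: (x, y)} positions of the nodes
    """
    def euclid(a, b):
        return math.dist(coords[a], coords[b])

    def tree_path_length(src, dst):
        # breadth-first search; in a tree the path found is the unique path
        prev = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nbr in tree_adj[node]:
                if nbr not in prev:
                    prev[nbr] = node
                    queue.append(nbr)
        length, node = 0.0, dst
        while prev[node] is not None:
            length += euclid(node, prev[node])
            node = prev[node]
        return length

    ratios = [tree_path_length(a, b) / euclid(a, b)
              for a, b in itertools.combinations(tree_adj, 2)]
    return sum(ratios) / len(ratios)
```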
in this paper , we have reviewed the work that we have performed in an attempt to develop an ising model of cancer . we began by describing a minimalist 4d cellular automaton model of cancer in which healthy cells transition between states ( proliferative , hypoxic , and necrotic ) according to simple local rules and their present states , which can be viewed as a stripped-down ising model of cancer . using four proliferation parameters , this model was shown to reflect the growth dynamics of a clinically untreated tumor very well . this was followed by a discussion of an extension of the model to study the effect of a mutated subpopulation on the tumor dynamics and geometry , and of how tumor growth is affected by chemotherapeutic treatment , including induced resistance , with three additional treatment parameters . an improved ca model that explicitly accounts for angiogenesis as well as the heterogeneous and confined environment in which a tumor grows was then discussed . a general cancer simulation tool that merges , adapts and improves all of the aforementioned mechanisms into a single ca model was also presented and applied to simulate the growth of gbm in a vascularized , confined cranium . finally , we touched on how one might characterize the invasive network organization ( local versus global organization ) around a solid tumor using spanning trees . however , we must move well beyond the improved ca model as well as other computational models of cancer in order to make real progress on controlling this dreaded set of diseases . formulating theoretical and computational tools that can be utilized clinically to predict neoplastic progression and propose individualized optimal treatment strategies to control cancer growth is the holy grail of tumor modeling . although the development of our most comprehensive cellular automaton model is potentially a useful step towards the long-term goal of an ising model for cancer , numerous complex mechanisms involved in tumor growth and their interactions need to be identified and understood in order to truly achieve this goal . for example , an effective ising model of cancer must incorporate molecular-level information via a better understanding of the cellular origin of the tumor . such information might become available if imaging techniques for the spatial statistics of cell / molecular heterogeneity can be developed . this would enable an improved understanding of invading cancer cells : cell motility , cell-cell communication and the phenotypes of invading cells . such knowledge is crucial in order to predict the effects of treatment and tumor recurrence . the incorporation of stem cells , oncogenes and tumor suppressor genes in computational models would aid in our understanding of tumor progression . in addition , we must quantitatively characterize the biological ( host ) environment ( i.e. , a heterogeneous material / medium ) in which cancer evolves , including both the microstructure and the associated physical properties . for example , a better knowledge of the diffusion and transport of nutrients , drugs , etc . would significantly improve the accuracy of the model in simulating the effects of vasculature evolution and treatment . similarly , cell mechanics and mechanical stresses must be understood .
in such cases , imaging of the biological environment over a wide spectrum of length and time scales will be crucial .

figure 19 : a cartoon of a two-phase medium .

it is important to emphasize that the theory of heterogeneous media is a huge field within the physical sciences that can be brought to bear to better understand the heterogeneous host microenvironment of cancer and metastases ( see figure 19 ) . for example , there exist powerful and sophisticated theoretical / computational techniques to characterize the microstructure of heterogeneous materials and predict their physical properties . specifically , the details of complex microstructures are described in terms of various statistical descriptors ( different types of correlation functions ; a simple example is sketched below ) , which in turn determine the physical properties of the heterogeneous materials . in particular , the effective properties that have been predicted include the diffusion coefficient , reaction rates , elastic / viscoelastic moduli , thermal conductivity , thermal expansion coefficient , fluid permeability , and electrical conductivity . accurate characterizations of these properties of the host environment and tumor mass are essential in order to significantly improve models for tumor growth and invasion . for example , a knowledge of the elastic properties enables one to better model the effects of physical confinement and the mechanical response of a solid tumor , while the diffusion coefficient and fluid permeability are crucial to model the transport of nutrients and proteins , the delivery of drugs and even the migration of cancer cells . these techniques have been used to propose a novel biologically constrained three-phase model of the brain microstructure . given such information , the ca model can be modified accordingly to take into account the available cell / molecular details of the tumor mass , its invasion network and the host heterogeneity ( e.g. , the capillary vasculature and adaptive physical confinement ) . real-time tumor growth and treatment simulations can be carried out to generate data of clinical utility ; for example , instead of only producing data which qualitatively reflect the general effects of tumor treatment and resistance , one could use the model to make reliable prognoses and to optimize individual treatment strategies .
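as a concrete example of the statistical descriptors mentioned above , the two-point probability function of a digitized two-phase medium ( the probability that two points separated by a distance r both lie in the phase of interest ) can be estimated by simple sampling . the sketch below operates on a synthetic binary image with periodic boundaries and is an illustration only , not a reproduction of any published calculation .

```python
import numpy as np

def two_point_probability(phase_map, r, n_samples=20000, rng=None):
    """monte carlo estimate of S2(r) for phase 1 of a 2d digitized medium.

    phase_map -- 2d array of 0/1 indicating the phase at each pixel
    r         -- separation distance in pixels
    """
    rng = np.random.default_rng(rng)
    nx, ny = phase_map.shape
    hits = 0
    for _ in range(n_samples):
        x, y = rng.integers(nx), rng.integers(ny)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x2 = int(round(x + r * np.cos(theta))) % nx   # periodic boundaries
        y2 = int(round(y + r * np.sin(theta))) % ny
        hits += phase_map[x, y] * phase_map[x2, y2]
    return hits / n_samples

# synthetic two-phase medium: uncorrelated pixels with volume fraction ~0.3
medium = (np.random.default_rng(0).random((256, 256)) < 0.3).astype(int)
for r in (0, 2, 8, 32):
    print(r, two_point_probability(medium, r, rng=1))
```

for such an uncorrelated medium the estimate decays from the volume fraction at r = 0 toward its square at large r ; structured media produce more informative decay curves , which is what makes such descriptors useful .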
it would be fascinating to see if a more refined ising model for cancer predicted a `` phase transition '' phenomenon , which would be in keeping with the behavior of the standard ising model for spin systems ; for example , it is not hard to imagine that part of the tumorigenesis process involves a `` phase transition '' between pre-malignant cells and malignant cells . we also note that variational principles and optimization techniques have been fruitfully applied to design structures with optimal properties . can optimization techniques be applied to understand and control cancer ? although optimization methods have begun to be employed for such purposes , their full potential has yet to be realized . for tumor treatment , for example , optimization techniques could be employed to design chemotherapy / radiation strategies depending on tumor size , genomic information and the heterogeneous environment , as well as the optimal durations of treatment and rest periods . given sufficient patient-specific information , optimized treatment strategies can be designed for individual patients . a variety of optimization techniques could be brought to bear here , including simulated annealing methods and linear and nonlinear programming techniques .

figure 20 : minimal surface structure .

we have developed an optimization methodology that provides a means of optimally designing multifunctional composite microstructures . we have shown how the competition between two different performance demands ( thermal versus electrical behaviors or electrical versus mechanical behaviors ) results in unexpected microstructures , namely , minimal surfaces ( see figure 20 ) , which also appear to be optimal for fluid transport as well as for diffusion-controlled reactions . this work suggests that it may be fruitful to explore the development of cancer , which involves not only competition but also cooperation , from a rigorous multifunctional optimization viewpoint . cancer processes involve a competition between the primary clone , subclones , healthy tissue , the immune system , etc . , as well as a cooperation between different cell types ( e.g. , stroma cells and cancer cells ) in a heterogeneous environment . this competition / cooperation can be translated into an optimization problem in space and time . adaptation of this multifunctional optimization approach to cancer modeling could provide an alternative to game-theory approaches to understanding cancer .

figure 21 : diamond and disordered ground state .

even more challenging and intriguing questions can be asked : can we exploit the unique properties of normal stem cells to control cancer ( e.g. , to deliver therapy to tumors or to have them compete with the tumor ) ? can we use inverse optimization methods to design `` hypothetical '' cancers or stem cells with particular cell-cell interactions to yield targeted behaviors , and then make them in the lab ? these `` inverse '' problems are motivated by their analog in statistical mechanics . in statistical mechanics , the `` forward problem '' is one in which a hamiltonian ( interaction potential ) for a many-body system is specified and then the structure of the system and its thermodynamics are predicted . by contrast , the `` inverse '' problem of statistical mechanics seeks to find the `` optimal '' interaction potential that leads spontaneously to a novel `` targeted '' structure ( or behavior ) . we have discovered optimal interaction potentials that yield unusual or counterintuitive targeted ground ( zero-temperature ) states , e.g. , a low-coordinated diamond crystal and disordered states , with only isotropic pair potentials ( see figure 21 ) . ground states are those many-particle configurations that arise as a result of slowly cooling a liquid to absolute zero temperature .
the aforementioned obtained targeted ground states are so unusual because much of our experience involves ground states that are highly - coordinated crystal structures .an extremely challenging and fascinating question is whether we can devise inverse optimization techniques to control cancer ?it is clear that theoretical methods based in the physical and mathematical sciences offer many different fruitful ways to contribute to tumor research .however , for this approach to be successful , intensive interactions with cell biologists , oncologists , radiologists , clinicians , physicists , chemists , engineers , and applied mathematicians are essential .such an interdisciplinary approach appears to be necessary in order to control this deadly disease .this could be achieved most effectively if we could have an analog of the `` manhattan project '' in which there was a single facility with such an interdisciplinary team of scientists dedicated to this supreme achievement .* acknowledgments * the author thanks yang jiao and jana gevertz for very useful discussions and their critical reading of this manuscript .the research described was supported by award number u54ca143803 from the national cancer institute .the content is solely the responsibility of the authors and does not necessarily represent the official views of the national cancer institute or the national institutes of health .10 hanahan d and weinberg r a. the hallmarks of cancer . , 100:5770 , 2000 .crossa f r and tinkelenberga a h. a potential positive feedback loop controlling cln1 and cln2 gene expression at the start of the yeast cell cycle . , 65:875883 , 1991 .deisboeck t s , berens m e , kansal a r , torquato s , rachamimov a , louis d n , and chiocca e a. patterns of self - organization in tumor systems : complex growth dynamics in a novel brain tumor spheroid model ., 34:115134 , 2001 .brand u , fletcher j c , hobe m , meyerowitz e m , and simon1dagger r. dependence of stem cell fate in arabidopsis on a feedback loop regulated by clv3 activity . , 289:617619 , 2000 .kitano h. cancer as a robust system : implications for anticancer therapy ., 4:227235 , 2004 .chaplain m a j and sleeman b d. modelling the growth of solid tumours and incorporating a method for their classification using nonlinear elasticity theory ., 31:431473 , 1993 .tracqui p , cruywagen g c , woodward d e , bartoo g t , murray j d , and alvord e c. a mathematical model of glioma growth : the effect of chemotherapy on spatio - temporal growth . , 28:1731 , 1995 .gatenby r a. applications of competition theory to tumour growth : implications for tumour biology and treatment . , 32:722726 , 1996 .anderson a r a and chaplain m a j. continuous and discrete mathematical models of tumor - induced angiogenesis ., 60:857900 , 1998 .panetta j c. a mathematical model of drug resistance : heterogeneous tumors ., 147:4161 , 1998 .kansal a r , torquato s , harsh g r , chiocca e a , and deisboeck t s. simulated brain tumor growth using a three - dimensional cellular automaton ., 203:367382 , 2000 .kansal a r , torquato s , chiocca e a , and deisboeck t s. emergence of a subpopulation in a computational model of tumor growth ., 207:431441 , 2000 .schmitz j e amd kansal a r and torquato s. a cellular automaton model of brain tumor treatment and resistance ., 4:223239 , 2002 .mcdougall s r , anderson a r a , chaplain , m a j , and sherrat j a. 
mathematical modelling of flow through vascular networks : implications for tumor - induced angiogenesis and chemotherapy strategies ., 64:673702 , 2002 .alarcn t , byrne h m , and maini p k. a multiple scale model for tumor growth ., 3:440475 , 2005 . cristini v , frieboes h b , gatenby r , caerta s , ferrari m , and sinek j. morphological instability and cancer invasion . , 11:67726779 , 2005 . frieboes h b , zheng x , sun chung - ho , tromberg b , gatenby r , and cristini v. an integrated computational / experimental model of tumor invasion ., 66:15971604 , 2006 .gevertz j l and torquato s. modeling the effects of vasculature evolution on early brain tumor growth ., 243:517 , 2006 .swanson k r , rockne r , rockhill j k , and alvord e c jr . combining mathematical modeling with serial mr imaging to quantify and predict response to radiation therapy in individual glioma patient ., 9:575 , 2007 .gevertz j l , gillies g , and torquato s. simulating tumor growth in confined heterogeneous environments ., 5:036010 , 2008 .gevertz j and torquato s. growing heterogeneous tumors in silico ., 80:051910 , 2009 .gerlee p and anderson a r. evolution of cell motility in an individual - based model of tumour growth ., 259:6783 , 2009 .ramis - conde i , chaplain m a , anderson a r , and drasdo d. multi - scale modelling of cancer cell intravasation : the role of cadherins in metastasis . , 6:16008 , 2009 .hb frieboes , f jin , yl chuang , sm wise , js lowengrub , and v cristini .three - dimensional multispecies nonlinear tumor growth- ii : tumor invasion and angiogenesis . , 264:125478 , 2010 .byrne h. dissecting cancer through mathematics : from the cell to the animal model ., 10:221231 , 2010 .coffey d s. self - organization , complexity and chaos : the new biology for medicine . , 4:882885 , 1998 .holland e c. glioblastoma multiforme : the terminator ., 97:62426244 , 2000 .c j wheeler , k l black , g liu , m mazer , x - x zhang , s pepkowitz , d goldfinger , h ng , d irvin , and j s yu . vaccination elicits correlated immune and clinical responses in i glioblastoma multiforme patients ., 68:59555964 , 2010 .holash j , maisonpierre p c , compton d , boland p , alexander c r , zagzag d , yancopoulos g d , and weigand s j. vessel cooption , regression , and growth in tumors mediated by angiopoietins and vegf ., 284:19941998 , 1999 .brat d j , castellano - sanchez a a , hunter s b , pecot m , cohen c , hammond e h , devi s n , kaur b , and van meir e g. pseudopalisades in glioblastoma are hypoxic , express extracellular matrix proteases , and are formed by an actively migrating cell popultion ., 64:920927 , 2004 .singh s k , clarke i d , hide t , and dirks p b. cancer stem cells in nervous system tissues . , 23:72677273 , 2004 .singh s k , hawkins c , clarke i d , squire j a , bayani j , hide t amd mekelman r m , cusimano m d , and dirks p b. identification of human brain tumour initiating cells ., 432:396401 , 2004 .dchting w and vogelsaenger t. recent progress in modelling and simulation of three - dimensional tumor growth and treatment ., 18:7991 , 1985 .qi a s , zheng x , du c y , and an b s. a cellular automaton model of cancerous growth ., 161:112 , 1993 .smolle j and stettner h. computer simulation of tumour cell invasion by a stochastic growth model . , 160:6372 , 1993 .wasserman r , acharya r , sibata c , and shin k h. a patient - specific in vivo tumor model ., 136:111140 , 1996 .steel g g and peckham m j. 
exploitable mechanisms in combined radiotherapy - chemotherapy : the concept of additivity ., 5:8591 , 1997 .berkman r a , clark w c , saxena a , robertson j t , oldfield e h , and ali i u. clonal composition of glioblastoma multiforme ., 77:432437 , 1992 .coons s w and johnson p c. regional heterogeneity in the dna content of human gliomas ., 72:30523060 , 1993 .paulus w and peiffer j. intratumoral histologic heterogeneity of gliomas . a quantitative study ., 64:442447 , 1989 .yung w a , shapiro j r , and shapiro w r. heterogeneous chemosensitivities of subpopulations of human glioma cells in culture . , 42:992998 , 1982 .nowell p c. the clonal evolution of tumor cell populations ., 194:2328 , 1976 .loeb l a. microsatellite instability : marker of a mutator phenotype in cancer ., 54:50595063 , 1994 .lengauer c , kinzler k w , and vogelstein b. genetic instabilities in human cancers ., 396:643649 , 1998 .heppner g h and miller b e. therapeutic implications of tumor heterogeneity ., 16:91105 , 1989 .schnipper l e. clinical implications of tumor cell heterogeneity ., 314:14231431 , 1986 .bredel m. anticancer drug resistance in primary human brain tumors ., 35:161204 , 2001 .endicott l c and ling v. the biochemistry of the p - glycoprotein - mediated multidrug resistance ., 58:137171 , 1989 .german u a. p - glycoprotein a mediator of multidrug resistance in tumour cells . , 32:927944 , 1996 .poppenborg h , munstermann g , knopfer m m , hotfilder m , and wolff j e a. c6 cross - resistant to cisplatin and radiation . , 17:20732077 , 1997 .chadhary p m and roninson i b. induction of multidrug resistance in human cells by transient exposure to different chemotherapeutic drugs ., 85:632639 , 1993 .gekeler v , beck j , noller a , wilisch a , frese g , neumann m , handgretinger r , ehninger g , probst h , and niethammer d. drug - induced changes in the expression of mdr - associated genes - investigations on cultured cell lines and chemotherapeutically treated leukemias ., 69:s19s24 , 1994 .gerweck l e , kornblinth p l , burlett p , wang j , and sweigert s. radiation sensitivity of cultured glioblastoma cells ., 125:231241 , 1977 .kayama t , yoshimoto t , fujimoto s , and sakurai y. intratumoral oxygen pressure in malignant brain tumors ., 74:5559 , 1991 .zhang w , hara a , sakai n , andoh t , yamada h , anf p h. gutin y n , and kornblinth p l. radiosensitization and inhibition of deoxyribonucleic acid repair in rat glioma cells by long term treatment with 12-o - tetradecanoylphorbol 13-acatate . , 32:432437 , 1993 .holash j , wiegand s j , and yancopoulos g d. new model of tumor angiogenesis : dynamic balance between vessel regression and growth mediated by angiopoietins and vegf . , 18:53565362 , 1999 .maisonpierre c s , suri c , jones p f , bartunkova s , wiegand s j , radziejewski c , compton d , mcclain j , aldrich t h , papadlopoulous n , daly t j , davis s , sato t n , and yancopoulos g d. angiopoietin-2 , a natural antagonist for tie2 that disrupts in vivo angiogenesis ., 277:5560 , 1997 .secomb t w , hsu r , beamer n b , and coull b m. theoretical simulation of oxygen transport to brain by networks of microvessels : effects on oxygen supply and demand on tissue hypoxia . , 7:237247 , 2000 .giese a and manfred w. glioma invasion in the central nervous system ., 39:235252 , 1996 . visted t , enger p o , lund - johansen m , and bjerkvig r. mechanisms of tumor cell invasion and angiogenesis in the central nervous system ., 8:289304 , 2003 .kansal a r and torquato s. 
globally and locally minimal weight branched tree networks ., 301:601619 , 2001 .torquato s. .springer - verlag , new york , 2002 .helmlinger g , netti p a , lichtenbeld h c , melder r j , and jain r k. solid stress inhibits the growth of multicellular tumor spheroids ., 15:77883 , 1997 .t. sun , p. meakin , and t. jossang .minimum energy dissipation model for river basin geometry ., 49:48654872 , 1994 .i. rodriguez iturbe and a. rinaldo . .cambridge university press , cambridge , 1997 .g. b. west , j.h .brown , and b. j. enquist . a general model for the origin of allometric scaling laws in biology ., 276:122126 , 1997 .haouari m dror m and chaouachi j. generalized spanning trees ., 120:583592 , 2000 .torquato s , kim i c , and cule d. effective conductivity , dielectric constant , and diffusion coefficient of digitized composite media via first - passage - time - equations ., 85:15601572 , 1999 .s b lee , i c kim , c a miller , and s torquato .random - walk simulation of diffusion - controlled processes among static traps ., 39:1183311839 , 1989 .torquato s. diffusion and reaction among traps : some theoretical and simulation results ., 65:11731207 , 1991 .quintanilla j and torquato s. new bounds on the elastic moduli of suspensions of spheres . , 77:43614372 , 1995 . torquato s. exact epression for the effective elastic tensor of disordered composites ., 79:681685 , 1997 .kim i c and s torquato .effective conductivity of suspensions of hard spheres by brownian motion simulation . , 69:22802289 , 1991 .gibiansky l v and torquato s. thermal expansion of isotropic multiphase composites and polycrystals ., 45:12231252 , 1997 .torquato s and lu b. rigorous bounds on the fluid permeability : effect of polydispersivity in grain size ., 2:487491 , 1990 .torquato s. effective electrical conductivity of two - phase disordered composite media ., 58:37903797 , 1985 . sen a k and torquato s. effective electrical conductivity of two - phase disordered anisotropic composite media ., 39:45044516 , 1989 . j. l. gevertz and s. torquato . a novel three - phase model of brain tissue microstructure . , 4:e100052 , 2008 .torquato s and pham d c. optimal bounds on the trapping constant and permeability of porous media ., 92:255505 , 2004 .sigmund o and torquato s. design of materials with extreme thermal expansion using a three - phase topology optimization method ., 45:10371067 , 1997 .torquato s , hyun s , and donev a. multifunctional composites : optimizing microstructures for simultaneous transport of heat and electricity . , 89:266601 , 2002 .s. torquato and a. donev .minimal surfaces and multifunctionality ., 460:18491856 , 2004 .torquato s. optimal design of heterogeneous materials ., 40:101129 , 2010 .y. jung and s. torquato .fluid permeabilities of triply periodic minimal surfaces . , 92:255505 , 2005 .j. l. gevertz and s. torquato .mean survivial times of absorbing triply periodic minimal surfaces ., 80:011102 , 2009 .d dingli , f a c c chalub , f c santos , s van segbroeck , and j m pacheco .cancer phenotype as the outcome of an evolutionary game between normal and malignant cells ., 101:11301136 , 2009 .reya t , morrison s j , clarke m f , and weissman1 i l. stem cells , cancer , and cancer stem cells ., 414:105111 , 2001 .rechtsman m , stillinger f h , and torquato s. optimized interactions for targeted self - assembly : application to honeycomb lattice ., 95:228301 , 2005 .rechtsman m , stillinger f h , and torquato s. 
synthetic diamond and wurtzite structures self - assemble with isotropic pair interactions . , 75:031403 , 2007 . batten r d , stillinger f h , and torquato s. novel low - temperature behavior in classical many - particle systems . , 103:050602 , 2009 . torquato s. inverse optimization techniques for targeted self - assembly . , 5:11571174 , 2009 .

* _ neoplasm _ : a neoplasm is a synonym for a tumor .
* _ glioma _ : a collection of tumors arising from the glial cells or their precursors in the central nervous system .
* _ cellular automaton _ : a spatially and temporally discrete model that consists of a grid of cells , with each cell being in one of a number of predefined states . the state of a cell at a given point in time depends on the state of itself and its neighbors at the previous discrete time point . transitions between states are determined by a set of local rules .
* _ ising model _ : the ising model is an idealized statistical-mechanical model of ferromagnetism that is based on simple local-interaction rules , but nonetheless leads to basic insights and features of real magnets , such as phase transitions with a critical point .
* _ voronoi cell _ : given a set of points , the voronoi cell is the cell that is formed about an arbitrary point in the set by finding the region of space closer to that point than to any other point in the system ( torquato , 2002 ) .
* _ delaunay triangulation _ : given a voronoi graph ( a set of voronoi cells ) , the delaunay graph is its dual that results from joining all pairs of sites that share a voronoi face . if this graph consists of only simplices , the graph is called a delaunay triangulation ( torquato , 2002 ) .
* _ quiescent _ : a cell is considered quiescent if it is in the g0 phase of the cell cycle and is not actively dividing .
* _ necrotic _ : a cell is considered necrotic if it has died due to injury or disease , such as abnormally low oxygen levels .

figure 2 : contrast-enhanced brain mri scan showing a right frontal gbm tumor , as adapted from ref . . perifocal hypointensity is caused by significant edema formation . the hyper-intense , white region ( ring enhancement ) reflects an area of extensive blood-brain / tumor barrier leakage . since this regional neovascular setting provides tumor cells with sufficient nutrition , it contains the highly metabolizing , e.g. , dividing , tumor cells ; therefore , this area corresponds to the outermost concentric shell of highly proliferating neoplastic cells in our model ( see figure 5 ) .

figure 5 : the outer shell , with thickness , is composed of proliferative cells . both length scales and are determined by the nutritional needs of the cells via diffusional transport .

figure 6 : ( a ) and the necrotic region radius are shown . ( b ) two non-proliferative cells that are more than away from the tumor edge are turned necrotic , and two proliferative cells are selected with probability to check for division . if there are non-tumorous cells within a distance from the selected proliferative cell , it will divide ; otherwise , it will turn into a non-proliferative cell . ( c ) one of the selected proliferative cells divides and the other turns into a non-proliferative cell .

figure 12 : `` base '' and `` high , '' respectively , denote the monoclonal limiting cases . each tumor is set to have the same volume at some `` diagnosis '' time . note that the emerging tumor's dynamics initially follow the base case , but later follow the highly aggressive case .

figure 15 : fully developed tumors for small ( left panel ) , intermediate ( middle panel ) and large ( right panel ) . the distinct clonal subpopulations in each tumor are represented with a different color , ranging from red ( highest values ) to violet ( lowest values ) .

figure 16 : ( a ) after 45 days of growth , 30% of the cells are proliferative , 66.4% hypoxic and 3.6% necrotic . ( b ) after 65 days , the dimensionless area is 0.0195 units , with 51.2% of the cells proliferative , 33.0% hypoxic and 15.8% necrotic . ( c ) after 85 days , the dimensionless area is 0.0362 units , with 48.2% of the cells proliferative , 16.8% hypoxic and 35.0% necrotic . ( d ) after 115 days , the dimensionless area is 0.0716 units , with 45.1% of the cells proliferative , 18.6% hypoxic and 36.3% necrotic . the deep blue outer region ( darkest of the grays in black and white ) is comprised of proliferative cells , the yellow region ( lightest of the grays in black and white ) consists of hypoxic cells and the black center contains necrotic cells . green cells ( intermediate gray shade in black and white ) are apoptotic . the white speckled region of space represents locations in which the tumor cannot grow . the lines represent blood vessels ; if viewing the image in color , red vessels were part of the original tissue vasculature and the purple vessels grew via angiogenesis .

figure 19 : left panel : a two-phase medium with phase properties and , and volume fractions and . the quantity represents any general physical property of phase ( e.g. , diffusion coefficient , electrical or thermal conductivity , elastic moduli , viscosity , and magnetic permeability ) . the material phases can be solid , liquid or gas depending on the specific context . here and represent the macroscopic and microscopic length scales , respectively . right panel : when is much bigger than , the heterogeneous material can be replaced by a homogeneous medium with an effective property .
the holy grail of tumor modeling is to formulate theoretical and computational tools that can be utilized in the clinic to predict neoplastic progression and propose individualized optimal treatment strategies to control cancer growth . in order to develop such a predictive model , one must account for the numerous complex mechanisms involved in tumor growth . here we review resarch work that we have done toward the development of an``ising model '' of cancer . the ising model is an idealized statistical - mechanical model of ferromagnetism that is based on simple local - interaction rules , but nonetheless leads to basic insights and features of real magnets , such as phase transitions with a critical point . the review begins with a description of a minimalist four - dimensional ( three dimensions in space and one in time ) cellular automaton ( ca ) model of cancer in which healthy cells transition between states ( proliferative , hypoxic , and necrotic ) according to simple local rules and their present states , which can viewed as a stripped - down ising model of cancer . this model is applied to model the growth of glioblastoma multiforme , the most malignant of brain cancers . this is followed by a discussion of the extension of the model to study the effect on the tumor dynamics and geometry of a mutated subpopulation . a discussion of how tumor growth is affected by chemotherapeutic treatment , including induced resistance , is then described . how angiogenesis as well as the heterogeneous and confined environment in which a tumor grows is incorporated in the ca model is discussed . the characterization of the level of organization of the invasive network around a solid tumor using spanning trees is subsequently described . then , we describe open problems and future promising avenues for future research , including the need to develop better molecular - based models that incorporate the true heterogeneous environment over wide range of length and time scales ( via imaging data ) , cell motility , oncogenes , tumor suppressor genes and cell - cell communication . a discussion about the need to bring to bear the powerful machinery of the theory of heterogeneous media to better understand the behavior of cancer in its microenvironment is presented . finally , we propose the possibility of using optimization techniques , which have been used profitably to understand physical phenomena , in order to devise therapeutic ( chemotherapy / radiation ) strategies and to understand tumorigenesis itself . salvatore torquato department of chemistry , princeton university , princeton , nj 08544 , usa department of physics , princeton university , princeton , nj 08544 , usa princeton center for theoretical science , princeton university , princeton , nj 08544 , usa program in applied and computational mathematics , princeton university , princeton , nj 08544 , usa princeton institute for the science and technology of materials , princeton university , princeton , nj 08544 , usa * corresponding author contact information * : = salvatore torquato + tel . : 609 - 258 - 3341 + fax : 609 - 258 - 6746 + e - mail : torquato.edu * short title * : toward an ising model of cancer * classification numbers * : 87.17.aa , 87.19.xj _ keywords _ : tumor growth , glioblastoma multiforme , cellular automaton , heterogeneous media , optimization
according to the hoop - conjecture , a cosmic string that contracts to a size smaller than its schwarzschild radius will collapse and form a black hole .this process is of particular interest and importance in connection with primordial black holes , hawking radiation , high energy cosmic gamma bursts etc . in a network of cosmic strings , only a very small fraction , , of strings are expected to collapse to black holes .many attempts have been made to obtain a value for ( see for instance - ) , but the results deviate wildly .interestingly enough , only in one of the pioneering papers on the subject , the one by polnarev and zembowicz , is the derivation of based on exact analytical expressions for the cosmic strings involved in the process . in all other discussions the derivation of based on linearized expressions for the string configurations ( for instance ) , rather general arguments and estimates ( for instance ) , observational data concerning high energy cosmic rays ( for instance - ) or the derivation is purely numerical ( for instance ) . and even in ref . , the final computation of is actually numerical , although it could in fact have been performed analytically . in the present paperwe consider the question of collapse for the analytical two - parameter family of strings introduced by turok .this is the same family of strings that was considered in .however , simple explicit examples show that the results obtained in are correct only in part of the parameter - space. we will show that some essential points were missed in ref . , and that the results obtained there are not completely correct .first of all , we make a classification of the string configurations according to their general behavior during one period of oscillation .together with some simple explicit examples , this analysis reveals that the corresponding result obtained in is in fact only correct in approximately half of the two - dimensional parameter - space .secondly , we then derive the exact analytical expression for the probability of string collapse to black holes .our result for agrees partly with that of ref . in the sense that , where is the string tension and is newtons constant . however , we find a numerical prefactor in the relation , expressed in terms of euler s gamma - function , which is of the order . for comparison , in ref . the prefactor was found ( numerically ) to be of the order .finally we show that our careful computation of the prefactor helps to understand the discrepancy between previously obtained results and , in particular , that for large " values of , there may not even be a discrepancy .we also give a simple physical argument that can immediately rule out some of the previously obtained results .the paper is organized as follows . in section [ sec2 ], we introduce the string configurations of turok and discuss the results obtained by polnarev and zembowicz . in section [ sec3 ] ,we give the precise definition of the minimal string size , and we make a complete classification of the two - dimensional parameter - space .the classification is in terms of the specific time(s ) at which the value comes out for a particular string configuration during one period of oscillation " . in section [ sec4 ], we derive an exact analytical expression for the probability of string collapse to black holes and we give the approximate result in the physically realistic limit . 
finally in section [ sec5 ] , we argue that our careful computation of the prefactor helps to understand the discrepancy between previously obtained results , and we give our concluding remarks .some of the details of the computations used in sections [ sec3 ] and [ sec4 ] are presented in the appendix .the string equations of motion in flat minkowski space take the form : supplemented by the constraints : it is convenient to take , where is a constant with dimension of length .( [ waveeq])-([gaugefix ] ) become : a two - parameter solution to eqs .( [ neweom ] ) was first introduced by turok : \nonumber\\ y(\tau,\sigma ) & = & \frac{a}{2}~\left[{(1-\alpha)\cos ( \sigma -\tau ) + \frac{\alpha}{3}\cos 3 ( \sigma - \tau)+ ( 1 - 2 \beta ) \cos ( \sigma + \tau ) } \right ] \nonumber \\z(\tau,\sigma ) & = & \frac{a}{2}~\left[{2\sqrt{\alpha(1-\alpha)}\cos ( \sigma - \tau ) + 2\sqrt{\beta(1-\beta)}\cos ( \sigma + \tau ) } \right ] \label{turokstring}\end{aligned}\ ] ] where } ] .this family of solutions generalizes the solutions considered in ref . . the total mass - energy of the turok - string is: where: it follows that : which ( by construction ) is independent of the parameters ( ) .similarly , one finds for the momenta : in fact , the string center of mass is located at at all times , and the strings are symmetric under reflection in origo.the schwarzschild radius corresponding to the energy ( [ stringenergy ] ) is for realistic cosmic strings , the dimensionless parameter is quite small , say .it is then clear that a turok - string ( [ turokstring ] ) will typically be far outside its schwarzschild radius ( as follows since typically ) .however , for certain particular values , a string might at some instant during its evolution be completely inside its schwarzschild radius .such a string will collapse and eventually form a black hole , according to the socalled hoop - conjecture . to determine whether this happens or not , we must first find the minimal 3-sphere , that completely encloses the string , as a function of time .after minimization over time , we then get the minimal 3-sphere that can ever enclose the whole string .let the radius of this sphere be ( it will be defined more stringently in the next section ) .then the condition for collapse is : that is to say , in a pioneering paper , polnarev and zembowicz considered , among other things , the question of collapse of the turok - strings ( [ turokstring ] ) .in connection with the minimal string size they found : * the strings have their minimal size at * for generic parameters : result ( [ polz1 ] ) would be expected for a string experiencing a monopole - like oscillation , i.e. starting from maximal size at , then contracting isotropically to minimal size at , and then re - expanding to its original maximal size at . as for the result ( [ polz2 ] )it was simply stated without proof or derivation . in sections 3,4we shall show that ( [ polz1 ] ) and ( [ polz2 ] ) are not completely correct .in fact , they are correct in part of the two - dimensional parameter - space but incorrect in other parts . a simple example showing that ( [ polz2 ] ) can not be correct , is provided by the case . 
in that case the result ( [ polz2 ] ) would actually give , which would mean that the whole string had collapsed to a point at some instant .this would imply that at some instant , but that is certainly impossible for .the problem is that ( [ polz2 ] ) gives the minimal distance from origo to the string , but it is the maximal distance which is relevant for the minimal 3-sphere .for a given pair of parameters ( ) , we define the minimal string size as the ( square of the ) radius of the minimal 3-sphere that can ever enclose the string completely .more precisely : \end{array } \left [ \begin{array}{cc } \mbox{maximum}\\ \sigma\in\left[0 ; \pi\right ] \end{array } \left(r^2\left(\tau , \sigma\right)\right)\right ] \label{defr}\ ] ] where : as obtained using eq .( [ turokstring ] ) .thus , for a fixed time , we first compute the maximal distance from origo ( i.e. the string center of mass ) to the string .this gives the minimal 3-sphere that encloses the string at that particular instant .we then minimize this maximal distance over all times .this gives altogether the minimal string size . andthis is obviously the quantity that must be compared with the ( square of the ) schwarzschild radius .notice that we need only maximize over ] in eq .( [ defr ] ) .this is due to the reflection symmetry and time periodicity of the turok - strings ( [ turokstring ] ) .+ we now outline the computation of . the details can be found in the appendix , and some analytical results are given also in section [ sec4 ] .we first solve the equation : for fixed time .this leads to a quartic equation in : where the coefficients depend on time as well as on the parameters .the explicit expressions are given in the appendix .the solutions to equation ( [ quartic ] ) are explicitly known , leading to for given values of . by insertion of these solutions into , it is then straightforward to obtain the maximal distance in the square bracket of eq .( [ defr ] ) .this is now a function of , which finally has to be minimized over ]. then they expand for a while , and then recontract and reach the minimal size again at .then they expand again towards the original size at . in this family of strings ,the value of depends on .+ + it must be stressed that the strings in most cases do not expand or contract _ isotropically_. they typically expand in some directions while contracting in other directions .this is why we use the expressions generally expand " and generally contract " , which refer to the minimal string size as a function of time ( the radius of the minimal 3-sphere enclosing the string , as a function of time ) .notice that besides the three above - mentioned families of strings , there are a number of degenerate cases at the different boundaries .for instance , at the boundary , the strings have their minimal size at , , and . on the other hand , at the boundary between regions* i * and * ii * , the strings have their minimal size at , and .notice also that the 3 points , and correspond to rigidly rotating strings , thus they have their minimal ( and maximal ) size at times .+ let us close this section with a comparison with the result ( [ polz1 ] ) of ref . .we see that the result ( [ polz1 ] ) is correct in the region * i * of parameter - space , but incorrect in regions * ii * and * iii*. +in this section we consider the question of collapse of the turok - strings ( [ turokstring ] ) . 
as already discussed in the previous section , the only strings with a chance to collapse are those corresponding to parameters located in the vicinity of .that is , we need only consider strings in the family * i * of fig .2 . using the results of the appendix , it is then straightforward to show that the minimal string size , as defined in eq .( [ defr ] ) , is given by : where : notice that eq .( [ rtwo ] ) is precisely the result ( [ polz2 ] ) of polnarev and zembowicz .however , in ref . , the other solution ( [ rone ] ) was completely missed , and this is actually the relevant solution in eq .( [ collapse ] ) in approximately half of the parameter - space .according to the hoop - conjecture ( see for instance ) , the condition for collapse to a black hole is then given by eq .( [ condition ] ) , with given by eqs .( [ collapse])-([rtwo ] ) : which should be solved for as a function of .this can be easily done analytically .the result is shown in fig .3 : the part of parameter - space fulfilling inequality ( [ newcond ] ) is bounded by the -axis , the -axis , the straight line and the two curves : where : \left[8\alpha^2 - 6\alpha + 9\left(\frac{r_s}{a } + 1\right)\frac{r_s}{a}\right]\nonumber\end{aligned}\ ] ] notice also that , and the probability for collapse into black holes is then given by the fraction : d\alpha \label{prob}\end{aligned}\ ] ] this equation represents the exact analytical result for the probability of collapse of the turok - strings ( [ turokstring ] ) , for a given value of .the integrals in ( [ prob ] ) are of hyper - elliptic type , and not very enlightening in the general case . however , using that typically ( see ) , a simple approximation is obtained by keeping only the leading order terms : the result ( [ approxprob ] ) is a very good approximation for , thus for any realistic " cosmic strings we conclude : which is our final result of this section . it should be stressed that we have been using the simplest and most naive version of the hoop - conjecture : namely , we did not take into account the angular momentum of the strings .however , numerical studies of other families of strings showed that inclusion of the angular momentum only leads to minor changes in the final result , so we expect the same will happen here .it should also be stressed that we have neglected a number of other physical effects that might change the probability of collapse .these include the gravitational field of the string and gravitational radiation . finally , as in all other discussions of the probability of string collapse ,we are faced with the problem that we do not know the measure of integration in parameter - space . thus using another measure in eq .( [ prob ] ) would generally give a different result ( see also ref .in this paper we examined , using purely analytical methods , the question of collapse of turok s two - parameter family of cosmic strings .we made a complete classification of the strings according to the specific time(s ) the minimal string size is reached during one period .this revealed that the previously obtained results were only correct in part of the two - dimensional parameter - space .we then obtained an exact analytical expression for the probability of collapse into black holes for the turok strings , which partly agrees with that of ref . 
in the sense that .however , we showed that there is a large numerical prefactor in the relation .this factor is of the order , and not , as previously stated , of the order .one might say that it is perhaps not so important whether the prefactor is or since the exponent will more or less kill the probability of collapse anyway .this may very well be true for small " values of ( say , ) , but for large " values of ( say , ) the situation is completely different .in fact , we shall now argue why it is so important to carefully compute the prefactor : we find that when using our result , a clear picture is beginning to emerge .different computations based on different families of strings ( not surprisingly ) produce slightly different exponents , but they also produce completely different prefactors .importantly , the two go in the same direction : an exponent larger by is followed by a prefactor larger by a factor ( roughly speaking ) .for instance , the one - parameter family of kibble and turok gives , our computation for the two - parameter family gives and the caldwell - casper computation gives .we find it extremely interesting that for ( which is the range where the caldwell - casper computation is valid ) , the three computations basically agree giving .we therefore find that our careful computation of the prefactor is very important and has helped to understand the discrepancy between previously obtained results . andin particular , for large " values of , there may not even be a discrepancy since the different exponents and prefactors of the different computations actually produce more or less the same number for the probability of collapse .for small " values of , the picture is unfortunately not so clear , and more detailed investigations seem necessary .it is actually possible to give a physical argument showing that the results of different computations ( if they are done correctly ) should merge for large " values of .consider the rigidly rotating straight string , corresponding to and .it is easy to show that it is precisely inside its schwarzschild radius for .now , it is well - known that the rigidly rotating string has the maximal angular momentum per energy ( it is exactly on the leading regge trajectory ) , and therefore is expected to be the most difficult string to collapse into a black hole . therefore , for all other strings have already collapsed , and we should expect .this is indeed the case for the three above mentioned computations , while it does not hold for the computation of ref. .more generally , the condition can be considered as a physical boundary condition , and therefore can be used to immediately rule out some of the previously obtained results for ; for instance those of and . as a possible continuation of our work, it would be very interesting to consider more general multi - parameter families of strings , to see how general our result for actually is .such families of strings have been constructed and considered for instance in , and more general ones can be obtained along the lines of . unfortunately , there are also still some open questions , as we discussed at the end of section [ sec4 ] .the main problem seems to be that we still do not know exactly what is the measure of integration in parameter - space .0.3 cm * acknowledgements * + one of us ( a.l.l . ) would like to thank m.p .dabrowski for discussions on this topic , at an early stage of the work . 
+in this appendix , we give some details of the results presented in the sections [ sec3 ] and [ sec4 ] .+ the distance from origo to the string , as a function of and , is conveniently written as a polynomial in and . from ( [ defdist ] ) and ( [ turokstring ] ) : \nonumber \\[2 mm ] & & \hspace*{-6mm}-\frac{4\alpha\beta}{3 } \left[\cos\left(2\tau\right)\cos\left(2\sigma\right ) + \sin\left(2\tau\right)\sin\left(2\sigma\right)\right]\cos\left(2\sigma \right ) \big\ } \label{polyn}\end{aligned}\ ] ] where : \cos\left(2\tau\right)\nonumber \\ c_1 & = & \frac{3}{4\alpha\beta}\left [ \frac{2\alpha}{3}\left(1 - \beta\right)\sin\left(4\tau\right ) + \frac{8\alpha}{3}\left(1 - \alpha\right)\sin\left(2\tau\right)\right ] \label{ugly } \\c_2 & = & \frac{3}{4\alpha\beta}\bigg [ \frac{2\alpha}{3}\left(1 - \beta\right)\cos\left(4\tau\right ) + \frac{8\alpha}{3}\left(1 - \alpha\right)\cos\left(2\tau\right ) \nonumber \\ & & \hspace*{6mm}+ 2\left(2\sqrt{\alpha\left(1 - \alpha\right ) } \sqrt{\beta\left(1 - \beta\right)}-\beta\left(1-\alpha\right)\right)\bigg ] \nonumber\end{aligned}\ ] ] then it is straightforward to show that the condition ( [ maxim ] ) leads to where and : the solutions to eq .( [ quarticagain ] ) can be written down in closed form .define ^\frac{1}{3}\nonumber \\ { \mathcal w } & \equiv & \frac{2^{\frac{1}{3}}\mathcal{x}}{3\mathcal{z } } + \frac{\mathcal{z}}{3 \ : 2^{\frac{1}{3}}}\nonumber\end{aligned}\ ] ] then the 4 solutions are : which give for given values of . these solutions are inserted into ( [ polyn ] ) and then the result of the square bracket in eq .( [ defr ] ) is determined .finally this function of must be minimized .+ as an example , consider strings in the region * i * of parameter - space ; see fig .[ fig2 ] . using the above formulas ,one finds : with \ ] ] insertion into eq .( [ polyn ] ) then leads directly to the result for the minimal string size in the region * i * , where \end{aligned}\ ] ] c.f .( [ rone])-([rtwo ] ) .+ 333 k.s .thorne , in _ magic without magic : john archibald wheeler _ , ed j. klauder ( w.h . freeman and company , 1972 ) .a. polnarev and r. zembowicz , _ phys ._ d*43 * , 1106 ( 1991 ) . j. garriga and a. vilenkin , _ phys ._ d*47 * , 3265 ( 1993 ) . j. garriga and m. sakellariadou , _ phys ._ d*48 * , 2502 ( 1993 ) .s.w . hawking , _ phys ._ b*231 * , 237 ( 1989 ) . h. honma and h. minakata , _ formation of black holes from cosmic strings _ , tmup - hel-9109 ( 1991 ) , unpublished . r.r .caldwell and e. gates , _ phys ._ d*48 * , 2581 ( 1993 ) .macgibbon , r.h .brandenberger and u.f .wichoski , _ phys ._ d*57 * , 2158 ( 1998 ) .wichoski , j.h .macgibbon and r.h .brandenberger , _ phys .rept . _ * 307 * , 191 ( 1998 ) .x. li and h. cheng , _ class .quantum grav . _* 13 * , 225 ( 1996 ) .r.r . caldwell and p. casper ,_ d*53 * , 3002 ( 1996 ) .s.w . hawking , _ phys ._ b*246 * , 36 ( 1990 ) . c. barrabes , _ class .quantum grav . _ * 8 * , 199 ( 1991 ) .j. fort and t. vachaspati , _ phys ._ b*311 * , 41 ( 1993 ) . n. turok , _ nucl ._ b*242 * , 520 ( 1984 ) .kibble and n. turok , _ phys ._ b*116 * , 141 ( 1982 ) .a. vilenkin , _ phys ._ * 121 * , 263 ( 1985 ) .a. vilenkin and e.p.s .shellard , _ cosmic strings and other topological defects _( cambridge university press , 1994 ) .a. erdelyi , _ higher transcendental functions _ ( mcgraw - hill , 1953 ) .d. delaney , _ phys ._ d*41 * , 1775 ( 1990 ) .a.l . chen , d.a .dicarlo and s.a .hotes , _ phys_ d*37 * , 863 ( 1988 ) . r.w .brown and d. delaney , _ phys .lett . 
_ * 63 * , 474 ( 1989 ) .figure 1 : the radius of the minimal 3-sphere completely enclosing a string with parameters plotted for all parameter - space .notice that is close to zero only near , that is for the near - spherical string configurations .figure 3 : the region of parameter - space which contains the strings falling inside their own schwarzschild radius is bounded by , , and the two curves . here is chosen in order to illustrate the general form of the region .as decreases , the region is relatively stretched out and becomes quite narrow .
= 1.5em we examine the question of collapse of turok s two - parameter family of cosmic strings . we first perform a classification of the strings according to the specific time(s ) at which the minimal string size is reached during one period . we then obtain an exact analytical expression for the probability of collapse to black holes for the turok strings . our result has the same general behavior as previously obtained in the literature but we find , in addition , a numerical prefactor that changes the result by three orders of magnitude . finally we show that our careful computation of the prefactor helps to understand the discrepancy between previously obtained results and , in particular , that for large " values of , there may not even be a discrepancy . we also give a simple physical argument that can immediately rule out some of the previously obtained results . _ physics department , university of odense , _ + _ campusvej 55 , 5230 odense m , denmark _
random graphs appeared in the mathematical literature as a convenient tool to prove the existence of graphs with a certain property : instead of a direct constructive proof exhibiting such a graph , one can construct a random ensemble of graphs and show that this property is true with a positive probability . soon afterwards the study of random graphs acquired interest on its own and led to many beautiful mathematical results .a large class of problems in this field can be formulated in the following generic way : a graph being given , what is the probability that a graph extracted from the random ensemble under consideration contains as a subgraph ? with a more quantitative ambition , one can define as the random variable counting the number of occurrences of distinct copies of in , and study its distribution .these problems are relatively simple when the pattern remains of a finite size in the thermodynamic limit , i.e. when the size of the random graph diverges .the situation can become much more involved when and have large sizes of the same order , as can grow exponentially with the system size .in this article we shall consider these questions when the looked for subgraph is a long circuit ( also called loop or cycle ) , i.e. a closed self - avoiding path visiting a finite fraction of the vertices of the graph .the level of accuracy of the rigorous results on this problem depends strongly on the random graph ensemble .the regular case ( when all vertices of the graph have the same degree ) is the best understood one .it has for instance been shown that -regular random graphs with have with high probability hamiltonian circuits ( circuits which visit all vertices of the graph ) and the distribution of their numbers is known .this study has been generalized to circuits of all length in .less is known for the classical erds - rnyi ensembles , where the degree distribution of the vertices converges to a poisson law of mean .most results concerns either the neighborhood of the percolation transition at , or the opposite limit of very large mean connectivity , either finite with respect to the size of the graph or diverging like its logarithm ( it is in this latter regime that the graphs become hamiltonian ) .we shall repeatedly come back in the following on this discrepancy between regular random graphs where probabilistic methods have been proved so successful and the other ensembles for which they do not seem powerful enough and might be profitably complemented by approaches inspired by statistical mechanics .we will discuss in particular a conjecture formulated by wormald , according to which random graph ensembles with a minimal degree of 3 ( and bounded maximal degree ) should be hamiltonian with high probability . besides this probabilistic point of view ( what are the characteristics of the random variable associated to the number of circuits ) , the problem has also an algorithmic side : how to count the number of circuits in a given graph ?exhaustive enumeration , even using smart algorithms , is restricted to small graphs as the number of circuits grows exponentially with the size .more formally , the decision problem of knowing if a graph is hamiltonian ( i.e. that it contains a circuit visiting all vertices ) is np - complete . 
a probabilistic algorithm for the approximate counting of hamiltonian cycles is known , but is restricted to graphs with large minimal connectivity .random graphs have also been largely considered in the physics literature , mainly in the real - world networks perspective , i.e. in order to compare the characteristics of observed networks , of the internet for instance , with those of proposed random models .empirical measures for short loops in real world graphs were for instance presented in .long circuits visiting a finite fraction of the vertices were also studied in .the behavior of cycles in the neighborhood of the percolation transition was considered in , and the average number of circuits for arbitrary connectivity distribution was computed in . in this paperwe shall turn the counting problem into a statistical mechanics model , which we treat within the bethe approximation .this will led us to an approximate counting algorithm , cf .[ sec_bethe ] .we will then concentrate on random graph ensembles and compute the typical number of circuits with the cavity replica - symmetric method in sec .[ sec_typical ] .the next two sections will be devoted to the study of the limits of short and longest circuits , then we shall investigate the validity of the replica - symmetry assumption in sec .[ sec_stability ] .we perform a comparison with exhaustive enumerations on small graphs in sec .[ sec_enumerations ] and draw our conclusions in sec . [ sec_conclusion ] .three appendices collect more technical computations .a short account of our results has been published in .let us consider a graph on vertices ( also called sites in the following ) , with edges ( or links ) .the notation shall mean that the edge joins the vertices and .the degree , or connectivity , of a site is the number of links it belongs to .the graphs are assumed in the main part of the text to be simple , i.e. without edges from one vertex to itself or multiple edges between two vertices .we denote the set of neighbors of the vertex , and use the symbol to subtract an element of a set : if is a neighbor of , will be the set of all neighbors of distinct of .the same symbol will be used for the set of edges incident to the vertex , the context will always clarify which of the two meanings is understood . a circuit of length is an ordered set of different vertices , , such that is an edge of the graph for all ] and a function defined on ,\ell_{\rm max}] ] a permutation of the indices which orders the hard fields in decreasing order , } \ge p_{[2 ] }\ge \dots p_{[k]} ] .the replica symmetric solution studied in the main part of the text is recovered by taking the distributions concentrated on a single value . to investigate its local stability ,one gives them an infinitesimal variance .expanding eq .( [ eq_1rsb ] ) in the limit of vanishing s , one obtains the following relation : for the rs solution to be stable against this perturbation , the variances of the 1rsb order parameters should decrease upon iterations of the above relation .this can be studied numerically for any random graph ensemble , by iterating the above relation on a population of couples , the value of being drawn from .the variances can be initially all taken to 1 ( note that eq .( [ eq_rsb_recur ] ) is linear in the s ) , in the course of the dynamics the s are periodically divided by a number , chosen each time to maintain the average value of constant . 
after a thermalization phase converges ( in order to gain numerical precision one computes the average over the iterations of ) , its limit being ( resp ) if the rs solution is unstable ( resp .this method , pioneered in the context of the instability of the 1rsb solution in , can be replaced by the computation of the associated non - linear susceptibility , see for instance . for regular random graphs of connectivity , where all rs messages take the same value given in eq .( [ eq_y_regular ] ) , one can readily compute the value of the stability parameter , it is easy to check that when lies in its allowed range , confirming the validity of the rs ansatz .it would have been anyhow surprising to discover an instability in this case where the annealed computation is exact .another case which is analytically solvable is the limit ( i.e. ) .indeed , we have seen that the messages scales then as , and it turns out that , independently of the rescaled messages . recalling that , one finds in this limit : for any connectivity distribution , the rs ansatz is always stable in the small regime .all the numerical investigations of we conducted for ensembles with minimal connectivity 3 suggest that in this case the replica symmetric solution is stable for all values of .note that here the zero temperature limit of can be studied directly at the level of evanescent fields , as .we thus conjecture that the whole function computed with the rs cavity method is correct for these ensembles , and in particular the quenched entropy of hamiltonian circuits stated in eq .( [ eq_wormald_refined ] ) .the situation is less fortunate for poissonian graphs .the reader may have anticipated the appearance of non integer hard fields in the zero temperature limit for mean connectivities lower than as an hint of rsb .the datas presented in the left panel of fig .[ fig_sketch_sigma ] shows indeed that for small , the stability parameter crosses 1 when is increased above some finite value .this critical value of increases with the mean connectivity , and an educated guess makes us conjecture that it diverges at .the rightmost curve for shows indeed for all the values of we could numerically study .a precise extrapolation of turned out however to be rather difficult .note also that the study directly at is largely complicated here by the fact that the hard fields do not take a finite number of distinct values as is often the case in usual optimization problems , but extend on the contrary on all relative integers . in summary ,the conjectured scenario is that at high enough connectivities the whole curve , and in particular its zero temperature limit , is correctly described by the rs computation . for lower connectivities there will be a critical length above which replica symmetry breaks down .we also believe that this scenario , sketched in the right part of fig .[ fig_sketch_sigma ] , is valid not only in the poissonian case , but for all families of random graph ensembles ( with a fastly decaying connectivity distribution ) with a control parameter which drives the graphs towards a continuous percolation transition , the fraction of degree 2 sites in the 2-core growing as the transition is approached .let us finally propose an interpretation for the occurrence of replica symmetry breaking for the largest circuits in presence of a large fraction of degree 2 sites in the 2 core , by relating it to an underlying extreme value problem . 
in the discussion of sec .[ sec_bounds ] , one could indeed tag the edges of the reduced graph with a strictly positive integer , by counting the number of edges of which were collapsed onto .the length of a circuit of is thus the weighted length of the corresponding circuit of , i.e. the sum of the labels on the edges it visits .these weighted lengths are correlated random variables , because of the structural constraint defining a circuit : for a given graph , not all the sums of tags correspond to circuits of length .when the fraction of degree 2 site is small enough , these correlations are sufficiently weak for the rs ansatz to treat them correctly , when long chains of degree 2 vertices become too numerous they somehow pin the longest circuits , which cluster in the space of configurations and cause the replica symmetry breaking . for poissonian random graphs , from left to right , , .right : sketched behaviour of the quenched entropy for generic families of random graphs . from top to bottom a control parameter drives the graphs towards a continuous percolation transition , the maximal length of the circuits is reduced . in the neighborhood of the percolation transition replica symmetry breaking takes place for large enough circuits , and should be taken into account to compute the dashed part of the curve . ,title="fig:",width=302 ] for poissonian random graphs , from left to right , , .right : sketched behaviour of the quenched entropy for generic families of random graphs . from top to bottom a control parameter drives the graphs towards a continuous percolation transition , the maximal length of the circuits is reduced . in the neighborhood of the percolation transition replica symmetry breaking takes place for large enough circuits , and should be taken into account to compute the dashed part of the curve ., title="fig:",width=302 ]we present in this section the results of the numerical experiments we have conducted in order to check our analytical predictions .these experiments are based on the exhaustive enumeration algorithm of which allows to generate all the circuits of a given graph , and in particular to compute the numbers of circuits of a given length .this algorithm runs in a time proportional to the total number of circuits , hence exponential in the size of the graphs for the cases we are interested in , which obviously puts a strong limitation on the sizes we have been able to study .let us begin with the investigation of the erds - rnyi ensembles and . in the former ,each of the potential edges between the vertices of the graph is present with probability , independently of each other , in the latter a set of among the edges is chosen uniformly at random . with and , these two ensembles are expected to be equivalent in the large - size limit . in particular the vertex degree distribution converges in both cases to a poisson law of mean , the cavity computation thus predicts that their typical properties should be the same in the thermodynamic limit .this is not true for the the annealed entropies which are easily computed exactly even at finite sizes , see app .[ sec_app_combinatorial ] , and which remain distinct in the thermodynamic limit . in the left part of fig .[ fig_sigmas_c3 ] we present the annealed and quenched entropies for both ensembles , computed from 10000 graphs of size and mean connectivity . 
the finite sizequenched entropy has been estimated using the median of the random variables .the annealed entropies are very different in both ensembles ( and in perfect agreement with the computation of app .[ sec_app_combinatorial ] ) , and clearly different from the quenched ones .the striking feature of this plot is the almost perfect coincidence of the median in the two ensembles ; this was expected in the thermodynamic limit , but is already very clear at this moderate size . on the right panel of fig .[ fig_sigmas_c3 ] , the quenched entropy is plotted for two graph sizes , along with its extrapolated values in the thermodynamic limit , which agrees with the cavity computation . and of mean connectivity , for graphs of size , computed from the mean and the median of on samples of 10000 graphs .right : the quenched entropy for at , and , symbols are the extrapolation in the limit from several values of , solid line is the replica symmetric cavity computation ., title="fig:",width=302 ] and of mean connectivity , for graphs of size , computed from the mean and the median of on samples of 10000 graphs .right : the quenched entropy for at , and , symbols are the extrapolation in the limit from several values of , solid line is the replica symmetric cavity computation ., title="fig:",width=302 ] as argued above , the difference between annealed and quenched entropies can be also seen in the exponentially larger value of the second moment of with respect to the square of the first moment .this fact is illustrated in fig .[ fig_2nd_moment ] , where the analytic computation of the ratio presented in app .[ sec_app_combinatorial ] is confronted with its numerical determination . for at ,the symbols are numerically determined values which converge in the large size limit to the solid line , analytically computed in app .[ sec_app_combinatorial ] .right : finite size analysis for , solid line is a best fit of the form , where is constrained to its analytic value ( dashed line ) , and the form of the fit is justified in app .[ sec_app_combinatorial].,title="fig:",width=302 ] for at , the symbols are numerically determined values which converge in the large size limit to the solid line , analytically computed in app .[ sec_app_combinatorial ] .right : finite size analysis for , solid line is a best fit of the form , where is constrained to its analytic value ( dashed line ) , and the form of the fit is justified in app .[ sec_app_combinatorial].,title="fig:",width=302 ] we also considered the largest circuits in each graph , of length and degeneracy , and computed the averages and for various connectivities .their extrapolated values in the thermodynamic limit are compatible with the predictions and of the cavity method , within the numerical accuracy we could reach .this is true also for connectivities smaller than , where we argued above in favor of a violation of the replica symmetry hypothesis : the corrections due to rsb should be smaller than the numerical precision we reached .another set of experiments concerned uniformly generated graphs with an equal number of degree 3 and 4 vertices .we checked that the probability for such graphs to be hamiltonian converges to 1 when increasing their size .the values for the annealed and quenched entropies for the hamiltonian circuits are too close to be distinguished numerically .however the study of the ratio of the first two moments of ( see fig . [ fig_2nd_moment_3p4 ] ) indicates that they should be strictly distinct in the thermodynamic limit . 
for graphs with an equal fraction of degree 3 and 4 vertices .solid line is a best fit of the form , with , as found in app .[ sec_app_combinatorial].,width=302 ]let us summarize the main results presented in this paper .we have proposed an approximative counting algorithm that runs in a linear time with respects to the size of the graph .we also presented an heuristic method to compute the typical number of circuits in random graph ensembles , which yields a quantitative refinement of wormald s conjecture on the typical number of hamiltonian cycles in ensembles with minimal degree 3 ( eq .( [ eq_wormald_refined ] ) ) and a new conjecture on the maximal length of circuits in ensembles with a small fraction of degree 2 vertices in their 2 cores ( eq . ( [ eq_lmax_expansion ] ) ) .several directions are opened for future work .first of all we believe that a rigorous proof of wormald s conjecture , which seems difficult to reach by variations around the second moment method , could be obtained by statistical mechanics inspired techniques . in recent yearsthere has been indeed a series of mathematical achievements in the formalization of the kind of method used in this article .one line of research is based on guerra s interpolation method , and culminated in talagrand s proof of the correctness of the parisi free - energy formula for the sherrington - kirkpatrick model .these ideas have also been applied to sparse random graphs in . alternatively the local weak convergence method of aldous has been successfully applied to similar counting problems in random graphs . there has also been a recent interest in the corrections to the bethe approximation for general graphical models .it would be of great interest to implement these refined approximations for the counting problem considered in this paper .this should lead on one hand to a more precise counting algorithm , and on the other hand give access to the finite - size corrections of the quenched entropy .we expect in particular that the difference between circuits and unions of vertex disjoint circuits will become relevant for these corrections . the convergence in probability of expressed by eq .( [ eq_cvp ] ) can a priori be promoted to a stronger large deviation principle : according to the common wisdom , the finite deviations of this quantity from are exponentially small .a general method for computing these rate functions has been presented in and could be of use in the present context. an interesting question could be to compute the exponentially small probability that a random graph is not hamiltonian in ensembles where typical graphs are so . in the algorithmic perspective, one could try to take advantage of the local informations provided by the messages .in particular they could be useful to explicitly construct long cycles , in a `` belief inspired decimation '' fashion : most probable edges in the current probability law would be recursively forced to be present , and the bp equations re - runned in the new simplified model .the neighborhood of the percolation transition should also be investigated more carefully , in particular the effects of replica - symmetry breaking onto the structure of the configuration space .the case of heavy - tailed ( scale - free ) degree distributions deserves also further work .the assumption of fast decay we made here is indeed crucial for some of our results : bianconi and marsili showed in that scale - free graphs , even with a minimal connectivity of 3, can fail to have hamiltonian cycles . 
other random graph models ( generated by a growing process , or incorporating correlations between vertex degrees ) could also been investigated .let us finally mention two closely related problems which are currently studied with very similar means .circuits can be defined as a particular case of -regular graphs , with . replacing the number of allowed edges around any site from 2 to in eq .( [ eq_wi ] ) , one can similarly study the number of -regular subgraphs in random graph ensemble .the case corresponds to matchings , which was largely studied in the mathematical literature and have been reconsidered by statistical mechanics methods in .the appearance of -regular subgraphs in random graphs was first considered in , see for a statistical mechanics treatment .we warmly thanks rmi monasson with whom the first steps of this work have been taken .we also acknowledge very useful discussions with ginestra bianconi , andrea montanari , andrea pagnani , federico ricci - tersenghi , olivier rivoire , martin weigt and lenka zdeborov .the work was supported by evergrow , integrated project no .1935 in the complex systems initiative of the future and emerging technologies directorate of the ist priority , eu sixth framework .we collect in this appendix the combinatorial arguments for the computation of the first and second moment of the number of circuits in various random graph ensembles .let us denote the number of circuits of length in a graph , and the set of circuits of length in the complete graph of vertices , its cardinality being indeed , choosing such a circuit amounts to select an ordered list of the vertices it will visit , modulo the orientation and the starting point of the tour .introducing the indicator function equal to if is a subgraph of , otherwise , we can write let us now describe the random graph ensembles we shall consider in the following .the first two are the classical erds - rnyi random graph ensembles . in ,each of the edges is present with probability , independently of the others . in ,a set of distinct edges is chosen uniformly at random among the possible ones .we shall concentrate on the thermodynamic limit , and with the mean connectivity kept finite . in this regime and are essentially equivalent : drawing at random from amounts to draw from a binomial distribution of parameters , and then drawing at random a graph from . in the limit described above, the number of edges in is weakly fluctuating around .moreover the degree of a given vertex in the graph converges in both cases to a poisson random variable of parameter . for an arbitrary degree distribution of mean , one can define the uniform ensemble of graphs obeying this constraint of degree distribution .a practical way of drawing a graph from this ensemble is the so - called configuration model , defined as follows .each of the vertices is randomly attributed a degree , in such a way that vertices have degree ( we obviously skip some technical details : should be a function of , such that is an integer ) . - links goes out of each vertex of degree .then one generates a random matching of the half - links and puts an edge between sites which are matched . in generalone obtains in this way a multigraph , i.e. there appear edges linking one vertex with itself , or multiple edges between the same pair of vertices .however , discarding the non - simple graphs leads to an uniform distribution over the simple ones . 
to compute averages over the graph ensemble, one can thus use the configuration model and condition on the multigraph to be simple . for claritywe shall denote the number of circuits in the unconditioned multigraph ensemble .note also that regular random graphs are a particular case of this ensemble , with .taking the average over the graphs of eq .( [ combin_def_n ] ) leads to for the ensembles we are considering , where the probability for a circuit to be present is independent of . before inspecting the various cases ,let us state the asymptotic behaviour of in the limit , finite , obtained with the stirling formula : in the first formula we have introduced the function .in the probability has a very simple expression , . the mean number of circuits thus reads where the first expression is valid for any , and the second one has been obtained in the thermodynamic limit with the annealed entropy for this first ensemble is : note that if , the algebraic prefactor in ( [ eq_er1_ann ] ) is slightly different , in the probability reads obviously this expression has a meaning only for , as there can not be circuits longer than the total number of edges .this gives an exact expression for for any and .the expansion in the thermodynamic limit with leads to with the annealed entropy again the different algebraic prefactor in ( [ eq_er2_ann ] ) can be easily computed also for .let us now make a few comments on these results .first , when , both annealed entropies are negative for all values of where they are well defined .consequently is exponentially small in the thermodynamic limit , and thanks to the so - called markov inequality ( or first moment method ) valid for positive integer random variables , \le \overline{{\cal n}_l(g ) } \ , \ ] ] with high probability there are no circuits of extensive lengths in these graphs .this could be expected : the percolation transition occurs at , in this non percolated regime the size of the largest component is of order , and thus extensive circuits can not be present .as a second remark , let us note that for , the annealed entropy of the first ensemble is strictly positive for ,\ell_{\rm a}(c)[ ] , typically the graphs can not contain circuits of edges , however an exponentially small fraction of the graphs have 2-core larger than their typical sizes .these untypical graphs contribute with an exponential number of circuits to the annealed mean , which is in consequence not representative of the typical behaviour of the ensemble .finally , let us underline that the annealed entropies ( [ eq_er1_sigma],[eq_er1_sigma ] ) for the two ensembles are definitely different .for instance , in the second ensemble , the entropy is defined only for : the number of edges in the graph being fixed at , no circuits can be longer than the number of edges . on the contrary , in the first ensemble ,arbitrary large deviations of the number of edges from its typical value are possible , even if with an exponential small probability .the expectation of the number of circuits of length in the multigraph ensemble extracted with the configuration model was presented in . for the sake of completeness and to make the study of the second moment simpler we reproduce the argument here . 
in this case one has where the sum is over positive integers constrained by , and we used the classical notation .the s are the number of sites of degree in the circuit , which are to be distributed among the sites of degree .the term accounts for the choice of the half links around each site , and finally the ratio of the double factorials is the probability that the matching of half - links contains the desired configuration . introducing the integral representation of the kronecker symbol , , where is a complex variable integrated along a closed path around the origin , this expression can be simplified in in the thermodynamic limit the integralcan be evaluated by the saddle - point method , combining the expansion with the one of yields \ , \label{eq_sigmaa}\end{aligned}\ ] ] where here and in the following stands for equivalence upto subexponential terms , i.e. means as . in the regular caseone has from which the prefactors are more easily computed moreover the conditioning on the multigraph being simple can be explicitly done in the regular case , thanks to the relative concentration of .this yields \quad { \rm for } \0<\ell \le 1 \ .\ ] ] we checked that the numerical findings of were in perfect agreement with these exact results .note that in the regular case , this conditioning modifies the value of only by a constant factor , thus the annealed entropy is the same in the graph and in the multigraph ensemble .it is not clear to us whether this fact should remain true for arbitrary connectivity distributions .we now turn to the computation of the second moment of the number of circuits , which has been inspired by .taking the square of eq .( [ combin_def_n ] ) and averaging over the ensemble leads to we have indeed isolated the term in the sum , which is readily computed , from the off - diagonal terms .the last expression is more easily understood after having a look at fig .[ fig_union ] , where we sketched the shape of the union of two distinct circuits .this pattern is characterized by , the number of common paths shared by and , , the number of edges in these paths , and , the number of vertices which belongs to both circuits but are not neighbored by any common edge .one finds vertices at the extremities of the common paths , vertices in the interior of the common paths , hence vertices belong to both circuits , to none of them . in consequence the sum is over non - negative integers subject to the constraints : is the number of pairs of distinct circuits of the complete graph whose union has the characteristics , and is the ( ensemble - dependent ) probability that such pattern appears in a random graph .let us show that where the combinatorial factor is by convention set to .to construct such a pattern , one has to choose among the vertices those which are in but not in , in but not in ( both of these categories contain vertices ) , those in the common paths ( ) and those shared by the circuits but with no adjacent common edges ( ) .this can be done in distinct ways .let us call the number of common paths of edges , for , which obey the constraints and .the sites can be distributed into such an unordered set of unorientated paths in distinct ways .we have to sum this expression on the values of satisfying the above constraints . 
by picking up the coefficient of in finds that when this factor should be one , in agreement with the above convention .finally is formed by choosing an ordered list of the vertices which belongs only to it , the isolated common vertices , and of the orientated common paths , modulo the starting point and the global orientation of this tour , hence a factor and the same arises when constructing .( [ eq_combin_mlxyz ] ) is obtained by multiplying the various contributions . in the thermodynamic limit with finite , stirling formula yields \ , \\m(l , x , y , z)&= & y - 2 \ell +x\ln 2 - 2 h(x ) + h(y ) - h(z ) - h(y - x ) \nonumber \\ & + & 2 h(\ell -y ) - 2 h(l - x - y - z ) - h(1- 2 \ell + x+y+z ) \ . \label{eq_combin_m2}\end{aligned}\ ] ] are on and above the horizontal central line , those of on and below . in this drawing , , , , .,width=302 ] for both and ensembles , the probability depends only on the number of edges present in the union of the two circuits , . for the non trivial range of parameters where the first moment is exponentially large , the first term in eq .( [ eq_combin_2nd ] ) can be neglected .the sum over can be evaluated with the saddle - point method , yielding \, \quad \tau(\ell ) = \underset{y}{\max } [ p(2 \ell -y ) + \widehat{m}(\ell , y ) ] \ , \ ] ] where and we introduced \label{eq_m2 } \ .\end{aligned}\ ] ] the range of parameters in the various optimizations are such that .the step between eqs .( [ eq_m1 ] ) and ( [ eq_m2 ] ) amounts to maximize over , which can be done analytically .it is then very easy to determine the function numerically .finally , defining , we determined numerically this function ( see fig .[ fig_2nd_moment ] ) and found that for all parameters such that : the second moment of is then exponentially larger than the square of the first moment , which forbids the use of the second moment method to determine the typical value of . the computation of in the configuration model can be done similarly to the one of ( cf .( [ eq_combin_pl_arb ] ) ) . to simplify notationslet us define , , and the multinomial coefficient for .we also use . with these conventionsone finds indeed , ( resp . , ) is the number of vertices with two ( resp .three , four ) half - edges involved in the pattern , and the number of such vertices among the ones of degree . in consequencethe sum is over non - negative integers with , , and , , .these last three constraints can be implemented using the complex integral representation of kronecker s delta , themselves evaluated by the saddle point method in the thermodynamic limit : \ , \\p(\ell , x , y , z ) & = & \frac{1}{2 } h(c-4 \ell + 2 y ) - \frac{1}{2 } h(c ) + 2 \ell -y + h(2 \ell - 3x -y -2 z ) +h(2x ) + h(z ) + h(1 - 2\ell+x+y+z ) \nonumber \\ & + & \underset{\theta_1,\theta_2,\theta_3}{{\rm ext } } \left [ \sum_{k=2}^\infty q_k \ln(1 + ( k)_2 \theta_1 + ( k)_3 \theta_2 + ( k)_4 \theta_3 ) - ( 2\ell -3x - y -2x)\ln \theta_1 - 2x \ln \theta_2 - z\ln \theta_3 \right ] \ .\nonumber\end{aligned}\ ] ] once this function has been determined for a given degree distribution , the exponential order of can be computed as \, \quad \tau(\ell ) = \underset{x , y , z}{\max } [ p(\ell , x , y , z ) + m(\ell , x , y , z ) ] \ , \ ] ] where is given in eq .( [ eq_combin_m2 ] ) . 
in the regular case, the maximization over the 6 parameters can be performed analytically , and yields , proving the concentration ( at the exponential order ) of around its mean .we expect that for any ( fastly decaying ) connectivity distribution not strictly concentrated on a single integer , when . a proof of this conjecture would be a quite painful exercise in analysis that we did not undertake .we however verified numerically this statement for the hamiltonian circuits of random graphs with an equal mixture of vertices of degree 3 and 4 , yielding .we have been rather loose in treating the algebraic prefactor hidden in for the various expressions of .however it is rather simple to determine the power of in this prefactor , collecting the contributions which arise from the stirling expansions , the transformation of sums into integrals , and the evaluation of the latter with the saddle point method .this leads to \ , \ ] ] as we observed numerically in sec .[ sec_enumerations ] .note also that some informations on the structure of the space of configurations can be obtained from this kind of computations .the average number of pairs of circuits at a given `` overlap '' ( number of common edges ) is indeed obtained from the second moment computations if the parameter is kept fixed . in the statistical mechanics treatment of the main part of the text we used a model which counts the number of subgraphs of made of the union of vertex disjoint circuits of total length .we want to show in this appendix that , at the leading exponential order , the average of equals the one of in the various ensembles considered in this appendix .let us denote the set of subgraphs of the complete graph on vertices made of unions of vertex disjoint circuits of total length , and its cardinality .as such subgraphs are still made of edges connecting vertices , , where the probability is the one defined previously for the computation of .let us define the cardinality of the subset of where the subgraphs are made of disjoint circuits .a short reasoning leads to where the integers are the number of circuits of length in the subgraph . 
from this expressionit is easy to check that , and that as long as is finite in the thermodynamic limit , .more precisely , one can show that the leading behaviour of is not modified by contributions with growing with .indeed , \\exp\left [ \frac{1}{2}\left ( \frac{t^3}{3 } + \dots \frac{t^l}{l } \right ) \right ] \ , \ ] ] where f(t) ] , and a bilinear form on the space of probability distribution functions , consider now this form with its arguments being a distribution and its image through the functional : \rangle = \int_0^\infty dx_0 \ , a(x_0)\ , dx_1 \ , a(x_1 ) \dots dx_k \ , a(x_k ) \frac{x_0 h_k(x_1,\dots , x_k)}{1+x_0 h_k(x_1,\dots , x_k ) } \ .\ ] ] the rational fraction in the integral can be transformed in the following way : both the denominator of this fraction and the integration measure being invariant under the permutations of the s , the integral can be computed by symmetrizing the numerator of the fraction .the normalization of then gives \rangle = \frac{2}{k+1 } \ .\label{eq_id_ps}\ ] ] the proof of is now straightforward : \rangle \ .\ ] ] using the identity ( [ eq_id_ps ] ) and the relation between and ( cf .( [ eq_qtilde_q ] ) ) , is found to be the sum of for , and hence is equal to 1 by normalization .we also verified numerically that in presence of degree 2 vertices , and hence of non trivial hard fields , the limit of eq .( [ eq_lbeta ] ) which involves both evanescent and hard fields , coincide with the expression eq .( [ eq_lmax ] ) in terms of hard fields only .we believe this could be proved analytically , yet we did not find a simple way to do it .
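As a closing footnote to this appendix, the first-moment predictions discussed here can be confronted with brute-force counts on very small instances. The sketch below is an illustration added for that purpose, not the authors' code: it enumerates the simple circuits of every length in small random regular graphs (the degree, graph size, and number of samples are arbitrary choices), so that the sample averages can be compared with the annealed formulas.

```python
# Brute-force enumeration of circuits in small random regular graphs, to be
# compared with the annealed (first-moment) predictions; illustrative only,
# since the enumeration time grows exponentially with the graph size.
import networkx as nx
from collections import Counter

def circuit_counts(G):
    """Number of simple circuits of each length (>= 3) in an undirected graph G."""
    counts = Counter()
    for start in sorted(G.nodes()):
        # enumerate only circuits whose smallest vertex is `start`
        stack = [(start, [start])]
        while stack:
            v, path = stack.pop()
            for w in G.neighbors(v):
                if w == start and len(path) >= 3:
                    counts[len(path)] += 1          # every circuit is found twice
                elif w > start and w not in path:
                    stack.append((w, path + [w]))
    return Counter({length: c // 2 for length, c in counts.items()})

if __name__ == "__main__":
    n, degree, samples = 14, 3, 100                 # small sizes keep the enumeration cheap
    totals = Counter()
    for seed in range(samples):
        totals += circuit_counts(nx.random_regular_graph(degree, n, seed=seed))
    for length in sorted(totals):
        print(f"length {length}: average number of circuits {totals[length] / samples:.2f}")
```

For these parameters the longest possible circuit is Hamiltonian (length n = 14), so the whole empirical distribution fits in a short table and can be compared length by length with the exact expressions quoted above.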
In this article we apply (non-rigorous) statistical mechanics methods to the problem of counting long circuits in graphs. The outcomes of this approach have two complementary flavours. On the algorithmic side, we propose an approximate counting procedure, valid in principle for a large class of graphs. On the more theoretical side, we study the typical number of long circuits in random graph ensembles, reproducing rigorously known results and stating new conjectures.
it is a critical task in network science and its applications to find methods to efficiently detect , monitor and control the behavior of nodes in networks .finding small dominating sets on static or slowly evolving networks is an effective approach in achieving these objectives .a dominating set of a network with node set is a subset of nodes , such that every node not in is adjacent to at least one node in , while the minimum dominating set ( mds ) is the smallest cardinality dominating set . dominating setsprovide key solutions to various critical problems in networked systems , such as network controllability , social influence propagation , optimal sensor placement for disease outbreak detection , and finding high - impact optimized subsets in protein interaction networks .the effective use of dominating sets in these problems demands profound understanding of the behavior of dominating sets with respect to various network features , as well as developing effective methods for finding different types of dominating sets that are optimal solutions for different problems . in most applications that utilize dominating sets , the main goal is to minimize the number of selected dominator nodes , because implementing dominators usually incurs some form of per - node cost .however , finding the mds of a network is a well - known np - hard problem in graph theory .it was proven that finding a sublogarithmic approximation for the size of mds is also np - hard , but a logarithmic approximation can be efficiently found by a simple greedy search algorithm . while research is focused on finding better approximations to the mds and minimum connected dominating sets ( applicable to wireless communication and sensor networks ) , and developing exponential algorithms to find the exact mds , it remains a fundamental challenge to develop cost - efficient strategies for selecting dominators in a network . in this work ,we consider the additional factor of local connectivity information availability that affects the cost of finding dominating sets .efficient dominating set search algorithms require full knowledge of network structure and connectivity patterns ( i.e. , adjacency matrix , or equivalent adjacency information ) . obtaining this information in large networks ( over tens of millions of nodes )involves additional expenses that can ultimately lead to overall suboptimal costs .in addition , sophisticated search methods tend to have polynomial computational time complexity with high orders in the number of nodes or edges , therefore their applicability to large real networks is questionable .our present study is aimed towards designing dominating set selection strategies that satisfy the cost - efficiency demands in terms of required connectivity information , computational complexity , and the size of the resulting dominating set .we develop these methods for selecting dominators in heterogeneous networks , particularly in scale - free networks , described by a power - law degree distribution [ .networks with this fundamental property appear in numerous real - world systems , including social , biological , infrastructural and communication networks . 
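For later comparison, the "simple greedy search algorithm" that achieves the logarithmic approximation mentioned above can be sketched in a few lines. This is an illustrative implementation with naive tie-breaking and an arbitrary scale-free test graph, not the code used in the paper.

```python
# Greedy dominating-set heuristic: repeatedly add the node that newly dominates
# the largest number of still-undominated nodes.  Illustrative sketch only.
import networkx as nx

def greedy_dominating_set(G):
    undominated = set(G.nodes())
    dominators = set()
    while undominated:
        best = max(G.nodes(),
                   key=lambda v: len(({v} | set(G.neighbors(v))) & undominated))
        dominators.add(best)
        undominated -= {best} | set(G.neighbors(best))
    return dominators

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(5_000, 2, seed=0)   # arbitrary scale-free test graph
    D = greedy_dominating_set(G)
    print(f"greedy dominating set: {len(D)} of {G.number_of_nodes()} nodes")
```

A production version would maintain a priority structure instead of rescanning all nodes at every step; the sketch favours clarity over speed.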
herewe show that the degree - dependent probabilistic selection method becomes optimal in its deterministic limit .in addition , we also find the precise limit where selecting high - degree nodes exclusively becomes inefficient for network domination .literature provides detailed analysis on the bounds of dominating sets in various types of networks with respect to structural properties .cooper et al . analyzed the behavior of mds in model scale - free networks created by preferential attachment rule that generates networks with power - law exponent of .they found that the mds size is bounded above and below by functions linear in , where denotes the number of nodes in the network .similar research has been conducted on random regular graphs and erds - rnyi ( er ) graphs .zito studied the size of the minimum independent dominating set on _r_-regular random graphs with and demonstrated that the size of this set and consequently the size of the mds is upper bounded by a linear function of .later , br et al . improved the prefactor of the bound of the size of mds in _ r_-regular graphs using a greedy algorithm .in addition , wieland et al . derived general bounds for dense er graphs using fixed edge probability and demonstrated that the mds size scales as log . however , this result can not be applied to sparse graphs with fixed average degrees .recent studies analyzed the scaling behavior of mds in scale - free networks with a wide range of network sizes and degree exponents .it was found that the mds size decreases as is lowered , and in certain special cases when the network structure allows the presence of degree hubs ( when ) , the mds size shows a transition from linear to scaling with respect to network size , making these heterogeneous networks very easy to control .however , the impact of network assortativity , which is a fundamental property in real networks , has not been studied . in complex networked systems , mixing patternsare usually described by assortativity measures. a network is considered assortative if its nodes tend to connect to other nodes which have similar number of connections , while in a disassortative network the high degree nodes are adjacent to low degree nodes . investigating the behavior of dominating sets with respect to assortativity is essential for deeper understanding of the network domination problem .several studies conducted on real - world networks have shown that social systems are assortative , while technological ones exhibit disassortative behavior .social psychology studies have shown that humans are more likely to establish a connection with individuals from the same social class , or with whom they share common interests , such as education or workplace .this tendency , named homophily , also governs the attachment rules in real - life social systems , and it is reflected in the mixing patterns of these networks , which are of significant importance in dynamical processes on social networks .specific connectivity schemes affect influence propagation and epidemic spread , and are also responsible for web page ranking and internet protocol performance .newman proposed a method to quantify assortativity in networks using a pearson correlation between degrees at the end of edges , which he defined as the assortativity coefficient .however , a recent study of litvak and van der hofstad has shown that this coefficient has limited applicability ( only for finite variances ) and is also dependent on network size . 
in order to resolve these biases, they proposed a new approach to measure assortativity based on spearman s , which is a pearson correlation coefficient between ranked variables .this method provides consistent assortativity values , irrespective of network size , thus allowing assortativity comparison between various network sizes .in addition , it can also reveal strong dependencies more efficiently in large networks .therefore , we also use spearman s as the assortativity measure in our work . here, we also develop and employ a new method to efficiently control assortativity in network ensembles . using this technique ,our goal is to provide a large - scale analysis on the behavior of various dominating sets , with respect to a wide range of network parameters , including assortativity .finally , we also compare our findings on model scale - free networks and real - world network samples .we start our study by considering potential directions on how to build dominating sets in a network without full adjacency information .we must select nodes based solely on their individual properties , such as the node degree , and potentially a limited amount of global network information , such as the number of nodes , average degree , and power - law degree exponent .we construct our probabilistic methods ( and their deterministic limit ) based on this information . the results of alon and spencer a graph - theoretical approach to finding an upper bound for the size of the minimum dominating set , and as part of it they propose a probabilistic method for selecting dominator nodes . while their approach is theoretical , we can carry out their method , numerically , to obtain a probabilistic dominating set , and study its properties in scale - free networks .finding a probabilistic ( random ) dominating set ( rds ) in a graph has the following steps .first , we visit each node , and add it to an initially empty set , with probability ( a parameter chosen arbitrarily , ] , is the probability of selecting a node with degree into set , is the degree distribution of the neighbors of a node with degree .the first integral calculates the expectation of , while the rest is the expectation of .the latter is obtained by counting the nodes that are not in ( the first part ) , but only those that also have no neighbors in ( the expression in square brackets ) .we can plug in the properly normalized power - law degree distribution in .further , for uncorrelated networks we have . for rds with uniform node selection probability we have , resulting in : \label{rds}\ ] ] for rds with degree - dependent probability we have , resulting : }{k_{\max}^{1-\gamma}-k_{\min}^{1-\gamma } } , \label{rds_degree}\ ] ] with finally , for cds we have , where is the heaviside step function that returns for positive arguments and otherwise , yielding : }{k_{\max}^{1-\gamma}-k_{\min}^{1-\gamma } } , \label{cds}\ ] ] with note , that in all the above formulas , denotes the exponential integral function , .the detailed derivation of the analytical estimates can be found in supplementary information , section s.3 .figure [ fig - analytic - real ] shows the accuracy of our analytical estimates in comparison with the numerical results of rds and cds .further results on scale - free networks with different and values are provided in the supplementary information , section s.3.4 , showing that as the increases , the accuracy of the analytical estimates improves . 
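These estimates are straightforward to evaluate numerically. The sketch below assumes the standard uncorrelated-network reading of the base formula described above, namely that the expected dominating-set fraction is the sum over degrees of P(k)p(k) plus P(k)(1-p(k)) times the probability that none of the k neighbours was selected, with neighbour degrees drawn from q(k') = k'P(k')/<k>. The discrete power-law parameters and the particular family p(k) used for the demonstration are illustrative choices, not values from the paper.

```python
# Expected dominating-set fraction E[|D| + |W|] / N for a degree-dependent
# selection probability p(k) on an uncorrelated power-law network.
# The formula is the uncorrelated-network reconstruction described in the
# text; all numerical parameters are illustrative assumptions.
import numpy as np

def expected_ds_fraction(pk, degrees, p_select):
    mean_k = np.sum(degrees * pk)
    qk = degrees * pk / mean_k                     # neighbour degree distribution (uncorrelated)
    p = p_select(degrees)
    s = np.sum(qk * (1.0 - p))                     # P(a random neighbour was not selected into D)
    frac_D = np.sum(pk * p)
    frac_W = np.sum(pk * (1.0 - p) * s ** degrees)
    return frac_D + frac_W

gamma, kmin, kmax = 2.5, 2, 1_000
degrees = np.arange(kmin, kmax + 1, dtype=float)
pk = degrees ** (-gamma)
pk /= pk.sum()

for A in (0.5, 1.0, 2.0):                          # p(k) = min(1, A k / k_max), an arbitrary choice
    frac = expected_ds_fraction(pk, degrees, lambda k: np.minimum(1.0, A * k / kmax))
    print(f"A = {A}: expected dominating-set fraction ~ {frac:.3f}")
```

Sweeping the prefactor A (or any other parametrization of p(k)) in this way reproduces the kind of size-versus-selection-probability curves shown in the figures.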
for cds and degree - independent rdsthe estimates are very close to the numerically obtained values , even with a small .the estimates for degree - dependent rds are slightly less accurate , but still sufficient to provide a useful approximation of the expected dominating set size .therefore , we can easily calculate a very accurate expected size of these dominating sets in uncorrelated scale - free networks , based on nothing beyond basic network parameters .using our edge - mixing method to control the assortativity of a network , we have compared the sizes of dominating sets as a function of assortativity , measured by spearman s .figure [ fig - ds - rho ] shows our results for a synthetic network and a real social network , while the same comparison for different network parameters is provided in the supplementary information , section s.5 for artificial networks , and section s.6 for real networks .as expected , the size of most dominating sets increase with higher assortativity , except for rds with degree - independent selection probability .the most dramatic size increase is observed in dds , which indicates that this method can only be considered viable in real - world applications for highly disassortative networks .also , as the assortativity increases , cds becomes larger than the simple rds at a certain point , indicating that favoring high - degree nodes as dominators is not an effective strategy when the network is highly assortative .while the mds size obtained by greedy search also increases with increasing assortativity , it shows the smallest increase , thus the advantage of greedy search over other methods is more pronounced .we also analyze the effects of assortativity on the _ optimal _ degree threshold value that minimizes the size of cds .figure [ fig-3d ] provides a complete dependence map of the optimal with respect to two vital network parameters : power - law degree exponent , and assortativity , measured by spearman s .regardless of and , we can see that is roughly proportional to the network s average degree .also , we observe that for any particular network assortativity ( and value ) , .however , it is intriguing that for a fixed value , has a maximum approximately at .our first results revealed that the numerically computed size of rds with uniform node selection probability is much smaller than the upper bound provided by alon and spencer .since their bound assumes that all nodes not dominated by the set are , in the worst case , nodes of the smallest degree , the difference between their bound and our result shows the relative number of nodes that have higher degree neighbors , yet not dominated . in scale - free networks, we indeed expect to find a significant number of lowest degree nodes with high - degree neighbors ( especially in disassortative networks ) , explaining our observations .it is also remarkable that rds with optimally chosen parameter can always provide a smaller dominating set than a simple degree - ranked node selection . while the latter may be favored for its simplicity and plausibility to be effective in heterogeneous networks , our results show that it is not the case ; the usefulness of degree - ranked dominating sets beyond theoretical studies is very limited .the cutoff dominating set ( cds ) , proposed as a limiting case of rds with degree - dependent node selection probability , is proven to be a very effective dominating set selection method . 
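The edge-mixing procedure behind these assortativity scans (detailed in the Methods below and in the supplementary material) can be sketched as follows. The acceptance rule used here, accepting correlation-increasing double-edge swaps with probability q and correlation-decreasing ones with probability 1 - q, is a paraphrase of the biased Markov chain described in the text rather than the authors' exact parametrization, and the Spearman measurement simply correlates the degrees at the two ends of every edge.

```python
# Biased double-edge swaps to push a network towards higher or lower
# degree-degree correlation, with Spearman's rho as the assortativity measure.
# The acceptance rule is a plausible paraphrase of the method described in the
# text, not the authors' exact parametrization; test graph and q are arbitrary.
import random
import networkx as nx
from scipy.stats import spearmanr

def spearman_assortativity(G):
    xs, ys = [], []
    for u, v in G.edges():
        xs += [G.degree(u), G.degree(v)]           # each edge counted in both orientations
        ys += [G.degree(v), G.degree(u)]
    return spearmanr(xs, ys).correlation

def biased_edge_mixing(G, q, n_swaps, seed=0):
    rng = random.Random(seed)
    edges = list(G.edges())
    done = 0
    while done < n_swaps:
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4 or G.has_edge(a, c) or G.has_edge(b, d):
            continue                               # keep the graph simple
        ka, kb, kc, kd = (G.degree(v) for v in (a, b, c, d))
        raises = ka * kc + kb * kd > ka * kb + kc * kd   # proxy for the assortativity change
        if rng.random() < (q if raises else 1.0 - q):
            G.remove_edges_from([(a, b), (c, d)])
            G.add_edges_from([(a, c), (b, d)])
            edges[i], edges[j] = (a, c), (b, d)    # double-edge swaps preserve all degrees
            done += 1
    return G

G = nx.barabasi_albert_graph(5_000, 2, seed=2)
for q in (0.1, 0.5, 0.9):
    H = biased_edge_mixing(G.copy(), q, n_swaps=20_000)
    print(f"q = {q}: Spearman rho = {spearman_assortativity(H):.3f}")
```

Running any of the dominating-set routines on the mixed graphs H then gives size-versus-assortativity curves of the type discussed in this section.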
given full network information ,a sequential implementation of the algorithm finds cds for all possible degree threshold values in time .however , since the algorithm only uses local connectivity information , a distributed version can be easily designed , for large networks .further , the value of optimal ( that minimizes the cds size ) has little dependence on particular network parameters , as shown in fig .[ fig-3d ] , thus it can be estimated easily if detailed network information is not available .based on our extensive numerical simulations , we conjecture that using the optimal the cds size is the smallest of all degree - dependent rds ( with any ) , and it approaches the mds size provided by the greedy algorithm , irrespective of the network s distinct topological properties .this conjecture is further validated by fig .[ fig - real ] that presents results on several real - world network samples .we can also understand cds as a method that bridges the degree - ranked and greedy dominator selection methods . when selecting the very first nodes of the dominating sets , both greedy and degree - ranked methods start by selecting the highest degree nodes .later , they diverge ; the degree - ranked selection continues with the high - degree nodes , while greedy specifically seeks out nodes that increase domination maximally , typically smaller degree nodes .the degree - ranked selection eventually becomes very inefficient only because of the presence of low degree nodes connected only to each other ( and hard to reach ) .thus , degree - ranked selection is efficient at first , but there is a point at which the method should abandon such selection and instead look for nodes that are still not dominated , and target them specifically .this is exactly what cds does : it is essentially a degree - ranked selection until is reached ( set ) , and then the remaining undominated part is simply added as dominators ( set ). while the analytical estimates for rds and cds are highly accurate , they are only applicable to uncorrelated scale - free networks .however , the base formula ( eq . [ base ] ) can be used for any network ( not only scale - free ) , if the degree distribution and degree correlations can be expressed ( or approximated ) by some formula . without analytical expressions ,one can still calculate the base formula numerically , using observed ( sampled ) estimates of the degree distribution and degree correlations , assuming that collecting these esimates requires less time than actually running the rds or cds algorithms , or if full adjacency information is not available .the accuracy of our analytical estimates for rds and cds seem to be lower for low and values .this inaccuracy is an artifact of our average degree control method , which controls by adjusting , and removing a certain fraction of smallest degree nodes .the latter becomes significant when ( for low ) , because it causes a slight deviation from a perfect power - law degree distribution . 
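When no closed-form degree distribution is at hand, the expectation can also be evaluated directly on a given graph: under independent node selection, node v fails to be dominated with probability (1 - p(k_v)) times the product over its neighbours u of (1 - p(k_u)). The sketch below is a per-node analogue of the base formula, added as an illustration; the test graph and the family p(k) = min(1, A k / k_max) are arbitrary choices, not the authors' setup.

```python
# Exact expectation of |D| + |W| on a given graph, when every node v enters D
# independently with probability p(deg(v)); illustrative sketch.
import numpy as np
import networkx as nx

def expected_rds_size(G, p_of_degree):
    p = {v: p_of_degree(G.degree(v)) for v in G}
    exp_D = sum(p.values())
    exp_W = sum((1.0 - p[v]) * np.prod([1.0 - p[u] for u in G.neighbors(v)])
                for v in G)
    return exp_D + exp_W

G = nx.barabasi_albert_graph(20_000, 2, seed=3)
kmax = max(k for _, k in G.degree())
for A in (0.5, 1.0, 2.0):                          # illustrative prefactors, as in the text's p(k)
    est = expected_rds_size(G, lambda k: min(1.0, A * k / kmax))
    print(f"A = {A}: expected dominating-set size ~ {est:.0f} of {G.number_of_nodes()} nodes")
```

Because it only needs each node's degree and its neighbours' degrees, this evaluation uses exactly the kind of sampled local information discussed above.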
in order to use the analytical formulas ( which are very sensitive to ), we have to estimate a fractional , as if it were a cutoff of a continuous and otherwise perfectly satisfied power - law distribution .in reality , we deviate from power - law , leading to the inaccurate estimates .however , as increases , also increases , and the relative deviation from a perfect power - law decreases , hence the increased accuracy .the implication for real networks is that we can expect similarly less accurate estimates if the degree distribution deviates from power - law . our numerical study of dominating set sizes with respect to assortativity reveals a general tendency that the dominating set becomes larger as assortativity increases .we can understand this easily . in case of a disassortative network ,high degree nodes connect mostly to low degree nodes , therefore we can expect small dominating sets , due to efficient domination via high - degree nodes .in fact , when scale - free networks may become so disassortative that star subgraphs form and the size of mds becomes . on the other hand , hubs are less effective in dominating assortative networks , since most of their connections are used to connect to other high degree nodes .therefore , the impact of assortativity on each dominating set selection method depends on how much the method relies on high - degree nodes as dominators .this is why the degree - ranked selection shows the worst performance on highly assortative networks , followed by the degree - dependent rds ( and its limiting case , the cds ) , which also favors high - degree nodes .since technological scale - free networks tend to be disassortative , and although social networks tend to be assortative , extreme assortativity is rare , we can safely conclude that cds is a viable alternative of greedy selection for most scale - free networks . in summary , we explored probabilistic dominating set selection strategies in scale - free networks with respect to various network properties .we found that as a particular limiting case of degree - dependent random node selection , a deterministic cutoff dominating set ( cds ) provides the smallest dominating set among probabilistic methods , and is widely applicable to heterogeneous networks .even if full adjacency information is not available , the size of cds ( and rds ) can be accurately predicted using our analytical estimates .we construct our ensembles of synthetic scale - free networks ( undirected and unweighted ) using the configuration model .first , we generate a power - law degree distribution with the desired power - law exponent and average degree .the latter is controlled by adjusting the minimum degree cutoff of the distribution , while we always keep the maximum degree cutoff fixed : either ( the maximum possible in any network , hence essentially unrestricted ) or ( structural cutoff , making the network uncorrelated ) .we obtain a degree sequence from the degree distribution by inverse transform sampling . given the degree sequence , the configuration model assigns the corresponding number of half - edges ( stubs ) to each node , and connects randomly ( uniformly ) chosen pairs of stubs to form links between nodes .this procedure is repeated until there are no free stubs left .the result is a multigraph ; however , we convert multiple links to single links and remove self - loops to obtain a simple graph . 
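The construction just outlined maps directly onto standard routines; a minimal sketch follows. The exponent, cutoffs, and the plain inverse-transform sampler are illustrative stand-ins for the paper's lookup-table procedure.

```python
# Scale-free network construction with the configuration model, followed by
# pruning of multi-edges and self-loops; parameters are illustrative.
import numpy as np
import networkx as nx

def power_law_degree_sequence(n, gamma, kmin, kmax, rng):
    ks = np.arange(kmin, kmax + 1)
    pk = ks.astype(float) ** (-gamma)
    pk /= pk.sum()
    degrees = rng.choice(ks, size=n, p=pk)         # inverse-transform sampling, in effect
    if degrees.sum() % 2:                          # the total stub count must be even
        degrees[0] += 1
    return [int(k) for k in degrees]

rng = np.random.default_rng(0)
n = 100_000
seq = power_law_degree_sequence(n, gamma=2.5, kmin=2, kmax=int(np.sqrt(n)), rng=rng)

G = nx.configuration_model(seq, seed=0)            # random stub matching -> multigraph
G = nx.Graph(G)                                    # collapse parallel edges
G.remove_edges_from(list(nx.selfloop_edges(G)))    # remove self-loops -> simple graph
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges after pruning")
```

With the structural cutoff k_max of order sqrt(N) used here, the pruning removes only a negligible fraction of edges, in line with the remark that follows.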
due to this pruning of multiple linkswe have some loss of edges , but since we generate networks with , this loss is negligible .we have used the same network construction method in our previous work ( including the method of controlling the average degree by selecting the proper value from a precomputed lookup table ) ; according to our previous notation we have here conf and cconf networks .we use two types of dominating sets for comparison with probabilistic dominating set selection methods .the first one is an approximation of the mds , found by a sequential greedy algorithm .this method selects nodes one by one , at each step selecting a node that provides the maximal increase in the number of dominated nodes in the network ( with random tie - breaking ) ; this is the same method as used in .the second method is the degree - ranked dominating set selection ( dds ) , where we build the dominating set by selecting nodes in decreasing order of degree ( with random tie - breaking ) until the selected set dominates the entire network . to find a probabilistic dominating set ( with any particular node selection probability ) , we use the following algorithm .first , we initialize set to contain all nodes of the network , and initialize set to an empty set .then , we visit each node exactly once , and determine whether it should be added to set , based on the current node selection probability . if so , then we add the current node to set , remove it from set ( if present ) , and also remove all of its neighbors present in set . this way ,once all nodes have been evaluated , we obtain the probabilistic dominating set as .we use hashed sets for and , which makes the addition , check of containment , and removal of nodes from the sets an time operation ( in amortized time ) .we loop over all nodes exactly once , and visit all their neihbors , therefore we visit each edge exactly twice , making the algorithm s time complexity and memory complexity , where is the number of edges and is the number of nodes in the network .note , for sparse networks with small average degree , .when we calculate a cutoff dominating set ( cds ) , there is an additional optimization we use to find the cds size for _ all _ possible degree cutoff values , including the optimal one that minimizes cds size , in the same time complexity as finding cds for only one value .first , we sort nodes into degree classes in time using counting sort ( or bucket - sort ) .the linear time complexity comes from the fact that both the number of nodes and the range of their degree values is .then , we loop over all degree classes in decreasing order of degree , and for each degree class we add all nodes to set ( and remove them and their neighbors from set at the same time ) . this way , we can check the value of after finishing each degree class , which is exactly the size of cds with equal to the current class degree .we can either output the size of cds at the current degree , or simply record which cds size at which was the smallest .since we process each node exactly the same way as in rds ( except for the specific order in which they are processed ) , we have the same time complexity , and it is not increased by the time needed to sort the nodes .we control assortativity by randomly mixing the network s edges , using a markov - chain of double - edge swaps with biased acceptance probabilities . without the bias , this method was used in and to sample networks with a given degree distribution . 
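Before turning to the edge-mixing procedure, here is a compact version of the threshold sweep just described: nodes are bucketed by degree, the buckets are processed in decreasing order, and the dominating-set size |D| + |W| is read off after each degree class. It is an illustrative sketch rather than the authors' implementation, and the test graph is arbitrary.

```python
# Cutoff dominating set (CDS) for every degree threshold in a single pass:
# bucket nodes by degree, sweep the buckets from high to low degree, and
# record |D| + |W| after each class.  Illustrative sketch.
import networkx as nx
from collections import defaultdict

def cds_sizes_all_thresholds(G):
    by_degree = defaultdict(list)                  # counting-sort nodes into degree classes
    for v, k in G.degree():
        by_degree[k].append(v)

    dominators = set()                             # D: all nodes with degree >= current threshold
    undominated = set(G.nodes())                   # shrinks towards W as D grows
    sizes = {}
    for k in sorted(by_degree, reverse=True):
        for v in by_degree[k]:
            dominators.add(v)
            undominated.discard(v)
            undominated.difference_update(G.neighbors(v))
        sizes[k] = len(dominators) + len(undominated)   # CDS size at threshold k_t = k
    return sizes

G = nx.barabasi_albert_graph(50_000, 2, seed=1)    # arbitrary scale-free test graph
sizes = cds_sizes_all_thresholds(G)
k_opt = min(sizes, key=sizes.get)
print(f"optimal threshold k_t = {k_opt}, CDS size = {sizes[k_opt]}")
```

Each node and each edge is touched a bounded number of times, so the whole sweep runs in time linear in the number of nodes plus edges, as claimed above.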
herewe use the same method , but we introduce an additional condition for accepting a randomly proposed and otherwise possible edge swap .this acceptance probability is parametrized by a control value $ ] that introduces the bias toward accepting a higher or lower fraction of swaps that make the network more assortative , based on its value , in the following way : therefore , we obtain the most disassortative network when and the most assortative network when . however , the relationship between and any particular assortativity measure , such as spearman s , is non - trivial , as shown in fig .[ fig - ac ] .the detailed description of this method is included in the supplementary information , section s.4 .we thank t. nguyen for preparing the network - structure files used in this research from the flickr and foursquare data sets . this work was supported in part by grant no .fa9550 - 12 - 1 - 0405 from the u.s .air force office of scientific research ( afosr ) and the defense advanced research projects agency ( darpa ) , by the defense threat reduction agency ( dtra ) award no .hdtra1 - 09 - 1 - 0049 , by the national science foundation ( nsf ) grant no .dmr-1246958 , by the army research laboratory ( arl ) under cooperative agreement number w911nf-09 - 2 - 0053 , by the army research office ( aro ) grant w911nf-12 - 1 - 0546 , and by the office of naval research ( onr ) grant no .n00014 - 09 - 1 - 0607 .the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies either expressed or implied of the army research laboratory or the u.s . government .f.m . , n.d . ,.cz . , l.sz . , b.k.s . and g.k .designed the research ; f.m . and n.d .implemented and performed numerical experiments and simulations ; f.m ., n.d . , .cz . , l.sz . , b.k.s . and g.k .analyzed data and discussed results ; f.m ., n.d . , .cz . , l.sz . , b.k.s . and g.k .wrote and reviewed the manuscript .competing financial interests : the authors declare no competing financial interests . 100 cowan , n. j. , chastain , e. j. , vilhena , d. a. , freudenberg , j. s. , bergstrom , c. t. nodal dynamics , not degree distributions , determine the structural controllability of complex networks .plos one 7(6 ) : e38398 ( 2012 ) .nacher , j. c. & akutsu , t. structural controllability of unidirectional bipartite networks .rep . * 3 * , 1647 ( 2013 ) .nacher , j. c. & akutsu , t. analysis on critical nodes in controlling complex networks using dominating sets . in proceedings of the _ 2013 international conference on signal - image technology & internet - based systems _( ieee , 2013 ) pp .649654 .kelleher , l. , cozzens , m. dominating sets in social network graphs .sciences * 16 * , 267279 ( 1988 ) .wang , f. , du , h. , camacho , e. , xu , k. , lee , w. , shi , y. , shan , s. on positive influence dominating sets in social networks .sci . * 412 * , 265269 ( 2011 ) .eubank , s. , anil kumar , v. s. , marathe , m. v. , srinivasan , a. , wang n. structural and algorithmic aspects of massive social networks . in soda 04 proc . of the fifteenth annual acm - siam symposium on discrete algorithms , pp .718727 ( 2004 ) .wuchty , s. controllability in protein interaction networks .usa , early edition , april 28 ( 2014 ) ; http://dx.doi.org/10.1073/pnas.1311231111 ( accessed june 3 , 2014 ) .alon , n. transversal numbers of uniform hypergraphs .graphs combin . *6 * , 14 ( 1990 ) .alon , n. 
, spencer , j.h .the probabilistic method .( willey , new york , 2000 ) .raz , r. , safra , s. a sub - constant error - probability low - degree test , and a sub - constant error - probability pcp characterization of np . in proc . of the 29th annual acm symposium on theory of computing , pp .475484 ( 1997 ) .potluri , a. , and singh , a. two hybrid meta - heuristic approaches for minimum dominating set problem .notes comput . sc .7077 , 97104 ( 2011 ) .hedar , a. r. , ismail , r. hybrid genetic algorithm for minimum dominating set problem .notes comput . sc .6019 , 457467 ( 2010 ) .blum , j. , ding , m. , thaeler , a. , cheng , x. connected dominating set in sensor networks and manets .du and p. pardalos ( eds . ) , handbook of combinatorial optimization , 329369 ( 2004 ) .ruan , l. , du , h. , jia , x. , wu , w. , li , y. , and ko , k .-i . a greedy approximation for minimum connected dominating sets .329 , 325330 ( 2004 ) .wan , p .- j ., alzoubi , k. m. , frieder , o. distributed construction of connected dominating set in wireless ad hoc networks .* 9 * , 141149 ( 2004 ) .chen , q. , fan , w. t. , and zhang , m. distributed heuristic approximation algorithm for minimum connected dominating set .computer engineering * 35 * , 9294 ( 2009 ) .simonetti , l. , da cunha , a. s. , lucena , a. the minimum connected dominating set problem : formulation , valid inequalities and a branch - and - cut algorithm .notes comput . sc .6701 , 162169 ( 2011 ) .fomin , f. v. , grandoni , f. , pyatkin , a. v. , stepanov , a. a. combinatorial bounds via measure and conquer : bounding minimal dominating sets and applications .acm trans .algorithms , * 5 * , 9 ( 2008 ) .fomin , f. v. , grandoni , f. , kratsch , d. a measure & conquer approach for the analysis of exact algorithms .j. acm , * 56 * , 25 ( 2009 ) .van rooij , j. m. m. , nederlof , j. , van dijk , t. c. inclusion / exclusion meets measure and conquer : exact algorithms for counting dominating sets .notes comput . sc .5757 , 554565 ( 2009 ) .van rooij , j. m. m. , bodlaender , h. l. exact algorithms for dominating set .discrete appl ., * 159 * , 21472164 ( 2011 ) .haynes , t.w . ,hedetniemi , s.t . , slater , p.j . fundamentals of domination in graphs ( marcel dekker , new york , 1998 ) .cooper , c. , klasing , r. , zito , m. lower bounds and algorithms for dominating sets in web graphs .internet mathematics 2 , 275300 ( 2005 ) .barabsi , a .-l . , albert r. emergence of scaling in random networks .science * 286 * , 509512 ( 1999 ) .erds , p. , rnyi a. on the evolution of random graphs .inst . hung .sci . * 5 * , 1761 ( 1960 ) .zito , m. greedy algorithms for minimisation problems in random regular graphs .esa 01 proceedings of the 9th annual european symposium on algorithms , 525536 ( 2001 ) .br , cs . ,czabarka , . ,dankelmann , p. , szkely , l. bulletin of the i. c. a. * 64 * , 7382 , ( 2012 ) .arnautov , v.i .estimation of the exterior stability number of a graph by means of the minimal degrees of the vertices .i programmirovanie 11 , * 38 * 126 , 1974 ( in russian ) .clark , w.e ., shekhtman , b. , suen , s. , fisher .upper bounds for the domination number of a graph .numer . * 132 * , 99123 ( 1998 ) .wieland b. , godbole a. p. on the domination number of a random graph .the electronic journal of combinatorics * 8 * , r37 ( 2001 ) .molnr , f. jr ., sreenivasan , s. , szymanski , b. k. , korniss , g. minimum dominating sets in scale - free network ensembles .rep . * 3 * , 1736 ( 2013 ) .nacher , j. c. , akutsu , t. 
dominating scale - free networks with variable scaling exponent : heterogeneous networks are not difficult to control .new j. phys .* 14 * , 073005 ( 2012 ) .viger , f. , latapy , m. efficient and simple generation of random simple connected graphs with prescribed degree sequence . in proc .the 11th intl . comp . and440449 ( 2005 ) .eguiluz , v. , klemm , k. , phys .* 89 * , 108701 ( 2002 ) .eubank , s. , guclu , h. , anil kumar , v. , marathe , m. , srinivasan , a. , toroczkai , z. , and wang , n. modelling disease outbreaks in realistic urban social networks .nature * 429 * , 180184 ( 2004 ) .fortunato , s. , bogua , m. , flammini , a. , and menczer , f. on local estimations of pagerank : a mean field approach . internet mathematics * 4 * , 245266 ( 2007 ) .li , l. , alderson , d. , doyle , j. , and willinger , w. towards a theory of scale - free graphs : definition , properties , and implications . internet mathematics * 2 * , 431523 ( 2005 ) . newman , m. mixing patterns in networks .e * 67 * , 026126 ( 2003 ) .newman , m. assortative mixing in networks .89 * , 208701 ( 2002 ) .litvak , n. and van der hofstad , r. uncovering disassortativity in large scale - free networks .e * 87 * , 022801 ( 2013 ) .spearman , c. the proof and measurement of association between two things .the american journal of psychology * 15 * , 72 - 101 ( 1904 ) .molloy , m. , reed , b. a critical point for random graphs with a given degree sequence .random structures and algorithms * 6 * , 161180 ( 1995 ) .britton , t. , deijfen , m. , martin - lf , a. generating simple random graphs with prescribed degree distribution .. phys . * 124 * , 13771397 ( 2005 ) .catanzaro , m. , boguna , m. , pastor - satorras , r. generation of uncorrelated random scale - free networks .e * 71 * , 027103 ( 2005 ) .bogu , m. , pastor - satorras , r. , vespignani , a. cut - offs and finite size effects in scale - free networks .b * 38 * , 205509 ( 2004 ) . stanford network analysis project ( snap ) , http://snap.stanford.edu/data .) ] and experimental [ black solid curve ] dominating set sizes as a function of node selection probability in random dominating sets ( rds ) .the analytic optimal upper bound [ eq . ( [ p_ords ] ) ] is indicated by the horizontal black dashed line .the size of mds , dds , and the analytical estimate of the minimum of rds is also presented for comparison .results are averaged over 200 network realizations ; , , , with dominating set searches , averaged for every network sample.,width=377 ] prefactor in the degree - dependent node selection probability [ eq .( [ p_min1 ] ) ] .data is averaged over 200 network samples and 20 repetitions of dominating set searches for each sample .network parameters : , and .,width=377 ] degree cutoff parameter in the degree - dependent node selection probability [ eq .( [ p_min2 ] ) ] . for comparison ,curves of rds are plotted for various values .cds corresponds to .,width=377 ] prefactor of node selection probability [ eq .( [ p_min1 ] ) ] , while subfigures ( b ) , ( d ) and ( f ) show the same as function of degree cutoff [ eq .( [ p_min2 ] ) ] . for comparison , the degree - independent probabilistic ( ) , degree - ranked and greedy dominating set sizes are also plotted .network parameters : ca - hepth : , , ; ca - condmat : , , ; ca - hepph : , , . ] ) , ( [ rds_degree]),and ( [ cds ] ) ] and numerically computed sizes of rds and cds in uncorrelated ( cconf ) scale - free networks . for numerical results ,data is averaged over 200 network samples .parameters : and . 
]assortativity measure in( a ) a synthetic network and ( b ) a real - world network ( gnutella ) . networks with assortativity values different from the original network are obtained by guided edge - mixing with double - edge swaps . ]degree cutoff values ( that minimize the size of cds ) as a function of degree exponent and spearman s .each layer represents different average degrees , in uncorrelated ( cconf ) networks with .data is averaged over network realizations .data grid resolution : , .,width=529 ]
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating-set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we find the precise limit beyond which selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks and provide highly accurate analytical estimates for our methods.
the theory of phase separation , or domain coarsening , has undergone a significant development in the last three decades .the most important finding is that well - defined ordered domains arise and grow with time in such a way that the coarsening process exhibits scaling . in other words , at the late stages of the evolution the system is characterized by a single length scale that gives a typical linear size of the domains .it is well established , at least for systems with a scalar order parameter , that with for _ non - conserved _ dynamics and for _ conserved _ dynamics . for the ising spin model ,glauber spin - flip dynamics exemplifies the former , while kawasaki spin - exchange dynamics exemplifies the later .several important correlation functions exist .one such function , the autocorrelation or , equivalently , the two - time equal - space correlation function , , is defined by , where is the order parameter .then , scaling implies : , with an exponent .the general two - point correlation function , , can be expressed through , namely .exact results for are known in few cases , while the bounds with the spatial dimension were proposed by fisher and huse . for the vector model in the limit , . in this study ,the value is obtained for the voter model , defined below .this result indicates that both the upper and the lower bounds can be realized .it should be noted that in most coarsening processes the dynamics does not exhibit a qualitative dependence on the temperature as long as . at the critical temperature ,the dynamics is generally different , and ordered domains usually do not occur .however , the correlation length exists and grows with time as , where is the dynamical exponent .the correlation length should be considered as the analog of the domain size , and the the exponent is replaced by defined by . in the voter model , temperature is absent but since the dynamics is noiseless , the voter model dynamics is zero temperature in nature . however, the `` critical '' temperature is also zero . if one introduces noise by allowing environment - independent opinion changes , the system does not coarsen ( see , e.g. , ) .thus , we will actually establish for the voter model . a general discussion of the conditions under which the equality holds is given by majumdar and huse .in this study , we introduce a family of quantities which provides insight into the `` history '' the the coarsening process .we denote by the fraction of voters who changed their opinion exactly times during the time interval .the first of these quantities , , is equal to the fraction of persistent voters , _i.e. _ voters who did not change their opinion up to time . this quantity has been introduced independently for two equivalent one - dimensional models , the glauber - ising model , and the single - species annihilation process .furthermore , the corresponding generalizations to arbitrary dimensions were discussed in and , respectively. derrida _ established the exact asymptotic decay of this quantity , , as suggested earlier by numerical simulations .another exact result establishes , with , for the 1d time - dependent ginsburg - landau equation at zero temperature . 
for the voter model ,several quantities such as the one - time and the two - time correlations are exactly solvable in arbitrary dimensions .these correlation functions allow an exact calculation of the average number of opinion changes and other interesting quantities .hence , the voter model is a natural starting point for investigation of .although we do not obtain the full distribution , most of its features are illuminated by combining the above exact results with heuristic arguments and with the mean - field solution .generally , exhibits a scaling behavior . for ,the scaling function is poissonian and is peaked at , while for the distribution is maximal near the origin .additionally , for unequal opinion concentrations different scaling functions for even and odd opinion changes are found . using random walks techniques ,we obtain the even and odd scaling functions in the limit of an infinitesimal minority concentration .the rest of this paper is organized as follows . in sec .ii , we first solve for on a complete graph .we then reexpress some exact relationships for the voter model , in arbitrary dimension , in terms of the distribution . combining with the mean - field solution ,these exact relationships allow us to guess the scaling form of .this guess suggests a usual scaling form in one dimension , and a mean - field like sharply peaked distribution for .these predictions are then compared with numerical data in one , two , and three dimensions . in sec .iii , we describe exact solution of the mean - field equations for the case of initially different concentrations . then we present exact results for the autocorrelation function in arbitrary dimension , and exact results for the fraction of persistent voters in one dimension .we proceed by investigating the extreme case of infinitesimal minority opinion . in this limit ,the model is equivalent to a pair of annihilating random walkers who are nearest neighbor at . simplifying further the problem to the case of a random walker near the absorbing boundary we derive a complete analytical solution .finally , we perform numerical simulations for the case of unequal concentrations and confront the results to exact predictions . finally , we give a brief summary in sec .in this section we first define the voter model . we restrict attention to the symmetric case , _i.e. _ , equal opinion densities .we start by analyzing the mean - field theory of the model , and then obtain several exact results in arbitrary dimensions .we then present a scaling ansatz and check it using numerical simulations .we start by defining the voter model .consider an arbitrary lattice and assume that each site is occupied by a `` voter '' who may have one of two opinions , denoted by and .each site keeps its opinion during some time interval , distributed exponentially with a characteristic time , set to unity for convenience , and then assumes the opinion of a randomly chosen neighboring site .if a site is surrounded by sites with the same opinion , it does not change its opinion .hence , such dynamics are zero - temperature in nature .noise can be introduced by allowing a voter to change its opinion independently of its neighbors . however ,a voter system with noisy dynamics does not coarsen , and we restrict ourselves to the noiseless voter dynamics .these dynamics are so simple that they naturally arise in a variety of situations , see e.g. 
.an important link is with the glauber - ising model : in one dimension , and _ only _ in 1d , the voter dynamics is identical to the glauber dynamics .this equivalence is not restricted to zero temperature , 1d noisy voter dynamics is also identical to glauber dynamics at a positive temperature .we now consider the voter model dynamics on a mean - field level , by simply treating all sites as neighbors .such a theory is _ exact _ on a complete graph .moreover , it is expected to hold in sufficiently large spatial dimensions .we first consider the symmetric case were the opinions concentrations , and , are equal , and the interesting case of unequal concentrations will be discussed later .the fraction of sites which have changed their opinions times up to time , evolves according to with to ensure .the distribution clearly satisfies the normalization condition , , and one can verify that eq .( [ mfe ] ) conserves this sum . solving eq .( [ mfe ] ) subject to the initial condition , one finds that the opinion change distribution function is poissonian in particular , the fraction of persistent voters , _i.e. _ voters who did not change their opinion up to time , decreases exponentially , .the probability that a voter has its initial opinion at time is thus .asymptotically , this probability exponentially approaches the value , and therefore voters quickly `` forget '' their initial opinion .the distribution is peaked around the average , and the width of the distribution , , is given by . in the limits , , , and finite , approaches a scaling form where the scaling distribution is gaussian .this infinite - dimension scaling solution will be compared below to simulation results in three dimensions .to summarize , the quantity incorporates many statistical properties of the system such as the probability of maintaining the original opinion , the probability of having the original opinion , and the average number of opinion changes .we now review several relevant known exact results for the voter model in arbitrary dimension and reexpress them in terms of .both the one- and two - body equal - time correlation functions , are exactly solvable on arbitrary lattice in arbitrary dimension .it proves useful to formulate the voter model on the language of ising spins , _i.e. _ , a opinion is identified with spin and a opinion with spin .the state of the lattice is described by ] , see . indeed , for the symmetric voter model , and .the concentration of persistent minority species , , is equal to the fraction of persistent spins in the -state potts model with . using the notation , one has . of course , , since changes between sub - opinions should not be counted .the exponent can be found by allowing a non - integer number of opinions , .this formula is found by an analytical continuation to arbitrary of the relation , which clearly holds in the equal - concentration case with an integer .therefore , . for the above example , , implies . in general, the concentration of persistent voters decays algebraically ^ 2-{1\over 8}.\ ] ] following eq .( [ 1dscaling ] ) , can be written in terms of a simple scaling function in one dimension .the behavior reflects the anomalously large number of persistent voters found in the system at long times . on the other hand , eq .( [ p0beta ] ) implies a difference in nature of the scaling functions for sites of initial and opinion , . 
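These one-dimensional statements are easy to probe directly. The sketch below is added for illustration (it is not the authors' code; the lattice size, observation times, and the symmetric random initial condition are arbitrary choices): it runs zero-temperature voter dynamics on a ring with random sequential updates and records how many times each site has changed opinion, from which the persistent fraction and the flip-number distribution can be histogrammed.

```python
# Zero-temperature voter model on a ring: random sequential updates, each site
# copying a randomly chosen neighbour; the number of opinion changes per site
# is recorded.  Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, t_max = 20_000, 64.0
spins = rng.choice([-1, 1], size=L)                # symmetric random initial opinions
flips = np.zeros(L, dtype=np.int64)                # opinion changes per site

t, next_out = 0.0, 1.0
while t < t_max:
    i = int(rng.integers(L))                       # site updated in this elementary move
    j = (i + (1 if rng.random() < 0.5 else -1)) % L
    if spins[i] != spins[j]:                       # copying an equal neighbour is not a change
        spins[i] = spins[j]
        flips[i] += 1
    t += 1.0 / L                                   # L elementary moves = one unit of time
    if t >= next_out:
        print(f"t = {next_out:5.0f}: persistent fraction = {np.mean(flips == 0):.4f}, "
              f"mean flips per site = {flips.mean():.2f}")
        next_out *= 2
```

The persistent fraction printed here is the quantity whose slow algebraic decay is discussed in the text, and a histogram of the flip counts at fixed time gives the distribution whose even and odd parts are analysed below for unequal concentrations.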
in the limit of large , the tail is dominated by gaussian fluctuations , while in the limit , the anomalous decay of eq .( [ p0beta ] ) determines the behavior . combining these two limits, we have in the limit of a vanishing minority opinion concentration , , one has , and .both the mean - field results and our numerical simulations , to be described in the following , suggest that distribution of even number of changes dominates over its odd counterpart .we expect that the above suggested scaling form holds for the even distribution , or equivalently , for the modified distribution . to summarize , the exact form of the fraction of persistent voters combined with scaling considerations suggest that different scaling functions correspond to the minority and the majority opinions . for better understanding of the asymmetric case , it is useful to consider the case of an infinitesimal concentration of one opinion , .we naturally restrict ourself to the situation where a single voter is placed in a sea of opinion . identifying an interface between and domains with a random walker , an equivalence to two annihilating random walkers who are nearest neighbors at ,is established .the distribution is thus equal to the fraction of sites visited times by the two walkers .we further simplify the problem by considering the fraction of sites visited by a single random walk with a trap as one of its nearest neighbors .although the two problems are not identical , we expect that the results are similar in nature and differ only by numerical prefactors .the reason is that the distance between the two random walks itself performs a random walk with in the vicinity of a trap . in the limit of a vanishing opinion concentration , ,the opinion change density is equal to zero .however , if we divide by the density of the interfaces , , and then go to the limit , we obtain a nontrivial distribution , .this distribution gives the total number of links crossed times by the walker ; we will denote it by . as said previously , for the symmetric initial conditions , , the scaling behavior of the form is expected .however , for asymmetric initial conditions , two different scaling forms , even and odd , should appear . in the present extreme case, we expect and .we learn from eq.([1dlimitsuneq ] ) that near the origin .hence , the distribution function approaches a time independent form : .these results can be confirmed by considering the analogy to a single random walk near a trap . as the walker will ultimately come to the origin with probability one, every link will be crossed an even number of time and so the ultimate distribution for ( and since the link is crossed with ultimate probability one by the walker ) .so , in the extreme case we are considering , the even - odd oscillations are obvious and pronounced : the asymptotic even values are positive while the odd values are zero . 
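The single-walker simplification can also be checked by Monte Carlo: start a walker one lattice site away from a trap and record how many times each link is crossed before absorption. The sketch below is illustrative (it uses discrete-time steps instead of the continuous-time dynamics, and excursions are truncated at a maximum length, which slightly biases the rare very long walks); it lets one verify that, asymptotically, links are crossed an even number of times while the total number of odd-crossed links stays of order one.

```python
# Random walker at site 1 with an absorbing trap at site 0: count how many
# times each link is crossed before absorption.  Illustrative parameters;
# excursions longer than max_steps are truncated.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
samples, max_steps = 5_000, 200_000
phi = Counter()                                    # phi[n]: links crossed exactly n times (summed)
odd_links = 0

for _ in range(samples):
    counts = Counter()
    x = 1                                          # the walker starts next to the trap
    for _ in range(max_steps):
        step = 1 if rng.random() < 0.5 else -1
        counts[min(x, x + step)] += 1              # label each link by its left endpoint
        x += step
        if x == 0:                                 # absorbed at the origin
            break
    for c in counts.values():
        phi[c] += 1
        if c % 2:
            odd_links += 1

print(f"links crossed an odd number of times (per walk): {odd_links / samples:.3f}")
for n in (2, 4, 6, 8):
    print(f"links crossed exactly {n} times (per walk): {phi[n] / samples:.3f}")
```

The per-walk averages for even crossing numbers settle to constants, which is the behaviour computed analytically in the next part of this section.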
in order to compute for even , we consider the link .the probability that the walker starting at reaches for the first time , thus crossing the link , is given by .then the ultimate probability that the walker will go from site to site , crossing the link a second time , is one .the probability that the walker starting at will arrive at before crossing the link again , is given by .therefore , is the contribution of the link into , the average number of links crossed twice by the walker .thus , we have after having crossed the link twice , the walker could cross this link again before reaching the adsorbing barrier at .any such crossing from the left happens with probability , while the next crossing from the right happens with probability one .thus , we arrive at the remarkably simple formula expressing through the zeta function for large , the sum can be approximated by the integral which confirms the above prediction . to determine the scaling functions and , it proves useful to consider , the probability that the walker passes times through during the time interval .the is then given by in this equation and in the following we will treat as a continuous variable ; in the long - time limit , this should be asymptotically correct .we then write for : and we consider a walker starting at ; is the probability that this walker reaches at time without going to the origin ; is the probability that this walker first reaches the origin at time ; is the probability that this walker first passes at the origin at time without passing through ; is the probability that this walker with an absorbing boundary at does not pass through the origin up to time and is the probability that this walker does not reach the origin up to time .( [ even ] ) is cumbersome in form but simple in nature : the formula for is just a finite - time generalization of eq .( [ pn ] ) , namely it corresponds to the situation when a walker has performed oscillations around the link , and at time a walker , or his remains , is to the left of .( [ odd ] ) has been constructed similarly and describes the situation with a walker to the right of at time . the convolution structure of eqs .( [ even ] ) and ( [ odd ] ) suggests to apply the laplace transform .indeed , , satisfy and fortunately , the probabilities have been already computed : it is in principle possible now to compute various . for example , the contribution to from links with is where the contribution from the first link is , which gives the asymptotic value of we now turn to determination of the scaling functions . in the long - time limit , , corresponding to , eq .( [ evenlap ] ) becomes which then implies performing the inverse laplace transform , one finds indeed the anticipated scaling behavior suggested earlier is confirmed with the scaling function in particular , the limiting forms are for the odd distribution , a similar scaling form is expected : when , we can use the naive expansion as previously but we should keep the upper limit finite , , since the integrand logarithmically diverges on the upper limit : with the exponential integral . making use of eq .( [ oddscaling ] ) one gets another relation for , with . thus we obtain the laplace transform of the function , performing the inverse laplace transform , we get performing asymptotic analysis yields notice that in the both limiting cases , .it proves insightful to compute the moments of even and odd distributions , and .asymptotically , it is easy to compute even moments eq . 
( [ evenmoment ] ) is valid only for ( when , the prefactor diverges ) . to determine the most interesting zero moment , i.e. the total number of links crossed even times , we use the the laplace transform of eq .( [ evens ] ) to obtain and eventually , with the euler constant .this result is consistent with a direct summation of up to . for negative ,even moments are finite , .odd moments behave similarly , .a lengthy computation gives the prefactor eq .( [ oddmoment ] ) is valid for all nonnegative , and in particular the ( average ) total number of links crossed odd times approaches a surprising constant thus although the odd part of the distribution approaches zero as , the moments remain nontrivial . to test the above predictions we performed numerical simulations of the voter model with different initial concentrations , in one dimension .the rich behavior predicted by the mean - field and the exact results was confirmed by the simulation results .we studied the fraction of persistent voters for the case , and we found the decay exponents , and for the minority and the majority opinion , respectively .these values are in excellent agreement with eq .( [ p0beta ] ) .we also confirmed that each of the four functions and can be rewritten in a scaling with the scaling variable .the dominance of the even part of the distribution , is nicely demonstrated by fig .( 4 ) ( one realization of a system of sites ) and the asymptotics of the even scaling function eq . ( [ 1dlimitsuneq ] ) are verified .we performed also simulations for the extreme case , where one site with initial opinion is in a sea of opinions .as shown above , this problem is equivalent to the average number of times a link is crossed by two annihilating random walkers .we show on fig .( 4 ) the even and odd scaling functions for realizations of this system . the asymptotic results eqs.([easymp ] ) found in the simplified problem of one random walker in the presence of an absorbing boundary conditions are verified up to numerical prefactors .in particular , the even scaling functions of fig .( 4 ) is found to behave asymptotically ( ) as to be compared with of eq .( [ escaling1 ] ) .we have investigated the voter model , one of the simplest models of non - equilibrium statistical mechanics with _ non - conserved _ dynamics .we have introduced the set of quantities , defined as the fraction of voters who changed their opinion times up to time .the distribution was shown to exhibit a scaling behavior that strongly depends on the dimension of the system and on the opinion concentrations . for , the system does not coarsen , and the distribution is poissonian . 
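the one - dimensional simulations described above can be reproduced , at least qualitatively , with a short script of the following kind ; the lattice size , the initial concentration and the number of sweeps are placeholders rather than the values used for the figures , and the update rule is the standard voter dynamics ( a random voter adopts the opinion of a random nearest neighbour ) .

```python
import random
from collections import Counter

def voter_changes(n=2000, rho=0.75, sweeps=1000, seed=0):
    """1d voter model on a ring with initial concentration rho of '1' opinions;
    returns the fraction of voters that changed opinion m times, i.e. the
    empirical distribution discussed in the text."""
    rng = random.Random(seed)
    s = [1 if rng.random() < rho else 0 for _ in range(n)]
    flips = [0] * n
    for _ in range(sweeps * n):                # one sweep = n microscopic updates
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n      # random nearest neighbour
        if s[i] != s[j]:
            s[i] = s[j]
            flips[i] += 1
    return {m: c / n for m, c in sorted(Counter(flips).items())}

if __name__ == "__main__":
    pm = voter_changes()
    even = sum(v for m, v in pm.items() if m % 2 == 0)
    print("fraction of persistent voters:", pm.get(0, 0.0))
    print("even / odd mass:", even, 1.0 - even)
```

splitting the persistent fraction according to the initial opinion of each voter gives the separate minority and majority decays quoted above .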
in one - dimension , we have solved for in the extreme case when the minority opinion is infinitesimal . the case when the minority phase occupies a negligible volume has been studied in the classical work for the _ conserved _ dynamics and has proven very important in the development of the theory of phase ordering kinetics . it would be very interesting to generalize the extreme - case solution to arbitrary . the quantity reflects the history of the coarsening process . knowledge of this distribution enables insight into interesting quantities such as the fraction of consistent or `` frozen '' sites , the fraction of sites with their original opinion , and the average number of changes in a site . this study suggests that is a tool for investigations of coarsening processes in more realistic models . it is possible that a poissonian distribution generally describes systems that do not coarsen , while asymmetric distributions which are pronounced near the origin correspond to coarsening systems . it is a pleasure to thank s. redner for fruitful discussions . e. b. was supported in part by nsf under award number 92 - 08527 and by the mrsec program of the national science foundation under award number dmr-9400379 . l. f. was supported by the swiss national science foundation .
fig . 1 : scaling for the symmetric case in one - dimension . the quantity is plotted versus for different times , .
fig . 3 : the fraction of persistent voters in 2d , versus . an average over 300 samples of linear size for ( solid line ) and over 50 samples of linear size for ( dashed line ) .
fig . 4 : the even and odd distribution functions for different in 1d . is plotted versus . different scaling functions correspond to the even ( upper curves ) and the odd ( lower curves ) parts of the distribution . the solid lines correspond to the case for one sample of linear size . the dashed lines correspond to the
o. m. todes , _ j. phys . ( soviet ) _ * 20 * , 629 ( 1946 ) ; i. m. lifshitz , _ zh . eksp . teor . fiz . _ * 42 * , 1354 ( 1962 ) [ _ jetp _ * 15 * , 939 ( 1962 ) ] ; i. m. lifshitz and v. v. slyozov , _ j. phys . chem . solids _ * 19 * , 35 ( 1961 ) ; c. wagner , _ z. elektrochem . _ * 65 * , 581 ( 1961 ) .
we investigate coarsening and persistence in the voter model by introducing the quantity , defined as the fraction of voters who changed their opinion times up to time . we show that exhibits scaling behavior that strongly depends on the dimension as well as on the initial opinion concentrations . exact results are obtained for the average number of opinion changes , , and the autocorrelation function , in arbitrary dimension . these exact results are complemented by a mean - field theory , heuristic arguments and numerical simulations . for dimensions , the system does not coarsen , and the opinion changes follow a nearly poissonian distribution , in agreement with mean - field theory . for dimensions , the distribution is given by a different scaling form , which is characterized by nontrivial scaling exponents . for unequal opinion concentrations , an unusual situation occurs where different scaling functions correspond to the majority and the minority , as well as for even and odd .
networks are useful tools to characterize complex systems .the system components are represented as nodes and their mutual interactions as edges .finding structures in such networks is therefore of great relevance for understanding the mechanisms that underlie the system evolution .this explains the increasing interest in the topic , particularly in the detection of communities .communities are groups of nodes with a high level of group inter - connection .they can be seen as relative isolated subgraphs with few contacts with the rest of the network .communities have an obvious significance for social networks where they correspond to groups of close friends or well - established teams of collaborators .however , they are also important for characterizing other real - world networks such as those coming from biology or from technology and transport .communities are not the only meaningful structures in networks : in ecology , computer and social sciences structurally equivalent nodes have been also considered . these nodes are characterized by similar connectivity patterns and are expected to play similar roles within the system .there has been a long tradition of applying bayesian and maximum likelihood methods to structure detection in networks .these methods have the advantage that , depending on the statistical model used , they can be very general detecting both communities and structural equivalent set of nodes .the drawback , shared with many other methods , is that structure detection usually implies computational expensive exploration of the solutions maximizing the posterior probability or the likelihood .recently , a maximum likelihood method that considers node clustering as missing information and deals with it using an expectation maximization ( em ) approach has been introduced by newman and leicht .this method is computationally less costly to implement and we will denote it by the acronym nl - em from now on . nl - em is able to identify network structure relying on three basic assumptions : ( _ i _ ) the actual connectivity of the network is related to a coherent yet _ a priori _ unknown grouping of the nodes , ( _ ii _ ) the presence or absence of a link is independent from the other links of the network and ( _ iii _ ) the groups are tell - tales of processes that gave rise to the graph .no extra information is assumed except for the network itself and the number of groups . under these assumptions ,the method infers the classification of nodes that most likely generated the graph detecting communities and also structurally equivalent sets of nodes . herewe will show that due to the simple structure of the nl - em likelihood , its classifications are based on a subset of nodes which turn out to be responsible for establishing the group memberships of their neighbors .we are able to rank the nodes according to the amount of group - allocation information they transmit to their neighbors and thereby identify those that are essential for establishing each group .these nodes , which we will refer to as stabilizers , constitute the backbone of the classification : the classification would not be viable without them and conversely , stabilizers turn out to emerge as a result of their distinct connection patterns on the given graph . 
given the generality of the nl - em underlying assumptions and that the resulting classifications can be validated by comparison with other clustering methods , we suggest that the stabilizers have an important inherent value for understanding the processes that generated the given network .such an expectation is supported by our results on empirical graphs for which additional information regarding the nodes intrinsic properties is available .we will also briefly discuss the extension of this concept to other inference methods such as bayesian clustering techniques .we begin with a quick summary of nl - em as applied to graphs . labeling the nodes as , the variables are : , the probability that a randomly selected node is in group , , the probability that an edge leaving group connects to node , and , the probability that node belongs to group .the method is a mixture model where an edge between nodes and ( expressed as ) given the groups of and ( and ) is observed with probability the edges are considered as independent so the probability that a given grouping realizes an observed network can be written as , \label{eqn : proba}\ ] ] where is the set formed by the neighbors of node .the group assignment captured by the terms is treated as missing information .the expectation step of em can thus be implemented as an average over the log - likelihood . \label{loglik}\ ] ] the maximization of is subject to the normalization constraints , and leads to where is the degree of node .the group assignment probabilities are determined _ a posteriori _ from as the maximization of can be carried out with different techniques . in order to account for the possible existence of a rough likelihood landscape with many local extrema , we employed an algorithm that alternates between simulated annealing and direct greedy iteration of eqs .( [ eqn : em ] ) and ( [ eqn : qir ] ) .the group membership of the nodes is encoded by the probabilities .it is thus natural to ask for the conditions on a node and its neighbors to have crisply classified into a single group so that .the answer highlights the role of the neighbors in establishing a node s membership credentials .looking at the expression for , eq .( [ eqn : qir ] ) , where the non - zero prefactors whose sole role is to ensure proper normalization have been suppressed , one finds that for each group there must be at least one neighbor of whose probability is zero .however , as seen from eq .( [ eqn : em ] ) , whether is zero or not for some group depends in turn on the group memberships of the neighbors of . hence having a node crisply classified as belonging to a group sets strong constraints on its neighbors and their respective neighborhoods .these constraints propagate throughout the network during the nl - em iteration until a final configuration for and is established . in this sense, a node is passing information about group membership to its neighborhood through the probabilities .this information is negative , of the form `` you do not belong to group '' when is zero and we say that node stabilizes its neighbors against membership in group .it is worth noting the parallels of this mechanism with message passing algorithms . 
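before turning to the stabilizers , it is convenient to have the nl - em iteration written out compactly . the following is a minimal sketch of the update equations as we read them from the summary above ; the variable names , the toy example and the fixed iteration count are our own choices , and small constants are added only to avoid taking logarithms of zero .

```python
import numpy as np

def nl_em(adj, c, iters=200, seed=0):
    """expectation-maximization for the nl-em mixture model.
    adj: symmetric 0/1 adjacency matrix, c: number of groups."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    k = adj.sum(axis=1)                                   # node degrees
    q = rng.random((n, c))
    q /= q.sum(axis=1, keepdims=True)                     # random soft assignment
    for _ in range(iters):
        # m step: group probabilities pi_r and edge-destination probabilities theta_{r,i}
        pi = q.mean(axis=0)
        theta = q.T @ adj
        theta /= np.maximum((q * k[:, None]).sum(axis=0)[:, None], 1e-300)
        # e step: q_{i,r} proportional to pi_r * product over neighbours j of theta_{r,j}
        logq = np.log(pi + 1e-300)[None, :] + adj @ np.log(theta.T + 1e-300)
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
    return pi, theta, q

if __name__ == "__main__":
    # toy graph: two 4-cliques joined by a single edge
    a = np.zeros((8, 8), int)
    for block in (range(0, 4), range(4, 8)):
        for i in block:
            for j in block:
                if i != j:
                    a[i, j] = 1
    a[3, 4] = a[4, 3] = 1
    print(np.round(nl_em(a, c=2)[2], 2))
```

on this toy graph the iteration typically recovers the two cliques as the two groups , with the two bridging nodes somewhat less crisply classified .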
in a classification into groups each crisply classified node be stabilized against groups .thus one can regard the number of groups a node stabilizes against as a measure of the amount of information that passes to its neighbors .if , node can stabilize its adjacent nodes alone providing thus complete information about their group membership .on the other hand , when , provides only partial information .the crisp classification of a neighbor requires then the combined action of other adjacent nodes in order to attain full group membership information .we denote as _ stabilizers _ of the union set of neighbors that alone or in combined action pass essential information to establishing its membership in a single group ( a more precise definition will be given below ) .the above analysis implies that any crisply classified node must be stabilized by one or more stabilizers .therefore , if the assumptions of the statistical model are justified and the resulting node classification is meaningful , the identification of the corresponding stabilizers may offer useful additional information . based on their classification andthe information passed , four types of nodes can be distinguished : nodes can be strong or weak depending on whether they are crisply classified into a single group or not , and they can be stabilizers or not , depending on whether they pass essential information for establishing an adjacent node s group membership . if we consider a node and denote by , the set of groups that does not connect to , and by , the set of groups that does not belong to , the nl - em equations ( [ eqn : em ] ) and([eqn : qir ] ) relate these sets as follows : forming a set of consistency relations with a simple meaning : a node can not belong to a group to which its neighbors do not connect , and the common set of groups to which a node s neighbors do not belong must correspond to the groups that it does not connect to . if we require in particular that a node is strong , _i.e. _ it is crisply classified as belonging to a particular group , then .given the sets associated with the neighbors of a strong node , not all adjacent nodes need to contribute to its full stabilization .likewise , node can be stabilized by different combinations of its neighbors sets .this is best illustrated by an example shown in fig .[ fig : example ] .suppose that the groups are and let us assume that node is crisply classified as .let have four neighbors with corresponding sets , , and .it is clear that all four nodes together must stabilize , as otherwise would not be a strong node .however , the sets of neighbors or each suffice to stabilize node .the node is redundant , since it does not contribute a new class against which or are not already stabilizing .in other words , if the set is considered , node can be removed without altering the stabilization of .the same is not true for the nodes and .the notion of stabilization sets and stabilizer nodes can be defined as follows : a subset of nodes adjacent to is a stabilization set of , if the removal of any one of the nodes from the set causes not to be stabilized by that set anymore .a node is a stabilizer if it is member of at least one stabilization set .the definition of stabilizer involves thus a stabilization relation with at least one of the node neighbors . 
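given the sets carried by the neighbours of a crisply classified node , its stabilization sets can be enumerated by brute force directly from this definition . the sketch below does so for a small hypothetical configuration ; the neighbour sets used here are illustrative assumptions and are not the ones of fig . [ fig : example ] .

```python
from itertools import combinations

def stabilization_sets(neighbor_sets, needed):
    """all irreducible subsets of neighbours whose stabilizer classes jointly
    cover `needed` (the groups the focal node must be stabilized against).
    neighbor_sets: dict mapping neighbour -> set of groups it stabilizes against."""
    result = []
    nodes = list(neighbor_sets)
    for r in range(1, len(nodes) + 1):
        for combo in combinations(nodes, r):
            cover = set().union(*(neighbor_sets[v] for v in combo))
            if not needed <= cover:
                continue
            # irreducible: dropping any member of the subset must break the cover
            irreducible = all(
                not needed <= set().union(*(neighbor_sets[v] for v in combo if v != w))
                for w in combo)
            if irreducible:
                result.append(set(combo))
    return result

if __name__ == "__main__":
    # hypothetical neighbour sets for a node crisply classified into group "a"
    nbrs = {1: {"b"}, 2: {"b", "c"}, 3: {"c"}, 4: set()}
    stabs = stabilization_sets(nbrs, needed={"b", "c"})
    print(stabs)                                  # -> [{2}, {1, 3}]
    print("stabilizers:", set().union(*stabs))    # node 4 is not a stabilizer
```

the enumeration is exponential in the number of neighbours and is meant only to make the definition concrete ; the appendix referred to below describes a recursive routine for the same task .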
in the above example , and are the only stabilizers of .non - stabilizer nodes can be removed without affecting stabilization , while whenever a stabilizer is removed the number of ways in which a given node is stabilized decreases . in the example of fig .[ fig : example ] , the removal of node would cause complete loss of stabilization of , while removal of or would leave with only a single stabilization .it can be shown that the removal of a stabilizer will never turn a previously non - stabilizer node into a stabilizer , but it might turn some stabilizers into non - stabilizers .note that in a sense stabilizer is more important than or , since it is part of every stabilization of and its removal will thus render a weak node .in fact , one could attach a strength to each stabilizer by keeping track of the number of stabilizations in which it is involved , but , for sake of simplicity , we will not pursue this here . given an nl - em classification with strong nodes , we can immediately identify the stabilizers that are responsible for the crisp classifications .details on how to implement the identification of stabilizers are provided in appendix a.2 .the relation stabilizes induces a directed subgraph on the original network and we will refer to this as the stabilizer subgraph .the relation between two stabilizer nodes is not necessarily of mutual stabilization : a necessary condition for adjacent strong nodes and to mutually stabilize each other is that both and are empty .the connections among strong stabilizers capture the relations between groups in the graph . in that senseone can regard the stabilizers as exemplary members of the groups . in the undirected graphs of figs .[ fig : random ] - [ fig : adj ] the stabilizer subgraph has been superposed .the extension of these concepts to nl - em classifications in directed graphs is similar , details are given in appendix b. the case of nl - em classifications into two groups is particularly simple .denoting the groups as and , a crisply - classified ( strong ) node belongs to either or and a strong node of a given group has to be stabilized against the complementary group .all nodes with non - empty are therefore stabilizers , and if more than one is present all are equivalent , each stabilizing a given node independently from the other stabilizers . moreover , the strong stabilizers are nodes that are stabilized themselves by some of their neighbors which necessarily are also stabilizers . the conditions of eq .( [ eqn : sets ] ) permit only two possible configurations of the stabilizer subgraphs . either strong stabilizers of group to strong stabilizers of their own group , or stabilizers of group connect to those of the complementary group . in the former casewe get a disjoint community like partition ( _ cf . _[ fig : sen ] ) of the stabilizer graph , whereas in the latter case we obtain a bipartite partition ( _ cf .[ fig : adj ] ) . furthermore , the nl - em classification into two groups reveals a simple but meaningful hierarchical structure in the way the different type of nodes in the classification relate .strong ( non - stabilizer ) nodes are nodes for which , so these nodes connect to nodes of both groups ( weak or strong ) , however in order for them to be strongly classified as in one group , let us say , ( ) they can only connect to those stabilizer nodes with the compatible stabilizer classes ( ) . 
in turn, the neighborhood of strong stabilizer nodes with or can consist only of nodes strongly classified as or , respectively .the weak stabilizer nodes are by definition nodes for which , but for which or .thus weak stabilizer nodes can not connect to strong stabilizer nodes , but they can stabilize strong ( non - stabilizer ) nodes . finally , the weak nodes that are neither strong nor stabilizing can connect to strong non - stabilizing nodes and other weak nodes . in this way the connection rules for the strong stabilizers , weak stabilizers , strong nodes , and weak nodes set up a hierarchy of nodes at the core of which are the strong stabilizers .as we observed in the previous section , a node can be stabilized by its neighbors in multiple ways .this redundancy renders classifications robust against disorder introduced by the addition or removal of edges up to a certain point . to illustrate this we consider a benchmark with four communities .the initial network is generated with four disjoint groups of nodes each , with the nodes having on average in - group links .these groups correspond to the four clusters of fig .[ fig : random](a)-(d ) . random links connecting different groups are added to the basic configuration and the number of stabilizers are tracked as a function of the average number of out - group links .[ fig : random ] shows the stabilizers obtained from an nl - em classification into groups at disorder level and . when we find a crisp classification where all nodes are strong stabilizers , meaning that all nodes stabilize and are being stabilized .furthermore , all of them provide complete stabilization information , , with a single stabilizer sufficing to crisply classify a neighbor . since , there is on average 16-fold redundancy in the stabilization of each node .as random connections are added to the network , the four clusters become connected with each other .some of the stabilizers start to stabilize against fewer classes , giving rise to a decrease in the average . in the right panel of fig .[ fig : random_class ] , we have plotted how the average stabilization information decays when increases . in order for nodes with to be stabilizers they have to act in combined action with other nodes , as in the example of fig .[ fig : example ] .thus an increase of the level of disorder causes both a reduction in the redundancy of stabilizations of strong nodes and a shift towards stabilizations by combined action of more than one stabilizer .the increase in disorder eventually leads to a loss of strong nodes , implying that the classification deteriorates . in order to assess the quality of classifications, we use the entropy , as defined in the entropy measures the crispness of a classification .when , all the nodes are strong , while corresponds to case where the classification of the nodes is maximally uncertain .the right panel of fig .[ fig : random_class ] displays as a function of , showing that the crispness of the classification is lost for large .the increase in entropy is closely related to what happens to the different nodes in the classification as edges are added , particularly to the stabilizers .the variation of the number of the different type of nodes with is shown in the left panel of fig .[ fig : random_class ] .as the addition of new edges progresses , some nodes cease to be strong stabilizers .when a node is not a strong stabilizer anymore , it can still remain strong as long as there are other nodes stabilizing it in its neighborhood . 
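a sketch of the four - community benchmark and of a crispness measure is given below , before continuing with the panels of fig . [ fig : random_class ] ; the group size , the average in - group degree and the exact entropy normalization used in the text are not reproduced here , so the numbers and the entropy form are illustrative assumptions .

```python
import numpy as np

def benchmark(groups=4, size=32, k_in=16, x_out=0.0, seed=0):
    """equal-size communities with ~k_in random in-group links per node and
    x_out random out-group links per node on average (sizes illustrative)."""
    rng = np.random.default_rng(seed)
    n = groups * size
    label = np.repeat(np.arange(groups), size)
    same = label[:, None] == label[None, :]
    p_in = min(1.0, k_in / max(size - 1, 1))
    p_out = min(1.0, x_out / max(n - size, 1))
    r = rng.random((n, n))
    up = np.triu(np.ones((n, n), bool), 1)       # draw each undirected edge once
    adj = np.zeros((n, n), int)
    adj[up & same & (r < p_in)] = 1
    adj[up & ~same & (r < p_out)] = 1
    return adj + adj.T, label

def classification_entropy(q):
    """one common normalized crispness measure: 0 for a crisp classification,
    1 when every q_{i,r} = 1/c (the paper's exact definition may differ)."""
    n, c = q.shape
    return float(-(q * np.log(q + 1e-300)).sum() / (n * np.log(c)))
```

feeding the benchmark adjacency matrix to the em sketch given earlier and evaluating the entropy of the resulting q reproduces , qualitatively , the behaviour described in the surrounding text .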
as can be seen in the left panel of fig .[ fig : random_class ] , this is what is happening up to : the number of strong stabilizers decreases while the number of strong nodes rises accordingly .therefore , initially the effect of adding edges is to convert strong stabilizers into strong nodes .most of the nodes remain strong ( stabilizer or not ) , and the classification is essentially crisp with an entropy . with the further addition of edges ,the number of strong nodes starts to decrease as a result of the loss of stabilization , giving rise to the appearance of weak stabilizing and non - stabilizing nodes at .continuing to , the entropy of the classification remains very low because there still is a sizable number of strong nodes supported by a few weak and strong stabilizers ( see panels b and c in fig .[ fig : random ] ) . asfurther edges are added , the number of weak stabilizers starts to decrease as well , and eventually most of the nodes are weak and non - stabilizing , accounting for the quick rise in the classification entropy starting around .we focus now on some empirical examples to show the special role that the stabilizers play in a classification and the type of information that they convey while also highlighting the versatility of our analysis . as explained , classifications into two groups are particularly simple and in this case the stabilizers can be easily identified once a solution of the nl - em clustering is given .this simplicity makes them good candidates to illustrate the properties of the stabilizers .we present first two examples of this type that show the role of the stabilizers and the relations between them .we then turn to a directed network with a classification into groups in order to illustrate a more general situation . ,strong nodes , dem .strong nodes and dem .stabilizers . ]the first example is a network built from the voting records of the us senate .the nodes represent senators that served the full two year term ( ) during which issues were voted . since our aim is to construct a network based on political affinity , we draw an edge between two senators if they voted in the same way at least once .the edges are weighted by the reciprocal of the number of co - voting senators minus one , a common practice for collaboration networks . in this way ,an agreement in minority on an issue has a higher value than that in an unanimous vote , differentiating more clearly close political standings . due to circumstantial quasi - unanimous votes ,the network is initially close to fully connected . 
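the co - voting construction just described can be sketched as follows ; the vote coding is an assumption , and the weight threshold ( whose role is discussed next ) is kept as a parameter .

```python
from collections import defaultdict

def covoting_network(votes, threshold=0.0):
    """votes: dict issue -> dict senator -> vote (e.g. "yea"/"nay").
    every pair voting identically on an issue accumulates 1/(m-1), where m is
    the number of senators casting that same vote; edges with total weight not
    exceeding `threshold` are dropped."""
    weight = defaultdict(float)
    for issue, ballot in votes.items():
        sides = defaultdict(list)
        for senator, v in ballot.items():
            sides[v].append(senator)
        for group in sides.values():
            m = len(group)
            if m < 2:
                continue
            for i in range(m):
                for j in range(i + 1, m):
                    weight[frozenset((group[i], group[j]))] += 1.0 / (m - 1)
    return {e: w for e, w in weight.items() if w > threshold}

if __name__ == "__main__":
    toy = {"bill 1": {"a": "yea", "b": "yea", "c": "nay", "d": "yea"},
           "bill 2": {"a": "nay", "b": "nay", "c": "yea", "d": "yea"}}
    print(covoting_network(toy, threshold=0.4))
```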
a threshold such that edges with lower weights are removed can be introduced , and the resulting networks can be analyzed as the threshold increases .we have applied two - group nl - em to these networks .once the threshold is high enough , the clusters found follow well the divide between democrats and republicans .the instance in which about half of the senators , either republicans or democrats , are stabilizers is displayed in figure [ fig : sen ] .congress roll calls and their derived networks have been extensively studied in the literature .one of the most interesting results is that single votes of a representative can be understood with a low dimensional spatial model ( dw - nominate ) in which a set of coordinates can be assigned to each congressman characterizing his / her political stand on the different issues .since the 90 s the number of dimensions required has been reduced in good approximation to only one that strongly correlates with the congressman s view on socio - economic questions ( liberal vs. conservative ) . in fig[ fig : sen ] , we show the relation between being a stabilizer and the location in the liberal - to - conservative dimension .the stabilizers tend to be the most radical members of the senate who are probably defining the overall position of their groups .this exercise can be repeated on networks obtained with different thresholds .it can be seen that as the threshold increases more and more nodes turn into stabilizers . keeping track of the senators that become stabilizers at different thresholdsallows for a refined exploration of the political spectrum .note in particular that the above results have been obtained by simply looking at the co - voting relation and without considering the vote records in detail , _i.e. , the actual issue put to vote_. in our second example we show how by extracting the sub - graph of stabilizers we can obtain from its structure useful information about what features distinguish a stabilizer node and how the groups relate in a classification .we consider a semantic network in which the nodes are adjectives and nouns occurring most frequently in charles dickens novel _ david copperfield _a relation between any two of these words is established if they occur in juxtaposition . in fig .[ fig : adj ] , we have represented the network , the best nl - em partition in two groups and identified the types of nodes .there turn out to be two sub - groups containing nouns or adjectives only that are strong stabilizers .these two sub - groups bear the responsibility for the classification of remaining words by association .note that the only input to the nl - em method is the network .we are not introducing any bias for the partition in adjectives and nouns .most of the remaining words are well classified .the stabilizers , central to establishing the classification , are the words always occurring in strict combinations like _ true friends _ , never mixing with members of the same group and they form a bi - partite sub - graph of stabilizers as shown in the right panel of fig .[ fig : adj ] .conversely , nonstabilizing nodes are words appearing in mixed roles , such as the word _ little _ in the adjective - adjective - noun triplet _ poor little mother_. 
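a possible sketch of the juxtaposition construction behind this semantic network ; the tokenisation and the restriction to a fixed list of frequent nouns and adjectives are assumptions , since the exact preprocessing is not reproduced in the text .

```python
import re
from collections import Counter

def juxtaposition_network(text, vocabulary):
    """undirected word-adjacency network: an edge is recorded between two words
    from `vocabulary` whenever they occur next to each other in the text,
    weighted by the number of such juxtapositions."""
    tokens = re.findall(r"[a-z]+", text.lower())
    vocab = set(vocabulary)
    edges = Counter()
    for a, b in zip(tokens, tokens[1:]):
        if a in vocab and b in vocab and a != b:
            edges[frozenset((a, b))] += 1
    return edges

if __name__ == "__main__":
    sample = "Poor little mother and her true friends met the poor little boy."
    print(juxtaposition_network(sample,
          ["poor", "little", "mother", "true", "friends", "boy"]))
```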
as obtained from the nl - em classification of the little rock lake foodweb .the maximum number of strong stabilizers occurs around .this number is close to the trophic level which is around , suggesting that a classification into groups might capture the trophic levels.,width=377 ] our final example , showing a more general case with groups , is the little rock food - web .the vertices of this network are species living in the aquatic environment of little rock lake in wisconsin .each directed link represents a predation relation pointing from predator to prey .the number of trophic levels is around four and turns out to be the number of groups for which the nl - em algorithm produces a partition with highest abundance of strong stabilizers , as shown in fig .[ fig : foodstab ] where we have plotted the number of stabilizers of an nl - em solution against the number of groups . a property of the four group classification depicted in fig .[ fig : food ] is that it keeps basal species ( green ) in one group , top predators ( cyan ) in another , and assigns the rest to two different groups based on the prey they feed on at the basal level .the species that are not strong stabilizers , for instance nodes , or , could be related to a missing data problem . in the case of ( _ hydroporus _ ) or ( _ lepidoptera pyralidae _ ) , the species appear only as prey having no connection to lower levels . however , its consumers are not typically feeding on basal species , they are `` cyan '' , and this results in an nl - em classification that assigns them into the `` red '' group .group nl - em classification of the little rock lake foodweb .nodes are species and directed links correspond to predation relations . the node labeling follows .( b ) : fraction of species belonging to each group plotted against their prey - averaged trophic level ( ) and the standard deviation of of their preys , as defined in .the radius of the spheres is proportional to the _ log _ of the percentile .spheres with two colors include species of more than one group ( each sphere or half - sphere is independent ) .( c ) : averages of and over the species forming each group . ] as seen in fig .[ fig : food](a ) , most of the species of the network are strong stabilizers . their abundance is a direct result of the highly structured organization of the foodweb : similar species have similar prey which , as our analysis shows , is also linked to their trophic levels ( see fig . [fig : food ] b and c ) . or more correctly , the consistent choice of species a predator does not prey on is what renders them stabilizers .the possibility of classifying species in low dimensional spaces depending on their trophic level and on the intervality of their prey distribution has been extensively discussed in the literature .our stability analysis reveals an underling structure in the connectivity pattern of the foodweb , which is responsible for the success of these low dimensional models .the maximum likelihood function upon which the nl - em inference method is based is rather generic and depends on the assumption that nodes with similar connections should be grouped together . 
using this likelihood functionwe were able to show that a subset of nodes , the stabilizers , associated with a given grouping play a central role as they form the backbone of the classification which could not be attained without them .the mathematical basis behind the concept of stabilizers is rather intuitive and follows from the product form of the group assignment probabilities , , in eq .( [ eqn : qir ] ) , which is in turn a direct consequence of the assumption that the edges are statistically independent ( eq . ( [ eqn : proba ] ) ) .such an assumption is common to a number of probabilistic clustering methods .we can rewrite eq .( [ eqn : qir ] ) as where ^{\frac{1}{k_j}},\ ] ] so that the prefactors are equally absorbed into .note that is in the interval ] indicate the classes against which node is stabilizing . a recursively called subroutine , givenin fig .a.1 in pseudo - code , performs the task of determining all stabilizations of a strong node , given the sets . in the algorithmswe have assumed that there is already defined a procedure , which takes a list and returns the indices where the list element equals to along with the number of elements found .also in our notation when two lists are operated on term by term we denote this as \gets listone [ * ] \;\ ; { \rm < operator > } \;\ ; listtwo[*] ] has been added . .a generalization of nl - em to directed graphs that preserves structural equivalence was recently provided in our earlier work .we assume that given a node , a link to a node can be either out - going , in - going or bi - directional .we thus introduce the probabilities : * that a directed link leaving a vertex of group connects to node , * that a directed link pointing to a node in group exists from , and * that a bidirectional link exiting from group connects to , and construct the probability of realizing a directed graph as , \label{prob2}\ ] ] , , and are the set of adjacent nodes of from which receives an in - coming , out - going , and bi - directional link , respectively .the likelihood can now be written as , \label{lbardirected}\ ] ] which has to be maximized under the following constraint on the probabilities , implying that there is no isolated node .the probability , that a randomly selected node belongs to group , is again given by .the final result is where , and are the in - degree , out - degree and bi - directional degree of node , respectively .these expressions have to be again supplemented with the self - consistent equation for which now reads note that when we have only bi - directional links so that for all , and it follows from eq .( [ eqn : thetarj ] ) that .thus we recover the undirected em equations eqs .( [ eqn : em ] ) and ( [ eqn : qir ] ) under the identification .the case of directed graphs is similar to the undirected case with a few minor modifications . 
given a nl - em classification of a directed graph , we associate with each node the following four sets : * , the set of groups that does not have an out - going connection to , * , the set of groups that does not have an in - going connection to , * , the set of groups that does not have a bi - directional connection to , * , the set of groups that does not belong to , along with their complements , , , , and . the nl - em equations , eqs . [ eqn : thetarj ] and [ eqn : q_ir ] , relate the sets and to each other as follows : defining the set of all stabilizer classes associated with a node , irrespective of the directionality as the stabilization condition for a node becomes identical to the one for the undirected case . the question of whether a given graph admits a nl - em solution that is a ( strong ) partition of all of its nodes into groups can thus be seen as a coloring problem : we seek a partition of the nodes , i.e. a coloring , corresponding to , such that is satisfied for all .
guimera r , mossa s , turtschi a and amaral lan , _ the worldwide air transportation network : anomalous centrality , community structure , and cities global roles _ , 2005 _ proc . usa _ * 102 * 7794
components of complex systems are often classified according to the way they interact with each other . in graph theory such groups are known as clusters or communities . many different techniques have been recently proposed to detect them , some of which involve inference methods using either bayesian or maximum likelihood approaches . in this article , we study a statistical model designed for detecting clusters based on connection similarity . the basic assumption of the model is that the graph was generated by a certain grouping of the nodes and an expectation maximization algorithm is employed to infer that grouping . we show that the method admits further development to yield a stability analysis of the groupings that quantifies the extent to which each node influences its neighbors group membership . our approach naturally allows for the identification of the key elements responsible for the grouping and their resilience to changes in the network . given the generality of the assumptions underlying the statistical model , such nodes are likely to play special roles in the original system . we illustrate this point by analyzing several empirical networks for which further information about the properties of the nodes is available . the search and identification of stabilizing nodes constitutes thus a novel technique to characterize the relevance of nodes in complex networks . = 1
in wireless communications networks , bonding multiple frequency bands into one virtual channel improves the transmission speed for individual network users at the medium access control ( mac ) layer , as the channel throughput is theoretically linearly dependent on channel bandwidth .practical evaluation of channel bonding concepts has attracted substantial amount of attention , especially from the wireless local area networks ( wlan ) research community .furthermore channel bonding is already present in some wireless networking standards , including ieee 802.11n , ieee 802.11ac and ieee 802.22 .intuitively , in networks utilizing channel bonding , each data flow will be transmitted faster as more channels are used for an individual connection .however , more occupied channels for one user translate into less data transmission opportunities for other network nodes . while this relationship is obvious , its analytical aspects have been relatively unexplored . moreover , to the best of our knowledge , analytical models exploring this tradeoff and potential benefits of channel bonding mac protocolsdo not exist .surprisingly , the channel bonding concept has not been addressed analytically in the context of opportunistic spectrum access ( osa ) network behavior , specifically considering mac layer features ( particularly random access mode ) and pu and su traffic characteristics .as osa networks utilize multiple , potentially non - contiguous channels , with primary users ( pus ) randomly reappearing on their respective channels , it is important to understand the performance of channel bonding mac protocols for osa networks .the objective of the paper is to understand the conditions under which it is beneficial to bond multiple frequency channels for osa networks . to accomplish this, we present a mathematical framework based on a markov chain analysis that enables the performance of channel bonding to be compared against classical osa mac protocols that do not use channel bonding .the framework enables the investigation of average mac level network throughput as a function of the number of osa network users , the number of frequency channels , channel bonding order , traffic level of secondary users ( sus ) , pu activity , and the spectrum sensing method , among many other parameters . the system model in our workis founded upon the model of and historically upon the model of .this has provided for an ease of comparison of our results with earlier models of ( non - bonded ) multichannel mac protocols .however , unique properties of our model ( especially the allowance of channel bonding , which has not been considered in ) resulted in a fundamentally new formulation of the markov chain used in calculating the performance metrics . in the literature related to channel bonding performance , one of the first studies of channel bondingis found in , where mac protocols with channel bonding were advocated for throughput increase .it has been shown experimentally that channel bonding results in a large benefit for wireless networks . in this studyonly a small - scale network , i.e. with less than four nodes , was considered , thus it was unknown if the conclusions hold for large network sizes . 
in the authors present measurement results of channel bonding functionality in ieee 802.11n .they show through experiments that when transmitted power is fixed for wlans with and without channel bonding , the network with bonded channels obtains smaller throughput than a network without this option enabled . to alleviate thisproblem authors propose an algorithm that adaptively merges the channels depending on the perceived link quality , where users with the same link quality are associated with the same access point .the algorithm has been evaluated only through experiments . in another study a wlan system with channel width adaptation was developed utilizing ofdm .a fixed number of channels were used , thus the effect of channel bonding adaptation on the network performance was not investigated . summarizing , the investigation of channel bonding mac protocols was strictly limited to the experimental platforms and it is unknown how channel bonding mac protocols perform in scenarios not considered in the above mentioned studies .a separate group of papers on channel bonding performance in an osa context relates to early analytical studies .the benefit of channel bonding ( in a broader context of frequency agility ) has been evaluated analytically in .this study assumed that the osa network assigns channels on a pre - detected pool of frequency bands , and mac protocols are abstracted and not considered . in a spectrum allocation framework for osa - based wlan networks utilizing dynamically changing channel bonds was presented .first , a static and centralized channel allocation framework was proposed .then a distributed mac protocol utilizing a spectrum allocation scheme called b - smart was discussed , and an analytical model based on a markov chain analysis was used to assess the control channel throughput of b - smart . the model used approximating expressions related to the probabilities of frame collision , frames being idle and successful frame transmission from , which were applicable only to non - modified ieee 802.11 systems with a distributed coordination function . the work in has been extended to wlan networks in , where an analytical framework to evaluate efficiency of channel bonding for non - osa networks with multiple access points has been proposed .however , as in the case of , the framework was applicable to static frequency allocation only , where the specifics of the protocol that network nodes use for communication were not a focus of the approach .the tool to model the problem was based on a linear programming formulation from which linear programming relaxation and a greedy heuristic were developed . a similar approach to analyze channel bondingwas presented in , where analysis of an osa ad - hoc network with static channel allocation for users cooperating with access points was analyzed .non - contiguous channel allocation was allowed with more than one channel assigned to a single su . in summary , all existing analytical tools to evaluate channel bonding are applicable to static frequency allocation and do not consider mac protocol features . 
in the context of our work we need to mention , where a channel bonding feature for ieee 802.22 networks was investigated .however , the proposed model only used approximations to reach a tractable solution , was applicable to static resource allocation networks based on ofdma , and did not consider random access features of the mac protocol .finally , we need to mention experimental osa platforms with channel bonding .one of the first networks utilizing channel bonding for osa network was the whitefi platform , i.e. a fully operational wlan network working in the tv white spaces , which utilized varying size channels for data communications , i.e. 5 , 10 and 20mhz .in whitefi channel bonding was possible only for adjacent channels .performance of the platform was evaluated via simulations and experimentation with a limited set of prototype devices .a description of some of the algorithms and hardware used in whitefi platform was also discussed in .other osa network prototypes utilizing channel bonding can be found in .the so called jello framework fused multiple non - contiguous bands into one virtual channel , which was more flexible in this respect than whitefi .indeed , the authors have shown a benefit of channel bonding , however measured only in terms of disruption rate and residual spectrum use .implemented in a gnu radio , jello was evaluated in a testbed consisting of only a few nodes where the results were compared against simulation . in addition , off - line experimental studies of channel allocation for osa , based on real - life measurement data of pu occupancy , have been presented in . again , while this interesting study clearly showed how networks can benefit from accessing pu occupied channels , and how channel bonding benefits osa networks , there are no analytical results that were presented therein . the rest of the paper is organized as follows .the system model is introduced in section [ sec : system_model ] , while the analytical model is presented in section [ sec : analytical_model ] .numerical results are presented in section [ sec : numerical_results ] and the paper is concluded in section [ sec : conclusions ] .we assume a single hop osa ad - hoc network composed of nodes communicating with each other using multichannel mac with a dedicated control channel ( dcc ) , as discussed first in . each node is assumed to transmit a saturated flow of framed data , where each new frame is generated with probability ., i.e. the connection request probability , governs the collision resolution strategy . a higher value of results in a higher number of collisions on the control channel , while a lower results in fewer collisions leading to a lower channel utilization .furthermore , note that the assumption of traffic saturation for each node follows the classical assumption of many other works that have considered performance analysis of mac protocols .as ( * ? ? ?iii - b ) notes `` ( ... ) saturation throughput is a major performance measure to evaluate mac protocols '' .we refer for example to ( * ? ? ?ii - c ) , ( * ? ? ?iv ) , and ( * ? ? ?5 ) , where such an assumption has been made while analyzing osa mac protocols . 
note that by `` saturated '' traffic we mean traffic which is always sent by every node , however , at non - regular intervals governed by the control channel access probability .nodes contend for the channel access by transmitting a request to send ( rts ) frame via dcc , requesting a connection to other user of the network .the connection request is successful when only one su requests a connection .the connection is established when the intended receiver responds with clear to send ( cts ) frame on dcc when it is not involved in data exchange with other users . at the event of collisionall connection requests are lost and users must contend for the dcc access after a random amount of time .this approach closely mimics the operation of the s - aloha protocol .pu and su transmission is slotted ( sus follow pus slot boundaries ) .the assumption on the slotted transmission of the su and its synchrony with the pu is common in theoretical mac analysis for osa networks , see for example ( * ? ? ?ii ) , ( * ? ? ?2 ) , ( * ? ? ?1 ) , ( * ? ? ?iii - a ) ( * ? ? ?1 ) , ( * ? ? ?ii - a ) . moreover ,the assumption of pu / su slot synchrony is due to conformity with our previous work on multichannel mac protocols for osa networks , i.e. , where the same assumption has been made .this allows for an easy comparison of the results obtained in this paper with the previous results we have obtained on osa multichannel mac protocol performance .furthermore , certain papers , e.g. do assume pu / su synchrony in the practical network scenarios ( in the case of : secondary gsm network use , and in the context of : secondary ieee 802.16 network use ) .then , we emphasize that the pu / su slot synchrony assumption allows for obtaining upper bounds on the throughput in comparison to slot - asynchronous interface , as it has been remarked in .please note however , that our model can indeed be generalized to a non - slotted pu / su transmission .however , this would require further analysis on channel access policies , like those performed in , which are beyond the scope of this paper .each slot is seconds long , with seconds of the spectrum sensing time performed at the beginning of the time slot by osa network users and is equal to a length of one rts / cts frame exchange .frame lengths are geometrically distributed such that the average frame length is time slots , where is the probability of a time slot being occupied by the frame transmission is the probability that during frame transmission next time slot will be empty .this will end transmission and force a sender to contend over a control channel for a new virtual channel for new frame transmission .] , is the theoretical throughput of a single physical channel , is the number of physical channels bonded to form a virtual channel , is the throughput reduction factor for a -bonded virtual channel , where and , described in detail in section [ sec : phy_assumptions ] , and is the size of the frame in bits .as we assume that the control channel throughput ( in b / s ) is the same as the throughput of data channel , i.e. , we can easily calculate the length of the control packet as .in other words , in our model data and control packets , although of different lengths , are sent at the same rate . 
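one consistent reading of the definitions above is that a k - bonded virtual channel delivers k * beta ( k ) * c bit / s , so the mean frame length in slots and the geometric ` continue ' probability follow directly from the frame size and the slot duration . the sketch below makes this reading explicit ; it is an assumption about the elided formulas , and the numbers in the example are illustrative only .

```python
def frame_slot_parameters(frame_bits, c_bps, k, beta_k, slot_s):
    """mean frame length in slots and the geometric 'continue' probability,
    assuming a k-bonded virtual channel carries k * beta(k) * c bit/s."""
    mean_slots = frame_bits / (k * beta_k * c_bps * slot_s)
    q_continue = max(0.0, 1.0 - 1.0 / mean_slots)   # p(frame occupies next slot too)
    return mean_slots, q_continue

if __name__ == "__main__":
    # illustrative numbers: 1 mb frame, 1 mb/s physical channel, 2-bonded, 10 ms slots
    print(frame_slot_parameters(1_000_000, 1_000_000, 2, 1.0, 0.01))
```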
in the present paper , the main difference with respect to the original design of the multichannel mac with dcc ( see again ) lies in assuming that each new connection will utilize out of channels , where is the total number of channels , one channel is used for control data exchange only . ]at time , where is the maximum channel bond order . in the case of less than physical channels available for a new connection , all free channels are used for a frame transmission .we assume a set of accessible pu channels ( which still can be randomly occupied by pus ) is known to the whole network and assigned unique identifiers .the sender , through the rts frame , communicates to the receiver the information on the ordered set of available channels seen from its perspective , just like in ( * ? ? ?4.1 ) ( in the context of continuous frequency blocks ) .next the receiver , through the cts frame , replies with the set of ordered available channels from its perspective . after a successful rts / cts exchangeboth the sender and the receiver switch to the first channels that are common to both of them .we assume that each communicating pair is able to bond non - adjacent frequency bands , with no guard bands considered , similar to .thus , our framework serves as an upper bound on the performance of any channel bonding protocol considering physical layer constraints on the channel bonding process . note that an analysis of the optimal channel bond resource allocation , considering guard bands , were considered recently , e.g. , in . in the remainder of this paperwe refer to this channel bonding mac protocol as flexible channel bonding . each pu channel of bandwidth at time slot is randomly occupied by the pu with probability , where is geometrically distributed .while not all pu traffic is memoryless , refer e.g. to studies of that prove such a statement , many studies reveal that the complete opposite is true .various pu systems can indeed be correctly described by a memoryless process .for example , in , one of the most complete long - term studies of channel occupancy by various wireless systems , the geometric distribution is quite common , constituting almost 60% of pu traffic distributions for systems of interest , including dect , gsm ( 900/1800 ) and 2.4ghz ism .due to a single hop domain analysis we can reasonably assume for tractability that each osa node performs detection individually and the decision on the pu state is the same for each node in the osa network .furthermore , each osa node is equipped with a wideband spectrum sensor to obtain information about all pu channels during the sensing phase .decision on the pu state is prone to errors due to false alarm , , or mis - detection , .assuming rayleigh fading and gaussian noise the respective expressions are given in ( * ? ? ?* eq . ( 1 ) ) and ( * ? ? ?* eq . ( 3 ) ) . for those expressions we will give respective values for the pu channel bandwidth and snr of the pu signal at the su energy detector with detection threshold , resulting in required and , when presenting the numerical resultswe assume that the osa network loses throughput only due to false alarms , and it can successfully deconstruct a frame on the arrival of a pu during a mis - detection . 
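the channel selection step after the rts / cts exchange reduces to an ordered intersection of the two availability lists , as in the sketch below ; keeping the sender s ordering is our assumption , and channel identifiers need not be contiguous since non - adjacent bonding is allowed .

```python
def pick_virtual_channel(sender_free, receiver_free, k):
    """both ends know each other's ordered list of free channels after rts/cts;
    they switch to the first (up to) k channels common to both lists.  if fewer
    than k common channels exist, all of them are used."""
    rx = set(receiver_free)
    common = [ch for ch in sender_free if ch in rx]
    return common[:k]

if __name__ == "__main__":
    print(pick_virtual_channel([2, 5, 6, 9], [5, 6, 7, 9], k=3))   # -> [5, 6, 9]
```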
on the other hand , we assume that the detection of a pu on any of the physical channels constituting one virtual channel causes termination of an entire frame transmission .although this is a very conservative strategy it simplifies the analysis significantly , while also serving as a lower bound on the obtained mac throughput considering frame disruption resolution strategies . in the analysis we consider two cases with respect to virtual channel throughput . in the first casewe assume that throughput of a virtual channel with bonds is exactly , as in , i.e. . in the second case we assume , due to the fixed power that users can emit per virtual channel , the throughput for a -bonded channel is , where and . note that the function can be defined arbitrarily , depending on what physical layer constraint is considered , e.g. , it can mimic the exponential decay of throughput for virtual channels of increasing orders , or it can mimic step - wise throughput reduction with increasing . for comparison in this studywe also consider other multichannel mac protocols .the first protocol is a classic multichannel mac protocol analyzed first in , where bonding is not allowed , i.e. channel bonding protocol with .the second protocol , denoted as -only bonding multichannel mac , is a less flexible version of the main channel bonding mac protocol .that is , when a newly admitted connection sees fewer than free channels , then this connection is blocked and the user contends again for the channel access .this protocol mimics the operation of networks which are able to bond only a fixed number of channels per newly arriving frame .the third protocol is an adaptive version of the considered channel bonding multichannel mac protocol , that changes the order of channel bonding dynamically depending on the network and/or traffic conditions .details of the protocol will be presented in section [ sec : results_adaptive_bonding ] . in the paperwe will study the flexible channel bonding mac protocol via analysis , while -only channel bonding and adaptive channel bonding will be considered via simulations .to calculate network - wide average mac layer network throughput we propose to use a markov chain analysis framework .the model presented here extends the work of to the case of channel bonding .therefore we follow the same naming convention and use the same definition of connection arrangement and termination probabilities ( as the system model in this work follows those of and the earlier model of ) . 
however , the structure of the markov chain , formulation of transition probabilities , introduction of a new function : connection preemption probability , and calculation of performance metrics are fundamentally new and unique to our work .the roadmap of this section is as follows .first , in section [ sec : chain_definition ] we introduce the definition of a markov chain used to analyze our proposed mac protocol .next , in section [ sec : arrangement_probability ] and section [ sec : termination_probability ] we present the definition of the connection arrangement and connection termination probability , respectively , while in section [ sec : overlap_probability ] we introduce the definition of the connection preemption probability .these three functions govern the way connections are established and terminated within the osa network of interest .section [ sec : transition_probability ] derives the complete markov chain transition probabilities ( using the three above mentioned functions ) which are later used in section [ sec : metric_calculation ] to derive the performance metrics of interest . assuming without loss of generality that , i.e. that the osa network needs to compete for limited resources , let denote a state of the markov chain at time , where state , denotes the number of active connections occupying physical channels at time . then, is the set of all possible states for the considered system at time , and is the size of .furthermore , let then the steady state probability vector is obtained from transition probabilities as while the probability that new connections at time are successfully admitted through a control channel while pairs of users actively exchange data on the virtual channel at time is defined as ( * ? ? ?* eq . ( 15 ) ) where , without loss of generality , the common control channel is assumed to be absent from the pu activity .if one wants to assume that dcc is also randomly occupied by the pu , then ( [ eq : arrangement ] ) can be easily replaced with ( * ? ? ?* eq . ( 15 ) and ( 16 ) ) to for .this would mean that every new connection request involves only a single user ( instead of two ) , e.g. a single user requesting connection to an access point .obviously , other definitions of can be used in our work that would reflect the operation of infrastructure - based networks better .for example , one can use ( * ? ? ? * eq .( 3 ) ) , i.e. definition of a prach - like control channel used in 3gpp - based networks . ] .the probability that connections at time finish transmission out of connections using -bonded virtual channels at time , is defined as ( * ? ? ? * eq .( 14 ) ) the probability that existing su connections are preempted by pus is defined as \nonumber\\&\qquad\times q_{c}^{z}\left(1-q_{c}\right)^{m - z } , \label{eq : pxyt}\end{aligned}\ ] ] where ( * ? ? ?* eq . ( 13 ) ) is the state of the pu observed by the su network results from a certain energy detection process for a required ; see section [ sec : primary_user_detection ] for details .] , is the total number of pu occupancies on physical channels at time , , with given in ( [ eq : transition ] ) , is the number of pu occupancies on physical channels not used by su pairs , ] .furthermore , let , where is the element of .we consider the following three subcases .first , if and , there are terminations possible due to overlap , i.e. including terminations according to the rows of with both none and an additional excess connection generation , i.e. 
second , if and and , there are terminations possible due to overlap and although free channels are available , an increase in connection does not occur .that is .\label{eq:3b}\end{aligned}\ ] ] in ( [ eq:3b ] ) the logical argument in the outermost indicator function is used to determine if there are enough pus present for preemption in the event that an additional connection generation occurs and terminations are insufficient .the logical argument in the innermost indicator function is used to determine if terminations for -bonded connections alone are sufficient to reach the desired end state . and finally , if and , there are free channels and there exists one and only one additional connection generation the following transition probability is used , i.e. .\end{aligned}\ ] ] note that as in ( [ eq:3b ] ) the indicator function is used to determine if preemptions combined with terminations are sufficient to reach the desired end state .similar to case described in section [ sec : case_3 ] , this case covers the event in which a termination occurs , however there are no preemptions possible because there are no pus occupying the system in the end state .that is for the purpose of simplifying the notation we introduce the following supporting function .\end{aligned}\ ] ] then ^(1)r_t^(t+1)= [ eq:4_1 ] ( s_a^(0 ) ) , & [eq:4_1a ] + ( s_a^(0)+s_a^(1 ) ) , & [eq:4_1b ] + ( s_a^(1 ) ) , & [eq:4_1c ] + 0 , & [ eq:4_1d ] , where and .the first two conditions , i.e. ( [ eq:4_1a ] ) and ( [ eq:4_1b ] ) , represent the non - edge and edge cases , respectively , for when there are no additional generations in the system , i.e. the terminations subtracted from the current state is equal to the end state . the third condition ,i.e. ( [ eq:4_1c ] ) , represents the case in which there is an additional generation accompanying the one or more terminations in the current connection state .furthermore in ( [ eq:4_2 ] ) there are additional free channels , but there is no increase in any of the bonded channels . in this casean additional connection generation in the -bonded channel is terminated by adding a to the term of the termination vector row . given the steady state matrix we can compute the performance metrics of the system of interest .the system throughput is calculated as note that the channel utilization can be calculated as also note , that in ( [ eq : throughput ] ) and ( [ eq : utilization ] ) both metrics are multiplied by the factor , due to incurred spectrum sensing overhead . to conclude , in the context of our model the metrics of ( [ eq : throughput ] ) and ( [ eq : utilization ] ) are the fundamental descriptors of the channel bonding mac performance in osa . in the next section we will present numerical results to gain insight on the operation of such a protocol based on these two metrics .we present a set of numerical results to evaluate channel bonding for osa from a mac layer perspective .first , in section [ sec : result_pu_impact ] , we assess the impact of pu activity level , introduced in section [ sec : primary_user_detection ] , on the system throughput . secondly , in section [ sec : resutls_ch_pool ] , we investigate the effect of the ratio of channel pool size to the total number of osa network users on the system throughput . in section [ sec : results_virtual_throughput ] , we investigate the effect of individual virtual channel throughput , , introduced in section [ sec : phy_assumptions ] , on total system throughput . 
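as a purely computational aside to the model above , the following python sketch shows how a steady - state vector can be obtained from a finite transition matrix and how a sensing - overhead - weighted throughput metric can then be evaluated ; the toy three - state chain and the per - state reward vector are placeholders and do not reproduce the transition probabilities derived in this section .

# minimal sketch: steady-state distribution of a finite markov chain and a
# sensing-overhead-weighted throughput metric. the transition matrix and the
# per-state reward below are illustrative placeholders, not the exact model.
import numpy as np

def steady_state(P):
    """solve pi P = pi with sum(pi) = 1 for a row-stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def average_throughput(pi, reward, slot_len, sensing_time):
    """weight the per-state throughput by the steady-state probabilities and by the
    fraction of the slot that remains after spectrum sensing."""
    return (1.0 - sensing_time / slot_len) * float(np.dot(pi, reward))

if __name__ == "__main__":
    P = np.array([[0.7, 0.3, 0.0],
                  [0.2, 0.6, 0.2],
                  [0.0, 0.4, 0.6]])      # toy 3-state chain
    reward = np.array([0.0, 1.0, 2.0])   # e.g. number of busy virtual channels
    pi = steady_state(P)
    print(pi, average_throughput(pi, reward, slot_len=10e-3, sensing_time=1e-3))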
then , in section [ sec : results_pu_distruption ] we consider the impact of a virtual channel disruption strategy on system throughput , followed by studies on adaptive channel access control in section [ sec : results_fairness ] . in section [ sec : results_adaptive_bonding ] we discuss the construction of an adaptive channel bonding mac protocol .finally , in section [ sec : non_geometric ] we present results on the effect of non - geometric pu duty cycle distribution on the performance of channel bonding . in the subsequent sections we focus on the throughput metric solely , since channel utilization is directly proportional to . since the issues of random access phase and the effect of spectrum sensing layer were investigated in earlier studies of multichannel mac protocols , we do not explore them here . for the sake of clarity and without loss of generality , we present the results by grouping them into two network scenarios unless otherwise stated : ( a ) pu channel pool , number of users , and frame size kb , representing a small - scale network ; and ( b ) , , kb , representing a large - scale network . the remaining parameters common to both scenarios are ( unless otherwise stated ) : control channel access probability ( as in ) , physical channel throughput kb , slot length , sensing time , throughput reduction function ( perfect bonding scheme ) , pu signal received power at each su detector , detection threshold and bandwidth for which and .we have tested the performance of channel bonding schemes for realistic bonding order , i.e. , as investigated in , e.g. , .we evaluate the impact of pu activity on the system throughput .the results are presented in fig .[ fig : impact_qp ] .the channel bonding scheme considered in the analysis is represented in fig .[ fig : qp_a ] and fig .[ fig : qp_b ] as ` a ' . for comparisonwe present the results for the -only channel bonding scheme ( represented in fig .[ fig : qp_a ] and fig .[ fig : qp_b ] as ` n ' and described in section [ sec : comparison_protocols ] ) , where a new connection is dropped if and only if the number of available pu channels is less than .we observe that the impact of pu activity level on system throughput is strictly non - linear for every channel bonding order and network size .this behavior is due to the increased impact of pu preemptions on the active connections using virtual channels .furthermore , increasing pu activity results in less throughput reduction .this is because for every channel bonding order just one active pu on any physical channel constituting a virtual channel causes disruption .our result closely resemble simulation results presented in ( * ? ? ?11 ) , where with an increase in the number of access points generating background traffic on tv uhf channels the throughput of whitefi platform decreases semi - exponentially .both fig .[ fig : qp_a ] and fig .[ fig : qp_b ] present similar curve shapes .the largest impact is observed for low values of pu activity , i.e. . most importantly , we see that for these two network scenarios , small- and large - scale , the benefit of channel bonding diminishes as the pu activity increases . 
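the disruption effect discussed above can be made concrete with a small back - of - the - envelope python sketch : since a pu appearing on any physical channel of a -bonded virtual channel preempts the whole frame , the per - frame survival probability shrinks geometrically in both the bonding order and the frame length . the per - slot pu appearance probability and the frame length in slots used below are arbitrary example values , not the simulation settings .

# sketch: probability that a frame on a v-bonded virtual channel survives, when a
# single pu appearance on any of the v physical channels preempts the whole frame.
# per-slot pu appearance probability q and frame length in slots are assumptions.
import random

def survival_analytic(q, v, slots):
    return ((1.0 - q) ** v) ** slots

def survival_monte_carlo(q, v, slots, trials=20_000):
    ok = 0
    for _ in range(trials):
        if all(random.random() >= q for _ in range(v * slots)):
            ok += 1
    return ok / trials

if __name__ == "__main__":
    for v in (1, 2, 4):
        print(v, survival_analytic(0.05, v, 10), survival_monte_carlo(0.05, v, 10))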
for the small - scale network , the non - bonded system always outperforms the systems using channel bonding schemes . furthermore , we observe that -only bonding with yields the lowest throughput , irrespective of the pu activity level , and this difference is largest for low values . note that both the flexible and -only schemes exhibit the same performance for for the entire range of pu activity because each scheme bonds the same number of channels regardless of how many of them are available . this is true for any scheme in which is perfectly divisible by . for other bond orders , flexible and -only bonding schemes converge to the same value with an increase in . in contrast to the small - scale network scenario , we observe a benefit for channel bonding in comparison to a non - bonded system for in the large - scale network scenario . but as increases we observe that the benefit of bonding diminishes just as in the small - scale scenario , and eventually the system experiences a slight degradation in performance as compared to the non - bonded system . this degradation is attributed to an increase in pu activity , which leaves more channels unutilized due to more preemptions of bonded connections by pus . finally , the analytical results obtained using the proposed mathematical model have been verified via simulations implemented in matlab . we observe a perfect match between simulation and analytical results , as seen in fig . [ fig : qp_a ] and fig . [ fig : qp_b ] , where the analysis is marked as ` an ' . this validates the correctness of the analytical model . to explore the performance of the protocol with a wider set of parameters than in section [ sec : result_pu_impact ] , we investigate the channel bonding protocol with two additional values of the time slot length , i.e. ( the time slot length in the hsdpa standard ) and ( the time slot length in the mobile wimax standard ) ( * ? ? ? iv - a ) , two different pu activity levels , ( typical values for moderately used pu channels ) , and two network sizes , small with and large with . the other parameters are the same as in section [ sec : result_pu_impact ] . we divide the results into two groups : fig . [ fig : mn_a ] and fig . [ fig : mn_b ] present results for a small - scale network and large pu activity , and fig . [ fig : mn_c ] and fig . [ fig : mn_d ] present results for a larger - scale network and smaller pu activity . interestingly , as the number of users in the system increases , certain channel bonding orders yield better results than others for a fixed pu activity level . this phenomenon is scenario - dependent . in the case of the small - scale network , when the number of users per channel is relatively small , i.e. , the system with outperforms the system with no bonding , but only in the case of time slot and a small osa user population size , compare fig . [ fig : mn_a ] with fig . [ fig : mn_b ] . as the number of users exceeds 12 , the throughput of the system without channel bonding begins to exceed that obtained with higher bond orders for both small and large time slot lengths , compare again fig . [ fig : mn_a ] with fig . [ fig : mn_b ] . this is attributed to the increased number of collisions during the random access phase that higher bond order systems experience as compared to the non - bonded system . this occurs because more su pairs remain unconnected .
in other words ,a smaller number of connected su pairs , each occupying more resources with higher bond orders , is worse off than more connected su pairs , occupying fewer resources , for the non - bonded system .this phenomenon becomes more exaggerated as the number of users in the system increase . in the case of the large - scale network , see fig .[ fig : mn_c ] and fig .[ fig : mn_d ] , the non - bonded system results in a drastically reduced throughput in comparison to the other bonded systems . with a small number of users in the system and bond order , throughput is maximized but again only for the case of time slot , see fig . [fig : mn_d ] . as the number of users increases , the system with lower bond orders obtains higher throughput , as seen at when curve exceeds ( fig .[ fig : mn_c ] with a small time slot length ) and at ( fig .[ fig : mn_d ] with larger time slot length ) .the intersection of the throughput curves for the non - bonded system and the scheme with occurs at a very large number of users because the rate in which throughput curves saturate is slower for networks with large channel pools .our analytical tool demonstrates the importance of determining channel bonding performance , since the intersection between different channel bond orders is strictly dependent on the scenario and is very difficult to deduce intuitively .again , as in section [ sec : result_pu_impact ] , the analytical results have been verified via simulations . a very good match between simulations and analysisfurther confirms the accuracy of the proposed mathematical model .we investigate the effect of virtual channel throughput on as shown in fig .[ fig : impact_beta ] , where we investigate as a function of the su frame size .it has been suggested that the virtual channel throughput for higher bond orders is less than that of lower bond orders due to the increase in overhead when bonding .we select three functions , as an example , to govern the way that virtual channel capacity is determined .these are : ( a ) perfect virtual channel throughput , for which ( used in earlier two sections ) ;( b ) residual decrease in virtual channel throughput for which , ; and ( c ) large decrease in virtual channel throughput for which , . the function has been selected such that it decreases throughput exponentially and it penalizes channels with higher bond orders more than those with lower bond orders can be selected for evaluation .the particular function in this paper was considered to represent the typical wireless transmission scenario . ] . as the frame size becomes larger , the system throughput saturates .this observation is common for all considered scenarios , channel bond orders and virtual channel reduction factors .note that the impact of frame length on system throughput in the context of non - opportunistic spectrum access multichannel mac has been explained in much larger detail in ( * ? ? ?4.4 ) , with such discussion later extended to opportunistic spectrum access multichannel mac in ( * ? ? ? * sec .5.2.2 ) and ( * ? ?iii - e1 ) .we refer to these papers for a longer explanation of this phenomenon . to summarize these findings , with longer framessus occupy pu channels for a longer time on average resulting in an improved system throughput . 
furthermore, as frames / packets get longer the sus do not have to contend for resources as often via the control channel , therefore the number of collisions on the control channel become smaller , which again translates into improved system throughput .the increase in system throughput with an increase in frame size occurs despite the increase in collision probability between pu and su transmissions ( note that the per time slot probability of collision between pu and su transmission is independent of su frame / packet size ) . when frame sizes are relatively small , in the range of [ 10,20)kb, the impact of a residual penalty for virtual channels on system throughput does not drastically affect the performance as expected. however , with a larger penalty , system throughput performance greatly deteriorates , proving no additional benefit of channel bonding in an osa network .the difference between system throughput with and is more profound for the small - scale than for the large - scale network scenario , compare fig .[ fig : beta_a ] and fig .[ fig : beta_b ] . irrespective of the scenario , a bond order of ( i.e. non - bonded system ) outperforms the other bonding schemes for all su frame lengths for every penalty factor. please refer to the earlier explanation in section [ sec : result_pu_impact ] . in another set of experiments ,we investigate the effect of increasing the severity of the penalty in on virtual channels by observing the overall throughput for different channel bond orders and two different pu activity levels , with results presented in fig .[ fig : impact_aincr ] .we observe an approximately linear decrease in system throughput with an increase in the penalty factor . for a small - scale network ,see fig .[ fig : anicr_a ] , as well as another network which is twice the size of the small - scale network , see fig .[ fig : aincr_b ] , even a very small penalty for the virtual channel results in system throughput loss as compared to the non - bonded system , for example compare and for and both values of . in the small - scale scenario , the throughput of higher bond orders is worse than that of the non - bonded system , as shown in fig .[ fig : qp_a ] . however , in the case of the larger network , see fig . [fig : aincr_b ] , the non - linearities of virtual channel throughput are better accommodated when pu activity is low ( again , in our case ) .hence the bonded systems provide for an improvement in throughput in the range of .this proves that there is a benefit of channel bonding even when the osa network is unable to exploit the full theoretical capacity provided by the virtual channel .finally , we add that ( not shown in the fig .[ fig : impact_aincr ] due to space constraints ) as the pu activity increases , the effect of the penalty function becomes marginal as preemption dominates incoming su connections .thus far we assume that once the pu is detected on any of the virtual channels the transmitted frame is lost .this approach serves as a lower bound on the channel bonding mac protocol performance , refer to section [ sec : primary_user_detection ] for details . 
in this section we will relax this assumption and consider an example of a more flexible frame disruption resolution strategy . specifically , we extend our simulation platform and consider a strategy denoted as channel switching ( s ) , where on the arrival of the pu ( on any of the physical channels ) the osa network frame is switched to another set of free pu channels ( which , being a function of the bonding size , is kept constant throughout the whole network operation ) . in other words , any frame transmission that uses -bonded channels must be switched to another set of unoccupied physical channels . thus , the frame is lost only when not enough physical channels are found for switching . as the s strategy is equivalent to b in , we refer to ( * ? ? ? * theorem 1 ) , which proves that distributed channel switching can be performed by the osa network without additional signaling transmission . furthermore , ( * ? ? ? 4.3.2 ) describes in detail how the osa multichannel mac protocol with dcc resolves collisions when multiple sender / receiver pairs decide to switch to the same pool of physical channels . for notational convenience , in the remainder of this section the fundamental frame disruption resolution strategy ( considered in the analysis ) is denoted as a. in this section , in addition to throughput we investigate the collision probability , , defined as the probability of using a physical channel when the pu is present . note that due to the channel switching process , to keep the comparison with the flexible osa channel bonding mac fair , we virtually prolong the frame length by the channel switching delay such that . the results are presented in fig . [ fig : impact_m_col ] where for fixed , , and two values of and we observe the performance metrics of interest . it is immediately observed that the osa network with channel switching ( the s strategy ) performs much better than with the a strategy , see fig . [ fig : m_thr_qp01 ] and fig . [ fig : m_thr_qp03 ] ( the increase is orders of magnitude in comparison to strategy a ) . on the other hand , we also observe a large increase in the respective collision probabilities with the introduction of strategy s , see fig . [ fig : m_col_qp01 ] and fig . [ fig : m_col_qp03 ] . this result clearly shows that there is a trade - off between frame disruption resolvability and collisions induced to pus by osa operation . the larger the osa frame and the larger the pu activity per channel , the larger the collision rate , see fig . [ fig : m_col_qp03 ] where almost 2% collision probability is reached ( which is again orders of magnitude larger than with the a strategy ) . interestingly , the collision probability decreases as increases . this is because with a large channel bonding pool , there is a lower probability that free channels can be found for switching and the frame is dropped , in turn reducing the probability of further collisions with the pu . we want to emphasize that with the s strategy , the conclusions obtained for the osa channel bonding mac with the a strategy still hold . in particular , the order of bonding preference is preserved and in most cases outperforms other bonding schemes in terms of throughput . in previous sections we have assumed that the channels are used equiprobably by each pu .
in this case, the osa network channel selection strategy does not have an effect on the mac protocol performance as each selected channel has the same probability of being disrupted by the pu .however , as the channels become non - uniformly used by pus , the selection of a successive channel for each new bond might have a profound effect on the osa network performance .therefore in this section we investigate the effect of channel selection strategies on the performance of the channel bonding mac protocol with a non - uniform distribution of pu activity over the pu channel set .intelligent channel bonding strategies , assuming side - note information on pu channel properties were investigated in ( * ? ? ?5.2 ) ( denoted therein as frequency bundling ) , based on a large data set of measured pu channels .we will also consider such a system in the context of our work .specifically , we will assume that the osa network is aware of non - uniform pu channel usage and knows the first moment of pu channel distribution for every channel . as an example , for mathematical tractability and without loss of generality , we assume that channel is used by pu with probability where denotes the per channel pu activity imbalance factor . note that denotes equal channel utilization by the pu for each channel and denotes an exponential increase in channel utilization with increasing .equation ( [ eq : chan_imb ] ) allows for a fair comparison of different channel selection strategies against uniform pu channel usage , as the mean of set for any is equal to the mean channel utilization in the case of uniform pu activity over all pu channels , i.e. .note , however , that in ( [ eq : chan_imb ] ) must be selected such that it does not violate pu channel usage properties , i.e. . in the simulation experiment we consider two channel selection strategies : random ( r ) channel selection and least - used ( l ) channel selection . in case of the l channel selection strategy the osa network , knowing how channels are used by the pu , always selects channels that are on average lowest used by the pu for each new bond ( to minimize probability of collision with the pu and prolong transmission time ) .apart from observing the osa network throughput , for a considered distribution of channel usage by the pu , we also observe how uniformly channels were used by the sus in the osa network .for that we adopt a common fairness indicator denoted as jain s index , defined as ( * ? ? ?3 ) where is the usage of channel by the osa network .we present the results in fig .[ fig : impact_aq ] .the l channel selection strategy significantly outperforms the r channel selection in terms of obtained throughput , see fig .[ fig : aq_thr_small ] and fig .[ fig : aq_thr_large ] .the gain from using the l channel selection increases as becomes larger .this phenomenon holds irrespective of the channel bonding size and is more evident as network size increases .an obvious result is that for the l channel selection strategy the fairness is much lower than that of the r strategy since the r strategy has almost a perfect fairness of 1 , especially for the small network scenario ( see fig .[ fig : aq_fai_small ] and compare with fig .[ fig : aq_fai_large ] ) .however , a more interesting ( non - obvious ) result is that with an increase of bonding size the fairness of the l strategy increases . 
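the two ingredients of this comparison , a skewed per - channel pu activity profile and jain 's fairness index over the channel usage of the osa network , can be sketched in python as follows ; the exponential weighting used to build the activity vector is an illustrative choice that only preserves the mean activity and is not claimed to be identical to eq . ( [ eq : chan_imb ] ) .

# sketch: skewed per-channel pu activity (imbalance factor a) and jain's fairness
# index over the channels used by the osa network. the normalisation of the
# activity vector and the usage counts below are illustrative assumptions.
import numpy as np

def skewed_pu_activity(mean_q, m, a):
    """per-channel pu activity; a = 0 gives a uniform vector with mean mean_q,
    larger a concentrates the activity on a few channels (mean preserved before
    clipping to the valid probability range)."""
    w = np.exp(a * np.arange(m))
    q = mean_q * m * w / w.sum()
    return np.clip(q, 0.0, 1.0)

def jains_index(usage):
    usage = np.asarray(usage, dtype=float)
    return usage.sum() ** 2 / (len(usage) * (usage ** 2).sum())

if __name__ == "__main__":
    q = skewed_pu_activity(0.1, 8, a=0.5)
    print(np.round(q, 3), round(float(q.mean()), 3))
    print(jains_index([10, 10, 10, 10]), jains_index([40, 0, 0, 0]))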
the increase in the fairness of the l strategy with the bonding size is due to more channels being used by the mac protocol simultaneously with a larger bonding order . furthermore , for fairness decreases as increases . as the pu activity distribution becomes more skewed , i.e. becomes large , the fairness of both the l and r strategies converges . this is especially visible in fig . [ fig : aq_fai_large ] in the case of a large osa network , because most of the pu activity is concentrated over a small pool of channels , while the other channels are only sporadically used by pus . interestingly , varying the channel selection strategy does not change the conclusions obtained earlier , e.g. reaches the highest throughput followed by and then .
in addition , we consider a fundamentally different channel bonding osa mac protocol , where priorities within the su network are allowed . in particular , we consider a flexible channel bonding mac , where each su is able to send two types of frames : ( i ) regular and ( ii ) high priority . the priority - enabled flexible channel bonding mac works as follows . each newly generated su frame is a high priority frame with probability . an active , regular su connection is halted and displaced on the arrival of a high priority su connection when all channels are already occupied by active su connections . a set of halted connections in total is allowed , where refers to the size of the buffer . note that while in real life the most realistic setup for the osa network is ( as buffering occurs by halting the existing connection of one su transmitter / receiver pair by means of observing connection requests on the control channel ( * ? ? ? 4.3.3 ) ) , in the numerical evaluation we will also consider the effect of on the system throughput . furthermore , for consistency with earlier results presented in this paper we do not consider buffering of su connections in the event of preemption by a pu connection , noting that a detailed treatment of multichannel ( non - bonding ) osa mac protocols with frame buffering , considering pu preemptions , has already been given in ( * ? ? ? ) . if more than one channel is freed from pu occupancy or su transmission ( of any kind ) , then one of the buffered connections resumes transmission until either the frame is successfully transmitted or it is preempted by the presence of a pu . the connections to be resumed are selected randomly from the pool in the buffer , and per time slot one connection reconnect is possible ( for consistency with the connection arrangement policy on the control channel ) . if during a single time slot a new connection wants to access the channel pool and there is an existing connection to be resumed , the new connection gets priority over the buffered connection only if it is a high - priority connection . for regular connections , the buffered connection gets priority to connect to the channel . furthermore , note that an existing connection in the buffer does not contend again for channel access through the control channel . when one of the su connections needs to be buffered , it is also selected randomly from the set of existing connections . in a practical osa network this can be done by , e.g. , implementing a rule of buffering first the connections of the su sender that has been buffered the most so far ( each su can easily track which connections were buffered by observing the control channel , refer to ( * ? ? ? * theorem 1 ) by analogy ) . we consider fairness by observing simultaneously , for one ( small scale ) network setup : ( i ) the throughput obtained for the complete osa network and ( ii ) the average number of low priority su frames ( as a fraction of total su frames ) that are displaced to the buffer , both as a function of . the results are presented in fig . [ fig : impact_priority ] for three bonding orders , and two different buffer sizes , . observing the first metric , we see that the system throughput has a hyperbolic shape , maximized around and equal to the system throughput with at , since it is at these two points that buffering is not effective . that is , when there are no high - priority connections ( ) or when every connection is high - priority and can not be displaced ( ) . we observe that the buffer size does not help in obtaining higher throughput for a given value of . furthermore , an increase in results in a decline in throughput due to a decrease in the bond adaptivity of each generated connection . the second metric , i.e. the fraction of total packets buffered presented in fig . [ fig : ph_prm_small ] , is an important indicator of the performance trade - off for the system . as the fraction of total buffered connections ( ) increases , more connection disruptions occur for regular su connections . furthermore , we observe a trade - off with respect to since , for each curve , the highest fraction of buffered packets occurs approximately in the range of , causing sus to incur the maximum delay in reconnection . when observing the frame buffering rate as a function of , we see that the considered mac protocol is the most fair for , demonstrating that the overall increase in system throughput incurs a penalty to regular connections . the shape of the packet buffering function is different for different combinations of and . that is , the smaller the , the more skewed towards higher the function is . with higher the effect of increasing and its related increase in the fraction of buffered packets becomes more profound ( conversely , observe that the curves for ` , ' and ` , ' completely overlap ) . the main message is to keep for the same value of , as the proposed mac protocol obtains the same throughput for the same irrespective of ( see again fig . [ fig : ph_thr_small ] ) , while the packet buffering rate increases as tends to . mean waiting times for the buffered connections are approximately equal for each case considered , specifically : ( i ) 4.59 slots for , , ( ii ) 4.55 slots for , , ( iii ) 4.73 slots for , , ( iv ) 4.49 slots for , , and ( v ) 4.65 slots for , . in the following study we extend the operation of the principal ( flexible ) channel bonding multichannel mac protocol and the -only channel bonding multichannel mac protocol with specialized channel adaptation features . in other words , the bonding order is adapted in an effort to maximize su throughput . the motivation for such an adaptation can be seen in fig . [ fig : impact_mn ] , where , depending on the user pool size , , certain bonding orders obtain higher throughput than others . let us assume that within a time interval the pu / su traffic parameters are stationary . then we can define the following optimization function for time interval , necessary to find the optimal bonding scheme : where refers to the optimal bonding order at a specific value of for time interval , represents the maximum bonding level allowed , while and denote the osa network - wide required probabilities of false alarm and mis - detection , respectively .
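the adaptive rule behind this optimization can be approximated by a brute - force search over the admissible bonding orders , as in the hedged python sketch below ; the toy throughput model merely stands in for the markov - chain analysis of section [ sec : analytical_model ] and is not the actual objective function .

# sketch of the adaptive bonding search: for each stationarity interval, pick the
# bonding order that maximises an externally supplied throughput estimate.
# throughput_model is a stand-in for the full analytical model.

def best_bonding_order(throughput_model, q_p, v_max):
    """brute-force search over admissible bonding orders 1..v_max."""
    return max(range(1, v_max + 1), key=lambda v: throughput_model(v, q_p))

if __name__ == "__main__":
    # toy model: bonding helps at low pu activity and hurts at high pu activity
    toy = lambda v, q_p: v * (1.0 - q_p) ** v
    for q_p in (0.05, 0.1, 0.2, 0.4):
        print(q_p, best_bonding_order(toy, q_p, v_max=4))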
note that is a function of all parameters considered in ( [ eq : optimization2])([eq : optimization3 ] ) , while the framework to derive is presented in section [ sec : analytical_model ] . to explore the effectiveness of the proposed optimization framework , we consider two network scenarios identical to those considered in section [ sec : result_pu_impact ] , i.e. , a small and a large network scenario , with , and .all other parameters remain the same as in section [ sec : result_pu_impact ] except that pus change their activity level in the course of the simulation from in increments of to .a small range of values were chosen in accordance with fig .[ fig : impact_qp ] where it is observed that higher bond orders have better performance at .furthermore , as a bonding - dependent parameter we consider pu activity . during each of the above - mentioned time intervals the bonding order is assumed to remain constant .we assume that denotes the number of time intervals considered , while each in the bonding scheme is the bonding order for an associated value of .note that we do not adapt on su traffic conditions as they are more difficult to estimate in real - life network operation than pu statistics .for example , length of su frames are not known a priori to the receiver ( such information is not sent in the rts frame in our system model ) , thus other nodes can not estimate for how long a virtual channel will be occupied by a certain sender / receiver pair just by listening to a dcc . in fig [ fig : adapt_a ]we present the su throughput performance for and three different bonding schemes for a small network scenario and corresponding values : an optimal bonding scheme of , obtained via solving ( [ eq : optimization1])([eq : optimization2 ] ) , and for comparison and .we immediately notice that bonding , i.e. for any value of , does not provide any gains and is in fact more harmful in terms of su throughput .this is attributed to the bonding order being nearly equal to the channel capacity of the system , , therefore many incoming su connections that are granted access through the control channel are blocked .furthermore , the collision resolution strategy used in the system makes it unfavorable for higher bond orders to maintain a connection because if a pu happens to occupy any single physical channel in a bonded virtual channel the entire frame is preempted , as described in section [ sec : primary_user_detection ] . in fig [ fig : adapt_b ]we represent the su throughput performance in a large network scenario , for the respective values , considering optimal bonding scheme of , again obtained via solving ( [ eq : optimization1])([eq : optimization2 ] ) , and for comparison and bonding schemes . in the large network scenario , therefore a few bonded connections do not occupy the entire system bandwidth making it possible for multiple bonded connections to be present .this phenomenon allows the adaptive bonding scheme to utilize higher bond orders in time intervals in which in order to maximize the usage of channels .this is clearly visible as the adaptive bonding schemes and outperform the single - bonded scheme .furthermore , outperforms as it is able to reduce the bonding order with an increase in pu activity level .in addition , as concluded in section [ sec : result_pu_impact ] , the -only bonding scheme performs worse than the flexible channel bonding scheme for small - scale networks .this is because of the possibility of using a fixed number of channels , i.e. 
in this particular case , for each connection is smaller then that of a large - scale network ( notice the large difference in performance for the two protocol types in fig .[ fig : adapt_a ] ) .however , as the system size increases , the difference between the flexible and the -only channel bonding mac diminishes because there is a higher probability -order connections are made due to increased contentions on the control channel . in the case considered in this paper ,the difference between the two protocol options is almost negligible , see fig .[ fig : adapt_b ] .so far we have assumed that the pu traffic is distributed according to the geometric distribution . in this sectionwe relax this assumption by considering the impact of a heavy - tailed distribution on the channel bonding mac protocol performance , in the same way as it was considered in ( * ? ? ?5.2.6 ) for the non - bonding multichannel mac protocols . in particular, we assess how the long - tail distribution of the pu activity affects the performance of the flexible channel bonding mac protocol . as an example of a long - tail distribution we use the log - normal distribution , as it was concluded in (* table 3 and 4 ) that measured pu occupancies can be described by such a distribution in approximately 40% of the cases for gsm 900 uplink , gsm 1800 downlink , dect and 2.4ghz unii channels .we present the protocol performance for four different combinations of `` on '' and `` off '' time distributions , considering geometric and log - normal distribution , i.e. all possible combinations of `` on '' and `` off '' times obtained in ( * ? ? ?* table 3 and 4 ) .they are denoted symbolically as : ( i ) ee , ( ii ) el , ( iii ) le and ( iv ) ll , where ` e ' and ` l ' denote the geometric and the log - normal distribution , respectively , while the first and second position within the two - letter expression denote `` off '' and `` on '' times , respectively .just like in ( * ? ? ?5.2.6 ) , the parameter of the log - normal distribution was selected such that its mean value was equal to for the `` on '' time and for the `` off '' time .because the log - normal distribution is continuous , it was rounded to the nearest integer , with the scale parameter and location parameter , where , is the mean and variance of the resulting discretized log - normal distribution .note , as in ( * ? ? ?5.2.6 ) , that the variance of the resulting discretized log - normal distribution is equal to the variance of the geometric distribution for the same mean value .the results are presented for two types of networks denoted symbolically as small and large ( see fig [ fig : impact_qp ] for details ) , respectively in fig [ fig : logn_m4n12 ] and fig .[ fig : logn_m12n40 ] .the values of pu activity , , represent low activity rates , which are of the most interest for the network designer .furthermore , for the sake of clarity of the presentation we consider only two bonding orders , i.e. .first , we observe that the impact of log - normal pu channel occupancy distribution on the system throughput is not large as one might expect ( for example for and the large network , fig . 
[ fig : logn_m12n40 ] , the difference between the ee and le cases is less than 7% ) . if one wants to select the combination of pu distributions that maximizes system throughput , it would be le followed by ll . the impact of the non - geometric distribution on throughput is more visible for large networks than for smaller ones , compare fig . [ fig : logn_m4n12 ] with fig . [ fig : logn_m12n40 ] , as in large networks there are more opportunities to exploit a larger number of channels for transmission . all bonding orders are affected equally by the non - geometric process of the pu occupancy , thus all conclusions drawn for the channel bonding mac protocol using geometrically distributed pu traffic also hold for the other combinations of pu traffic . in this paper we have developed a detailed analytical model for assessing the system throughput of an ad - hoc opportunistic spectrum access ( osa ) network with channel bonding . our analysis has led to a set of important conclusions . firstly , we note that there is generally a benefit of channel bonding for osa networks , which is most prominent for low primary user ( pu ) channel activity and/or large pu channel pools . on the other hand , in certain cases channel bonding might result in a significant throughput loss in comparison to a classical non - bonded system , i.e. when the pu activity is large or the users per channel ratio is very large . secondly , to be able to exploit the channel throughput fully , a medium access control protocol should adaptively change the channel bond order depending on the network conditions ( in particular the number of available resources and currently contending users ) . thirdly , with certain physical layer constraints on the maximum channel capacity , e.g. due to the limited transmission power per channel irrespective of its bandwidth , the benefit of channel bonding for ad - hoc networks is easily lost . however , networks with a small users per channel ratio are still able to obtain higher system throughput than a regular non - bonded system , even when these constraints are present , provided , again , that the pu activity level is small .

m. y. arslan , k. pelechrinis , i. broustis , s. v. krishnamurthy , s. adepalli , and k. papagiannaki , `` auto - configuration of ieee 802.11n wlans , '' in proc . acm conext , philadelphia , pa , usa , nov . 30 - dec . 3 , 2010 .
j. park , p. pawełczak , and d. čabrić , `` performance of joint spectrum sensing and mac algorithms for multichannel opportunistic spectrum access ad hoc networks , '' ieee trans . mobile comput . , vol . 10 , no . 7 , pp . 1011 - 1027 , jul . 2011 .
t. bansal , d. li , and p. sinh , `` fairness by sharing : split channel allocation for cognitive radio networks , '' ohio state university , department of computer science and engineering , tech . rep . tr28 , 2010 .
j. park , p. pawełczak , p. grønsund , and d. čabrić , `` analysis framework for opportunistic spectrum ofdma and its application to the ieee 802.22 standard , '' ieee trans . veh . technol . , vol . 61 , no . 5 , pp . 2271 - 2293 , jun . 2012 .
p. pawełczak , s. pollin , h .- s . w. so , a. bahai , r. v. prasad , and r. hekmat , `` performance analysis of multichannel medium access control algorithms for opportunistic spectrum access , '' ieee trans . veh . technol . , vol . 58 , no . 6 , pp . 3014 - 3031 , jul . 2009 .
s. c. jha , u. phuyal , m. m. rashid , and v. k. bhargava , `` design of omc - mac : an opportunistic multi - channel mac with qos provisioning for distributed cognitive radio networks , '' ieee trans . wireless commun . , vol . 10 , no . 10 , pp . 3414 - 3425 , oct . 2011 .
x. zhang and h. su , `` cream - mac : cognitive radio - enabled multi - channel mac protocol over dynamic spectrum access networks , '' ieee j. select . topics signal processing , vol . 5 , no . 1 , pp . 110 - 123 , feb . 2011 .
a. t. hoang , y .- c . liang , d. t. c. wong , y. zeng , and r. zhang , `` opportunistic spectrum access for energy - constrained cognitive radios , '' ieee trans . wireless commun . , vol . 8 , no . 3 , pp . 1206 - 1211 , mar . 2009 .
j. gambini , o. simeone , y. bar - ness , u. spagnolini , and t. yu , `` packet - wise vertical handover for unlicensed multi - standard spectrum access with cognitive radios , '' ieee trans . wireless commun . , vol . 7 , no . 12 , pp . 5172 - 5176 , dec . 2008 .
h. b. salameh , m. krunz , and d. manzi , `` design and evaluation of an efficient guard - band - aware multi - channel spectrum sharing mechanism , '' university of arizona , tucson , az , usa , tech . rep . tr - ua - ece-2011 - 1 , 2011 .
p. zhou , h. hu , h. wang , and h .- h . chen , `` an efficient random access scheme for ofdma systems with implicit message transmission , '' ieee trans . wireless commun . , vol . 7 , no . 7 , pp . 2790 - 2797 , jul . 2008 .
v. h. g. e. ien and d. gesber , `` throughput guarantees for wireless networks with opportunistic scheduling : a comparative study , '' ieee trans . wireless commun . , vol . 6 , no . 12 , pp . 4215 - 4220 , dec . 2007 .
r. k. jain , d .- w . chiu , and w. r. hawe , `` a quantitative measure of fairness and discrimination for resource allocation in shared computer systems , '' digital equipment corporation , hudson , ma , tech . rep . dec - tr-301 , sep .
transmission over multiple frequency bands combined into one logical channel speeds up data transfer for wireless networks . on the other hand , the allocation of multiple channels to a single user decreases the probability of finding a free logical channel for new connections , which may result in a network - wide throughput loss . while this relationship has been studied experimentally , especially in the wlan configuration , little is known about how to analytically model such phenomena . with the advent of opportunistic spectrum access ( osa ) networks , it is even more important to understand the circumstances in which it is beneficial to bond channels occupied by primary users with dynamic duty cycle patterns . in this paper we propose an analytical framework which allows the investigation of the average channel throughput at the medium access control layer for osa networks with channel bonding enabled . we show that channel bonding is generally beneficial , though the extent of the benefits depends on the features of the osa network , including the osa network size and the total number of channels available for bonding . in addition , we show that performance benefits can be realized by adaptively changing the number of bonded channels depending on network conditions . finally , we evaluate channel bonding considering physical layer constraints , i.e. throughput reduction compared to the theoretical throughput of a single virtual channel due to a transmission power limit for any bonding size . opportunistic spectrum access , medium access control , channel bonding , performance analysis .
quantum key distribution ( qkd ) protocols typically involve a two - step procedure in order to generate a secret key .first , the legitimate users ( alice and bob ) perform a set of measurements on effective bipartite quantum states that are distributed to them . as a result, they end up with a classical joint probability distribution , that we shall denote as , describing their outcomes .the second step consists of a classical post - processing of the data .it requires an authenticated classical channel , and it includes post - selection of data , error - correction to reconcile the data , and privacy amplification to decouple the data from a possible eavesdropper ( eve ) . in order to create the correlated data , qkd schemes usually require alice to prepare some non - orthogonal quantum states with a priori probabilities that are sent to bob . on the receiving side ,bob measures each received signal with a _ positive operator value measure _ ( povm ) . generalising the ideas introduced by bennett __ in ref . , the signal preparation process in this kind of schemes can alternatively be thought of as follows : alice produces first bipartite states and , afterwards , she measures the first subsystem in the orthogonal basis corresponding to the measurement operators .this action generates the signal states with a priori probabilities .the reduced density matrix of alice , , depends only on the probabilities and on the overlap of the signals states .this means , in particular , that is always fixed by the preparation process and can not be modified by eve . in order to include this information in the measurement process one can add to the observables , measured by alice and bob , other observables such that the observables form a tomographic complete set of alice s hilbert space . from now on, we will consider that the data and the povm include also the observables .the classical post - processing of can involve either two - way or one - way classical communication .two - way classical communication protocols can tolerate a higher error rate than one - way communication techniques .on the other hand , one - way post - processing methods typically allow to derive simpler unconditional security proofs for qkd than those based on two - way communication . in this last paradigm ,two different cases can be considered : _reverse reconciliation _ ( rr ) refers to communication from bob to alice , and _direct reconciliation _( dr ) permits only communication from alice to bob .( see , for instance , refs . . )an essential question in qkd is to determine whether the correlated data allow alice and bob to generate a secret key at all during the second phase of qkd . herewe consider the so - called _ trusted device scenario _, where eve can not modify the actual detection devices employed by alice and bob , as used in refs .we assume that the legitimate users have complete knowledge about their detection devices , which are fixed by the actual experiment .the case of two - way classical post - processing has been analysed in ref . , where it was proven that a necessary precondition for secure two - way qkd is the provable presence of quantum correlations in .that is , it must be possible to interprete , together with the knowledge of the corresponding observables , as coming _ exclusively _ from an entangled state .otherwise , no secret key can be distilled from . in order to deliver this entanglement proof any separability criteria ( see , for instance , ref . 
and references therein ) might be employed .the important question here is whether the chosen criterion can provide a necessary and sufficient condition to detect entanglement even when the knowledge about the quantum state is not tomographic complete .it was proven in ref . that entanglement witnesses ( ews ) fulfill this condition .an ew is an hermitian operator with a positive expectation value on all separable states .so , if a state obeys , the state must be entangled . with this separability criterion , refs . analysed three well - known qubit - based qkd schemes , and provided a compact description of a minimal verification set of ews ( _ i.e _ , one that does not contain any redundant ew ) for the four - state and the six - state qkd protocols , and a reduced verification set of ews ( _ i.e. _ , one which may still include some redundant ews ) for the two - state qkd scheme , respectively .these verification sets of ews allow a systematic search for quantum correlations in .one negative expectation value of one ew in the set suffices to detect entanglement . to guarantee that no verifiable entanglement is present in , however , it is necessary to test _ all _ the members of the set .unfortunately , to find a minimal verification set of ews , even for ideal qubit - based qkd schemes , is not always an easy task , and it seems to require a whole independent analysis for each protocol , let alone for higher dimensional qkd schemes .( see also ref . . ) also , one would like to include in the analysis the attenuation introduced by the quantum channel , not considered in refs . , and which represents one of the main limitations for optical realisations of qkd .one central observation of this paper is very simple , yet potentially very useful : given _ any _ qubit - based two - way qkd scheme , one can search for quantum correlations in by just applying the _ positive partial transposition _ ( ppt ) criterion adapted to the case of a quantum state that can not be completely reconstructed .this criterion provides a necessary and sufficient entanglement verification condition for any qubit - based qkd protocol even in the presence of loss introduced by the channel , since , in this scenario , only _ nonpositive partial transposed _( npt ) entangled states exist .moreover , it is rather simple to evaluate in general since it can be cast into the form of a convex optimisation problem known as semidefinite program ( sdp ) .such instances of convex optimisation problems can be solved efficiently , for example by means of interior - point methods .this means , in particular , that this criterion can be applied to any qubit - based qkd scheme in a completely systematic way . one - way qkd schemes can be analysed as well with sdp techniques .it was shown in ref . that a necessary precondition for one - way qkd with rr ( dr ) is that alice and bob can prove that there exists no quantum state having a symmetric extension to two copies of system ( system ) that is compatible with the observed data .this kind of states ( with symmetric extensions ) have been analysed in detail in refs . , where it was proven that the search for symmetric extensions for a given quantum state can be stated as a sdp .( see also refs . . ) here we complete the results contained in ref . 
, now presenting specifically the analysis for the case of a lossy channel .both qkd verification criteria mentioned above , based on sdp techniques , also provide a means to search for witness operators for a given two - way or one - way qkd protocol in a similar spirit as in refs .any sdp has an associated dual problem that represents also a sdp .this dual problem can be used to obtain a certificate of infeasibility whenever the primal problem is actually infeasible .most importantly , it can be proven that the solution to this dual problem corresponds to the evaluation of an optimal witness operator , that belongs to the minimal verification set of them for the given protocol , on the observed data .a positive expectation value of this optimal witness operator indicates that no secret key can be distilled from the observed data .the paper is organised as follows . in sec .ii we introduce the qkd verification criteria for two - way and one - way qkd in more detail , and we show how to cast them as primal sdps . then , in sec .iii , we present the dual problems associated to these primal sdps , and we show that the solution to these dual problems corresponds to evaluating an optimal witness operator on the observed data for the given protocol .these results are then illustrated in sec .iv , where we investigate in detail the two - state qkd protocol in the presence of loss .the analysis for other qubit - based qkd schemes is completely analogous , and we include very briefly the results of our investigations on other qkd protocols in an appendix . finally , sec .v concludes the paper with a summary .our starting point is the observed joint probability distribution obtained by alice and bob after their measurements .this probability distribution defines an equivalence class of quantum states that are compatible with it , by definition , every can represent the state shared by alice and bob before their measurements . in single - photon qkd schemes in the presence of loss , any state can be described on an hilbert space , with and denoting , respectively , alice s and bob s hilbert spaces , and where the subscript indicates the dimension of the corresponding hilbert space . to see this, we follow the signal preparation model introduced previously , where alice prepares states , and , afterwards , she measures the first subsystem in the orthogonal basis . using neumark s theorem , we can alternatively describe the preparation process as alice producing first bipartite states on and , afterwards , she measures the first subsystem with a povm .( see also ref . . ) to include the loss of a photon in the quantum channel , we simply enlarge bob s hilbert space from to by adding the vacuum state .let us now consider two - way qkd protocols . whenever the observed joint probability distribution , together with the knowledge of the corresponding measurements performed by alice and bob , can be interpreted as coming from a separable state then no secret key can be distilled from the observed data . 
in only npt entangled states exist and ,therefore , a simple necessary and sufficient criterion to detect entanglement in this scenario is given by the ppt criterion : a state is separable if and only if its partial transpose is a positive operator .partial transpose means a transpose with respect to one of the subsystems .such a result is generally not true in higher dimensions ._ observation 1 _ : consider a qubit - based qkd scheme in the presence of loss where alice and bob perform local measurements with povm elements and , respectively , to obtain the joint probability distribution of the outcomes .then , the correlations can originate from a separable state if and only if there exists such as . _proof_. if can originate from a separable state , then there exists such as .moreover , we have that any separable state satisfies . to prove the other direction , note that if there exists such that then , since , we find that must be separable . to determine whether there exists such as can be solved by means of a primal semidefinite program ( sdp ) .this is a convex optimisation problem of the following form : where the vector represents the objective variable , the vector is fixed by the particular optimisation problem , and where the matrices and are hermitian matrices .the goal is to minimise the linear function subjected to the linear matrix inequality ( lmi ) constraint . if the vector , then the optimisation problem given by eq .( [ primalsdp_marcos ] ) reduces to find whether the lmi constraint can be satisfied for some value of the vector or not . in this case , the sdp is called a _feasibility problem_. remarkably , sdps can be solved with arbitrary accuracy in polynomial time , for example by means of interior - point methods . according to observation , we can find whether there exists a separable state that belongs to the equivalence class just by solving the following feasibility problem : where the objective variable is used to parametrise the density operators .the method used to parametrise is discussed in detail in sec .[ parametr ] .one - way rr ( dr ) qkd schemes require from alice and bob to show that there exists no quantum state with a symmetric extension to two copies of system ( system ) .a state is said to have a symmetric extension to two copies of system if and only if there exists a tripartite state , with , and where , such that : where the swap operator satisfies .this definition can be easily extended to cover also the case of symmetric extensions of to two copies of system , and also of extensions of to more than two copies of system or of system . to find whether has a symmetric extension to two copies of system be solved with the following feasibility problem : =\rho_{ab}({\bf x } ) , \\\nonumber & & \rho_{aba'}({\bf x})\geq{}0.\end{aligned}\ ] ] note that this sdp does not include the constraint because non - negativity of the extension , together with the condition=\rho_{ab}({\bf x}) ] , such that . in ews are dews . in what follows ,we establish this connection explicitly via the dual problem .the sdp given by eq .( [ primalsdp_two_way_b ] ) can be transformed into a slightly different , but completely equivalent , form as follows ( see appendix [ ap_b ] ) , where denotes an auxiliary objective variable . 
according to eq .( [ dual_marcos ] ) , the dual problem associated with eq .( [ ptprimal ] ) can be written as \\\nonumber \text{subject to } & & z \geq 0 \\ \nonumber & & \text{tr}(z)= 1 \\ & & \text{tr}[(s_{kl } \oplus s_{kl}^{\gamma } ) z]=0\ ; \forall kl \not \in i.\end{aligned}\ ] ] the structure of all the matrices which appear in this dual problem is the direct sum of two different matrices .then , without loss of generality , we can assume that the same block structure is satisfied for , _i.e. _ , .this means , in particular , that the objective function in eq .( [ ptdual ] ) can now be re - expressed as & = & \text{tr}[(z_1 + z_2^{\gamma})\rho_{\text{fix}}]\\ \label{decomp_wit } & \equiv & \text{tr}(w\rho_{\text{fix}}),\end{aligned}\ ] ] where we have used the property and , in the last equality , we defined the operator .next we show that is a dews . for that , note that the semidefinite constraint implies .moreover , the witness is normalised , since implies . to conclude , we use the remaining equality constraints , =0\ \forall kl \not \in i ] . whenever the solution to the dual problem given by eq .( [ ptdual ] ) delivers \equiv{}\text{tr}(w \rho_{\text{fix}})\geq{}0 ] for any hermitian operators , and . with this definition, we can rewrite the objective function in eq .( [ dualext2 ] ) as = \text{tr}[\lambda^{\dag}(z^*)\rho_{\text{fix } } ] \equiv \text{tr}(w_{\text{sym } } \rho_{\text{fix}}),\ ] ] where we defined as the desired witness operator for the symmetric extendibility problem .in the remaining part of this section , we obtain the general structure of the witness operator . for that, we simply apply the adjoint map to the operator , set the resulting operator equal to an operator of arbitrary form , solve the equality constraint , and , finally , formulate the dual problem in terms of this new operator . the map can be written as , .\ ] ] setting equal to an arbitrary hermitian operator we obtain the following equality constraint since we have expressed every operator in terms of an operator basis , the equality constraint can only be fulfilled if the coefficients of , and the coefficients of , are related via : now , instead of considering the matrix as the objective variable of the dual problem , we can equivalently consider the matrix as the free variable . in order to do so , we only need to translate the positive semidefinite constraint together with the normalisation condition included in eq .( [ dualext2 ] ) into new constraints on .this can be done by using eq .( [ connectionzwsym ] ) . this way , we arrive at the following form for the dual problem : where the variable represents a witness operator for the symmetric extendibility problem . 
moreover , from eq .( [ connectionzwsym ] ) we obtain that can always be expressed as that is , belongs to the minimal verification set of witnesses for the given one - way ( rr ) qkd protocol .like in the previous section , whenever the solution to the dual problem given by eq .( [ dualext3 ] ) delivers then no secret key can be distilled from the observed data with one - way rr .the case of one - way qkd with dr can be analysed in a similar way .in this section we study the two - state qkd protocol , both for the case of two - way and one - way classical communication .the analysis for other qubit - based qkd schemes is completely analogous , and we include very briefly the results of our investigations on other well - known qkd protocols in appendix [ other_qkd ] .we refer here to single - photon implementations of the qubit .the state of the qubit is described , for instance , by some degree of freedom in the polarisation of the photon . in our calculationswe follow the approach introduced in sec .[ verif_criteria ] , although similar results could also be obtained using the witness approach presented in sec .[ sec2a ] .the numerical evaluations are performed with the freely - available sdp solver sdpt3 - 3.02 , together with the input tool yalmip .we shall consider that the observed joint probability distribution originates from alice and bob measuring the following quantum state +p\rho_a\otimes{\mbox{ } } _b{\mbox{ } } , \end{aligned}\ ] ] where ] represents an error parameter ( or depolarising rate ) of the channel , is the identity operator on alice s hilbert space , represents a unitary operator acting on bob s system , denotes the effective bipartite state initially prepared by alice in the given qkd protocol , represents alice s reduced density matrix ( _ i.e. _ , ) , and the operator is given by . the quantum state given by eq .( [ channel_eq ] ) defines one possible eavesdropping interaction .but our analysis can straightforwardly be applied to other quantum channels , as it depends only on the probability distribution that characterises the results of alice s and bob s measurements .we include the operator in eq .( [ channel_eq ] ) to model the collective noise ( or correlated noise ) introduced by the quantum channel ( _ e.g. _ , optical fiber ) .this noise arises from the fluctuation of the birefringence of the optical fiber which alters the polarisation state of the photons .when this fluctuation is slow in time , its effect can be described with a unitary operation . 
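as a rough numerical reading of eq. ([channel_eq]), the sketch below pushes the two nonorthogonal signals of the two-state protocol (analysed in the next section) through a collective rotation, a depolarising map with rate e, and photon loss with probability p, the latter modelled classically as an inconclusive flag, and then evaluates bob's unambiguous-discrimination povm. the signal parametrisation, the povm normalisation, and the treatment of loss are all assumptions introduced for illustration where the formulas above are garbled.

```python
import numpy as np

theta_sig = np.pi / 8                                  # half-angle between the two signals (assumption)
phi = [np.array([np.cos(theta_sig),  np.sin(theta_sig)]),
       np.array([np.cos(theta_sig), -np.sin(theta_sig)])]
s = abs(phi[0] @ phi[1])                               # overlap of the signal states
I2 = np.eye(2)

# unambiguous-discrimination povm: F_j never fires on phi_{1-j} (standard choice, an assumption)
F = [(I2 - np.outer(phi[1 - j], phi[1 - j])) / (1 + s) for j in range(2)]

def rotation(t):                                       # collective rotation U(theta) of the channel
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def channel(rho, e, t):                                # simplified reading of eq. (channel_eq)
    U = rotation(t)
    return (1 - e) * (U @ rho @ U.T) + e * I2 / 2

p_loss, e, t = 0.3, 0.05, 0.1                          # loss, depolarising rate, rotation angle
ok = err = 0.0
for bit in (0, 1):
    rho_out = channel(np.outer(phi[bit], phi[bit]), e, t)
    p = [(1 - p_loss) * np.trace(F[j] @ rho_out).real for j in (0, 1)]
    ok, err = ok + 0.5 * p[bit], err + 0.5 * p[1 - bit]
print("conclusive fraction   :", ok + err)
print("qber on the sifted key:", err / (ok + err))
```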
for simplicity , we shall consider that is parametrised only with one real parameter .in particular , we choose with ] .moreover , for simplicity , we take only two different values of the angle .for instance , we choose and .these three parameters , , , and , allow us to evaluate the performance of a qkd protocol when the quantum channel is described by eq .( [ channel_eq ] ) .one could also select other figures of merit in order to evaluate a protocol , such as the quantum bit error rate ( qber ) .this is the rate of events where alice and bob obtain different results .it refers to the sifted key , _i.e _ , it considers only those events where the signal preparation and detection methods employ the same polarisation basis .we include as well an analytic expression for the qber for the given qkd protocol .the two - state protocol is one of the simplest qkd protocols .it is based on the random transmission of only two nonorthogonal states , and .alice chooses , at random and independently every time , a bit value , and prepares a qubit in the state , with and , that is sent it to bob . on the receiving side ,bob measures the qubit he receives in a basis chosen at random within the set .the loss of a photon corresponds to a projection onto the vacuum state .bob could also employ a different detection method defined by a povm with the following operators : with , , and . in this last case ,alice s bit value is associated with the operator , while the operator represents an inconclusive result .this is the approach that we shall consider here .the preparation process can be thought of as alice prepares first the bipartite signal state with .the fact that the reduced density matrix of alice is fixed and can not be modified by eve is vital to guarantee the security of this scheme .otherwise , the joint probability distribution alone does not allow alice and bob to distinguish between the entangled state and the separable one and such as alice has complete tomographic knowledge of .following the approach introduced in sec .[ verif_criteria ] , in fig .[ two - state - fig ] we present an upper bound on the tolerable depolarising rate as a function of the photon loss probability for two different values of the parameter .it states that no secret key can be distilled from the correlations established by the users . in this example , the results obtained coincide when and . to obtain an upper bound on the tolerable qber one can use the following expression \}},\ ] ] with . in particular , for given values of the parameters , , and , one only needs to substitute in eq .( [ qber_2 ] ) the value of given in fig .[ two - state - fig ] as a function of the parameter . remarkably , the cut - off point for two - way qkd presented in fig .[ two - state - fig ] , _ i.e. _ , the value of the photon loss probability that makes and also , coincides with the limit imposed by the unambiguous state discrimination attack . in the two - state protocolthis limit is given by .( see also ref .[ two - state - fig ] also shows a difference between one - way classical post - processing with rr and with dr as a function of the parameter .the reason behind this effect is beyond the scope of this paper and needs further investigations .a fundamental question in quantum key distribution ( qkd ) is to determine whether the legitimate users of the system can use their available measurement results to generate a secret key via two - way or one - way classical post - processing of the observed data . 
in this paper we have investigated single - photon qkd protocols in the presence of loss introduced by the quantum channel .our results are based on a simple precondition for secure qkd for two - way and one - way classical communication . in particular , the legitimate users need to prove that there exists no separable state ( in the case of two - way qkd ) , or that there exists no quantum state having a symmetric extension ( one - way qkd ) , that is compatible with the available measurement results .we have shown that both criteria can be formulated as a convex optimisation problem known as a primal semidefinite program ( sdp ) .such instances of convex optimisation problems can be solved efficiently , for example by means of interior - point methods . moreover , these sdp techniques allow us to evaluate these criteria for any single - photon qkd protocol in a completely systematic way .a similar approach was already used in ref . for the case of one - way qkd without losses . here we complete these results , now presenting specifically the analysis for the case of a lossy channel .furthermore , we have shown that these qkd verification criteria based on sdp also provide a means to search for witness operators for a given two - way or one - way qkd protocol .any sdp has an associated dual problem that is also an sdp .we have demonstrated that the solution to this dual problem corresponds to the evaluation of an optimal witness operator that belongs to the minimal verification set of witnesses for the given two - way ( or one - way ) qkd protocol .most importantly , a positive expectation value of this optimal witness operator guarantees that no secret key can be distilled from the available measurement results .finally , we have illustrated our results by analysing the performance of several well - known qubit - based qkd protocols for a given channel model .we gladly acknowledge stimulating discussions with k. tamaki , o. ghne , c .- h . f. fung , and x. ma .we specially thank h .- k .lo and n. ltkenhaus for their critical discussion of this article , and f. just for help on the numerics .financial support from nserc , cipi , crc program , cfi , oit , ciar , prea , dfg under the emmy noether programme , and the european commission ( integrated project secoqc ) is gratefully acknowledged .m.c . also thanks the financial support from a post - doctoral grant from the spanish ministry of science ( mec ) .in this appendix we present some duality properties of a general sdp that guarantee that the solution to the dual problems introduced in sec .[ sec2a ] can be associated with a witness operator .the primal problem given by eq .( [ primalsdp_marcos ] ) is called feasible ( strictly feasible ) if there exists such as ( ) .
similarly ,the dual problem given by eq .( [ dual_marcos ] ) is called _ feasible _ ( _ strictly feasible _ ) if there exists a matrix ( ) which fulfills all the desired constraints .the _ weak duality condition _, illustrated in eq .( [ weakduality ] ) , allows to derive simple upper and lower bounds for the solution of either the primal or dual problem .in particular , for every feasible solution of the primal problem and for every feasible solution of the dual problem , the following relation holds : the _ strong duality condition _ certifies whether the optimal solution to the primal and dual problem , that we shall denote as and , respectively , are equal .more precisely , if ( 1 ) the primal problem is strictly feasible , or ( 2 ) the dual problem is strictly feasible .moreover , if both conditions are satisfied simultaneously then it is guaranteed that there is a feasible pair achieving the optimal values .this last condition is known as the _ complementary slackness condition_. the sdp given by eq .( [ primalsdp_marcos ] ) , when , can always be transformed as follows this sdp is always strictly feasible . to see this , note that if and , where denote the eigenvalues of the matrix , then .moreover , it can be shown that eq .( [ primalsdp ] ) is equivalent to the original sdp .let be the solution to eq .( [ primalsdp ] ) . if , the original problem is infeasible since . on the other hand ,if there exists such that , stating that the original problem is feasible .that is , the solution of the sdp given by eq .( [ primalsdp ] ) certifies whether the original problem is indeed feasible or not .the dual problem associated with eq .( [ primalsdp ] ) is given by : if all the matrices are traceless , _i.e. _ , , this dual problem is also always strictly feasible . a trivial strictly feasible solution to this problem is given by , where denotes the dimension of .if we apply the three duality conditions mentioned above to the sdps given by eq .( [ primalsdp ] ) and eq .( [ dualsdp ] ) we find that , if eq .( [ primalsdp ] ) delivers an infeasible solution , this arises from the fact that the strong duality relation guarantees that , and the complementary slackness condition certifies that there is a that achieves the optimal value .when the weak duality condition assures that for every feasible solution of the dual problem .both results together show that the solution to the dual problem given by eq .( [ dualsdp ] ) can be associated to a witness operator .in particular , the ability of to detect , at least , one state , _i.e. _ , such as , can be related with eq .( [ violation ] ) . on the other hand ,the requirement that is positive on all states belonging to a given set of them can be related with eq .( [ requirement ] ) . to achieve the desired equivalence , however , the dual problem must be strictly feasible , otherwise the complementary slackness condition does not hold and the existence of an appropriate witness is not guaranteed .it turns out that all the linear constraints included in the dual problems considered in sec .[ sec2a ] have traceless matrices , such that these dual problems are always strictly feasible .in this appendix we include very briefly the results of our investigations on other well - known qubit - based qkd protocols . like in sec .[ section_example ] , we shall consider that the observed data are generated via measurements onto the state given by eq .( [ channel_eq ] ) . 
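as a tiny numerical illustration of the primal dual pair discussed in appendix a (and unrelated to the specific protocols of this appendix), the toy sdp below minimises tr(c x) over density matrices; its optimum is the smallest eigenvalue of c, and the solver also reports the dual information. this is only meant to make the duality language concrete and assumes cvxpy with an sdp-capable solver such as scs is installed.

```python
import cvxpy as cp
import numpy as np

C = np.diag([1.0, -0.5, 0.3])                         # arbitrary symmetric objective matrix
X = cp.Variable((3, 3), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]              # lmi constraint plus normalisation
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve(solver=cp.SCS)

print("primal optimum:", prob.value)                  # close to the minimum eigenvalue of C, i.e. -0.5
print("dual variable of the trace constraint:", constraints[1].dual_value)
# strong duality holds here (the problem is strictly feasible, e.g. X = I/3),
# so the dual optimum reported by the solver matches the primal value.
```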
in this scheme, alice prepares a qubit in one of the following six quantum states : , and sends it to bob . on the receiving side ,bob measures each incoming signal by projecting it onto one of the three possible bases .the loss of a photon in the channel is characterised by a projection onto the vacuum state .the qber is given by .\ ] ] for we find , as expected , that whenever ( corresponding to a value of ) no secret key can be distilled by two - way classical post - processing . in the case of one - way classical post - processing ( both for rr and dr ) , and assuming and , we obtain that secure qkd might only be possible for a ( ) . a possible eavesdropping strategy to attain this cut - off pointis , for instance , to use an universal cloning machine to clone every signal sent by alice such as the fidelities of eve s and bob s clones coincide .( see also ref . . )the four - state protocol is similar to the six - state protocol , but now alice sends one of four possible signal states instead of one of six . in particular , she chooses one state within the set and sends it bob .each received signal is projected by bob onto one of the two possible bases , together with a projection onto the vacuum state corresponding to the loss of a photon .the qber is now given by if we obtain the well - known result stating that whenever ( corresponding to a value of ) no secret key can be distilled by two - way classical post - processing .similarly , for the case of one - way classical post - processing , and assuming and , we find that the must be lower than ( ) .this last result coincides with the value of the qber produced by an eavesdropping strategy where eve and bob shannon information are equal .this scheme can be seen as a combination of two two - state qkd protocols .more precisely , alice selects , at random and independently each time , one of the following four signal states , , and sends it to bob . on the receiving side ,bob measures each incoming signal by choosing , at random and independently for each signal , one of two possible povms .each povm corresponds to the one used in the two - state protocol ( see sec . [ sec_two - state ] ) for the signal states , with , and , with , respectively .the qber is given by \sin^2\theta}{2\big[e+(1-e ) [ 2(\alpha^4+\beta^4)\sin^2\theta+4\alpha^2\beta^2\cos^2\theta]\big]}.\ ] ] in the case of two - way classical post - processing , the maximum tolerable value of shown in fig .[ fptwo - state - fig ] starts decreasing as the losses in the channel increase , and , at some point , it becomes constant independently of .interestingly , the value of where this inflexion occurs , corresponds to the point where eve can discriminate unambiguously between the two states in the set , with , or between those states in the set , with .this happens when .this qkd scheme requires alice sending to bob one of the following three quantum states . on the receiving side ,bob projects each incoming signal onto one of the two possible bases used in the four - state protocol ( see sec .[ four_sec ] ) , together with a projection onto the vacuum state .the qber has now the following form .\ ] ] for the quantum channel given by eq .( [ channel_eq ] ) , and assuming or , the maximum value of tolerated by the three - state protocol coincides with the four - state protocol for the cases of two - way and one - way post - processing with dr . 
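returning to the six-state case at the top of this section: without loss, alice's and bob's statistics essentially fix the shared two-qubit state, so the two-way precondition amounts to asking when that state becomes ppt. the scan below recovers the threshold numerically for a depolarising channel; identifying the effective state with a bell state mixed with white noise, and the relation qber = e/2, are the standard reading and should be treated as assumptions here.

```python
import numpy as np

def pt_bob(rho):
    # partial transpose of bob's qubit
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
bell = np.outer(phi, phi)

# effective state after a depolarising channel with rate e (assumption: no loss, U = identity)
for e in np.linspace(0.0, 1.0, 301):
    rho = (1 - e) * bell + e * np.eye(4) / 4
    if np.linalg.eigvalsh(pt_bob(rho)).min() >= -1e-10:     # the state has become ppt
        print(f"ppt (hence separable) from e = {e:.3f}, i.e. qber = e/2 = {e/2:.3f}")
        break
```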
in the trine protocol , alice selects , at random and independently each time , a qubit in one of the following three states : , and , and sends it to bob .each received signal is measured by bob with a povm defined by the following operators : , , with , and where , and . for the quantum channel given by eq .( [ channel_eq ] ) , and assuming or , it turns out that the maximum value of tolerated by this scheme , both for two - way and one - way post - processing , coincides with the four - state protocol ( see fig .[ four - state - fig ] ) .the qber , however , is given by in this scheme , alice sends to bob one of the following six states : , and . on the receiving side ,bob measures each incoming signal with one of two possible measurements corresponding to the bases with , that he selects at random and independently for each signal , together with a projection onto the vacuum state .when , the maximum value of tolerated by this protocol , both for two - way and one - way post - processing , coincides with the four - state protocol ( see fig .[ four - state - fig ] ) .the case is illustrated in fig . [ acin1-state - fig ] .99 n. gisin , g. ribordy , w. tittel , and h. zbinden , rev .phys . * 74 * , 145 ( 2002 ) .m. duek , n. ltkenhaus , m. hendrych , to appear in progress in optics * 49 * , edt .e. wolf ( elsevier ) . c. h. bennett , g. brassard , and n. d. mermin , phys .* 68 * , 557 ( 1992 ) .m. curty , o. ghne , m. lewenstein , and n. ltkenhaus , phys .a * 71 * , 022306 ( 2005 ) .d. gottesman , and h .- k .lo , ieee trans .inf . theory * 49 * , 457 ( 2003 ) .d. mayers , in _ advances in cryptology proceedings of crypto96 _ ( springer , berlin , 1996 ) , pp . 343 - 357 , available at quant - ph/9606003 .p. w. shor , and j. preskill , phys .. lett . * 85 * , 441 ( 2000 ) .lo , qic * 1 * , 81 ( 2001 ) . k. tamaki , m. koashi , and n. imoto , phys .lett . * 90 * , 167904 ( 2003 ) .f. grosshans , g. van assche , j. wenger , r. brouri , n. cerf , and p. grangier , nature * 421 * , 238 ( 2003 ) .m. heid , and n. ltkenhaus , phys . rev .a * 73 * , 052316 ( 2006 ) .m. curty , and n. ltkenhaus , phys .a * 69 * , 042321 ( 2004 ) .m. curty , m. lewenstein , and n. ltkenhaus , phys . rev. lett . * 92 * , 217903 ( 2004 ) .m. horodecki , p. horodecki , and r. horodecki , in _ quantum information : an introduction to basic theoretical concepts and experiments _ , ed.g .alber et al .( springer , heidelberg , 2001 ) , pp .151 ; k.eckert , o. ghne , f. hulpke , p. hyllus , j. korbicz , j.mompart , d. bru , m. lewenstein , and a. sanpera , in _ quantum information processing _ , ed .g. leuchs and t. beth , ( wiley - vch , verlag , 2003 ) , pp . 79 .see also second edition 2005 .m. horodecki , p. horodecki , and r. horodecki , phys .a * 223 * , 1 ( 1996 ) .b. m. terhal , phys .a * 271 * , 319 ( 2000 ) .m. lewenstein , b. kraus , j. i. cirac , and p. horodecki , phys .a * 62 * , 052310 ( 2000 ) . c. h. bennett and g. brassard , proc .conference on computers , systems and signal processing , bangalore , india ( ieee press , new york , 1984 ) , 175 .d. bru , phys .lett . * 81 * , 3018 ( 1998 ) . c. h. bennett , phys .lett . * 68 * , 3121 ( 1992 ) .j. eisert , p. hyllus , o. ghne , and m. curty , phys .a * 70 * , 062317 ( 2004 ) .a. peres , phys .lett . * 77 * , 1413 ( 1996 ) .l. vandenberghe , and s. boyd , siam review * 38 * , 49 ( 1996 ) .s. boyd , and l. vandenberghe , _ convex optimization _ ( cambridge university press , 2004 ) .t. moroder , m. curty , and n. 
ltkenhaus , phys .a * 74 * , 052301 ( 2006 ) .a. c. doherty , p. a. parrilo and f. m. spedalieri , phys .* 88 * , 187904 ( 2002 ) .a. c. doherty , p. a. parrilo and f. m. spedalieri , phys .a * 69 * , 022308 ( 2004 ) .a. c. doherty , p. a. parrilo and f. m. spedalieri , phys .a * 71 * , 032333 ( 2005 ) .b. m. terhal , a. c. doherty , and d. schwab , phys .lett . * 90 * , 157903 ( 2003 ) .p. horodecki , and m. nowakowski , quant - ph/0503070 .a. peres , found .* 12 * , 1441 ( 1990 ) .a. peres , _ quantum theory : concepts and methods _ , ( kluwer academic publishers , 1993 ) . using the schmidt decompositionwe have that the state can be written as for some orthogonal states and where . the orthogonal basis can always be selected such as for , and for , with and denoting two auxiliary systems under alice s control , and where satisfy . on the other hand , the states can always be obtained from the orthogonal states by means of a unitary transformation , _ i.e. _ , . with this notation, we can write the initial state prepared by alice as with . the measurement process in the basis then be described as : alice first prepares the signal state , and then she measures her subsystem with the povm operators with .given an operator and an orthonormal basis , with , the partial transposed of with respect to subsystem in that basis is defined as in the same way , one can also define the partial transposed of with respect to subsystem .note that given lmis constraints , we can always combine them to a single new lmi constraint as : the operator can always be expressed as because of the following reason : suppose , for instance , that a given set of measurement operators do not form a set of operator basis elements .then , one can find the minimal set of linear independent measurement operators and then apply the gram - schmidt orthogonalization to these elements in order to obtain a minimal set of linear independent operator basis elements .moreover , from the original data , it is straightforward to compute the observed probability distribution for these new operators . as a result ,one finds a fixed part of the form . alternatively to this method , the equality constraints given by eq .( [ cond_coeff ] ) can also be included directly in the lmi constraint of the sdp .each of these constraint can be represented by means of two inequality constraints as follows : , and .this approach , however , increases the number of objective variables to be considered .p. hyllus , and j. eisert , new journal of physics * 8 * , 51 ( 2006 ) .s. l. woronowicz , rev .* 10 * , 165 ( 1976 ) .t. moroder , diploma thesis , institut fr theoretische physik i , universitt erlangen - nrnberg ( germany ) , 2005 .k. c. toh , r. h. tutuncu , and m. j. todd , optimization methods and software * 11 * , 545 ( 1999 ) , available from http://www.math.nus.edu.sg/mattohkc/sdpt3.html .j. lfberg , in _ proceedings of the cacsd conference _( taipei , taiwan , 2004 ) , pp .284 - 289 , available from http://control.ee.ethz.ch/ joloef / yalmip.php. t. yamamoto , j. shimamura , s. k. zdemir , m. koashi , and n. imoto , phys .* 95 * , 040503 ( 2005 ) .boileau , r. laflamme , m. laforest , and c. r. myers , phys .lett . * 93 * , 220501 ( 2004 ) .a. chefles , contemporary physics 41 , 401 ( 2000 ) .i. d. ivanovic , phys .a * 123 * , 257 ( 1987 ) .d. dieks , phys .a * 126 * , 303 ( 1988 ) .a. peres , phys .lett . a * 128 * , 19 ( 1988 ) .m. duek , m. jahma , and n. ltkenhaus , phys .a * 62 * , 022306 ( 2000 ) .k. tamaki , m. koashi , and n. 
imoto , phys .a * 67 * , 032310 ( 2003 ) .b. huttner , n. imoto , n. gisin , and t. mor , phys .a * 51 * , 1863 ( 1995 ) .s. n. molotkov , and s. s. nazin , j. of experimental and theoretical physics letters * 63 * , 924 ( 1996 ) .s. n. molotkov , j. of experimental and theoretical physics letters * 87 * , 288 ( 1998 ) .b .- s . shi , y .- k .jiang , and g .- c .guo , appl .b * 70 * , 415 ( 2000 ) .f. fung , and h .- k .lo , phys .a * 74 * , 042342 ( 2006 ) .j. m. renes , phys .a * 70 * , 052314 ( 2004 ) .a. acn , s. massar , and s. pironio , new j. phys .* 8 * , 126 ( 2006 ) .b. huttner , and a. ekert , j. mod. opt . * 41 * , 2455 ( 1994 ) .h. bechmann - pasquinucci , and n. gisin , phys .a * 59 * , 4238 ( 1999 ) . c. a. fuchs , n. gisin ,r. b. griffiths , c .- s .niu , and a. peres , phys .a * 56 * , 1163 ( 1997 ) .j. i. cirac , and n. gisin , phys .a * 229 * , 1 ( 1997 ) . in its original proposal ,the four - plus - two - state protocol was meant to operate with weak coherent pulses as signal states . herewe refer to the single - photon version of this protocol .the main motivation behind this protocol is that it can be proven to be secure also against an hypothetical eve only limited by the no - signalling principe . in its original proposal ,alice and bob share a noisy quantum channel that distributes pairs of qubits in the maximally entangled state .these pairs are then measured by alice and bob . here, however , we consider a prepare and measure version of this scheme with alice being the source of the signal states that are sent to bob .
we investigate two-way and one-way single-photon quantum key distribution (qkd) protocols in the presence of loss introduced by the quantum channel. our analysis is based on a simple precondition for secure qkd in each case. in particular, the legitimate users need to prove that there exists no separable state (in the case of two-way qkd), or no quantum state having a symmetric extension (one-way qkd), that is compatible with the available measurement results. we show that both criteria can be formulated as a convex optimisation problem known as a semidefinite program, which can be solved efficiently. moreover, we prove that the solution to the dual optimisation problem corresponds to the evaluation of an optimal witness operator that belongs to the minimal verification set of witnesses for the given two-way (or one-way) qkd protocol. a positive expectation value of this optimal witness operator indicates that no secret key can be distilled from the available measurement results. we apply this analysis to several well-known single-photon qkd protocols under losses.
finite state automata ( fsas ) and weighted finite state automata ( wfsas ) are well known , mathematically well defined , and offer many practical advantages . . they permit , among others , the fast processing of input strings and can be easily modified and combined by well defined operations .both fsas and wfsas are widely used in language and speech processing .a number of software systems have been designed to manipulate fsas and wfsas .most systems and applications deal , however , only with _ 1-tape _ and _2-tape automata _ , also called acceptors and transducers , respectively . _ multi - tape automata _ ( mtas ) offer additional advantages such as the possibility of storing different types of information , used in nlp , on different tapes or preserving intermediate results of transduction cascades on different tapes so that they can be re - accessed by any of the following transductions .mtas have been implemented and used , for example , in the morphological analysis of semitic languages , where the vowels , consonants , pattern , and surface form of words have been represented on different tapes of an mta .this report defines various operations for _ weighted multi - tape automata _ ( wmtas ) and describes algorithms that have been implemented for those operations in the wfsc toolkit .some algorithms are new , others are known or similar to known algorithms .the latter will be recalled to make this report more complete and self - standing .we present a new approach to _ multi - tape intersection _ , meaning the intersection of a number of tapes of one wmta with the same number of tapes of another wmta . in our approach ,multi - tape intersection is not considered as an atomic operation but rather as a sequence of more elementary ones , which facilitates its implementation .we show an example of multi - tape intersection , actually transducer intersection , that can be compiled with our approach but not with several other methods that we analyzed .to show the practical relevance of our work , we include an example of application : the preservation of intermediate results in transduction cascades . for the structure of this report see the table of contents .presented in a survey paper a number of results and problems on finite 1-way automata , the last of which the decidability of the equivalence of deterministic k - tape automata has been solved only recently and by means of purely algebraic methods .rabin and scott considered the case of two - tape automata claiming this is not a loss of generality .they adopted the convention that the machine will read for a while on one tape , then change control and read a while on the other tape , and so on until one of the tapes is exhausted " .in this view , a two - tape or -tape machine is just an ordinary automaton with a partition of its states to determine which tape is to be read .define the notion of `` one - letter -tape automaton '' and the main idea is to consider this restricted form of -tape automata where all transition labels have exactly one tape with a non - empty single letter .then they prove that one can use `` classical '' algorithms for 1-tape automata on a one - letter -tape automaton .they propose an additional condition to be able to use classical intersection .it is based on the notion that a tape or coordinate is _ inessential _iff ( is a regular relation over ) and , . 
and thus to perform an intersection, they assume that there exists at most one common essential tape between the two operands .define a non - deterministic _-way finite - state transducer _ that is similar to a classic transducer except that the transition function maps to ( with ) .to perform the _ intersection _ between two -tape transducers , they introduced the notion of _ same - length relations _ . as a result, they treat a subclass of -tape transducers to be intersected. defines an -tape finite state automaton and an _ -tape finite - state transducer _ , introducing the notion of _ domain tape _ and _ range tape _ to be able to define a unambiguous composition for -tape transducers .operations on -tape automata are based on , the intersection in particular .in this section we recall the basic definitions of the algebraic structures monoid and semiring , and give a detailed definition of a weighted multi - tape automaton ( wmta ) based on the definitions of a weighted automaton and a multi - tape automaton .a _ monoid _ is a structure consisting of a set , an associative binary operation on , and a _ neutral _ element such that for all .a monoid is called _ commutative _ iff for all . a set equipped with two binary operations , ( _ collection _ ) and ( _ extension _ ) , and two neutral elements , and , is called a _ semiring _ , iff it satisfies the following properties : 1 . is a commutative monoid 2 . is a monoid 3 .extension is _left- _ and _ right - distributive _ over collection : + 4 . is an annihilator for extension : we denote a generic semiring as .some automaton algorithms require semirings to have specific properties .composition , for example , requires it to be commutative and -removal requires it to be _ k - closed _ .these properties are defined as follows : 1 .commutativity : 2 .k - closedness : the following well - known semirings are commutative : 1 . : the boolean semiring , with 2 . : a positive integer semiring with arithmetic addition and multiplication 3 . : a positive real semiring 4 . : a real tropical semiring , with a number of algorithms require semirings to be equipped with an order or partial order denoted by . each idempotent semiring ( i.e. , ) has a natural partial order defined by . in the above examples , the boolean and the real tropical semiring are idempotent , and hence have a natural partial order . in analogy to a weighted automaton and a multi - tape automaton ( mta ), we define a _ weighted multi - tape automaton _ ( wmta ) , also called weighted -tape automaton , over a semiring , as a six - tuple with 20mm42mm90 mm & & being a finite alphabet + & & the finite set of states + & & the set of initial states + & & the set of final states + & & the arity , i.e. , the number of tapes of + & & being the finite set of -tape transitions and + & & the semiring of weights .+ for any state , 20mm42mm90 mm & & denotes its initial weight , with , + & & its final weight , with , and + & & its finite set of out - going transitions .+ for any transition , with , 20mm42mm90 mm & & denotes its source state + & & its label , which is an -tuple of strings + & & its weight , with , and + & & its target state + a _ path _ of length is a sequence of transitions such that for all . a path is said to be _ successful _iff and . 
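the six-tuple definition above translates almost literally into code. the sketch below fixes one possible data layout for a wmta over the real tropical semiring and computes the weight of a successful path; all names and the layout are illustrative choices, not part of the formal definition.

```python
from dataclasses import dataclass, field

class Tropical:
    """real tropical semiring <R+ U {inf}, min, +, inf, 0>"""
    zero, one = float("inf"), 0.0
    @staticmethod
    def collect(a, b): return min(a, b)     # semiring "sum" (collection)
    @staticmethod
    def extend(a, b): return a + b          # semiring "product" (extension)

@dataclass
class WMTA:
    n: int                                               # arity, i.e. number of tapes
    initial: dict = field(default_factory=dict)          # state -> initial weight
    final: dict = field(default_factory=dict)            # state -> final weight
    transitions: list = field(default_factory=list)      # (source, label n-tuple, weight, target)
    semiring: type = Tropical

    def add(self, p, label, w, q):
        assert len(label) == self.n
        self.transitions.append((p, tuple(label), w, q))

def path_weight(A, path):
    """weight of a successful path: initial weight (x) w(e1) (x) ... (x) w(el) (x) final weight"""
    w = A.initial[path[0][0]]
    for (_, _, wt, _) in path:
        w = A.semiring.extend(w, wt)
    return A.semiring.extend(w, A.final[path[-1][3]])

# a 3-tape automaton with one successful path labelled ("ab", "x", "ab")
A = WMTA(n=3, initial={0: Tropical.one}, final={2: Tropical.one})
A.add(0, ("a", "x", "a"), 0.5, 1)
A.add(1, ("b", "",  "b"), 0.2, 2)
print(path_weight(A, A.transitions))        # 0.0 + 0.5 + 0.2 + 0.0 = 0.7 in the tropical semiring
```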
in the following we consider only successful paths .the label of a successful path equals the concatenation of the labels of its transitions and is an -tuple of strings if all strings ( with ) of a tuple are equal , we use the short - hand notation on the terminal string . for example : the strings on any transition are not `` bound '' to each other .for example , the string triple can be encoded , among others , by any of the following sequences of transitions : or or , etc .the weight of a successful path is we denote by the ( possibly infinite ) set of successful paths of and by the ( possibly infinite ) set of successful paths for the -tuple of strings we call the -ary or -tape relation of . it is the ( possibly infinite ) set of -tuples of strings having successful paths in : the weight for any -tuple of strings is the collection ( semiring sum ) of the weights of all paths labeled with : by relation we mean simply a co - occurrence of strings in tuples .we do not assume any particular relation between those strings such as an input - output relation .all following operations and algorithms are independent from any particular relation .it is , however , possible to define an arbitrary weighted relation between the different tapes of .for example , of a weighted _ transducer _ is usually considered as a weighted input - output relation between its two tapes , that are called _ input tape _ and _ output tape_. in the following we will not distinguish between a language and a 1-tape relation , which allows us to define operations only on relations rather than on both languages and relations .this section defines operations on string -tuples and -tape relations , taking their weights into account .whenever these operations are used on transitions , paths , or automata , they are actually applied to their labels or relations respectively .for example , the binary operation on two automata , , actually means .the unary operation on one automaton , , actually means .ultimately , we are interested in multi - tape intersection and transduction .the other operations are introduced because they serve as basis for the two .we define the _ pairing _ of two string tuples , , and its weight as pairing is associative ( concerning both the string tuples and their weights ) : we will not distinguish between 1-tuples of strings and strings , and hence , instead of or , simply write .the _ concatenation _ of two string tuples of equal arity , , and its weight are defined as concatenation is associative ( concerning both the string tuples and their weights ) : again , we will not distinguish between 1-tuples of strings and strings , and hence , instead of or , simply write .the relation retween pairing and concatenation can be expressed through a matrix of string tuples \ ] ] where the are horizontally concatenated and vertically paired : note , this equation does not hold for the weights of the , unless they are defined over a commutative semiring . the _ cross - product _ of two -tape relations , ,is based on pairing and is defined as the weight of each string tuple follows from the definition of pairing .the cross product is an associative operation .a well - know special case is the cross - product of two acceptors ( 1-tape automata ) leading to a transducer ( 2-tape automaton ) : the _ projection _ , , of a string tuple is defined as it retains only those strings ( i.e. 
, tapes ) of the tuple that are specified by the indices , and places them in the specified order .projection indices can occur in any order and more that once .thus the tapes of can , e.g. , be reversed or duplicated : the weight of the -tuple is not modified by the projection ( if we consider not as a member of a relation ) . the projection of an -tape relation is the projection of all its string tuples : the weight of each is the collection ( semiring sum ) of the weights of each leading , when projected , to : the _ complementary projection _ , , of a string -tuple removes all those strings ( i.e. , tapes ) of the tuple that are specified by the indices , and preserves all other strings in their original order .an _ inverse projection _ because it is not the inverse of a projection in the sense : and . ] it is defined as complementary projection indices can occur in any order , but only once .the complementary projection of an -tape relation equals the complementary projection of all its string tuples : the weight of each is the collection of the weights of each leading , when complementary projected , to : we define the _ auto - intersection _ of a relation , , on the tapes and as the subset of that contains all with equal and : the weight of any is not modified . for example( figure [ fig : aint : a11 ] ) auto - intersection of regular -tape relations is not necessarily regular . for example( figure [ fig : aint : a13 ] ) the result is not regular because is not regular .the multi - tape intersection of two multi - tape relations , and , uses tapes in each relation , and intersects them pair - wise . the operationpairs each string tuple with each string tuple iff with for all .multi - tape intersection is defined as : all tapes of that have directly participated in the intersection are afterwards equal to the tapes of , and are removed .all tapes are kept for possible reuse by subsequent operations .all other tapes of both relations are preserved without modification .the weight of each is this weight follows only from pairing ( eq .[ eq : op : pairingweight ] ) .it is not influenced by complementary projection ( eq . [ eq : op : cprojweight ] ) because any two that differ in also differ in , and hence can not become equal when the are removed .the multi - tape intersection of two relations , and , can be compiled by as can been seen from multi - tape intersection is a generalization of classical intersection of transducers which is known to be not necessarily regular : consequently , multi - tape intersection has the same property . in our approachthis results from the potential non - regularity of auto - intersection ( eq . [ eq : op : mltint : proc ] ) .+ we speak about _ single - tape intersection _ if only one tape is used in each relation ( ) .a well - known special case is the intersection of two acceptors ( 1-tape automata ) leading to an acceptor and yielding the relation another well - known special case is the composition of two transducers ( 2-tape automata ) leading to a transducer . here, we need , however , an additional complementary projection : is expressed either by the or the operator . however , equals which corresponds to in functional notation . ] it yields the relation : multi - tape and single - tape intersection are neither associative nor commutative , except for special cases with , such as the above intersection of acceptors and transducers . 
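the operations of this section can be prototyped very directly on finite, explicitly enumerated relations, representing a relation as a dict from string tuples to tropical weights (min as collection, + as extension). the sketch below is a reference implementation of the definitions only, for finite relations; it says nothing about the automaton-level algorithms given later, and all names are illustrative.

```python
INF = float("inf")     # tropical zero; tropical collection is min, extension is +

def pairing(v, w):
    """<v1..vn> : <w1..wm> -> <v1..vn, w1..wm>"""
    return v + w

def concat(v, w):
    """component-wise concatenation of two n-tuples of strings"""
    return tuple(a + b for a, b in zip(v, w))

def cross_product(R1, R2):
    out = {}
    for v, wv in R1.items():
        for w, ww in R2.items():
            t = pairing(v, w)
            out[t] = min(out.get(t, INF), wv + ww)     # collect weights of equal tuples
    return out

def project(R, idx):
    """P_<j1..jm>(R): keep (and possibly reorder or duplicate) the listed tapes"""
    out = {}
    for t, w in R.items():
        key = tuple(t[j] for j in idx)
        out[key] = min(out.get(key, INF), w)
    return out

def coproject(R, idx):
    """complementary projection: remove the listed tapes, keep the rest in order"""
    drop = set(idx)
    out = {}
    for t, w in R.items():
        key = tuple(s for j, s in enumerate(t) if j not in drop)
        out[key] = min(out.get(key, INF), w)
    return out

def auto_intersect(R, j, k):
    """I_{j,k}(R): keep only the tuples whose tapes j and k carry equal strings"""
    return {t: w for t, w in R.items() if t[j] == t[k]}

R1 = {("ab",): 0.3, ("c",): 0.1}                 # a weighted language (1-tape relation)
R2 = {("x", "ab"): 0.2}                          # a weighted 2-tape relation
R  = cross_product(R1, R2)                       # {('ab','x','ab'): 0.5, ('c','x','ab'): 0.3}
print(auto_intersect(R, 0, 2))                   # {('ab','x','ab'): 0.5}
print(project(R, (1, 0)))                        # reordered: {('x','ab'): 0.5, ('x','c'): 0.3}
print(coproject(R, (0,)))                        # {('x','ab'): 0.3} -- weights collected
```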
a wmta , , can be used as a transducer having input tapes , to , and output tapes , to , which do not have to be consecutive or disjoint . to apply to a weighted -tuple of input strings ,the tuple is converted into an input wmta , , having one single path labeled with and weighted with .an output wmta , , whose relation contains all weighted -tuples of output strings , , is then obtained through multitape - intersection and projection : following example of classical transducer intersection of and is regular : could be written as . ] it has one theoretical solution which is + this solution can not be compiled with any of the above mentioned previous approaches ( section [ sec : previous ] ) . it can not be enabled by any pre - transformation of the wmtas that does not change their relations , and .all above mentioned approaches do not exceed the following alternatives .one can start by typing all symbols ( and ) with respect to the tapes , to make the alphabets of different tapes disjoint ( which can be omitted for symbols occurring on one tape only ) : + then , one converts tapes into tape , such that each transition , labeled with symbols , is transformed into a sequence of transitions , labeled with symbol each , which is equivalent to ganchev s approach : + after these transformations , it is not possible to obtain the above theoretical solution by means of classical intersection of 1-tape automata , even not after -removal : alternatively , one could start with synchronizing the wmtas .this is not possible across a whole wmta , but only within `` limited sections '' : in our example this means before , inside , and after the cycles : + then , one can proceed as before by first typing the symbols with respect to the tapes + and then transforming tapes into tape + the solution can not be compiled with this alternative either , even not after -removal : to compile multi - tape intersection according to the above procedure ( eq . [ eq : op : mltint : proc ] ) we proceed in 3 steps .first , we compile in one single step with an algorithm that follows the principle of transducer composition and simulates the behaviour of mohri s -filter ( section [ sec : alg : sgint]).-filter has been shown to work on arbitrary transducers .] for the above example , we obtain + next , we compile using our auto - intersection algorithm ( section [ sec : alg : autoint ] ) + and finally , with a simple algorithm for complementary projection : + this final result equals the theoretical solution .in this section we propose and recall algorithms for the above defined operations on wmtas : cross - product , auto - intersection , single - tape and multi - tape intersection . 
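before turning to those algorithms, the three-step recipe of eq. ([eq:op:mltint:proc]) — cross product, auto-intersection, complementary projection — can also be written down in a few lines for finite relations, folded into a single loop over tuple pairs. the representation is the same dict-of-tuples sketch as before, with tropical weights; the hard part the report addresses, doing this on automata with possibly infinite relations, is of course not captured here.

```python
INF = float("inf")

def multitape_intersect(R1, R2, pairs):
    """
    intersect R1 and R2 on the tape pairs [(j1,k1),...,(jr,kr)]: tape j of R1 must
    equal tape k of R2; the matched tapes of R2 are removed, everything else is kept.
    """
    n1 = len(next(iter(R1)))
    drop = {n1 + k for _, k in pairs}
    out = {}
    for v, wv in R1.items():
        for w, ww in R2.items():
            if all(v[j] == w[k] for j, k in pairs):                  # auto-intersection step
                t = v + tuple(s for i, s in enumerate(w, n1) if i not in drop)
                out[t] = min(out.get(t, INF), wv + ww)               # cross product + collection
    return out

# transducer composition as the special case: pair tape 1 of R1 with tape 0 of R2 ...
R1 = {("a", "x"): 0.2, ("b", "y"): 0.4}
R2 = {("x", "1"): 0.1, ("z", "2"): 0.3}
step = multitape_intersect(R1, R2, [(1, 0)])       # {('a', 'x', '1'): 0.3...}
# ... and then remove the intermediate tape 1 by a complementary projection:
composed = {tuple(s for i, s in enumerate(t) if i != 1): w for t, w in step.items()}
print(step, composed)                              # composed == {('a', '1'): 0.3...}
```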
by convention ,our wmtas have only one initial state , without loss of generality , since for any wmta with multiple initial states there exists a wmta with a single initial state accepting the same relation .we will use the following variables and definitions .the variables ] , etc .serve for assigning temporarily additional data to a state .+ [ cols= " < , < , < " , ] we describe two alternative algorithms to compile the cross product of two wmtas , and .the second algorithm is almost identical to classical algorithms for crossproduct of automata .nevertheless , we recall it to make this report more complete and self - standing .both algorithms require the semirings of the two original automata , and , to be equal ( ) .the second algorithm requires the common semiring to be commutative .* cross product through path concatenation : * the first algorithm pairs the label of each transition with ( producing ) , and the label of each transition with ( producing ) , and then concatenates with .we will refer to it as where the suffix _ pc _ stands for _ path concatenation_. we start with a wmta that is equipped with the union of the alphabets , the union of the state sets , and the union of the transition sets of and .the initial state of equals that , its set of final states equals that of , and its semiring equals those of and ( line [ pc : crosspc : l1 ] ) .first , we ( post- ) pair the labels of all transitions originally coming from with , and ( pre- ) pair the labels of all transition from with .then , we connect all final states of with the initial state of through -transitions , as is usually done in the concatenation of automata .the disadvantages of this algorithm are that the paths of become longer than in the second algorithm below and that each transition of is partially labeled with , which may increase the running time of subsequently applied operations .to adapt this algorithm to non - weighted mtas , one has to remove the weight from line [ pc : crosspc : l7 ] and replace line [ pc : crosspc : l8 ] with : .+ * cross product through path alignment : * the second algorithm pairs each string tuple of with each string tuple of , following the definition ( eq . [eq : op : crprod ] ) .the algorithm actually pairs each path of with each path of transition - wise , and appends -transitions to the shorter of two paired paths , so that both have equal length .we will refer to this algorithm as where the suffix _ pa _ stands for _ path alignment_. we start with a wmta whose alphabet is the union of the alphabets of and , whose semiring equals those of and , and that is otherwise empty ( line [ pc : crosspa : l1 ] ) .first , we create the initial state of from the initial states of and , and push onto the stack ( lines [ pc : crosspa : l3 ] , [ pc : crosspa : l27][pc : crosspa : l33 ] ) . while the stack is not empty , we take states from it and access the states and that are assigned to through ] with the above defined meaning .the delay at a state is ( lines [ pc : autoint : l501 ] , [ pc : autoint : l502 ] ) .the delay of a cycle on is the difference between at the end and at the beginning of the cycle ( line [ pc : autoint : l504 ] ) .then , we compile , the maximal absolute value of delay required to match any two cycles . for example , let , encoded by two cycles . 
to obtain a match between and of a path of , we have to traverse the first cycle 3 times and the second two times , allowing for any permutation : .this illustrates that in a match between any two cycles of , the absolute value of the delay does not exceed ( line [ pc : autoint : l404 ] ) .next , we compile the first limit , , that will not be exceeded by a construction with bounded delay . in a match of two cyclesthis limit equals , and for any other match it is . in a construction with bounded delay , the absolute value of the delay in does therefore not exceed ( line [ pc : autoint : l405 ] ) .finally , we compile a second limit , , that allows us , in case of potentially unbounded delay , to construct a larger than does .unboundedness can only result from matching cycles in . to obtain a larger , with states whose delay exceeds , we have to unroll the cycles of further until we reach ( at least ) one more match between two cycles .therefore , ( line [ pc : autoint : l405a ] ) .+ * construction : * we start with a wmta whose alphabet and semiring equal those of and that is otherwise empty ( line [ pc : autoint : l102 ] ) . to each state that will be created in , we will assign two variables : \!=\!q_1 ] stating the leftover string of tape ( yet unmatched in tape ) and the leftover string of tape ( yet unmatched in tape ) .then , we create an initial state in and push it onto the stack ( lines [ pc : autoint : l104 ] , [ pc : autoint : l301][pc : autoint : l310 ] ) . as long as the stack is not empty ,we take states from it and follow each of the outgoing transitions of the corresponding state ] of its target in , we concatenate the leftover strings \!=\!(s , u) ] and ] , none of them is coreachable ( and would disappear if the result was pruned ) .the construction is successful .the example is characterized by : )|>\delta_{\rm max } \;\logand\ ; \fct{coreachable}{q } & \rightarrow & { \it successful } \spc{2ex}\rightarrow\spc{2ex } { \it rational}\;\aint{1,2}(\;)\end{aligned}\ ] ] and its successfully constructed auto - intersection .( dashed parts are not constructed .states marked with have )|>\delta_{\rm max} ] . )[ fig : aint : a12 ] ] * example 3 : * in the third example ( figure [ fig : aint : a13 ] ) , the relation of is the infinite set of string tuples .the auto - intersection , , is not rational and has unbounded delay .its complete construction would require an infinite unrolling of the cycles of and an infinite number of states in which is prevented by .the construction is not successful because the result contains coreachable states with )|>\delta_{\rm max} ] . 
)[ fig : aint : a13 ] ] and its partially constructed auto - intersection .( dashed parts are not constructed .states marked with have )|>\delta_{\rm max} ] ( lines [ pc : intcreps : l5 ] , [ pc : intcreps : l6 ] ) .we intersect each outgoing transition of with each outgoing transition of ( lines [ pc : intcreps : l7a ] , [ pc : intcreps : l7b ] ) .this succeeds only if the -th label component of equals the -th label component of , where and are the two intersected tapes of and respectively , and if the corresponding transition in has target 0 ( line [ pc : intcreps : l8 ] ) .only if it succeeds , we create a transition in ( line [ pc : intcreps : l12 ] ) whose label results from pairing with and whose target corresponds with the triple of targets .if does not exist yet , it is created and pushed onto the stack ( lines [ pc : intcreps : l26][pc : intcreps : l32 ] ) .subsequently , we handle all -transitions in ( lines [ pc : intcreps : l13][pc : intcreps : l18 ] ) and in ( lines [ pc : intcreps : l19][pc : intcreps : l24 ] ) .if we encounter an in and are in state 0 or 1 of , we have to move forward in , stay in the same state in , and go to state 1 in .therefore we create a transition in whose target corresponds to the triple ( lines [ pc : intcreps : l13][pc : intcreps : l18 ] ) .the algorithm works similarly if and is encountered in ( lines [ pc : intcreps : l19][pc : intcreps : l24 ] ) . to adapt this algorithm to non - weighted mtas, one has to remove the weights from the lines [ pc : intcreps : l12 ] , [ pc : intcreps : l18 ] , and [ pc : intcreps : l24 ] , and replace line [ pc : intcreps : l29 ] with : .we propose two alternative algorithms for the multi - tape intersection of two wmtas , and .both algorithms work under the conditions of their underlying basic operations : the semirings of the two wmtas must be equal ( ) and commutative . the second ( more efficient algorithm ) requires all transitions to be labeled with -tuples of strings not exceeding length 1 on ( at least ) one pair of intersected tapes of and of which means no loss of generality : our first algorithm , that we will refer to as , follows the exact procedure of multi - tape intersection ( eq . [ eq : op : mltint : proc ] ) , using the algorithms for cross product , auto - intersection , and complementary projection .the second ( more efficient ) algorithm , that we will call , uses first the above single - tape intersection algorithm to perform cross product and one auto - intersection in one single step ( for intersecting tape with ) , and then the auto - intersection algorithm ( for intersecting all remaining tapes with , for ) .this second algorithm has been used to compile successfully the example of transducer intersection in section [ sec : exm:2tapeintersect ] .many applications of wmtas and wmta operations are possible , such as the morphological analysis of semitic languages or the extraction of words from a bi - lingual dictionary that have equal meaning and similar form in the two languages ( cognates ) . we include only one example in this report , namely the preservation of intermediate results in transduction cascades , which actually stands for a large class of applications .transduction cascades have been extensively used in language and speech processing ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* among many others ) . 
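the core of the auto-intersection construction described above can be sketched, for a single pair of tapes, as a search over pairs (original state, leftover strings): whatever has been read on tape j but not yet matched on tape k (or vice versa) is carried along as a leftover, a path is pruned as soon as the two tapes can no longer agree, and a user-supplied bound on the leftover length stands in for the computed delay limits of the paper. the data layout and the bound handling below are simplifications and assumptions; in particular, this sketch does not decide whether an aborted construction indicates a non-rational result.

```python
from collections import deque

def auto_intersect_fsa(A, j, k, delay_max=10):
    """
    auto-intersection I_{j,k}(A) of a multi-tape automaton A = (n, initial, finals,
    transitions) with transitions (source, label n-tuple, weight, target).
    result states are pairs (state of A, (leftover on tape j, leftover on tape k)).
    """
    n, initial, finals, transitions = A
    out_edges = {}
    for (p, lab, w, q) in transitions:
        out_edges.setdefault(p, []).append((lab, w, q))

    start = (initial, ("", ""))
    edges, queue, seen, new_finals = [], deque([start]), {start}, set()
    while queue:
        q, (s, u) = queue.popleft()
        if q in finals and s == "" and u == "":
            new_finals.add((q, (s, u)))
        for (lab, w, q2) in out_edges.get(q, []):
            a, b = s + lab[j], u + lab[k]            # tape-j and tape-k strings still to be matched
            if a.startswith(b):
                left = (a[len(b):], "")
            elif b.startswith(a):
                left = ("", b[len(a):])
            else:
                continue                             # tapes j and k can never agree: prune this path
            if max(len(left[0]), len(left[1])) > delay_max:
                continue                             # delay bound exceeded: construction may be incomplete
            tgt = (q2, left)
            edges.append(((q, (s, u)), lab, w, tgt))
            if tgt not in seen:
                seen.add(tgt)
                queue.append(tgt)
    return edges, start, new_finals

# the 3-tape relation {("ab", "ab", "xy")} written with "unaligned" tapes, as in the examples above
A = (3, 0, {2}, [(0, ("ab", "", "x"), 0.0, 1),
                 (1, ("", "ab", "y"), 0.0, 2)])
edges, start, finals = auto_intersect_fsa(A, 0, 1)
print(len(edges), "transitions, final states:", finals)   # the whole path survives the intersection
```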
in a ( classical ) weighted transduction cascade , , a set of weighted input strings , encoded as a weighted acceptor , ,is composed with the first transducer , , on its input tape ( figure [ fig : transcascade ] ) .the output projection of this composition is the first intermediate result , , of the cascade .it is further composed with the second transducer , , which leads to the second intermediate result , , etc .the output projection of the last transducer is the final result , : ] at any point in the cascade , previous results can not be accessed .this holds also if the cascade is composed into a single transducer , .none of the `` incorporated '' sub - relations in can refer to a sub - relation other than its immediate predecessor : in a weighted transduction cascade , , that uses wmtas and multi - tape intersection , intermediate results can be preserved and used by all subsequent transductions .suppose , we want to use the two previous results at each point in the cascade ( except in the first transduction ) which requires all intermediate results , , to have two tapes ( figure [ fig : wmtatranscascade ] ) : the projection of the output - tape of the last wmta is the final result , : ] this augmented descriptive power is also available if the whole cascade is intersected into a single wmta , , although has only two tapes in our example .this can be achieved by intersecting iteratively the first wmtas until reaches : each contains all wmtas from to .the final result is built from : each ( except the first ) of the `` incorporated '' multi - tape sub - relations in will still refer to its two predecessors .in our second example of a wmta cascade , , each wmta uses the output of its immediate predecessor , as in a classical cascade ( figure [ fig : wmtatranscascade2 ] ) .in addition , the last wmta uses the output of the first one : ] as in the previous example , the cascade can be intersected into a single wmta , , that exceeds the power of a classical transducer cascade , although it has only two tapes : wish to thank several anonymous reviewers .
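the difference between the two cascades can be mimicked at the relation level with the multitape_intersect helper from the earlier sketch (assumed to be in scope): the intermediate result below keeps two tapes, the original input and the first step's output, so the second step can constrain both, which a chain of ordinary compositions cannot. the tiny relations t1 and t2 are invented for illustration.

```python
INF = float("inf")

D  = {("a",): 0.0}                         # weighted input language with one string
T1 = {("a", "b"): 0.1}                     # first transduction: a -> b
T2 = {("a", "b", "c"): 0.2}                # second transduction: reads the original input AND T1's output

# step 1: intersect D's single tape with tape 0 of T1; keep both the input and T1's output
R1 = multitape_intersect(D, T1, [(0, 0)])           # {('a', 'b'): 0.1} -- a 2-tape intermediate result
# step 2: T2 constrains both tapes of R1 and appends its own output tape
R2 = multitape_intersect(R1, T2, [(0, 0), (1, 1)])  # {('a', 'b', 'c'): 0.3...}
# final result: project onto the last (output) tape
out = {}
for t, w in R2.items():
    out[(t[-1],)] = min(out.get((t[-1],), INF), w)
print(R1, R2, out)                                  # out == {('c',): 0.3...}
```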
this report defines various operations and describes algorithms for _weighted multi-tape automata_ (wmtas). it presents, among others, a new approach to _multi-tape intersection_, meaning the intersection of a number of tapes of one wmta with the same number of tapes of another wmta, which can be seen as a generalization of transducer intersection. in our approach, multi-tape intersection is not treated as an atomic operation but rather as a sequence of more elementary ones. we show an example of multi-tape intersection, actually transducer intersection, that can be compiled with our approach but not with several other methods that we analyzed. finally, we describe an example of a practical application, namely the preservation of intermediate results in transduction cascades.
the underlying networks of many biological and social complex systems are distinguished from purely random graphs .these real - world networks often have the small - world property and scale - free ( power - law ) vertex degree profiles ; they have system - specific local structural motifs and often are organized into communities of different connection densities . in recent years models have been proposed to understand the structural properties of real - world complex systems ; among them the - get - richer " mechanism of network growth by preferential attachment gained great popularity . as the connection pattern affects considerably functions of a networked system , there may exist various feedback mechanisms which couple the system s dynamical performance ( efficiency , sensitivity , robustness , ... ) with the evolution of its structure .but the detailed interactions between dynamics and evolution are often unclear for real - world systems , and understanding complex networks from the angle of dynamics - structure interplay is still a challenging and largely unexplored research topic . among the few theoretical works on dynamics - driven network evolutions from the physics and the computer science communities ( see , e.g. , andreview ) , the main focus has been on network evolutionary games for which network dynamics and evolution occur on comparable time scales . a payoff function is defined for the system , and vertices change their local connections to optimize gains . in many complex systems , however , the dynamical performance of a network is a global property which can not be predicted by only looking at the local structures .most structural changes in such systems , on the other hand , take place locally and relatively randomly , without knowing their consequences to the system s dynamical performance .the time - scales of network dynamics and network structural evolution can also be very different .will dynamics - structure coupling mechanisms build highly nontrivial architectures out of random , blind , and local structural mutations ? in this work extensive simulations of dynamics - driven network evolutionare performed on a simple model system , namely the local majority - rule ( lmr ) opinion dynamics of complex networks .there are two main motivations for this study .first , earlier analytical and simulation studies revealed that networks with heterogeneous structural organizations have remarkably better lmr dynamical performances than homogeneous networks .in complementary to these studies , we want to know , in this simple lmr dynamical system , to what extent the dynamical performance of a network can influence the evolution trajectory of the network s structure .second , as lmr - like dynamical processes are frequently encountered in neural and gene - regulation networks and other biological or social systems , it is hoped that a detailed study of dynamics structure coupling in the model lmr system will also shed light on the structural evolution and optimization in real - world complex systems . 
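before the model details of the next section, the following sketch pins down the two ingredients informally: a synchronous local-majority-rule update and a crude fitness proxy for a network. the tie-breaking rule, the consensus-based fitness definition, and all parameter values are assumptions made for illustration; the paper's actual fitness measure is not reproduced here.

```python
import numpy as np

def lmr_step(A, sigma):
    """synchronous local-majority-rule update: each vertex adopts its neighbours' majority opinion"""
    field = A @ sigma
    return np.where(field > 0, 1, np.where(field < 0, -1, sigma))   # ties keep the old opinion (assumption)

def fitness(A, n_trials=200, t_max=50, seed=0):
    """illustrative proxy: fraction of random initial opinions reaching the initial-majority consensus"""
    rng = np.random.default_rng(seed)
    N, ok, valid = A.shape[0], 0, 0
    for _ in range(n_trials):
        sigma = rng.choice([-1, 1], size=N)
        if sigma.sum() == 0:
            continue                                  # skip perfectly balanced initial conditions
        valid += 1
        target = 1 if sigma.sum() > 0 else -1
        for _ in range(t_max):
            nxt = lmr_step(A, sigma)
            if np.array_equal(nxt, sigma):
                break
            sigma = nxt
        ok += int(np.all(sigma == target))
    return ok / max(valid, 1)

# a poissonian (erdos-renyi-like) random graph as the starting point of the evolution
N, c = 200, 10
rng = np.random.default_rng(1)
A = np.triu((rng.random((N, N)) < c / N).astype(int), 1)
A = A + A.T                                           # symmetric adjacency matrix, no self-loops
print("fitness of the random starting network:", fitness(A))
```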
in the simulation, a fitness value is assigned to a network to quantitatively measure the efficiency of its lmr dynamical performance. a slow (in comparison with the lmr dynamics) mutation-selection process is performed on a population of networks, and networks of higher fitness values are more likely to remain in the population. the network population dynamics reaches a steady state after passing through several transient stages. a steady-state network has a high clustering coefficient and strong local degree-degree correlations, and the fraction of vertices in the network with degree resembles a power-law distribution of with . interestingly, a global hub of degree proportional to network size spontaneously emerges in the network. these results bring new insights into the optimized network organization for lmr dynamics. they are also consistent with the opinion that feedback mechanisms from dynamics to structure could be a dominant force driving complex networks into heterogeneous structures. hopefully this work will stimulate studies on the detailed interactions between dynamics and structure in more realistic complex systems. the local-majority-rule dynamics runs on a network of vertices and undirected links, with being the mean connectivity. the network's adjacency matrix has entries if vertices and are connected by an edge or if otherwise. each vertex has an opinion (spin) that can be influenced by its nearest neighbors. at each time step of the lmr dynamics, every vertex of the network updates its opinion synchronously according to ]. the network evolution trajectory also shows interesting features. starting from an ensemble of random poissonian networks with size and mean degree , fig. [ fig : evolution ] shows that the evolution can be divided into four stages. in the first stage, which lasts for about evolution steps for mutation rate , the degree distribution of the networks changes gradually into the form shown by the solid line in the inset of fig. [ fig : degreedistribution ]. the fitness values of the networks are small and increase only very slowly, the maximal vertex degrees of the networks are also small, and the degree-degree correlations in the network are weak. this dormant stage is followed by a stage which lasts for about steps for . a global hub emerges and its degree rapidly exceeds those of all the other vertices of the network, the subnetwork assortative index also increases rapidly, and the degree distribution of the network becomes power-law-like at the end of this stage. this reforming stage shows a rapid increase in the mean fitness value; it is followed by a long fine-tuning stage (lasting for about steps at ) of slow increase in network fitness, maximal degree, and assortative mixing. finally the network reaches the steady state, in which the network's fitness value saturates but its local structures are being modified continuously. [figure caption (fig: evolution): the network fitness (eq. (r)) and the assortative-mixing index of the global hub-removed subnetwork as functions of the simulation step; curves are shown for two mutation rates (black/red and blue/green, with correspondingly different maximal evolution times) and for two initial ensembles: random poissonian networks (red and green curves, initially the two lower ones) and modified random poissonian networks containing a single vertex of large degree (black and blue curves, initially the two upper ones).] we have performed simulations with different initial conditions and confirmed that the steady states are not affected. for example, fig. [ fig : evolution ] demonstrates that the steady-state networks obtained from two different initial conditions share the same dynamical performances and the same structural properties. (if the network initially has a global hub of degree but otherwise is completely random, during the evolution the degree of the global hub decreases but the fitness of the network increases (fig. [ fig : evolution ]). this indicates that the existence of a global hub, a heterogeneous degree profile, and strong local degree-degree correlations are all important for high dynamical performance.) on the other hand, the evolution process is greatly influenced by the network mutation rate. for the same network size and mean connectivity, the steady-state networks obtained with a lower network mutation rate have better dynamical performances (fig. [ fig : evolution ]). as the network topology becomes heterogeneous, most local structural changes will tend to deteriorate the dynamical performance. when the mutation rate is relatively large, in one evolution step the probability for the combination of local and distributed mutations to enhance the network's dynamical performance will decrease rapidly with . the balance between structural entropy (randomness) and dynamical performance then makes the system cease to be further optimized. for the dynamics-structure interaction to work most efficiently, it is therefore desirable that the time scale of network evolution be much slower than the time scale of network dynamics. in this paper, we have studied the evolution and optimization of complex networks from the perspective of dynamics-structure mutual influence.
through extensive simulation of a simple prototypical model process, the local-majority-rule dynamics, we showed that if there exist feedback mechanisms from a network's dynamical performance to its structure, the network can be driven into highly heterogeneous structures with a global hub, strong local correlations in its connection pattern, and power-law-like vertex-degree distributions. the steady-state networks reached by this dynamics-driven evolution will have better dynamical performance if network evolution occurs much more slowly than the dynamical process on the network. for the lmr dynamics specifically, this work confirmed and extended previous studies by showing that scale-free networks with decay exponent indeed are optimal and can be reached without the need for any central planning. besides the scale-free property and strong local structural correlations, a steady-state network also has a global hub which serves as a global indicator of the system's state by sampling the opinions of a large fraction of the vertices of the system. real-world complex systems are of course much more complicated than the simple model systems studied in this paper. different mechanisms may be contributing simultaneously to the evolution of a real-world complex network. the present paper suggested that the interplay between dynamics and structure can be an important driving force for the formation and stabilization of the heterogeneous structures which are ubiquitous in biological and social systems. much effort is still needed to decipher the detailed dynamics-structure coupling mechanisms in many complex systems. we thank kang li and jie zhou for helpful discussions, erik aurell and peter holme for their suggestions on the manuscript, and zhong-can ou-yang for support. we benefited from the kitpc 2008 program dynamics in information systems. the state key laboratory for scientific and engineering computing, cas, beijing is kindly acknowledged for computational facilities.
the mutual influence of dynamics and structure is a central issue in complex systems. in this paper we study by simulation the slow evolution of a network under the feedback of a local-majority-rule opinion process. if performance-enhancing local mutations have higher chances of being integrated into the network's structure, the system can evolve into a highly heterogeneous small world with a global hub (whose connectivity is proportional to the network size), strong local connection correlations and a power-law-like degree distribution. networks with better dynamical performance are achieved if structural evolution occurs much more slowly than the network dynamics. the structural heterogeneity of many biological and social dynamical systems may also be driven by various dynamics-structure coupling mechanisms.
we consider the infinite-source poisson process with random transmission rate defined by where the arrival times are the points of a unit-rate homogeneous poisson process on the positive half-line, independent of the initial conditions; and the durations and transmission rates are independent and identically distributed random variables with values in and independent of the poisson process and of the initial conditions. this process was considered by resnick and rootzen and mikosch _et al._, among others. the queue is a special case, for . an important motivation for the infinite-source poisson process is to model the instantaneous rate of the workload going through an internet link. although overly simple models are generally not relevant for internet traffic at the packet level, it is generally admitted that rather simple models can be used for higher-level (the so-called _ flow level _) traffic such as tcp or http sessions, one of them being the infinite-source poisson process (see barakat _et al._). one way to empirically analyse internet traffic at the flow level using the infinite-source poisson process would consist in retrieving all the variables involved in the observed traffic during a given period of time, but this would require the collection of all the relevant information in the packet headers (such as source and destination addresses) for the purpose of separating the aggregated workload into transmission rates at a pertinent level; see duffield _et al._ for many insights into this problem. it is well known that heavy tails in the durations result in long-range dependence of the process. long-range dependence can be defined by the regular variation of the autocovariance of the process or, more generally, by the regular variation of the variance of the integrated process: where is a slowly varying function at infinity and is often referred to as the _ hurst index _ of the process. for the infinite-source poisson process, the hurst index is related to the tail index of the durations by the relation . the long-range dependence property has motivated many empirical studies of internet traffic and theoretical ones concerning its impact on queuing (these questions are studied in the case in parulekar and makowski).
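the displayed definition lost its symbols in this copy; the sketch below spells out one concrete reading of the model, with workload x(t) = sum_k u_k 1{t_k <= t < t_k + eta_k}, unit-rate poisson arrivals t_k, pareto durations eta_k and uniform transmission rates u_k. the pareto and uniform choices are illustrative assumptions only; the model merely requires the pairs (eta_k, u_k) to be i.i.d. and independent of the arrival process.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_workload(t_max, alpha=1.5, n_grid=2048):
    """sample path of x(t) = sum_k u_k * 1{t_k <= t < t_k + eta_k} on [0, t_max].

    t_k  : unit-rate homogeneous poisson arrivals on (0, t_max],
    eta_k: pareto durations with tail index alpha (heavy-tailed when 1 < alpha < 2),
    u_k  : uniform(0, 1) transmission rates -- both are illustrative distributional choices.
    """
    n_arrivals = rng.poisson(t_max)
    t = np.sort(rng.uniform(0.0, t_max, n_arrivals))
    eta = rng.pareto(alpha, n_arrivals) + 1.0
    u = rng.uniform(0.0, 1.0, n_arrivals)
    grid = np.linspace(0.0, t_max, n_grid)
    active = (grid[None, :] >= t[:, None]) & (grid[None, :] < (t + eta)[:, None])
    return grid, u @ active          # superpose the rates of the currently active sessions

grid, x = simulate_workload(t_max=2000.0)
print("mean workload on the grid:", x.mean())
```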
however , to the best of our knowledge , no statistical procedure to estimate has been rigorously justified .it is the aim of this paper to propose an estimator of the hurst index of the infinite - source poisson process , and to derive its statistical properties .we propose to estimate ( or equivalently ) from a path of the process over a finite interval ] .there exist a real number and positive functions slowly varying at infinity such that , for all and , = l_p(t ) t^{-\alpha } .\ ] ] since , the functions are continuous at zero and ] and <\infty ] depends on the precise behaviour of at infinity .assumption [ hypo : modele ] implies in particular that the tail of the distribution of is regularly varying with index .this in turns implies assumption [ hypo : modele ] if and are independent and < \infty ] for some , and < \infty ] hence is slowly varying and assumption [ hypo : modele ] holds .let denote a poisson point process on a set endowed with a -field with intensity measure , that is , a random measure such that for any disjoint in , are independent random variables with poisson law with respective parameters , .the main property of poisson point processes that we will use is the following cumulant formula ( see , for instance , resnick : chapter 3 ) . for any positive integer and functions such that and for all , the - order joint cumulant of exists and is given by let be the point processes on with points , that is . under assumption [ hypo : modele](i ) , it is a poisson point process with intensity measure , where is the lebesgue measure on . for ,define we can now show that if <\infty ] , then the process is well defined and strictly stationary .it has the point - process representation let , and .then , for all , if , moreover , , then has finite variance and & = & { \mathbb{e}}[u \eta ] , \\ { \operatorname{cov}}({x_s}(0 ) , { x_s}(t ) ) & = & { \mathbb{e } } [ u^2 ( \eta - t)_+ ] = \int_t^\infty h_2(v ) { \,\mathrm{d}}v .\end{aligned}\ ] ] note that if , then < \infty ] . thus is well defined and stationary since is stationary . the number of indices such that is , hence if is the largest of those , it is almost surely finite and hence ( [ eq : xsisxplusinitcond ] ) .the point - process representation ( [ eq : xspprep ] ) and formule ( [ eq : cum ] ) and ( [ eq : momentueta3 ] ) finally yield the given expressions for the mean and covariance .relation ( [ eq : xsisxplusinitcond ] ) shows that the stationary version can be defined by changing the initial condition of the system .more generally , one could consider _ any _ initial conditions , that is , any process defined as on the right - hand side of ( [ eq : xsisxplusinitcond ] ) with and finite . since the initial conditions almost surely vanish after a finite period , they have a negligible impact on the estimation procedure .thus , our result on easily generalizes to any such initial conditions , and , in particular , to the stationary version , when it exists . applying similar arguments as those used for showing proposition [ prop : xsallpropoerties ] , we obtain : [ prop : secondorder ] the process admits a point - process representation where . 
if assumption [ hypo : modele ] holds with , then the process is non - stationary with expectation and autocovariance function given , for , by & = & { \mathbb{e}}[u ( \eta \wedge t ) ] , \\{ \operatorname{cov}}({x}(s ) , { x}(t ) ) & = & { \mathbb{e } } [ u^2 \{s - ( t-\eta)_+\}_+ ] = \int_{t - s}^t h_2(v ) { \,\mathrm{d}}v .\end{aligned}\ ] ] by the uniform convergence theorem for slowly varying functions , the following asymptotic equivalence of the covariance holds . for any and all , as , with . in accordance with the notation in use in the context of long - memory processes, we can define the hurst index of the process as , because the variance of the process integrated between 0 and increases as . if , then .this case has been considered , for instance , by resnick and rootzn .let be a bounded real - valued function with compact support in ] , the coefficients can be computed for all such that and . according to lemma [ lem : unobs ] ,one may define , for all and , as stated in lemma [ lem : unobs ] , if < \infty ] , the sequence of coefficients at a given scale , , is stationary .moreover , the definition ( [ eq : defunobservable ] ) yields : [ lem : varwavcoefstationnaire ] let assumption [ hypo : modele ] hold with .we have = 0 , \qquad { \operatorname{var}}(d_{j , k}^s ) = \mathcal{l } ( 2^j ) 2^{(2-\alpha)j } , \ ] ] where is slowly varying as .more precisely , we have the asymptotic equivalence with .the proof of ( [ eq : varwavcoeffstationaire ] ) is a straightforward application of ( [ eq : cum ] ) , and the proof of the asymptotic equivalence ( [ eq : mathcallequiv ] ) is obtained by standard arguments on slowly varying functions .a detailed proof can be found in fay __ .let be a bounded function with compact support included in ] . from a computational point of view, it is convenient to chose and to be the so - called father and mother wavelets of a multiresolution analysis ; see , for instance , meyer .the simplest choice is to take and to be associated with the haar system , in which case , and .if the process is observed discretely , we denote its wavelet coefficients by (s ) { \,\mathrm{d}}s .\ ] ] if we observe , for some positive integer , we can compute for all such that . roughly , for , no coefficients can be computed and if the number of computable wavelet coefficients at scale is of order for and large .observe that the choice of time units is unimportant here . indeed , in assumption [ hypo : modele ] , changing the time units simply amounts to adapting the slowly varying functions and the rate of the arrival process .clearly these adaptations do not modify our results since precise multiplicative constants are not considered .we describe now a third observation scheme for which our results can easily be extended .suppose that is a positive integer and that we observe local averages of the trajectory where } ] at all scale and location indices such that . if and is the haar mother wavelet , , then the wavelet coefficients of ] if and only if <\infty \alpha > 1 ] . for all , we have .since is slowly varying , we obtain , for some positive constant , for all , . using lemma [ lem : espaleph ], it follows that \leq c ( j_1-j_2 ) 2^{-\varepsilon(j_2-j_0 ) } = o(2^{-\xi_2 j } ) , \label{eq : bornee2}\ ] ] for some because .this concludes the proof .[ theo : ratecontinustationnaire ] let assumption [ hypo : modele ] hold with and . 
assume , moreover , that is bounded and that with and .if is observed continuously on ] for some constant .hence , in this case , since , & \leq & c 2^{-j/2 } \sum_{j } |j-\delta| { \mathcal{l}}(2^j ) 2^{-j/2}\nonumber\\ & = & o\bigl(2^{- j/2 + ( \alpha/2 - 1 ) j_0}\bigr ) .\end{aligned}\ ] ] inequalities ( [ e4.26 ] ) and ( [ e4.27 ] ) imply ( [ eq : vartermcase1 ] ) . we now briefly adapt the previous proof to the case of discrete observations. define ] .then one can use the hill estimator for estimating the tail index . since and are asymptotically proportional and are independent and identically distributed , the rates of this estimator are those derived in hall and welsh .in particular , if has a pareto distribution , then a parametric rate can be obtained . on the other hand , in the same situation, our wavelet estimator defined on the observations \} ] and .if , moreover , <\infty ] , and = { \mathbb{e}}[n_s(\check f ) ] = 0 ] , then ( s ) f(s ) { \,\mathrm{d}}s$ ] .[ lem : svfunc ] let be a positive real and .let be a non - increasing function on such that , and let be a function on such that for all .define then , as and for any , barakat , c. , thiran , p. , iannaccone , g. , diot , c. and owezarski , p. ( 2002 ) . a flow - based model for internet backbone traffic .in _ proceedings of the 2nd acm sigcomm workshop on internet measurement _ , pp .new york : acm press .duffield , n.g . ,lund , c. and thorup , m. ( 2002 ) .properties and prediction of flow statistics from sampled packet streams . in _ proceedings of the 2nd acm sigcomm workshop on internet measurement _ , pp . 159171 .new york , acm press .knsch , h.r .statistical aspects of self - similar processes . in yu.a .prohorov and v.v .sazonov ( eds ) , _ proceedings of the first world congres of the bernoulli society _ , vol .1 , pp . 6774 .utrecht : vnu science press .parulekar , m. and makowski , a.m. ( 1997 ) .m / g / infinity input processes : a versatile class of models for network traffic . in _ proceedings of infocom 97 _ , pp .los alamitos , ca : ieee computer society press .
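as a numerical illustration of the estimator studied above, the sketch below computes haar wavelet coefficients of a regularly sampled path and regresses the logarithm of their empirical variance on the octave: since var(d_{j,k}) behaves like a constant times 2^{(2-alpha)j}, the slope estimates 2 - alpha. the octave range, the unweighted least-squares fit and the conversion h = (3 - alpha)/2 are assumptions made for the sketch; the stripped formulas above fix the exact choices, and the theorems of this article give the corresponding rates.

```python
import numpy as np

def haar_variances(x, j_max):
    """empirical variances of (unnormalised) haar wavelet coefficients of a sampled path x."""
    out = []
    for j in range(1, j_max + 1):
        block = 2 ** j
        n_blocks = len(x) // block
        y = np.asarray(x[: n_blocks * block], dtype=float).reshape(n_blocks, block)
        half = block // 2
        d = (y[:, :half].sum(axis=1) - y[:, half:].sum(axis=1)) * 2.0 ** (-j / 2)
        out.append(d.var())
    return np.array(out)

def wavelet_alpha_estimate(x, j_min=3, j_max=10):
    """slope of log2 var(d_j) against the octave j estimates 2 - alpha."""
    v = haar_variances(x, j_max)
    j = np.arange(1, j_max + 1)
    keep = (j >= j_min) & (v > 0)
    slope, _ = np.polyfit(j[keep], np.log2(v[keep]), 1)
    alpha_hat = 2.0 - slope
    return alpha_hat, (3.0 - alpha_hat) / 2.0      # assumed conversion h = (3 - alpha) / 2
```

applied to a sufficiently long sampled path (for instance the one produced by the simulation sketch given earlier in this article), the returned pair approximates the tail index and the hurst index.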
long - range dependence induced by heavy tails is a widely reported feature of internet traffic . long - range dependence can be defined as the regular variation of the variance of the integrated process , and half the index of regular variation is then referred to as the hurst index . the infinite - source poisson process ( a particular case of which is the queue ) is a simple and popular model with this property , when the tail of the service time distribution is regularly varying . the hurst index of the infinite - source poisson process is then related to the index of regular variation of the service times . in this paper , we present a wavelet - based estimator of the hurst index of this process , when it is observed either continuously or discretely over an increasing time interval . our estimator is shown to be consistent and robust to some form of non - stationarity . its rate of convergence is investigated . ,
community identification algorithms have been extensively used as a way to improve the quality perceived by users navigating through the web .search engines have incorporated this kind of technology as a source of information for their ranking algorithms and new applications , such as automatic directory creation .furthermore , community identification studies have proven to be of great value to researchers trying to increase their understanding of the information society . .the use of community identification algorithms to local communities , such as those that interact with portals or use specific services in the web , has not been much studied .the direct application of existing algorithms to local community identification does not yield relevant results .the main reason is the difference between the processes associated with service creation in the two levels : local and global .the creation of services in the web as a whole , global context , is governed by distributed and uncoordinated processes .for instance , someone s decision to reference one page authored by someone else does not have to go through any regulatory agency and does not need its peer s authorization .therefore , the majority of links in the web can be considered to have a semantic of reputation associated with it .differently , portals are created in a centralized and coordinated manner .the structures are created for navigational and business purposes leading to a completely different structure .this is why current community identification algorithms do not provide good results in that type of environment .the availability of user access information in the case of a local context is another important fact that should be noted .the combination of the community identification algorithms with user access information would be very valuable to content providers , that can provide specific services to specific communities .the inclusion of user access patterns on the community discovery process also allows us to infer communities even from a source that does not have explicit relationship information . neither the books of an online bookstore nor the games provided by an isp are explicitly related and , therefore , can take advantage of such technique . as the web evolves , new kinds of services , not explicitly related , are created and made available to the users accentuating the need for algorithms designed to work based on evidences other than link information .examples include streaming media and game services .this work proposes and evaluates a technique for local community identification based on user access patterns .our approach starts from a well - known community identification algorithm , the _ hyperlink - induced topic search _ ( hits ) .we then propose a way of transforming user access information into a graph - based structure to be used jointly with the hits algorithm .a methodology to evaluate communities that takes into account the semantic meaning associated with each community is also supplied . in order to exemplify the benefits of our approach ,we show two case studies based on services available on the internet .the paper is organized as follows : in section [ sec : related - work ] we present the related work . 
section [ sec : local - comm - ident ] presents the local community identification algorithm and proposes a methodology to evaluate the results .section [ sec : case - studies ] presents two case studies , based on actual logs from real online services .section [ sec : conclusion ] discusses the concluding remarks and future work .a considerable amount of research has been developed on community identification over the web .most of the approaches focus on analyzing text content , considering vector - space models for the objects usually related to information retrieval , hyperlink structure connecting the pages , markup tags associated with the hyperlinks or the combination of the previously cited sources of information .therefore , they are restricted to objects that contain implicit information provided by the authors.our work , on the other hand , is based solely on user access behavior .besides , we are considering community identification applied to a local context instead of the whole web .our approach aims to adapt the graph based community identification algorithm described in .some modifications to that takes into account user information have already been proposed in . however , this work was not focused on the community identification capabilities of and also considered a different representation of user patterns . other relevant aspect of our work is the proposal of a community evaluation methodology that can be applied to other techniques already proposed such as for comparison purposes .most of the comparison methodologies proposed so far are based on disjunction and coverage of the communities not taking into account semantic meaning .the hits algorithm was initially proposed as a method to improve the quality of searches on the web .it takes answers to a query from a text - based search engine and changes the ranking of these web pages considering the underlying hyperlinked structure connecting them .this approach , formerly known as link analysis , was also the base for several other related studies .the links are considered as a way to represent correlations between pages , inducing a certain reputation / quality to a web page pointed to by another .the algorithm identifies pages that provide valuable information for a determined query and also , pages that are sources of good links for the query .these two kinds of pages are respectively called authorities and hubs . 
the query in the search application is used to limit the scope of the web considered by the algorithm at each execution .therefore , it limits its coverage to a certain subject expressed by a user in terms of his / her query .the idea behind hits is to identify hubs and authorities through a mutually reinforcing relationship existent between the pages .this relationship may be expressed as follows : a good hub is a page that points to good authorities and a good authority is a page that is pointed to by good hubs .this approach is very successful for the search application since it lacks some of the weakness presented by other simple link analysis strategies like indegre and outdegre ranking .an iterative algorithm may be used to break the circularity of the mutually reinforcing relationship and to compute authority and hub weights for each page .thus , each page , has associated with it an authority weight , and a hub weight .these weights form a ranking of the pages ranging from good hubs / authorities , with high / values , to bad ones , with low / .the weights are iteratively evaluated by the following procedure : where indicates the existence of a link link from to .let a denote the adjacency matrix of the web page s subgraph to be considered by the hits algorithm , i.e. , ] , between related nodes is computed by : = \frac{\mid o_p \cap o_q \mid } { \mid o_p \mid } \nonumber\end{aligned}\ ] ] where represents the set of objects accessed in the nth session . after constructing matrix , that expresses the relationship between the user sessions, we identify the communities by applying the hits algorithm exchanging to .the authority weight of a session in a community , given by , is used in order to characterize the communities .the intuition behind this procedure is that the authority weight of a session is related to the authority weight of the objects requested in it .the subject treated by each community is implicitly defined by its members , i.e. , sessions with high authority weights .after user communities have been identified , a way to express their interrelationship must be provided .this comparison , in terms of their similarities / dissimilarities , is of great value to service providers since it is this sort of information that would help them to design better services .for instance , one might decide to provide a personalized service to its users based on information stored in the communities .the weights , associated with each pair session / community , generate a rank of the sessions within the communities .based on the rankings , our analysis tries to identify good and bad sessions for each community .these rankings pass through a series of data analysis techniques in order to provide interrelationships and interpretations for the communities .we use the spearman rank correlation coefficient to compare two communities .this correlation coefficient is a non - parametric ( distribution - free ) rank statistic proposed by spearman as a measure of the strength of the correlation between two variables through the analysis of the rankings imposed by them .the spearman method can be used to calculate the correlation between any communities and by : where is the difference in rank position of corresponding sessions . 
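a minimal sketch of the two building blocks described above: the mutually reinforcing hub/authority iteration and the session-overlap matrix c[p, q] = |o_p ∩ o_q| / |o_p| that replaces the adjacency matrix when the algorithm is applied to user sessions. the per-sweep normalisation and the toy data are illustrative choices, not the authors' code.

```python
import numpy as np

def hits(matrix, n_iter=50):
    """kleinberg's mutually reinforcing iteration: a <- M^T h, h <- M a, normalised each sweep."""
    m = np.asarray(matrix, dtype=float)
    hub = np.ones(m.shape[0])
    auth = np.ones(m.shape[0])
    for _ in range(n_iter):
        auth = m.T @ hub
        auth /= np.linalg.norm(auth)
        hub = m @ auth
        hub /= np.linalg.norm(hub)
    return hub, auth

def session_overlap(sessions):
    """c[p, q] = |o_p ∩ o_q| / |o_p|, with o_p the set of objects accessed in session p."""
    c = np.zeros((len(sessions), len(sessions)))
    for p, op in enumerate(sessions):
        for q, oq in enumerate(sessions):
            c[p, q] = len(op & oq) / len(op) if op else 0.0
    return c

# toy run: three user sessions over a small catalogue of objects
sessions = [{"b1", "b2"}, {"b2", "b3"}, {"b1", "b2", "b4"}]
hub, auth = hits(session_overlap(sessions))
print(auth.round(3))       # authority weights rank the sessions
```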
the value , can be considered to be an approximation to the exact correlation coefficient that could be found if the authority weights for each session were considered .the spearman rank correlation varies from -1 to 1 .completely opposite rankings are indicated by -1 while equal rankings are represented by 1 .we define distance ( i.e. , ) as the separation of two communities and calculate it by the following : the above definition is useful for visualization purposes and for analyzing the communities , as shown in the examples provided in section [ sec : case - studies ] . another important artifact related to community evaluation is the ability to discover the subjects represented by each of them . a simple , yet robust , method is to take into consideration the objects accessed by the users in each session .we split the sessions into three disjoint sets with respect to each community , the set of members , the set of non - members and the rest of them .the set of members is constituted by the top _n _ sessions of the community ranking .the non - members set is formed by the sessions occupying the lowest _n _ positions of the ranking .the remaining sessions are included in a third set not considered through the rest of the evaluation process .the value of should be chosen based on the level of specificity desired or on the information available about the objects .after classifying sessions as members , non - member and indifferent , we proceed by evaluating positively the objects accessed by the sessions belonging to the members set and negatively the objects accessed by the sessions belonging to the non - members set .the weight associated with each pair object / session is calculated by a measure based on a _ tf - idf _ approach , usually employed by information retrieval techniques .the frequency ( _ tf _ ) of an object within a session represents its importance in the scope of the session , while the distinction capability ( _ idf _ ) provided by the object is computed by : where represents the total number of sessions and represents the number of session in which the object was accessed .this section presents the results obtained by the application of the proposed techniques to two different applications : an online bookstore and an audio streaming media server providing content for an online radio .the focus of interest are the books for the online bookstore and songs for the audio streaming media server .the main reason for choosing these applications was the lack of any explicit relationship between objects provided by the service authors .the data comprises one week of accesses to each service .the dataset from the bookstore was collected from august 1st to august 7th of 1999 , while the audio streaming media dataset was collected from january 13th to january 19th of 2002 .the online bookstore considered here is a service specialized on computer science literature and operates exclusively on the internet . throughout the period ,the bookstore received 1.7 million requests , 50,000 of which were requests for information about books , such as : authors , price , category , and reviews . only those types of requestswere considered in this experiment .we used 30 minutes as a threshold for the period of inactivity . as a result, we found 40,000 users sessions .the online radio service provides a web interface to an audio streaming media server that provides songs .users can create personal radios , by specifying the songs they want to listen to , or choose a previous stored radio . 
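returning to the evaluation methodology above, the following sketch shows the spearman-based comparison of two community rankings and a tf-idf style weight for an object inside a session. the (1 - rho)/2 normalisation of the distance and the log(n / n_o) form of the idf term are common conventions assumed here, since the corresponding formulas are stripped in this copy.

```python
import numpy as np
from scipy.stats import spearmanr

def community_distance(weights_a, weights_b):
    """distance between two communities from the spearman correlation of their session rankings.

    assumes d = (1 - rho) / 2, which maps rho in [-1, 1] onto [0, 1]."""
    rho, _ = spearmanr(weights_a, weights_b)
    return (1.0 - rho) / 2.0

def tfidf_weight(count_in_session, session_size, n_sessions, n_sessions_with_object):
    """tf * idf weight of an object within a session (idf assumed to be log(n / n_o))."""
    return (count_in_session / session_size) * np.log(n_sessions / n_sessions_with_object)

# identical rankings give distance 0, completely reversed rankings give distance 1
print(community_distance([0.9, 0.7, 0.4, 0.1], [0.9, 0.7, 0.4, 0.1]))
print(community_distance([0.9, 0.7, 0.4, 0.1], [0.1, 0.4, 0.7, 0.9]))
```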
in the process of radio creation , users can listen excerpts of the songs before they are inserted in the radio s playlist .the streaming media server received 2.3 million requests , 662,000 of them were requests for full songs .only those requests were considered in this experiment because we only wanted to capture the behavior of users who were already listening established radios .again we used 30 minutes of inactivity period as a threshold .the number of sessions found was 78,000 .c1 & c2 & c3 & c4 & c5 + + certification & networking & programming & programming & databases + databases & programming & networking & operating systems & hardware + reference - education & web development & operating systems & hardware & digital business & culture + + web development & reference - education & microsoft & databases & reference - education + networking & certification central & databases & microsoft & programming + programming & databases & certification central & certification central & certification central + + + c6 & c7 & c8 & c9 & c10 + + certification & programming & programming & databases & certification central + microsoft & operating systems & operating systems & web development & databases + networking & web development & web development & operating systems & reference - education + + home & office & networking & microsoft & programming & web development + programming & microsoft & databases & microsoft & networking + databases & certification central & certification central & networking & programming + the local community identification technique was applied to find the top 10 communities in the online bookstore dataset .the communities were named from c1 to c10 .the qualitative analysis of subjects covered by each community is presented in table [ tab : qualitative : bookstore ] . for this analysis ,the first half of the sessions ranking were considered to be the community members and the second half the non - members .information about the categories that each book belongs to were collected from the amazon online store .the weight of each book , computed as explained in section [ sec : comm - eval - comp ] , were used to find the most and the least important categories for each community .analyzing table [ tab : qualitative : bookstore ] , we can infer the following interpretation for each community : * community c1 is basically formed by users interested in database certification that show less interest in network and programming questions related to web development .* community c2 have users that show some interest in web development , while their main interests are networked applications involved with it , less importance is given to database and certification program by the users of this community .* distributed computing is the main interest treated by community c3 , this community is less related to databases and microsoft platforms / applications . * community c4 aggregates users interested in low level programming basically related to operating system s issues .it is interesting to note that this community is also less related to databases and microsoft , similar to community c3 , because these platforms are , generally , less flexible . 
* the main interests of users belonging to community c5 are the hardware specification of database systems. the close relationship between this community and the management of such systems, induced by the interest in digital business books, is also worth mentioning. * community c6 is formed by users interested in network administration of microsoft systems. the community is mainly related to certification programs about this subject. * low-level programming and scripting for web development are the main interests of users belonging to community c7. they are not, however, interested in the network problems related to web development. * community c8 is also related to low-level web development issues. the main difference between c8 and c7 is the fact that in c8 the database category is underprivileged in favor of the ones bottom-ranked in c7. * community c9 is also related to web development, like c7 and c8, although the users of this community express some interest in certification programs. this information was gathered by analyzing the whole set of categories for this community. * certification in database systems is the main concern of users belonging to community c10. we use two data analysis techniques, sammon's mapping and hierarchical clustering, to increase our understanding of the communities. sammon's mapping is a nonlinear projection method closely related to metric multi-dimensional scaling (mds). this method tries to optimize a cost function that describes how well pairwise distances in a data set are preserved in the generated projection. the projection derived by the use of sammon's mapping can be seen in figure [ fig : bookstore : graphs].(a). figure [ fig : bookstore : graphs].(b) shows the hierarchical clustering obtained by using the pairwise distance matrix between communities and the complete-linkage method. the complete-linkage method works as follows: 1. assign to each community its own cluster and consider the distance between clusters to be the same as the distance between the respective communities. 2. find the closest clusters and merge them into a single one. 3.
compute the distances between this new cluster and each of the others .the distance between two clusters is calculated as being the longest distance connecting any sessions belonging to each cluster .repeat steps 2 and 3 until all session are grouped into a single cluster containing all the sessions .figure [ fig : bookstore : graphs].(c ) shows the distances between the clusters merged in each step of the algorithm .as expected , both methods give similar results .they group together communities that are closely related like c1 and c10 , and place apart communities that have no relation like c6 and c5 .it is even more interesting to notice that communities like c8 and c9 , that at a first look seem similar , are correctly separated by both methods .the top clusters of the hierarchy shown in figure [ fig : bookstore : graphs].(b ) , separate the dataset in two very distinct groups .the first one , formed by ( c2 , c3 , c4 , c5,c7 ) represents a group where most of the users are interested in low - level questions , like programming and networking , usually related to operating systems .the other one , formed by ( c1 , c6 , c9 , c10 ) , represents a group of users mostly interested in certification programs and their interests vary from network administration to web development .the dispersion of interests found on the latter group was automatically identified .this can be derived from the highest cluster distance considered for this merge , figure [ fig : bookstore : graphs].(c ) , and also by the dispersion of the points on the sammon s mapping projection , figure [ fig : bookstore : graphs].(a ) .the analisys of community clusters can be extended to the whole hierarchy with similar results .the sammon s distribution provides a comprehensive visualization of the relations expressed by the hierarchy and the use of both techniques together is a great start point for analysis of community data .the quality of the results obtained in the analysis is an evidence of the applicability of the distance metric based on spearman correlation .c1 & c2 & c3 & c4 & c5 + + brazilian pop & rock & samba , ax , pagode & soundtracks & forr & forr + samba , ax , pagode & sertanejo & international pop & mpb & sertanejo + soundtracks & mpb & international rock & international rock & brazilian pop & rock + + mpb & world music & brazilian pop & rock & teen pop & world music + world music & international r&b , soul & samba , ax , pagode & samba , ax , pagode & blues + international rock & international rock & sertanejo & sertanejo & mpb + + + c6 & c7 & c8 & c9 & c10 + + forr & brazilian pop & rock & international pop & international rock & international pop + sertanejo & international rock & soundtracks & reggae & international r&b , soul + samba , ax , pagode & samba , ax , pagode & international rock & orchestras and easy listening & soundtracks + + brazilian pop & rock & international r&b , soul & mpb & mpb & sertanejo + world music & world music & forr & samba , ax , pagode & classics + mpb & mpb & brazilian pop & rock & sertanejo & mpb + popular afro - brazilian style from the bahia state , an style closed related to carnival popular brazilian style derived from samba catchy dance music from the northeast of brazil brazilian country music commonly used for brazilian pop coming after the bossa nova style the same methodology used in the previous section was applied to the audio streaming media dataset .the top 10 communities ( c1 to c10 ) existent on the dataset were identified .the qualitative analysis of 
styles covered by each community and a short explanation of some brazilian music styles are presented in table [ tab : qualitative : radio ]. for this analysis, only the top and bottom 100 sessions were considered to be the elements of the members and non-members sets. unlike the online bookstore, we did not have access to a unique identifier for the songs played. the information available consisted of the titles of the songs, cds and artists, making the process of categorizing the data a time-consuming task. data about the styles of the songs accessed in the considered sessions were collected from amazon and submarino, a major online store in brazil. the sammon's mapping for this dataset is presented in figure [ fig : radio : graphs].(a). figure [ fig : radio : graphs].(b) and figure [ fig : radio : graphs].(c) show the results obtained by the hierarchical clustering method when applied to this dataset. as in section [ sec : online - bookstore ], we have the following observations for the identified communities. for example, communities c9 and c10 represent users interested in international music styles, who do not pay much attention to brazilian music. communities c3 and c10, which are located far apart in both representations, seem to represent different interests of their users. although the same kind of analysis based on similarities of communities and their interests can be done for this dataset, we want to point out other dataset features identified by the algorithm without relying upon any explicit information. one of them is related to the structure of the phonographic industry existing in brazil and the other one is related to the specificity of each dataset. even for an untrained observer, table [ tab : qualitative : radio ] shows that users of the online radio exhibit a strong interest in local music. most of the categories cited are brazilian music categories, even though all top international albums were also available. this fact is extremely important since it reflects what happens every day on brazilian streets. the ifpi music piracy report shows that over 50% of the piracy in brazil is domestic and, therefore, many questions concerning the survival of the local phonographic industry are being raised. the algorithm's capability of confirming a behavior observed in society is very interesting since it can shed light on new questions. the specificity level of each dataset is different and the algorithm is able to reflect this fact. the online bookstore is specialized in computer science while the online radio service provides access to different music styles from different nationalities. the slightly higher distances at each merge step are evidence of the latter, figures [ fig : bookstore : graphs].(c) and [ fig : radio : graphs].(c). we can also see in the sammon's mapping that the communities found in the audio streaming media dataset, figure [ fig : radio : graphs].(a), are more spread out than the ones of the bookstore dataset, figure [ fig : bookstore : graphs].(a). the methodology proposed offers several advantages over the graph-based algorithms in their pure form when applied to the context of local community identification. the communities identified represent the users' perception of the information provided by the services, and this understanding gives service providers a great opportunity for service improvement.
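the complete-linkage procedure listed in steps 1-4 earlier in this section is what standard hierarchical-clustering routines implement when fed the pairwise community distances; a minimal sketch, with a purely hypothetical distance matrix, is:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# hypothetical pairwise distances between four communities (symmetric, zero diagonal)
d = np.array([[0.00, 0.15, 0.60, 0.55],
              [0.15, 0.00, 0.58, 0.50],
              [0.60, 0.58, 0.00, 0.20],
              [0.55, 0.50, 0.20, 0.00]])

# "complete" linkage: the distance between two clusters is the largest pairwise
# distance between their members, which is exactly the rule of the steps above
z = linkage(squareform(d), method="complete")
print(z)      # each row: the two merged clusters, the merge distance, the new cluster size
```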
an evaluation methodology based on data analysis available was also proposed .the evaluation technique is based on _ tf - idf _ ranking of occurrences and the spearman rank correlation .the former is used to provide the focus of each community and the latter , derive a pairwise distance metric .the benefits of these methods are exemplified by the case studies , based on actual data of two real services available in the web .the results obtained in this paper are encouraging and show that the proposed techniques and metrics are promising for characterizing the interests of users accessing a service in the web . yet , this is just an introductory study and we must devote much attention to other possibles metrics , datasets and applicabilities of the proposed technique . the temporal emergence of communities and their evolution is also of great interest .we also intend to compare our results with other methods used for similar purposes .the authors would like to thank the anonymous service owners and operators for enabling this research to proceed by providing us access to their logs .we also would like to thank the e - speed lab team , at the dcc / ufmg , for the continuous help during the development of this research , specially flvia ribeiro for helping with the revision of some of the text presented in this paper .k. bharat and m. r. henzinger .improved algorithms for topic distillation in a hyperlinked environment . in _ proceedings of the 21st international conference on research and development in information retrieval ( sigir ) _ , 1998 .s. chakrabarti , m. m. joshi , and v. b. tawde .enhanced topic distillation using text , markup tags , and hyperlinks . in _ proceedings of the 20th international conference on research and development in information retrieval ( sigir ) _ , 2001 .d. a. menasc , v. a. f. almeida , r. h. riedi , f. p. ribeiro , r. l. c. fonseca , and w. meira jr . in search of invariants for e - business workloads . in _ proceeding of the second acm conference on electronic commerce _ , 2000 .j. c. miller , g. rae , f. schaefer , l. a. ward , t. lofaro , and a. farahat . modifications of kleinberg s hits algorithm using matrix exponentiation and web log records . in _ proceedings of the 24th international conference on research and development in information retrieval ( sigir ) _ , 2001 .g. paliouras , c. papatheodorou , v. karkaletsis , c. spyropoulos , and v. malaveta . learning user communities for improving the services of information providers . in _european conference on research and advanced technology for digital libraries _ , 1998 .e. a. veloso , v. a. f. almeida , w. meira jr ., a. bestavros , and s. jin . a hierarchical characterization of a live streaming media workload . in _ proceedings of the acm internet measurment workshop ( imw02 )
community identification algorithms have been used to enhance the quality of the services perceived by their users. although algorithms for community identification are in widespread use on the web, their application to portals or to specific subsets of the web has not been much studied. in this paper, we propose a technique for local community identification that takes into account user access behavior derived from access logs of servers in the web. the technique departs from the existing community algorithms since it changes the focus of interest, moving from authors to users. our approach does not use relations imposed by authors (e.g. hyperlinks in the case of web pages). it uses information derived from user accesses to a service in order to infer relationships. the communities identified are of great interest to content providers since they can be used to improve the quality of their services. we also propose an evaluation methodology for analyzing the results obtained by the algorithm. we present two case studies based on actual data from two services: an online bookstore and an online radio. the case of the online radio is particularly relevant because it emphasizes the contribution of the proposed algorithm to finding communities in an environment (i.e., a streaming media service) without links, that is, without the relations imposed by authors (e.g. hyperlinks in the case of web pages).
a simplest example of a heteroclinic cycle is a sequence of saddle type equilibrium points joined in a circle by connecting orbits . generically heteroclinic cycles are not robust under perturbations of the system ( changes of parameters ) , but for special classes of systems they may occur robustly , typically due to the presence of invariant hyperplanes .examples of special structures leading to robust heteroclinic cycles are symmetry , the existence of invariant planes corresponding to extinction of some species in lotka - volterra systems or existence of synchrony subspaces in coupled cell systems .heteroclinic networks are a generalization of heteroclinic cycles to sets of equilibria with more complicated connection structure .more generally , heteroclinic sets may consist of invariant sets connecting periodic or more complicated saddle type dynamics .the study of heteroclinic cycles was motivated by examples in fluid mechanics ( systems with symmetry ) and in population biology .see for an introduction to the subject .more recently , rabinovich and co - workers have proposed applications of robust heteroclinic cycles in neuroscience , see for an early review . among the contexts proposed in where heteroclinic dynamics could be relevant were central pattern generators ( cpgs ) and memory formation .these two applications were validated by some more detailed biological studies .the cpgs are circuits controlling the motoric function , and are known to support a variety of complex oscillations corresponding to different movements of the body . as shown in this paper , by slightly changing the coupling structure in the model one can obtain a variety of heteroclinic cycles and thereby complex periodic solutions .the idea of the memory application is similar modifications of the coupling , arising from the action of the input , lead to the occurrence of periodic orbits , existing near heteroclinic cycles whose properties reflect the structure of the input . the focus of this work is to study hopfield networks , which are the simplest models of memory circuits , with the goal of investigating the presence of heteroclinic cycles .+ hopfield introduced thirty years ago this model for learning sequences and associative memory in neural networks , in their simplest possible form . in the continuous time version , for each neuron in a network of neurons , the activity is modeled by the following equation ( activity model ) : where is the activity variable ( membrane potential ) of neuron and is a constant external input on neuron .the function is strictly increasing and invertible , for example a sigmoid .the quantity is the firing rate of neuron , that is the time rate of spikes which are emitted by the neuron .classically the function is used .the coupling coefficients define a matrix called the _connectivity matrix_. a positive ( resp .negative ) coefficient corresponds to an inhibitory ( resp .excitatory ) input from to .when all neurons have the same state of rest . when coupling is switched on , other equilibria may exist depending on the coefficients .it is often assumed that is a symmetric matrix , implying that the dynamics of the network always converges to an equilibrium .each equilibrium is defined by a sequence of values called a _pattern_. 
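the activity model can be integrated directly; the sketch below uses the textbook form du_i/dt = -u_i + sum_j w_ij s(u_j) + I_i with s = tanh. both the sigmoid and the sign attached to the coupling (the text associates positive coefficients with inhibition) are assumptions here, since the displayed equation is stripped in this copy.

```python
import numpy as np

def simulate_hopfield(weights, inputs, u0, dt=0.01, n_steps=5000, s=np.tanh):
    """euler integration of du_i/dt = -u_i + sum_j w_ij * s(u_j) + inputs_i.

    s = tanh and the sign convention of the coupling are assumptions for this sketch."""
    u = np.array(u0, dtype=float)
    for _ in range(n_steps):
        u += dt * (-u + weights @ s(u) + inputs)
    return u

rng = np.random.default_rng(2)
n = 4
w = rng.normal(scale=0.5, size=(n, n))
print(simulate_hopfield(w, inputs=np.zeros(n), u0=rng.normal(size=n)))
```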
depending upon the inputs, one or another equilibrium will be reached, a process that is interpreted as retrieving a pattern which has earlier been memorized through a tuning of the coupling coefficients in the network (hebbian rule). + the assumption that is symmetric is unnecessary for storing information, and besides, experiments have shown that dynamical patterns are often present in neural circuits and seem to play an important role in various respects, in particular in generating periodic oscillations in cpgs. in a recent work chuan et al. developed a method of converting strings of information, in the form of sequences of vectors with entries, into coupling matrices so that the hopfield network with the resulting coupling architecture would have _ storing cycles _, i.e. periodic orbits carrying the information of the underlying sequence of vectors. the defining feature of a storing cycle associated to such a sequence is that it visits the vicinity of each of its vectors, preserving their order in the sequence. in another study chuan et al. used hopf bifurcation analysis to find storing cycles. + a natural observation is that heteroclinic cycles between equilibria given by the elements of the sequence provide a natural approximation to storing cycles. however, the simulations of and gave no evidence of the existence of heteroclinic cycles. in this work we show that after a small modification the systems studied in support heteroclinic cycles. our approach draws on the work of fukai and tanaka, who observed that by replacing a non-differentiable term in the firing rate equations by a constant one obtains a lotka-volterra system, which supports robust heteroclinic cycles. this approach was subsequently used by and in their study of robust heteroclinic cycles in firing rate models. in this work we continue the approach of , introducing some refinements to their approximation of the firing rate equations. we point out that the original firing rate equations cannot support heteroclinic cycles due to the presence of non-smooth terms, and we introduce two methods of regularizing the equations. when the systems studied in are modified using either of our approaches, heteroclinic cycles do exist and there is a direct correspondence between the input string / vector sequence / coupling structure and the resulting heteroclinic cycle. in this work we carry out a detailed study of this correspondence. system is often transformed to the _ firing rate formulation _, by letting the firing rates be the dependent variables. in this section we make the same choice of as the authors of , namely in section [ sec - lvapprox ], where we review some of the work of , and , we make a brief switch to a different but equivalent choice of used by these authors. system transformed to the firing rate variables with given by has the form ^n\ ] ] where and is the coupling matrix. we further decompose as follows: where is the identity matrix, and are non-negative coefficients and . + provided that , the equation has a pair of nonzero solutions with . therefore, when , any vector of the form with is a stable steady-state of . if we think of vectors of the form as information strings in a neural network, then the above steady-states represent stored memory states. however, it is well known that memory states need not be steady (see and references therein).
if the steady - states may become unstable or even disappear , but nevertheless information may still be dynamically stored .we now explain the idea of information storage by means of limit cycles of ( storing cycles ) , as explored in and then we introduce our idea to use robust heteroclinic cycles instead .+ the basic question adressed in is the following : given an information string , can it be stored by a hopfield network in the form of dynamic information , more specifically a limit cycle ? concretely , the information is given in the form of a string of binary -vectors ( with components equal to ) .the learning rule , consistent with hebbian learning , is an algorithm specifying how the information string structures the coupling matrix ( we forget from now on the subscript 1 in ) .this learning rule will be described in detail in section [ sec - eqnetarc ] .the main research question of is whether the system with the coupling structure resulting from applying the learning rule supports stable limit cycles that code the original information string in the sense that the periodic orbit passes through the quadrants of corresponding to the elements of the information string , following its order . in this articlewe focus on a different version of such encoding by the dynamics , choosing a robust heteroclinic cycle as the invariant object encoding the information string .the condition we impose is that the cycle should connect equilibrium points located at vertices of the cube ^n ] , so that heteroclinic cycles can not exist .we discuss this problem in more detail and propose a solution in the next section , which also relates to the work of , and .the articles , and consider the question of the existence of robust heteroclinic cycles in the firing rate version of and show that such cycles exist for a lotka - volterra approximation of the system . in this sectionwe use a different combination of and consistent with choice made in these articles .specifically we will use the functions : the coefficients are assumed to be all positive so that the synaptic couplings are all of inhibitory type .we define the firing rate by and transform to the firing rate formulation . after applying a time rescaling we obtain the following system . note that system is well defined and continuous on the cube ^n ] , yet its properties are similar to , in particular its derivative at and equals , thus is very large .if we replace by or in then , depending on the relative size of and or , any of the three possibilities can arise : 1 . the cycle does not exist , 2 .the cycle exists and is unstable , 3 .the cycle exists and is stable .[ fig ] shows simulations of with replaced by .the matrix is as given in , and .the value of is varied showing an example of each of the possible cases .is as given in , and illustrate the three possible cases for the regularized system corresponding to three different values of .panel * a * shows a heteroclinic cycle , with , panel * b * shows a periodic orbit close to an unstable cycle , with , panel * c * shows the dynamics attracted to an equilibrium for the case when no cycle exists , with . 
]we now return to the formulation with replaced by its -th order polynomial expansion at as described in section [ sec - mod ] .the equation now reads ^n,\ ] ] with and the power series has a radius of convergence equal to 1 .it follows that given any interval , the approximation of by can be as good as we wish provided that is large enough .we now give a formal description of the information string and introduce the learning rule .+ a _ binary pattern _ ( or simply a _ pattern _ ) is a vector of binary states of neurons : with .let be a sequence of patterns .this matrix is called a _ cycle _ if there exists a connectivity matrix such that the corresponding network of neurons visits sequentially and cyclically the patterns defined by . in other words each column can be associated with a state of the system such that the signs of the cell variables are equal to the signs of the corresponding components of .we shall always assume .+ let be the matrix of the cyclic permutation . the cycle is called _ admissible _ if there exists such that has a solution .this relation expresses a necessary condition for the network to possess a solution that periodically takes the signs defined by the patterns .note that , if is admissible then a solution exists in the form where is the moore - penrose pseudo - inverse of , and if has full rank it is unique . a cycle is called _ simple _ if there exists a vector such that each row of equals , for some .we define by theorem 2 in a simple cycle is admissible if and only if . + if in addition we can write then the simple cycle is called _ consecutive_. the following proposition is essentially contained in sec .5.1.1 of .[ prop : jconsecutivecycle ] if a simple consecutive cycle is admissible , then there exists a satisfying of the form where are rational coefficients .if has full rank then is uniquely defined and ( in this case the cycle is minimal in the sense of ) . by constructionwe can write by admissibility and moreover must be a linear combination of the s . hence follows . the s are rational because the vectors have integer coordinates .if then is non singular , hence .[ ex - basic ] we consider as follows , with : let .note that the rows of are , and .note also that .it follows that the rows of are , and , i.e. the second , the third and the negative of the first row of .hence since the rows of are independent the matrix is invertible .hence has a unique solution which , by , must be given by : note that and satisfies .this matrix provides a simple example of heteroclinic cycle , which we illustrate in fig .[ hetcyc36 ] : the reader can check on this numerical simulation that indeed trajectories follow the pattern defined by . observe that the trajectory closely follows the edges of the cube connecting the equilibria in the pattern .the analysis is easy but it follows directly from proposition in section 4.2 ( see example [ ex : edgecycle ] ) . given by and with , .initial conditions close to . ]suppose that is fixed and note that every consecutive cycle is uniquely determined by the choice of and . if then such a cycle is always admissible . if then there are only very special choices of and such that the cycle is admissible . in this sectionwe will address the question of finding the conditions on so that there exists an such that the cycle determined by and is admissible . 
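Before turning to the algebraic characterization, the pseudo-inverse learning rule and the admissibility relation can be checked numerically. The short sketch below builds the pattern matrix of a consecutive simple cycle from a defining vector, computes the coupling matrix with the Moore-Penrose pseudo-inverse, and verifies that each pattern is mapped to the next; the defining vector and the dimensions are illustrative and do not reproduce the specific example above.

```python
import numpy as np

# Sketch of the pseudo-inverse learning rule for a consecutive simple cycle.
# A binary pattern sequence Sigma = [xi_1, ..., xi_p] is stored by choosing
# J so that J xi_k = xi_{k+1} (indices mod p).  The concrete patterns below
# are illustrative.

def consecutive_cycle(eta, p):
    """Columns xi_k obtained by cyclically shifting the entries of eta."""
    n = len(eta)
    return np.array([[eta[(i - k) % n] for k in range(p)]
                     for i in range(n)], dtype=float)

eta = np.array([1, 1, -1])            # defining vector of the simple cycle
p = 3
Sigma = consecutive_cycle(eta, p)     # n x p matrix of patterns
Pi = np.roll(np.eye(p), -1, axis=1)   # cyclic permutation: column k -> k+1

# Pseudo-inverse learning rule: J = Sigma Pi Sigma^+
J = Sigma @ Pi @ np.linalg.pinv(Sigma)

# Admissibility check: J sends each stored pattern to the next one.
print(np.allclose(J @ Sigma, Sigma @ Pi))   # True
print(np.round(J, 3))
```

When the pattern matrix has full rank, as in this toy case, the coupling matrix obtained in this way is the unique solution of the admissibility relation, mirroring the uniqueness statement in the proposition above.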
in order to avoid confusion with prime numbers we will , throughout this section , use the letter instead of to denote the dimension of .we will return to the original notation of in the subsequent sections .consider a simple cycle as defined in section [ sec - eqnetarc ] , with corresponding to the first row of .if is admissible the there exists in the form given by such that is satisfied .let be the last row of ( see ) and let it follows from that .in this section we use the following result : [ th - charinvsp ] let be a non - trivial invariant subspace of the action of on .then there exists a polynomial , which is a divisor of , such that .moreover , for any the inclusion holds if and only if is a divisor of .theorem [ th - charinvsp ] follows from some classical results of algebra , which we will review in the appendix , thereby providing the proof .we now state two corollaries of theorem [ th - charinvsp ] which we will use to characterize the possible choices of for which .[ cor - weta1 ] if then and is a divisor of . *proof*. it is easy to see that .we will prove that the opposite inclusion holds and that is a divisor of . by theorem [ th - charinvsp ]there exists a divisor of such that and divides .suppose that is a proper divisor of .then which is a contradiction .it follows that .hence the corollary holds .+ for a minimal divisor of ( over ) let and let .the following result leads to a characterization of such that .[ cor - weta2 ] if then , for some a minimal divisor of ( over ) .* proof * by corollary [ cor - weta1 ] there exists , a divisor of , such that . by unique decomposition into prime factors over exists a minimal divisor of which divides .let .clearly divides .it follows from theorem [ th - charinvsp ] that .+ in the remainder of this section we will derive the conditions on needed for for some ( the simplest ) choices of , where is a minimal divisor of .we begin by recalling the decomposition of into irreducible polynomials over . for a positive integer let the polynomials , where is a prime number , are irreducible over . fora prime number and a non - negative integer we define note that .the polynomials are the irreducible factors over of the polynomial .suppose , with , , and let the polynomial is called the cyclotomic polynomial of degree and is irreducible .it now follows that the decomposition of into irreducible factors over given by all the possible factors of over are products of the irreducible factors appearing in , hence all the possible choices of are obtained that way .as announced above we now describe some of the spaces by simple conditions on the components of .[ prop - cond1 ] if , for some prime number then consists of vectors satisfying * proof * we use the following identity : hence ( the indexing of the components of in must be understood modulo . )it follows that the rhs of is equal to the vector if holds .+ we now state the condition on for .we begin with the following elementary lemma ( the proof is left to the reader ) .[ lem - elem ]if divides then [ prop - cond2 ] suppose .then * proof * note that further note that , for each , .hence , for each , moreover , for the result follows . 
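The factorization machinery just described can also be explored numerically. The sketch below, writing p for the cycle length and using the illustrative value p = 6, lists the irreducible (cyclotomic) factors of the polynomial over the rationals and computes the dimension of the kernel of each factor evaluated at the cyclic permutation matrix, recovering the direct-sum decomposition used in the argument.

```python
import numpy as np
import sympy as sp

# Sketch: factor lambda^p - 1 into irreducible (cyclotomic) polynomials over Q
# and compute the invariant subspaces ker f(P) of the cyclic permutation
# matrix P.  The value p = 6 is illustrative.

p = 6
lam = sp.symbols('lam')
irreducible = [f for f, _ in sp.factor_list(lam**p - 1)[1]]
print(irreducible)   # e.g. [lam - 1, lam + 1, lam**2 - lam + 1, lam**2 + lam + 1]

P = np.roll(np.eye(p), -1, axis=0)   # cyclic permutation matrix on R^p

for f in irreducible:
    coeffs = [float(c) for c in sp.Poly(f, lam).all_coeffs()]
    fP = np.zeros((p, p))
    for c in coeffs:                 # evaluate f(P) by Horner's rule
        fP = fP @ P + c * np.eye(p)
    dim_ker = p - np.linalg.matrix_rank(fP)
    print(f, "  dim ker f(P) =", dim_ker)

# The kernel dimensions equal the degrees of the factors and sum to p,
# reflecting the direct-sum decomposition of R^p into minimal invariant
# subspaces used in the text.
```

This is only a numerical illustration of the decomposition; the conditions on the defining vector stated in the propositions above single out which of these invariant subspaces can actually contain the relevant vector.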
[ rem - oneblockspace ] since the coordinates of are , it follows that must be contained in one of the spaces .[ rem - other ] the conditions for the other minimal factors of are slightly more complicated and we will not state them here .they are , however , not hard to derive .[ ex - prop2 ] we consider with .note that satisfies with .hence , by proposition [ prop - cond1 ] , with hence note that the last row of is equal to the lhs of .if we define as in with then the rhs of equals .hence , by , identity holds with as specified .[ ex - prop3 ] we consider with .note that , or , in other words , arguing as in example [ ex - prop3 ] we conclude that generated by and is admissible with whose last row equals .[ ex - n/2 ] an interesting class of admissible cycles exists for even with . note that in this case , i.e. divides . further note that let be a cycle constructed with some , .then is admissible .moreover , by a similar argument as in example [ ex - prop2 ] we conclude that the last row of equals .in particular example [ ex - basic ] of section [ sec - eqnetarc ] is a special case of this construction .this type of admissible cycle is called _ antisymmetric _ in .[ ex - xm1 ] since the space corresponds to vectors for which with is admissible .moreover note that for s whose entries are this means that the number of coordinates equal to is the same as the number of coordinates equal to .hence must be even . in this casethe last row of is .we now come to the study of heteroclinic cycles for admissible consecutive simple cycles governed by equation ( [ eq - stocyc ] ) , hence with as in ( [ eq : js ] ). then the equation reads as a system following , we also assume that the two coefficients which control the relative contributions of and to each neuron satisfy * ( h ) * and .we aim at studying the existence and stability of heteroclinic cycles connecting vertex equilibria , i.e. equilibria with entries , for this system . by construction ,the edges , faces and simplices of the hypercube are invariant under the dynamics of [ eq - consecutive ] .+ let be a _vertex equilibrium _ : for all .linearizing ( [ eq - consecutive ] ) at leads to a system of equations where : we can express the eigenvalues as follows : note that under the above conditions on and , which we assume from now on , a necessary and sufficient condition for the existence of negative _ and _ positive eigenvalues with is that this is always possible to realize since .if and otherwise . 
the equation ( hence ) is invariant by the symmetry .therefore any time a cycle admits a heteroclinic cycle , the cycle admits the _ opposite _ heteroclinic cycle obtained by applying .+ in the following we shall always consider s up to this symmetry .[ def : edgecycle ] a heteroclinic cycle is called an `` edge cycle '' if it connects a cyclic sequence of vertex equilibria through heteroclinic orbits lying on the edges of the hypercube .we also request that the unstable manifold at each equilibrium in the cycle has dimension 1 ( therefore is contained in an edge ) .the condition about the unstable manifolds is necessary for asymptotic stability of the edge cycles .if and denote respectively the contracting and expanding eigenvalues along the heteroclinic trajectories , the edge cycle is asymptotically stable if ( see ) provides a simple case of an asymptotically stable edge cycle , see fig .[ hetcyc36 ] .we show below that all asymptotically stable edge cycles have the same simple structure .[ thm : existence edge cycle ] let hypothesis ( h ) hold .an edge cycle exists for ( [ eq - consecutive ] ) if and only if condition holds as well as the following : this edge cycle connects the sequence of equilibria let be an equilibrium in the cycle .note first that according to ( [ eigenvalues ] ) , in order to have one unique positive eigenvalue with , the following must be true : holds and ( i ) all with have the same sign , ( ii ) and ( iii ) has the sign of for .let be the next equilibrium in the cycle , then we must have for all and .observe that we then have .it is straightforward to check that under , there is no equilibrium point lying on the edge joining to and therefore that a heteroclinic connection exists on this edge . + now let s assume that the positive eigenvalue is . then all s , , must be equal and the condition can be written .also we request and , which can be written . as for the case we can check that if these inequalities are satisfied a heteroclinic orbit joins to .+ from the above construction we deduce that the edge cycle must connect the equilibria in the sequence .[ cor : edgecycle ] edge cycles are in one - to - one correspondance with connectivity matrices with and for . moreover , under hypothesis ( h ) , they are asymptotically stable iff .it follows from the above theorem that the matrix for an edge cycle is defined by where is the vector with ( times ) .note that .it follows that the rows of are and .hence with and is solution of since the rows of are independant the matrix is invertible , hence the solution is unique . + it is straightforward to check that is true in this case iff .note that example [ ex - basic ] provides the simplest case of an edge cycle .we have seen in the previous section that in order for a vertex equilibrium of ( [ eq - consecutive ] ) to have a unique positive eigenvalue , a necessary condition was that a change of sign in the sequence of coordinates occurs at most once .the sign of is a special case , it depends on the coefficients .suppose now that has two positive eigenvalues , along directions and .the corresponding two dimensional unstable manifold lies in the face defined by the fixed coordinates when . 
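For completeness, once the contracting and expanding eigenvalues at the vertex equilibria of an edge cycle are known, the stability criterion quoted above can be evaluated with a few lines of code. The sketch below implements a Krupa-Melbourne-type check of the kind referenced in the text; the eigenvalue numbers are illustrative placeholders rather than values computed from a particular coupling matrix.

```python
import numpy as np

# Sketch: a standard Krupa-Melbourne-type stability check for a heteroclinic
# cycle between hyperbolic saddles.  At each equilibrium we take the weakest
# contracting eigenvalue lam_c < 0 and the expanding eigenvalue lam_e > 0;
# the cycle is asymptotically stable when the product of the ratios
# -lam_c / lam_e exceeds 1.  The eigenvalues below are illustrative.

def cycle_is_stable(contracting, expanding):
    rho = np.prod([-c / e for c, e in zip(contracting, expanding)])
    return rho > 1.0, rho

contracting = [-1.2, -0.9, -1.1]   # lam_c at the three equilibria
expanding   = [ 0.5,  0.7,  0.6]   # lam_e at the three equilibria
stable, rho = cycle_is_stable(contracting, expanding)
print(f"product of ratios = {rho:.2f}, asymptotically stable: {stable}")
```

In the edge-cycle case all equilibria in the cycle are related by the cyclic structure of the coupling, so the product reduces to a single ratio and the criterion becomes the simple inequality stated in the corollary above.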
assuming , the four vertices on this face are , , and .the question which we address now is whether there can exist stable heteroclinic cycles which involve saddle - sink connecting trajectories from to .this situation can of course be generalized to more than two unstable eigenvalues , if there are more than two switches of sign in the s .let us first look at an example in low dimension .+ in all the following we assume hypothesis ( h ) holds .[ ex : non - edge1 ] consider 3 neurons ( ) and 4 equilibria such that .defining as before and , we build to form a consecutive cycle with and : .hence clearly the third row is the opposite of the first one , hence this matrix has rank 2 .nevertheless the cycle is admissible because where is the rank of the matrix ^t ] on the linear transformations on given by . rather than studying specifically the permutation we consider a more general context of cyclic transformations .a transformation is cyclic if there exists a vector such that we will prove the following result .[ th - charinspa ] let be a cyclic transformation .there exists an polynomial of degree ( the minimal polynomial ) with the following property .the invariant spaces for the action of on are in one to one correspondence with non - trivial factors of the polynomial .if is such a factor then is the corresponding invariant space. it will be clear from the arguments below that .hence theorem [ th - charinvsp ] is a direct consequence of theorem [ th - charinspa ] . for each invariant subspace , we get a map\to l(w , w ) , f\mapsto f(t)\upharpoonright w\ ] ] whose kernel is an ideal of ] is a principal ideal domain ( pid ) it is a principal ideal .we denote the monic generator of this ideal by .note that .note also that for , the polynomial is the _ minimal polynomial _ of , and we will denote it by . a few observations in this setting will be useful . if ] , the linear transformations and commute so that : implies and implies . if ] by the euclidean algorithm .it follows that since by definition of the minimal polynomial .also , since divides , there is ] as the cyclic subspace generated by . clearly , every minimal subspace must be of this form. we will prove in lemma [ lem - cyc ] that every invariant subspace has this form .now let and , again from the action of the polynomial ring , we obtain a map \to{{\mathbb{r}}}^n , f\mapsto f(t)(v).\ ] ] again , this map is linear , and its kernel is a principal ideal of ] . [ lem - quo ]let , then } ] .the first statement follows as if annihilates then it also annihilates for any ] is equal to .[ lem - facto ] .let and ] is a unique factorization domain ( ufd ) , which means that each non - zero polynomial ] , if divides then divides or divides ) .[ lem - cyc ] let be an invariant subspace .then there is with . to see this , first note that .( lcm denotes the least common multiple ) .thus , for any irreducible divisor of and for the largest power of that divides , there must be a so that divides .now taking , we see by lemma [ lem - facto ] that where . doing this for each irreducible divisor of , the sum of the resulting is the required element by corollary [ cor - dir ] , since implies .recall that a linear transformation is cyclic if there is a so that ={{\mathbb{r}}}^n ] .then is the minimal polynomial and thus for and )=\deg(f) ]. 
if is cyclic and , then .for any with we have , so and we have thus .the following result now follows .[ th - charfinal ] if is a cyclic linear operator on and is a cyclic generator , then the invariant subspaces of for are in one - to - one correspondence with the pairs such that .the space corresponding to such a pair is =im(g(t))=\ker(f(t)).\ ] ] in particular , the minimal invariant subspaces correspond to the pairs where is an irreducible factor of in ] .note that theorem [ th - charfinal ] implies theorem [ th - charinspa ] .this work was partially supported by the european union seventh framework programme ( fp7/2007 - 2013 ) under grant agreement no .269921 ( brainscales ) , no . 318723 ( mathemacs ) , and by the erc advanced grant nervi no .227747 .99 p. ashwin , o. karabacak and t. nowotny , criteria for robustness of heteroclinic cycles in neural microcircuits , _ j. math . neurosci_. * 1*:13 ( 2011 ) c. bick c , m. i. rabinovich . dynamical origin of the effective storage capacity in the brain s working memory ._ phys rev lett_. * 103*(21 ) : 218101 , 2009 p.chossat , r. lauterbach ._ methods in equivariant bifurcation and dynamical systems _ , advanced series in nonlinear dynamics * 15 * , world scientific , singapour ( 2000 ) chuan zhang , g. dangelmayr , i. oprea .storing cycles in hopfield - type networks with pseudo inverse learning rule : admissibility and network topology . _neural networks _ * 46 * , 283 - 298 ( 2013 ) .chuan zhang , g. dangelmayr , i. oprea .storing cycles in hopfield - type networks with pseudoinverse learning rule : retrievability and bifurcation analysis . submitted ( 2013 )david s. dummit and richard m. foote ._ abstract algebra _3rd edition , john wiley & sons ( 2003 ) .b. g. ermentrout , d. h. terman .mathematical foundations of neuroscience . _ interdisciplinary applied mathematics _ , vol .35 , springer ( 2010 ) .t. fukai , s. tanaka . a simple neural network exhibiting selective activation of neuronal ensembles : from winner - take - all to winners - share - all . _ neural comput . _ * 9 * : 77 - 97 ( 1997 ) .t. gencic , m. lappe , g. dangelmayr and w. guettinger . storing cycles in analog neural networks .parallel processing in neural systems and computers , r. eckmiller , g. hartmann & g. hause ( eds ) , 445 - 450 , north holland ( 1990 ) .j. hofbauer , k. sigmund ._ evolutionary games and population dynamics _ , cambridge university press ( 1998 ) .j.j.hopfield , neural networks and physical systems with emergent collective computational abilities , _ proc .usa _ * 79*(8 ) : 25542558 , 1982 .robust heteroclinic cycles . _ j. of nonl .sci . _ * 7 * , 129176 ( 1997 ) .l. personnaz , i. guyon & g. dreyfus .collective computational properties of neural networks : new learning mechanisms . _ physical review a _ , * 34*(5 ) 4217 - 4228 ( 1986 ) .m. p. rabinovich , p. varona , a. i. selverston , h. d. i. abarbanel .dynamical principles in neuroscience ._ reviews of modern physics _* 78*(4 ) : 1213 - 1265 ( 2006 ) .a. szucs , r. huerta , m. i. rabinovich , a. i. selverston .robust microcircuit synchronization by inhibitory connections ._ neuron _ , * 61 * : 439 - 453 ( 2009 ) .
Learning and memory formation are associated with the strengthening of synaptic connections between neurons according to a pattern reflected by the input. According to this theory, a retained memory sequence is associated with a dynamical pattern of the corresponding neural circuit. In this work we consider a class of neural network models, known as Hopfield networks, with a learning rule that consists of transforming an information string into a coupling pattern. Within this class of models we study dynamical patterns, known as robust heteroclinic cycles, and establish a tight connection between their existence and the structure of the coupling. * keywords * : heteroclinic cycles, Hopfield networks, learning rule, network architecture. + * ams classification * : 34c37, 37n25, 68t05, 92b20.
Quantum computation is arguably the foremost application goal of quantum mechanics today. Feynman envisioned this connection, proposing that quantum properties be used to speed up the simulation of computational problems by physical systems. Since then, useful developments based on quantum algorithms and quantum information processing have been pursued both theoretically and experimentally: quantum dense coding, quantum key distribution, quantum computation and quantum teleportation. Quantum gate array computation is the most common, direct and transparent approach because of its proximity to classical computing. The similarity rests on the use of computational gates modelled on classical programming. These gates have been realized in several designs, depending on the physical resources on which they are implemented: ion traps and electromagnetic cavities, Josephson junctions, nuclear magnetic resonance and spins. Nevertheless, their translation into theoretical gates is not always immediate and may require control or iterative procedures. Teleportation is a physical process that is now widely known to the public because of its appeal. In it, a quantum state is transferred to another system using a previously shared entangled pair, with the assistance of classical communication and local operations. In the original protocol, a Bell state is used for this purpose together with Hadamard and controlled gates. Starting from this development, many proposals have been made to transfer multi-qubit states, including in the experimental arena. Teleportation has been accomplished between optical systems, between photons and a single atomic ensemble, and between trapped atomic ions using the Coulomb interaction. Teleportation is therefore commonly regarded as one of the most striking achievements of quantum information theory. The Ising model is used as a simple approach to the magnetic interaction between quantum objects (electronic gases, quantum dots, ions, etc.).
Nielsen was the first to report studies of entanglement between magnetic systems, based on a two-spin system driven by an external magnetic field. One property of this model is that it generates entanglement, one of the most interesting features of quantum mechanics. This property is central to most quantum applications, because its non-local character improves the capacity and speed of information processing. In the Ising model, entanglement can be controlled through driven magnetic fields introduced into the physical system; this is the case for teleportation. Different models of Ising interaction (depending on the interests of each author and the physical systems under consideration) have been used to reproduce effects in bipartite or multipartite systems and quantum dots. Nowadays, quantum gate array computation is being explored experimentally in order to adapt it to the physical platforms on which it can be implemented, particularly in terms of noise control and of the reproduction of computational gates, that is, of the interactions that can be exploited to realize them. Quantum dots and electronic gases are developments towards a scalable spin-based quantum computer, which can be controlled with electromagnetic interactions among neighboring spins to obtain universal quantum operations in the sense of the DiVincenzo criteria. The aim of this paper is to apply some recently reported non-local properties of the anisotropic Ising model, together with control procedures on it, which naturally reproduce non-local gates useful for teleportation. These gates reduce the teleportation algorithm to a driven magnetic interaction in matter that transfers spin quantum states. Several variants of this process are presented, as well as their extension to the teleportation of multiple qubits. The use of magnetic systems as quantum resources is a basis on which quantum applications can be built. From quantum memories to quantum processors, matter susceptible to magnetic control is considered as a substrate for quantum computation and quantum information processing. In particular, the Ising model is a simple interaction model providing an easy basis to generate and manipulate quantum states and, especially, entanglement. In this model, as shown in the upper part of figure [fig1], two qubits interact via the Ising interaction, with additional local magnetic fields as driving elements. Some results on the anisotropic Ising model generalize its treatment and suggest an algebraic structure when it is written in the Bell basis. Following this work, we focus on the following Hamiltonian for the bipartite anisotropic Ising model, including an inhomogeneous magnetic field restricted to one spatial direction: which includes several models considered in the previously cited works.
following the definitions and the notation adopted in , we introduce the scaled parameters : \\ \nonumber \\ \nonumber { \rm with : } & & j_{\ { h \}\pm } \equiv { j_{i , j}}_\pm = j_i \pm j_j \\ \nonumber & & b_{h \pm}=b_{1_h}\pm b_{2_h } \\\nonumber & & { r_h}_\pm = \sqrt{{b_h^2}_\pm+{j_{\ { h \}}^2}_\mp}\end{aligned}\ ] ] being a cyclic permutation of and simplified by .additionally , we introduce the same nomenclature introduced in on different subscripts than those inherited by the computational basis : greek scripts for or , for scripts in states and operators ( meaning in mathematical expressions respectively ) ; capital scripts for as in the computational basis ; and latin scripts for spatial directions or .bell states become in last notation : in agreement with last notation , energy levels in ( [ hamiltonian ] ) are denoted by eigenvalues and they are : with their eigenvectors reported in .as there , by introducing the following definitions : the evolution operators in bell basis are in matrix form : \label{mathamiltonian2 } { u_{2}}(t)= & \left ( \begin{array}{c|c|c|c } e^{i { \delta_2}_+^+}{{e_2}_+^+}^ * & 0 & 0 & - e^{i { \delta_2}_+^+}{d_2}_+ \\ \hline 0 & e^{i { \delta_2}_-^+}{{e_2}_-^+}^ * & -e^{i { \delta_2}_-^+}{{d_2}_- } & 0 \\ \hline 0 & e^{i { \delta_2}_-^+}{{d_2}_- } & e^{i { \delta_2}_-^+}{{e_2}_-^+ } & 0 \\ \hline e^{i { \delta_2}_+^+}{d_2}_+ & 0 & 0 & e^{i { \delta_2}_+^+}{{e_2}_+^+ } \end{array } \right ) & \in { \bf s}^*_2 \\[3 mm ] \label{mathamiltonian3 } { u_{3}}(t)= & \left ( \begin{array}{c|c|c|c } e^{i { \delta_3}_-^+}{{e_3}_-^+}^ * & 0 & i e^{i { \delta_3}_-^+}{d_3}_- & 0 \\ \hline 0 & e^{i { \delta_3}_+^+}{{e_3}_+^+}^ * & 0 & i e^{i { \delta_3}_+^+}{d_3}_+ \\\hline i e^{i { \delta_3}_-^+}{d_3}_- & 0 & e^{i { \delta_3}_-^+}{{e_3}_-^+ } & 0 \\ \hline 0 & i e^{i { \delta_3}_+^+}{d_3}_+ & 0 & e^{i { \delta_3}_+^+}{{e_3}_+^+ } \end{array } \right ) & \in { \bf s}^*_3\end{aligned}\ ] ] where clearly have a block structure , having each one the semidirect product structure : .thus , belongs to subgroups : ( defined in ) .these subgroups are characterized by their block structure and properties inherited from ( [ mathamiltonian1]-[mathamiltonian3 ] ) when are fixed .then , these subgroups have a semidirect product structure : .thus , identity and inverses are included in each subgroup , while there are closure in the product . at same time , quantum states in become split in a direct sum of two subspaces generated each one by pairs of bell states .this structure is essential in the current work because it assures the existence of solutions in their control .blocks are elements of as is reported in .their general structure is : being the spatial direction of magnetic field associated ; an ordering label for each block as it appears in the rows of the evolution matrix ; are the labels for its rows in ( by example , in , labels for the rows of second block , , in ) .note that is unitary .last structure lets introduce generation of control operations in terms of factorization of special unitary matrices in .as was reported in , controlled gates on each block could be constructed in two pulses : for that purpose , one block should be diagonalized into and another block antidiagonalized into or (which forms straight assumed wrote for the bell basis instead for the computational basis ) . 
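The block structure of the evolution operator in the Bell basis can be verified directly by numerical exponentiation. The sketch below builds a two-qubit anisotropic Ising Hamiltonian with an inhomogeneous field along z, computes U(t) = exp(-iHt), and rewrites it in the Bell basis; the coupling and field values are illustrative, and the sign conventions need not match those fixed in the Hamiltonian of this paper.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: two-qubit anisotropic Ising Hamiltonian with an inhomogeneous
# field along z, exponentiated and expressed in the Bell basis to exhibit
# the 2x2 block structure.  Couplings, field strengths and signs are
# illustrative, not the paper's conventions.

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Jx, Jy, Jz = 1.0, 0.7, 0.4          # anisotropic couplings (illustrative)
B1, B2 = 0.9, 0.3                   # inhomogeneous field along z (illustrative)

H = (Jx * np.kron(sx, sx) + Jy * np.kron(sy, sy) + Jz * np.kron(sz, sz)
     + B1 * np.kron(sz, I2) + B2 * np.kron(I2, sz))

U = expm(-1j * H * 1.0)             # evolution for t = 1

# Bell basis as columns: beta_00, beta_01, beta_10, beta_11.
bell = np.array([[1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [1, 0, 0, -1],
                 [0, 1, -1, 0]], dtype=complex).T / np.sqrt(2)

U_bell = bell.conj().T @ U @ bell
# Nonzero entries couple only the pairs (beta_00, beta_10) and (beta_01, beta_11),
# i.e. the state space splits into two invariant two-dimensional subspaces.
print(np.round(np.abs(U_bell), 3))
```

The printed magnitude pattern makes the semidirect-product block structure visible: the dynamics never mixes Bell states belonging to different blocks, which is the property exploited below to build controlled gates with two pulses.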
prescriptions to achieve last forms on each block were obtained by and they are : where corresponds to diagonal block and to antidiagonal one .also , .these prescriptions let to obtain the design parameters in terms of strengths .note that parameters should fulfill positivity restrictions .if are the respective blocks ( diagonal ) and ( antidiagonal ) for . then , last prescriptions generate the combined blocks in : with becoming an integer or a semi - integer in terms of the additional condition : which shows that is a semi - integer for .driven ising interaction generate generalized controlled gates presented in , .first ones are near related with evolution loops combined with phase gates .second ones are more like to gates used in traditional gate array quantum computation . when , they are simply the matrices obtained from combination ( in the structure of ) of exactly each one of blocks in ( [ sectors1]-[sectors2 ] ) with even : last suggests they can be used as natural operations replacing some standard gates if computing is done using non local quantum resources ( remember that last expressions are based on non local bell basis ) .one immediate application is the teleportation algorithm .we begin , as commonly , with the state with the first qubit to teleport and in possession of alice .bell state is first in possession of bob , who send his first part to alice .then , alice drives and ising interaction as in our model , applying by example , operation between the qubits in their current possession . with just this , they almost obtain the standard teleportation algorithm for one qubit : where for simplicity , to identify easily the results in computational basis , we comeback at this point to the traditional notation for bell states .figure 2a and 2b show two alternatives of the algorithm using the gate , which is substituting the traditional operation on computational basis .after to apply ising interaction , alice makes a measurement , by example in the computational basis on their qubits .results are listed in table [ tab1 ] , where are the eigenstates of ( teleportated state column in table 1 ) .clearly a hadamard gate is necessary at the end and in addition , if measurement result is , then it is required to apply to teleportate the original qubit 1 on qubit 3 ( are in the traditional computational basis , not in the notation ) .thus , alice uses classical communication to send their outcomes to bob , who finally applies the adequate gates ( complementary gates column in table 1 ) to obtain exactly ( until unitary factors ) the original state , as is shown in table 1 and figure [ fig2 ] .there , are the traditional gates in computational basis , which should be applied on qubit 3 as is depicted in table 1. we use these symbols to distinguish from previously used ( with exception of hamiltonian ( [ hamiltonian ] ) ) , which are only matrix forms written for the bell states corresponding to each subspace in the direct sum decomposition of .another alternative for alice and bob is to make a measurement in bell basis ( for both notations ) . in such case, bob will need to apply ( with ) or ( in the traditional notation ) .figure 3 shows this procedure ..[tab1 ] measurement outcomes in two different basis , output state and complementary gates for teleportation algorithm based on . [ cols="<,^,^,^",options="header " , ] last example has shown that gate reproduce the traditional teleportation algorithm .nevertheless , it can be achieved with any gate . 
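For reference, the logical flow of the single-qubit protocol can be reproduced with a few lines of state-vector algebra. The sketch below uses the textbook CNOT-plus-Hadamard realisation and the standard Pauli corrections as a stand-in for the Ising-generated gate and the complementary gates listed in Table 1; it is meant only to illustrate the measurement, classical-communication and correction pattern, not the specific driven-interaction implementation developed here.

```python
import numpy as np

# Sketch of single-qubit teleportation on a 3-qubit state vector, using the
# textbook CNOT + Hadamard gates as stand-ins for the Ising-generated gate.

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# Qubit 1 carries the unknown state; qubits 2 and 3 share the Bell pair.
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi = psi / np.linalg.norm(psi)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                       # joint 3-qubit state

# Alice applies CNOT on qubits (1, 2) and Hadamard on qubit 1.
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

# Alice measures qubits 1 and 2 in the computational basis.
probs = np.array([np.linalg.norm(state[2 * m:2 * m + 2])**2 for m in range(4)])
m = rng.choice(4, p=probs / probs.sum())
bob = state[2 * m:2 * m + 2]
bob = bob / np.linalg.norm(bob)

# Bob applies the complementary gate Z^{m1} X^{m2} dictated by the outcome.
m1, m2 = divmod(m, 2)
bob = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ bob

print(np.allclose(bob, psi))                     # True: the state was teleported
```

The correspondence with Table 1 is outcome-by-outcome: each measurement result selects one corrective gate, and in the driven-interaction version the same table is read with the Ising-generated gate replacing the CNOT and the corrections applied on qubit 3.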
Following the steps in the previous subsection, it is possible to show that with any of these gates (for arbitrary values of the parameters involved), and even if the initial Bell state used is a different one than before, the teleportation procedure can still be achieved, provided the following complementary gates are used (assuming non-local measurements in the Bell basis): The generalization of this procedure to a multiqubit state is straightforward. In figures 2 and 3 we define the procedure as a map between tensor products of two-level Hilbert spaces: we note that it is linear in the original state: while the teleported state is separable from the remaining state. In the last expressions, the subscripts label each of the three qubits involved. The generalization is then immediate. If we have a multi-qubit state to teleport: then, using the following state for the ancilla qubits: and applying the teleportation protocol on the corresponding pairs of qubits: note in particular that the single-qubit teleportation steps need not all be of the same type, nor need the Bell state used for each of them be the same. The equal signs used in ([multi]) hold up to unitary factors. These results show that the controlled Ising model reproduces the standard gates used in quantum teleportation. Optics has been a largely dominant arena for developments in quantum information and quantum computation. Nevertheless, quantum storage and massive information processing are tasks better suited to matter-based quantum systems. Systems based on trapped ions, electrons on helium, nuclear magnetic resonance, superconductors, doped silicon and quantum dots are developments in this direction, showing the possibility of setting up stable and efficient implementations of these technologies. Spin-based quantum computing has thus been developed through experimental implementations, mainly using magnetic systems and exploiting Ising interactions with different approaches, together with control of quantum states, in particular entanglement control. The procedure analyzed in this paper shows how the Ising interaction can be controlled to generate gates able to produce teleportation. Because of its linearity, this scheme admits the same generalization as the traditional algorithm. In addition, the different paths provided by the analysis, in terms of the Bell states used as resources and the field direction chosen to generate the controlled gates, open further research directions. One is the analysis of control in the process and of its fidelity in terms of the control parameters (interaction and field strengths, time intervals). Another concerns flexibility, as discussed above, in terms of the viability of using the entangled resources without discriminating between them, that is, achieving teleportation independently of the knowledge of which non-local resource is being used. Finally, extensions that optimize the economy of the non-local resources used in the process remain open, as in the general teleportation literature.
Teleportation is possibly the best-known quantum algorithm in the public domain. Nevertheless, although this quantum procedure transmits only information rather than objects, its coverage is still very limited and easily subject to errors. Building on fine control of quantum resources, particularly entangled ones, research to extend its coverage and flexibility remains open, in particular on matter-based quantum systems. This work shows how anisotropic Ising interactions can be used as a natural basis for this procedure, based on a sequence of magnetic pulses driving the Ising interaction, with the results stated in terms of specialized quantum gates designed for magnetic systems.