fermions at finite density or finite chemical potential is a subject of a wide range of interest .it is relevant to condensed matter physics , such as the hubbard model away from half - filling .the research about nuclei and neutron stars at low and high nucleon density is actively pursued in nuclear physics and astrophysics .the subject of quark gluon plasma is important for understanding the early universe and is being sought for in relativistic heavy - ion collisions in the laboratories .furthermore , speculation about color superconducting phase has been proposed recently for quantum chromodynamics ( qcd ) at very high quark density . although there are models , e.g. chiral models and nambu jona - lasinio model which have been used to study qcd at finite quark density , the only way to study qcd at finite density and temperature reliably and systematically is via lattice gauge calculations .there have been extensive lattice calculations of qcd at finite temperature . on the contrary ,the calculation at finite density is hampered by the lack of a viable algorithm . in this talk, i shall first review the difficulties associated with the finite density algorithm with chemical potentials in sec .i will then outline in sec .3 a proposal for a finite density algorithm in the canonical ensemble which projects out the nonzero baryon number sector from the fermion determinant . in sec . 4 ,a newly developed noisy monte carlo algorithm which admits unbiased estimate of the probability is described .its application to the fermion determinant is outlined in sec .i will discuss an efficient way , the pad - z method , to estimate the of the fermion matrix in sec .the recent progress on the implementation of the kentucky noisy monte carlo algorithm to dynamical fermions is presented in sec .finally , a summary is given in sec .the usual approach to the finite density in the euclidean path - integral formalism of lattice qcd is to consider the grand canonical ensemble with the partition function e^{-s_g[u]},\ ] ] where the fermion fields with fermion matrix has been integrated to give the determinant . is the gauge link variable and is the gauge action .the chemical potential is introduced to the quark action with the factor in the time - forward hopping term and in the time - backward hopping term . here is the lattice spacing .however , this causes the fermion action to be non - hermitian , i.e. . as a result , the fermion determinant ] for a configuration of the lattice with the wilson action with and which is obtained with 500 noises .we see that the it is rather flat in indicating that the fourier transform in eq .( [ projection ] ) will mainly favor the zero baryon sector . on the other hand , at finite temperature, it is relatively easier for the quarks to be excited so that the zero baryon sector does not necessarily dominate other baryon sectors .another way of seeing this is that the relative weighting factor can be at finite temperature .thus , it should be easier to project out the nonzero baryon sector from the determinant .we plot in fig .2 a similarly obtained ] is real in this approach , nevertheless in view of the fact that the fourier transform in eq .( [ projection ] ) involves the quark number the canonical approach may still have the sign problem at the thermodynamic limit when and are very large. however , we think it might work for small such as 3 or 6 for one or two baryons in a finite .this should be a reasonable start for practical purposes . 
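To make the canonical projection concrete: it amounts to a discrete Fourier transform of the determinant evaluated with a U(1) phase attached to the time direction (an imaginary chemical potential), the conjugate variable being the net quark number. The following minimal Python sketch shows only that Fourier step on a toy "determinant" written as a short fugacity series; `det_M`, `project_quark_number`, and the coefficients are illustrative stand-ins, not the lattice Wilson determinant.

```python
import numpy as np

# Toy illustration of the canonical projection in eq. (projection): the
# determinant, evaluated with a U(1) phase exp(i*phi) in the time direction,
# is Fourier transformed in phi to pick out the sector with fixed net quark
# number k.  det_M is a hypothetical stand-in (a short fugacity series).

def det_M(phi, coeffs):
    """Toy determinant  sum_n c_n exp(i*n*phi)  over quark numbers n = -2..2."""
    n = np.arange(-(len(coeffs) // 2), len(coeffs) // 2 + 1)
    return np.sum(coeffs * np.exp(1j * n * phi))

def project_quark_number(k, coeffs, n_phase=64):
    """det_k = (1/N) sum_j exp(-i*k*phi_j) det_M(phi_j),  phi_j = 2*pi*j/N."""
    phis = 2.0 * np.pi * np.arange(n_phase) / n_phase
    vals = np.array([det_M(p, coeffs) for p in phis])
    return np.mean(np.exp(-1j * k * phis) * vals)

coeffs = np.array([0.01, 0.2, 1.0, 0.2, 0.01])   # weights of the n = -2..2 sectors
print(project_quark_number(1, coeffs).real)       # ~0.2 : the one-quark sector
print(project_quark_number(3, coeffs).real)       # ~0.0 : no three-quark content here
```

In the real problem each determinant entering the sum has to be estimated stochastically, which is what motivates the unbiased determinant estimator and the noisy Monte Carlo algorithm described in the following sections.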
while it is clear what the algorithm in the canonical approach entails ,there are additional practical requirements for the algorithm to work .these include an unbiased estimation of the huge determinant in lattice qcd and , moreover , a monte carlo algorithm which accommodates the unbiased estimate of the probability .we shall discuss them in the following sections .there are problems in physics which involve extensive quantities such as the fermion determinant which require steps to compute exactly .problems of this kind with large volumes are not numerically applicable with the usual monte carlo algorithm which require an exact evaluation of the probability ratios in the accept / reject step . to address this problem , kennedy and kuti proposed a monte carlo algorithm which admits stochastically estimated transition probabilities as long as they are unbiased .but there is a drawback .the probability could lie outside the interval between and since it is estimated stochastically .this probability bound violation will destroy detailed balance and lead to systematic bias .to control the probability violation with a large noise ensemble can be costly .we propose a noisy monte carlo algorithm which avoids this difficulty with two metropolis accept / reject steps .let us consider a model with hamiltonian where collectively denotes the dynamical variables of the system .the major ingredient of the new approach is to transform the noise for the stochastic estimator into stochastic variables .the partition function of the model can be written as \ , e^{-h(u ) } \nonumber \\ & = & \int [ du][d\xi]p_\xi(\xi)\ , f(u,\xi).\end{aligned}\ ] ] where is an unbiased estimator of from the stochastic variable and is the probability distribution for . the next step is to address the lower probability - bound violation .one first notes that one can write the expectation value of the observable as [d\xi]\,p_\xi(\xi ) \,o(u)\,{\rm sign}(f)\,|f(u,\xi)|/z,\ ] ] where is the sign of the function . after redefining the partition function to be [d\xi]p_\xi(\xi)\, |f(u,\xi)|,\ ] ] which is semi - positive definite , the expectation of in eq .( [ o ] ) can be rewritten as as we see , the sign of is not a part of the probability any more but a part in the observable .notice that this reinterpretation is possible because the sign of is a state function which depends on the configuration of and .it is clear then , to avoid the problem of lower probability - bound violation , the accept / reject criterion has to be factorizable into a ratio of the new and old probabilities so that the sign of the estimated can be absorbed into the observable .this leads us to the metropolis accept / reject criterion which incidentally cures the problem of upper probability - bound violation at the same time .it turns out two accept / reject steps are needed in general . 
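The two accept/reject steps spelled out in the next paragraph can be illustrated on a one-variable toy model. In the sketch below the target weight is a Gaussian exp(-U^2/2), the stochastic estimate is deliberately noisy and occasionally negative, and the step sizes are arbitrary choices; none of this is the lattice QCD implementation, it only shows how sampling |f| and carrying sign(f) in the observable reproduces the exact answer without any probability-bound violation.

```python
import numpy as np

# One-variable toy model of the noisy Monte Carlo scheme (not lattice QCD):
# target weight exp(-H(U)) with H(U) = U**2/2, but only an unbiased, possibly
# negative estimate f(U, xi) = exp(-H(U)) * (1 + xi) is available.
# Step 1 updates U at fixed noise xi, step 2 refreshes xi at fixed U; both use
# ratios of |f|, and sgn(f) is moved into the observable.
rng = np.random.default_rng(0)
sigma = 0.3                                  # spread of the noise in the estimate

def f(U, xi):
    return np.exp(-0.5 * U * U) * (1.0 + xi)

U, xi = 0.0, 0.0
num = den = 0.0
for _ in range(200_000):
    # step 1: propose U -> U' keeping xi fixed, accept with min(1, |f'|/|f|)
    Up = U + rng.normal(0.0, 0.5)
    if rng.random() < min(1.0, abs(f(Up, xi)) / abs(f(U, xi))):
        U = Up
    # step 2: refresh xi from its distribution keeping U fixed, same criterion
    xip = rng.normal(0.0, sigma)
    if rng.random() < min(1.0, abs(f(U, xip)) / abs(f(U, xi))):
        xi = xip
    s = np.sign(f(U, xi))
    num += U * U * s                         # O * sgn(f)
    den += s                                 # sgn(f)

print(num / den)                             # -> <U^2> = 1 for the Gaussian weight
```

The same structure carries over when f(U, xi) is the stochastic estimate of the fermion determinant discussed below.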
the first one is to propose updating of via some procedure while keeping the stochastic variables fixed .the acceptance probability is the second accept / reject step involves the refreshing of the stochastic variables according to the probability distribution while keeping fixed .the acceptance probability is it is obvious that there is neither lower nor upper probability - bound violation in either of these two metropolis accept / reject steps .furthermore , it involves the ratios of separate state functions so that the sign of the stochastically estimated probability can be absorbed into the observable as in eq .( [ onew ] ) .detailed balance can be proven to be satisfied and it is unbiased .therefore , this is an exact algorithm .one immediate application of nmc is lattice qcd with dynamical fermions .the action is composed of two parts the pure gauge action and a fermion action . both are functionals of the gauge link variables .to find out the explicit form of , we note that the fermion determinant can be calculated stochastically as a random walk process this can be expressed in the following integral ,\end{aligned}\ ] ] where is the probability distribution for the stochastic variable . it can be the gaussian noise or the noise ( in this case ) .the latter is preferred since it has the minimum variance . is a stochastic variable with uniform distribution between 0 and 1 .this sequence terminates stochastically in finite time and only the seeds from the pseudo - random number generator need to be stored in practice .the function ( in eq .( [ z ] ) is represented by two stochastic variables and here ) is represented by the part of the integrand between the the square brackets in eq .( [ trln ] ) .one can then use the efficient pad - z algorithm to calculate the in eq .( [ trln ] ) .we shall discuss this in the next section .finally , there is a practical concern that can be large so that it takes a large statistics to have a reliable estimate of from the series expansion in eq .( [ trln ] ) . in general , for the taylor expansion , the series will start to converge when .this happens at .for the case , this implies that one needs to have more than 100 !stochastic configurations in the monte carlo integration in eq .( [ trln ] ) in order to have a convergent estimate . even then, the error bar will be very large . to avoid this difficulty, one can implement the following strategy .first , one notes that since the metropolis accept / reject involves the ratio of exponentials , one can subtract a universal number from the exponent in the taylor expansion without affecting the ratio .second , one can use a specific form of the exponential to diminish the value of the exponent .in other words , one can replace with to satisfy .the best choice for is , the mean of . in this case , the variance of becomes .now we shall discuss a very efficient way of estimating the fermion determinant stochastically . the starting point for the methodis the pad approximation of the logarithm function .the pad approximant to of order ] and a judicious expansion point to cover the eigenvalue domain of the problem. 
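For orientation, the quality of such a Padé approximant over a target interval can be checked in a few lines. In the sketch below the expansion point and the test interval are arbitrary stand-ins for "a judicious expansion point covering the eigenvalue domain", and `pade_log` is an illustrative helper, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import pade

# Pade approximation of the logarithm: expand ln(z) = ln(z0) + ln(1 + y),
# y = (z - z0)/z0, and replace ln(1 + y) by its [m/m] rational approximant
# built from the Taylor coefficients 0, 1, -1/2, 1/3, ...

def pade_log(z, z0, m):
    taylor = np.array([0.0] + [(-1.0) ** (k + 1) / k for k in range(1, 2 * m + 1)])
    p, q = pade(taylor, m)                    # numerator and denominator poly1d
    y = (z - z0) / z0
    return np.log(z0) + p(y) / q(y)

z = np.linspace(0.1, 4.0, 500)                # stand-in for the eigenvalue range
for m in (5, 11):
    err = np.max(np.abs(pade_log(z, 2.0, m) - np.log(z)))
    print(m, err)                             # the [11/11] approximant is far more accurate
```

Writing the rational approximant in partial-fraction form turns the logarithm of the matrix into a constant plus a sum of inverses of shifted matrices, which is why the trace estimate described next reduces to solving a set of shifted linear systems.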
exact computation of the trace inverse for matrices is very time consuming for matrices of size .however , the complex z noise method has been shown to provide an efficient stochastic estimation of the trace .in fact , it has been proved to be an optimal choice for the noise , producing a _ minimum _ variance .the stochastic error of the complex z noise estimate results only from the off - diagonal entries of the inverse matrix ( the same is true for z noise for any n ) .however , other noises ( such as gaussian ) have additional errors arising from diagonal entries .this is why the z noise has minimum variance .for example , it has been demonstrated on a lattice with and for the wilson action that the z noise standard deviation is smaller than that of the gaussian noise by a factor of 1.54 . applying the complex z estimator to the expression for the in eq .( [ tr_apprx ] ) , we find where are the solutions of since are shifted matrices with constant diagonal matrix elements , eq .( [ col_inv1 ] ) can be solved collectively for all values of within one iterative process by several algorithms , including the quasi - minimum residual ( qmr ) , multiple - mass minimum residual ( m r ) , and gmres .we have adopted the m r algorithm , which has been shown to be about 2 times faster than the conjugate gradient algorithm , and the overhead for the multiple is only 8% . the only price to payis memory : for each , a vector of the solution needs to be stored .furthermore , one observes that .this improves the conditioning of since the eigenvalues of have positive real parts .hence , we expect faster convergence for column inversions for eq .( [ col_inv1 ] ) . in the next section ,we describe a method which significantly reduces the stochastic error .we now turn to the question of choosing suitable traceless matrices to use in the modified estimator .one possibility for the wilson fermion matrix is suggested by the hopping parameter expansion of the inverse matrix , this suggests choosing the matrices from among those matrices in the hopping parameter expansion which are traceless : it may be verified that all of these matrices are traceless . in principle , one can include all the even powers which entails the explicit calculation of all the allowed loops in . in this manuscriptwe have only included , , and .our numerical computations were carried out with the wilson action on the ( = 73728 ) lattice with .we use the hmc with pseudofermions to generate gauge configurations . with a cold start , we obtain the fermion matrix after the plaquette becomes stable .the trajectories are traced with and 30 molecular dynamics steps using . is then obtained from by an accepted trajectory run .hence and differ by a continuum perturbation , and \sim { \cal o}(1)$ ] .we first calculate with different orders of pad expansion around and .we see from table 1 that the 5th order pad does not give the same answer for two different expansion points , suggesting that its accuracy is not sufficient for the range of eigenvalues of .whereas , the 11th order pad gives the same answer within errors .thus , we shall choose p[11,11] with to perform the calculations from this point on . with 50 noises as a function of the order of subtraction and compared to that of unimproved estimate with 10,000 noises .the dashed lines are drawn with a distance of 1 away from the central value of the unimproved estimate . 
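The gain from subtracting traceless matrices can be reproduced in a few lines on a toy matrix. The sketch below uses real Z2 noise and a dense random "hopping" matrix as stand-ins (the paper uses complex Z2 noise and the Wilson hopping terms) and subtracts only the first-order traceless term; the estimate stays unbiased because the subtracted matrix has zero trace, while the dominant off-diagonal fluctuations cancel.

```python
import numpy as np

# Toy check of noise subtraction: estimate Tr A^{-1} with Z2 noise, then
# subtract eta^T (kappa*D) eta.  Since kappa*D is traceless the mean is
# unchanged, but it cancels the leading off-diagonal part of
# A^{-1} = (1 - kappa*D)^{-1} = 1 + kappa*D + (kappa*D)^2 + ...
# A, D, kappa are stand-ins, not the Wilson fermion matrix.
rng = np.random.default_rng(2)
n, kappa = 200, 0.02
D = rng.normal(size=(n, n))
np.fill_diagonal(D, 0.0)
Ainv = np.linalg.inv(np.eye(n) - kappa * D)

def z2_estimates(M, n_noise, subtract=None):
    out = []
    for _ in range(n_noise):
        eta = rng.choice([-1.0, 1.0], size=n)
        val = eta @ M @ eta
        if subtract is not None:
            val -= eta @ subtract @ eta       # subtracted matrix is traceless
        out.append(val)
    return np.array(out)

plain    = z2_estimates(Ainv, 400)
improved = z2_estimates(Ainv, 400, subtract=kappa * D)
print(np.trace(Ainv))                                 # exact value
print(plain.mean(),    plain.std(ddof=1) / 20)        # unimproved error
print(improved.mean(), improved.std(ddof=1) / 20)     # visibly smaller error
```

With the actual Wilson matrix the higher traceless terms of the hopping expansion can be subtracted as well, which is what produces the large error reductions quoted in table 2.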
] in table 2 , we give the results of improved estimations for .we see that the variational technique described above can reduce the data fluctuations by more than an order of magnitude .for example , the unimproved error in table 2 for 400 z noises is reduced to for the subtraction which includes up to the matrix .this is 37 times smaller . comparing the central values in the last row ( i.e. the order improved ) with that of unimproved estimate with 10,000 z noises, we see that they are the same within errors . this verifies that the variational subtraction scheme that we employed does not introduce biased errors .the improved estimates of from 50 z noises with errors from table 2 are plotted in comparison with the central value of the unimproved estimate from 10,000 noises in fig .we have recently implemented the above noisy monte carlo algorithm to the simulation of lattice qcd with dynamical fermions by incorporating the full determinant directly .our algorithm uses pure gauge field updating with a shifted gauge coupling to minimize fluctuations in the trace log is the wilson dirac matrix .it gives the correct results as compared to the standard hybrid monte carlo simulation .however , the present simulation has a low acceptance rate due to the pure gauge update and results in long autocorrelations .we are in the process of working out an alternative updating scheme with molecular dynamics trajectory to include the feedback of the determinantal effects on the gauge field which should be more efficient than the pure gauge update .after reviewing the finite density algorithm for qcd with the chemical potential , we propose a canonical approach by projecting out the definite baryon number sector from the fermion determinant and stay in the sector throughout the monte carlo updating .this should circumvent the overlap problem . in order to make the algorithm practical, one needs an efficient way to estimate the huge fermion determinant and a monte carlo algorithm which admits unbiased estimates of the probability without upper unitarity bound violations .these are achieved with the pad - z estimate of the determinant and the noisy monte carlo algorithm .so far , we have implemented the kentucky noisy monte carlo algorithm to incorporate dynamical fermions in qcd on a relatively small lattice and medium heavy quark based on pure gauge updating . as a next step, we will work on a more efficient updating algorithm and project out the baryon sector to see if the finite density algorithm proposed here will live up to its promise .this work is partially supported by the u.s .doe grant de - fg05 - 84er40154 .the author wishes to thank m. faber for introducing the subject of the finite density to him and shao - jing dong for providing some unpublished figures .fruitful discussions with m. alford , i.barbour , u .- j .wiese , and r. sugar are acknowledged .he would also thank the organizer , ge mo - lin for the invitation to attend the conference and his hospitality .0 m. alford , k. rajagopal , and f. wilczek , _ phys .* b422 * , 247 ( 1998 ) ; r. rapp , t. schaefer , e.v .shuryak , and m. velkovsky , _ phys .lett . _ * 81 * , 53 ( 1998 ) ; for a review , see for example , k. rajagopal and f. wilczek , hep - ph/0011333 .i. m. barbour , s. e. morrison , e. g. klepfish , j. b. kogut , and m .-lombardo , _ nucl .( proc . suppl . )_ * 60a * , 220 ( 1998 ) ; i.m .barbour , c.t.h .davies , and z. sabeur , _ phys .* b215 * , 567 ( 1988 ) . 
i will review the finite density algorithm for lattice qcd based on finite chemical potential and summarize the associated difficulties . i will propose a canonical ensemble approach which projects out the finite baryon number sector from the fermion determinant . for this algorithm to work , one needs an efficient method for calculating the fermion determinant and a monte carlo algorithm which accommodates unbiased estimates of the probability . i shall report on the progress made in this direction with the pad - z estimator of the determinant and its implementation in the newly developed noisy monte carlo algorithm . * uk/02 - 02 * + jan . 2002 + * finite density algorithm in lattice qcd + a canonical ensemble approach * keh - fei liu _ dept . of physics and astronomy , university of kentucky , lexington , ky 40506 _
this paper is devoted to the systematic studies of the asymptotic solutions of advection equation with time - oscillating velocity field .we employ the two - timing method in the form introduced by , which allows us to produce the classification of an infinite number of distinguished limits and solutions .we have obtained several new versions of averaged equations for a passive scalar and have presented several examples .the main purpose of this paper is the introduction of a new systematic and general viewpoint on the hierarchy of distinguished limits , drifts , and pseudo - diffusion .this viewpoint can help to organize and to unify a number of results , including that of .we have chosen the advection equation as the simplest interesting example , with the hope that similar studies can be performed for other interesting equations .the two - timing method has been used by many authors , see , however our analysis goes well beyond the usual calculations of the main approximations in various special cases .our analytic calculations are rigorous and straightforward by their nature , however they include a large number of integration by parts and algebraic transformations ; the performing of such calculations represents a ` champion - type ' result by itself .the details of calculations are fully presented in the appendix .the key point of asymptotic theory for odes or pdes ( with two or more small parameters ) is the choice of asymptotic path in the space of these parameters . in other words ,all available small parameters should be expressed through a single one ; it allows one to compare different terms in asymptotic series .the related literature is very diverse and short of exact mathematical formulations .usually , during the studies of a particular problem , some ` lucky ' asymptotic path is chosen on the basis that it gives the successful derivation of an averaged equation or successful calculation of the main term in asymptotic solution , see _e.g. _ .such ` lucky ' asymptotic paths are called _ distinguished limits _ ( dl s ) , while related solutions are called _ distinguished limit solutions _ ( dls s ) .any studies of the multiplicity of distinguished paths ( as well as the multiplicity of averaged equations and related solutions ) and interrelations between them are always avoided .a different approach is given by , where the concept of ` strong ' , ` moderate ' , and ` weak ' oscillations is introduced .however , this approach overlooks some asymptotic solutions and could be seen as being ` too rigid ' ; therefore we propose its development in this paper .our asymptotic procedure is based on the simultaneous scaling ( with the use of strouhal number ) a slow time - scale and vibrational velocity amplitude .this procedure is flexible , self - consistent , covers broad classes of asymptotic solutions , and leads to a natural classification of solutions .the considered advection equation is a hyperbolic pde of the first order with time - oscillating coefficient , which represents the _ prescribed _ velocity field .we emphasize that the velocity field has no connection to any particular dynamics . indeed, the governing equations for a velocity can be chosen very differently : they could represent an inviscid or viscous fluid , a fluid with any rheology , liquid crystals , blood , elastic and plastic medium , _ etc ._ the advection equation can be also a part of the collisionless boltzmann equation or vlasov equation ( see _ e.g. 
_ ) .the introduction of dimensional variables into the advection equation and the use of the two - timing assumption leads us to the problem with _ two independent small scaling parameters that represent the ratio of two time - scales and amplitude of oscillations_. the given set of parameters produces the _ strouhal number as a basic large dimensionless parameter _ ; the appearance of means that the scaling is not unique . in agrement with the general _ ansatz _ of asymptotic theory we can use to vary all the characteristic scales ; in our approach we use to introduce the appropriate scales of velocity and slow time .the obtained dimensionless equation has been used for the identifying of distinguished limits .we have found the infinite sequence of the distinguished limits , which are corresponding to the successive degeneration of a drift velocity .the _ order _ of this degeneration is chosen to enumerate the distinguished limits .the derivation of averaged equations produce the successive approximations for a drift velocity as well as a qualitatively new ` diffusion - like ' term , which we call _ pseudo - diffusion_. remarkably , all the coefficients in the obtained averaged equations are universal ; they are the same in dls s of different orders and only ` moving ' to lower approximations with the increasing of the order of dls . as the next important step we discover _ one - parametric _ families of solutions ( infinite number of solutions corresponding to each dls ) ,different parametric solutions contain different slow time - scales and different velocity amplitudes .our results are particularly striking for the case of purely periodic velocity oscillations . herewe address an intriguing question : how to find the related slow time - scale while it does not appear in the coefficients of the equation ?our answer is : the slow time - scale is uniquely determined by the magnitude of a given velocity in terms of . for a given velocitywe obtain the unique slow time - scale .we have established that the solutions corresponding to different orders of velocity are physically different even if they have the same functional form .the central notion which appears in this paper is a _drift velocity_. our study can be seen as a general view on the studies of drift velocities with the emphasis at their different appearances in averaged equations ; it is complemented by an unexpected but inevitable appearance of pseudo - diffusion .we put the studies of drifts into the general context of the asymptotic theory , that has never been done before .in addition , there is a difference between the _ classical drift _ and _ eulerian drift_. the classical ( or lagrangian ) drift appears as an averaged velocity for fixed lagrangian coordinates ; hence the classical drift represents an average velocity of material particles .the classical drift , in its main approximation , is a well - studied phenomenon , see . in our consideration ( as well as in )the average operation takes place for _ fixed eulerian coordinates _ , hence our drift represents an average velocity at a given eulerian position and we call it an _ eulerian drift_. the relation between the classical drift and eulerian drift represents an open question to study , while their leading terms ( in the amplitude series ) are certainly coincide . 
in our consideration of eulerian driftwe explicitly show its different options , approximations , and appearances .we calculate first three approximations for the eulerian drift velocity that is especially useful in the cases of vanishing leading approximation .situation with _ pseudo - diffusion _ is more intriguing .first of all , the appearance of this ` diffusion - like ' term in averaged equations is surprising by itself .another surprise is the form of the pseudo - diffusion matrix which appears as a _ lie - derivative _ of an averaged _ quadratic displacement tensor_. for different flows , pseudo - diffusion can correspond to diffusion or ` anti - diffusion ' or to some intermediate cases .we make the first step in the understanding of the meaning of pseudo - diffusion by considering some simple example of rigid - body oscillations .we arrive to a surprising conclusion that the combination of the advection due to rigid - body oscillations and the eulerian averaging procedure can produce the evolution of the averaged field , which formally coincides with physical diffusion .we present five examples of flows aimed to illustrate different appearances of drift and pseudo - diffusion .these examples cover the general form of modulated oscillatory fields , stokes drift , a spherical ` acoustic ' wave , a class of velocity fields with vanishing main term of the drift , and a velocity field which provides a chaotic averaged ` lagrangian ' dynamics .it is important , that in this paper we do not use any additional assumptions , except the two - timing hypothesis itself .all the results appear from the straightforward and rigorous calculations and represent mathematical theorems , although they are not formulated explicitly in this way .hence , our results can be seen as the test on the validity and reliability of two - timing method . any failure to interpret our results physically or mathematicallycan be explained only by the insufficiency of two time - scales , that might mean that three time - scales ( or more ) are required .at the same time , following to a high mathematical rigour , we avoid any physical interpretations , while such interpretations could be needed in future .a fluid flow is given by its velocity field , where and are cartesian coordinates and time , asterisks stand for dimensional variables .we suppose that velocity field is sufficiently smooth , but _we do not assume that it satisfies any equations of motion_. the dimensional advection equation for a scalar field is this equation describes the motions of a lagrangian marker in either an incompressible or compressible fluid , and the advection of a passive scalar admixture with concentration in an incompressible fluid .it is also closely relevant to various models , not related to fluid dynamics .the hyperbolic equation ( [ exact-1 ] ) has characteristics curves ( trajectories ) described by an ode where and are eulerian and lagrangian coordinates .the _ classical drift _ motion follows after the averaging of ( [ exact-1a ] ) , which is automatically performed for fixed lagrangian coordinates .in contrast , in this paper we consider only the equation ( [ exact-1 ] ) , which is subject of averaging for fixed eulerian coordinates , the related drift motion can be called _ eulerian drift . 
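As a concrete check of the classical (Lagrangian) drift defined through eq. (exact-1a), the sketch below integrates the characteristics dx/dt = u(x,t) for an assumed small-amplitude deep-water-wave field and averages the particle velocity over many fast periods; the field, its amplitude, and the quoted drift value eps^2 exp(2y) are illustrative choices (the textbook Stokes-drift example), not data from the paper.

```python
import numpy as np

# Classical (Lagrangian) drift: follow one material particle in an assumed
# small-amplitude deep-water-wave field and average its velocity over many
# fast periods.  For this field the classical Stokes drift is eps**2 * exp(2y)
# in the x direction.
eps = 0.1                                    # dimensionless wave amplitude

def u(t, z):
    x, y = z
    return eps * np.exp(y) * np.array([np.cos(x - t), np.sin(x - t)])

def rk4(f, z0, t0, t1, nsteps):
    h, z, t = (t1 - t0) / nsteps, np.array(z0, float), t0
    for _ in range(nsteps):
        k1 = f(t, z)
        k2 = f(t + h / 2, z + h / 2 * k1)
        k3 = f(t + h / 2, z + h / 2 * k2)
        k4 = f(t + h, z + h * k3)
        z, t = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h
    return z

T = 400 * np.pi                              # many 2*pi oscillation periods
z0 = np.array([0.0, 0.0])
drift = (rk4(u, z0, 0.0, T, 200_000) - z0) / T
print(drift)                                 # ~ (eps**2, 0) = (0.01, 0) at y = 0
```

The Eulerian drift studied in this paper is obtained instead by averaging at a fixed position; for this field its leading term reproduces the same value, in line with the statement above that the leading terms of the two drifts coincide.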
_ such an approach significantly simplifies calculations and is reacher on results , however the link between the classical ( lagrangian ) drift and the considered below eulerian drift emerges as a new problem to study .we accept that the field is oscillating in time and possesses the characteristic scales of velocity , length , and frequency .these three parameters give a dimensionless parameter strouhal number , hence the dimensionless variables and parameters ( written below without asterisks ) _ are not unique _ ; in this paper we use the following set while has the ` two - timing ' functional form with constants we note that the scale of velocity is , not just ; such kind of ` combined ' scaling is usual for asymptotic analysis and for fluid dynamics ( for example , for viscous flows the length - scale can be chosen as , where is reynolds number ) .in fact , the accepted scaling for a velocity is absolutely required for the further consideration , otherwise ( say , for ) the asymptotic solutions for some important cases do not exist ( see discussion ) .the appearance of two constants and reflects the fact that we have two small parameters in the equation .different options for the values of and will be introduced later .physically , the restriction makes the variable ` slow ' in comparison with , while gives the smallness of a vibrational spatial amplitude . in the accepted dimensionless variables ( after the use of the chain rule ) our main equation ( [ exact-1 ] )takes the form where subscripts and stand for the related partial derivatives ; and represent two _ mutually dependent _ variables , which are called _ slow time _ and _ fast time_. equation ( [ exact-6 ] ) can be rewritten in the form containing two independent small parameters hence , we study the asymptotic limit in the plane where different asymptotic paths can be prescribed by different choices of and in .each such a path may produce different solutions ; the paths which produce valid asymptotic solutions are called _ distinguished limits_.the key suggestion of the two - timing method is as a result , we convert , from pde with independent variables and ( where and are mutually dependent and expressed in terms of ) into pde with the extended number of independent variables and . then the solutions of , must have a functional form : it should be emphasized , that without the functional form of solutions can be different from ; indeed the presence of allows us to build an infinite number of different time - scales , not just and . in this paperwe accept and analyse the related averaged equations and solutions in the functional form . in order to make further analytical progress, we introduce a few convenient notations and agreements . in this paperwe assume that _ any dimensionless function _ has the following properties : \(i ) and all its - , - , and -derivatives of the required below orders are also ; \(ii ) is -periodic in , _i.e. _ ( about this technical simplification see discussion ) ; \(iii ) has an average given by \(iv ) can be split into averaged and purely oscillating parts where _ tilde - functions _ ( or purely oscillating functions ) are such that and the _ bar - functions _ ( or the averaged functions ) are -independent ; \(v ) we introduce a special notation ( with a superscript ) for the _ tilde - integration _ of tilde - functions , such integration keeps the result in the tilde - class .we notice that the integral of a tilde - function often does not belong to the tilde - class . 
in order to keep the result of integration in the tilde - classwe should subtract the average , which can be rewritten as calculations have shown that there is a series of _ distinguished limits _ for the equation ( [ exact-6a ] ) with independent variables ; each distinguished limit represents the one - parametric path , ( with an integer ) in the plane ; hence ( [ exact-6a ] ) yields : we denote the distinguished limits as dl( ) ; our calculations have been performed for .the detailed calculations for the most interesting case are given in appendix .other cases are very similar and contain the same coefficients of the averaged equations and the same blocks of calculations as the case ; therefore for we present below some final results only . in all caseswe are looking for solutions in the form of regular series analytical calculations contain the following two steps : ( i ) writing the equations of successive approximations and splitting each such equation into its ` bar ' and ` tilde ' parts , see ; ( ii ) obtaining the closed systems of equations for the ` bar ' parts ; during this step one should perform a large number of integrations by parts and algebraic transformations .the results give full solutions in any considered approximation .the described above steps can be performed only for distinguished limits ; all other asymptotic paths produce controversial systems of equations or they are not leading to closed systems of equations for successive approximations .the first four dl s correspond to the variables and velocity as following : dl(1 ) : ; , where both ` bar ' and ` tilde ' parts of are not zero ; dl(2 ) : ; , where ; dl(3 ) : ; , where , and ; dl(4 ) : ; , where , , and .the accepted notations are ,\quad \overline{{\boldsymbol{v}}}_1\equiv\frac{1}{3}\langle[[\widetilde{{\boldsymbol{u}}},\widetilde{{\boldsymbol{\xi}}}],\widetilde{{\boldsymbol{\xi}}}]\rangle,\quad \widetilde{{\boldsymbol{\xi}}}\equiv\widetilde{{\boldsymbol{u}}}^\tau ; \label{4.18}\end{aligned}\ ] ] where the square brackets stand for a commutator of any two vector - functions and \equiv({\boldsymbol{g}}\cdot\nabla){\boldsymbol{f}}-({\boldsymbol{f}}\cdot\nabla){\boldsymbol{g}}\ ] ] from the above list , one can see that the increasing of corresponds to the successive degenerations of velocity and drift velocity .the case dl(1 ) with corresponds to the advection speed of order one , hence and the averaged equation of zero approximation is one can see that in the main approximation we have the advection of an averaged scalar field with the averaged velocity .however , in this paper we concentrate our attention to the cases when the oscillatory part of a velocity enters the averaged equations of the main approximation .therefore , we do not consider dl(1 ) in detail here , we just state that all the coefficients in the averaged equations are similar to that in dl(2 ) ( see below ) ; more exact : the same coefficients appear in the dl(1 ) averaged equations of the next approximation in ( in comparison with dl(2 ) ) .dl(2 ) is the most instructive and physically interesting case , therefore we consider it in detail . 
here, we have , hence , the speed of advection is described by a longer ( than in dl(1 ) ) slow time - scale .the averaged equations of the first three successive approximations are ( see the detailed derivation in appendix ) with notations ,\widetilde{{\boldsymbol{\xi}}}]\rangle + \frac{1}{2}\langle[\widetilde{{\boldsymbol{v}}}_0,\widetilde{{\boldsymbol{v}}}_0^\tau]\rangle+ \frac{1}{2}\langle[\widetilde{{\boldsymbol{\xi}}},\widetilde{{\boldsymbol{\xi}}}_s]\rangle + \frac{1}{2}\langle\widetilde{{\boldsymbol{\xi}}}{\mathrm{div}\,}\widetilde{{\boldsymbol{u}}}'+ \widetilde{{\boldsymbol{u}}}'{\mathrm{div}\,}\widetilde{{\boldsymbol{\xi}}}\rangle,\label{4.19 } \\ & & \widetilde{{\boldsymbol{u}}}'\equiv\widetilde{{\boldsymbol{\xi}}}_s-[\overline{{\boldsymbol{v}}}_0,\widetilde{{\boldsymbol{\xi } } } ] , \label{4.20a}\\ & & 2\overline{\chi}_{ik}\equiv\langle\widetilde{u'}_i\widetilde{\xi}_k+\widetilde{u'}_k\widetilde{\xi}_i\rangle= \mathfrak{l}_{\overline{{\boldsymbol{v}}}_0}\langle\widetilde{\xi}_i\widetilde{\xi}_k\rangle,\label{4.20}\\ & & \mathfrak{l}_{\overline{{\boldsymbol{v}}}_0}\overline{f}_{ik}\equiv \left(\partial_s+ \overline{{\boldsymbol{v}}}_0\cdot\nabla\right)\overline{f}_{ik}-\frac{\partial\overline{v}_{0k}}{\partial x_m}\overline{f}_{im}- \frac{\partial\overline{v}_{0i}}{\partial x_m}\overline{f}_{km } \label{4.20b}\end{aligned}\ ] ] where the operator is such that represents the condition for tensorial field to be ` frozen ' into ( is also known as the _ lie derivative _ of a vector field ) .the summation convention is in use everywhere in this paper. three equations ( [ 4.15])-([4.17 ] ) can be written as a single advection-`diffusion ' equation ( with an error ) eqn .( [ 4.21 ] ) shows that the averaged motion represents a drift with velocity and _ pseudo - diffusion _ with matrix coefficients . for dl(3 )we impose a restriction and ( similarly to the calculations in appendix ) derive the equations : ,\widetilde{{\boldsymbol{\xi}}}]\rangle + \frac{1}{2}\langle[\widetilde{{\boldsymbol{v}}}_0,\widetilde{{\boldsymbol{v}}}_0^\tau]\rangle+ \frac{1}{2}\langle[\widetilde{{\boldsymbol{\xi}}},\widetilde{{\boldsymbol{\xi}}}_s]\rangle + \frac{1}{2}\partial_s\langle\widetilde{{\boldsymbol{\xi}}}{\mathrm{div}\,}\widetilde{{\boldsymbol{\xi}}}\rangle,\label{4.19a}\\ & & 2\overline{\chi}_{ik}= \partial_s\langle\widetilde{\xi}_i\widetilde{\xi}_k\rangle\label{4.20bb}\end{aligned}\ ] ] in fact , one can see that the equations - are very similar to that of dl(2 ) ( [ 4.15])-([4.17 ] ) ; the difference is : the same coefficients as in dl(2 ) appear in the previous approximations of dl(3 ) . for dl(4 ) we impose two restrictions and and derive ( by a similar procedure ) the equation with the same and as in dl(3 ) . in general , the comparison between the averaged equations for dl(1)-(4 ) shows that the same coefficients in dl( ) appear in the equations of the next order ( in ) in dl( ) . the higher approximationsdl(5 ) , _ etc ._ can be derived similarly , however the calculations become too cumbersome .one can see , that , in all presented above distinguished limit equations , the meaning of key _parameter is not uniquely defined in terms of or . 
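Before discussing how these parameters are fixed, it is worth noting that the leading Eulerian drift just derived can be evaluated symbolically for any prescribed oscillating field. The sketch below does this for the same assumed deep-water-wave field used in the Lagrangian check above (so it anticipates the Stokes-drift example considered later); it assumes the reconstruction V_0 = (1/2)<[u~, xi~]> with xi~ = u~^tau and the paper's commutator convention [f,g] = (g.grad)f - (f.grad)g, and the helper names `tilde_integral`, `average`, `commutator` are introduced here only for illustration.

```python
import sympy as sp

# Symbolic evaluation of the leading Eulerian drift  V0 = (1/2) <[u, xi]>,
# with xi the tilde-integral of u and the commutator [f,g] = (g.grad)f - (f.grad)g.
# The oscillating field is an assumed deep-water-wave stand-in, not taken
# from the paper.
x, y, tau = sp.symbols('x y tau', real=True)

u = sp.Matrix([sp.exp(y) * sp.cos(x - tau), sp.exp(y) * sp.sin(x - tau)])

def average(f):
    return sp.integrate(f, (tau, 0, 2 * sp.pi)) / (2 * sp.pi)

def tilde_integral(f):
    """Antiderivative in tau with its tau-average removed (stays in the tilde class)."""
    F = sp.integrate(f, tau)
    return sp.simplify(F - average(F))

def commutator(f, g):
    grad = lambda h: sp.Matrix([[sp.diff(h[i], v) for v in (x, y)] for i in range(2)])
    return grad(f) * g - grad(g) * f          # (g.grad) f - (f.grad) g

xi = u.applyfunc(tilde_integral)
V0 = (commutator(u, xi).applyfunc(average) / 2).applyfunc(sp.simplify)
print(V0)     # Matrix([[exp(2*y)], [0]]): the expected Stokes drift for this field
```

The same few lines, with a helper for the Lie derivative added, can be used to evaluate the pseudo-diffusion matrix from the averaged quadratic displacements.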
in dl(2 ) , in ( [ exact-6a ] ) , which imposes the link , and : hence , each distinguished limit solution of ( [ exact-6a ] ) , obtained in terms of , produces an infinite number of _ parametric solutions _, one solution for any real number ( or for any real number ) .those solutions are mathematically similar but physically different , since they correspond to different magnitudes of given velocities and different slow - time variables .there are two ways of choosing and .given : when the slow time - scale is given in the prescribed function , then it defines ; then follows from .given : alternatively , when the velocity amplitude of is given , then it defines and we have to calculate from .some interesting sets of and are : \(i ) if is given as the function of variables and , then , ; hence the velocity ( [ variables ] ) , and in .\(ii ) the most frequently considered case is , then , , , and .\(iii ) rather exotic possibility corresponds to the case . here ,\varepsilon ] .such a scaling is required , for example , if a particular slow time - scale \omega ] equations ( [ 4.18 ] ) , ( [ 4.19 ] ) , yield ,\quad \overline{{\boldsymbol{v}}}_1\equiv 0 , \label{example-1 - 4}\\ & & \overline{{\boldsymbol{v}}}_2=\frac{1}{8}\left([\overline{{\boldsymbol{p}}},\overline{{\boldsymbol{p}}}]+[\overline{{\boldsymbol{q}}},\overline{{\boldsymbol{q}}}]\right ) -\frac{1}{4}\left([\overline{{\boldsymbol{p}}}_s,\overline{{\boldsymbol{p}}}]+[\overline{{\boldsymbol{q}}}_s,\overline{{\boldsymbol{q}}}]\right)+\label{example-1 - 4a}\\ & & + \frac{1}{4}\left(\overline{{\boldsymbol{p}}}\,{\mathrm{div}\,}\overline{{\boldsymbol{p}}}'+\overline{{\boldsymbol{q}}}{\mathrm{div}\,}\overline{{\boldsymbol{q } } } ' + \overline{{\boldsymbol{p}}}'{\mathrm{div}\,}\overline{{\boldsymbol{p}}}+\overline{{\boldsymbol{q}}}'{\mathrm{div}\,}\overline{{\boldsymbol{q}}}\right),\nonumber\\ & & \overline{{\boldsymbol{p}}}\equiv[\overline{{\boldsymbol{v}}}_0,\overline{{\boldsymbol{p}}}],\quad \overline{{\boldsymbol{q}}}\equiv[\overline{{\boldsymbol{v}}}_0,\overline{{\boldsymbol{q}}}],\quad \overline{{\boldsymbol{p}}}'\equiv\overline{{\boldsymbol{p}}}_s-\overline{{\boldsymbol{p}}},\quad \overline{{\boldsymbol{q}}}'\equiv\overline{{\boldsymbol{q}}}_s-\overline{{\boldsymbol{q}}},\nonumber\\ & & \langle\widetilde{\xi}_i\widetilde{\xi}_k\rangle= \frac{1}{2}(\overline{p}_i \overline{p}_k + \overline{q}_i \overline{q}_k)\label{example-1 - 5}\end{aligned}\ ] ] the expression for the _ pseudo - diffusion _matrix follows after the substitution of ( [ example-1 - 5 ] ) into ( [ 4.20 ] ) .the expression ( [ example-1 - 1 ] ) is general enough to produce any given function .indeed , in order to obtain and one has to solve the equation =(\overline{{\boldsymbol{q}}}\cdot\nabla)\overline{{\boldsymbol{p}}}- ( \overline{{\boldsymbol{p}}}\cdot\nabla)\overline{{\boldsymbol{q}}}=2\overline{{\boldsymbol{v}}}_0({\boldsymbol{x}},s)\end{aligned}\ ] ] which represents an undetermined bi - linear pde - problem for two unknown functions and . * _ example 2 . 
_ _ stokes drift : _ * the dimensionless plane velocity field for an inviscid incompressible fluid is ( see ) where for our consideration one can take and , however we choose and as constants ; however , for our consideration they can be chosen as arbitrary functions of .the fields , ( [ example-1 - 1 ] ) are the calculations of ( [ example-1 - 4 ] ) yield which represent the classical stokes drift and the first correction to it ( which vanishes ) .for brevity , the explicit formula for is not given here .further calculations show that one can see that eigenvalues and correspond to strongly anisotropic pseudo - diffusion . the averaged equation ( [ 4.21 ] ) ( with an error )can be written as where and are -components of corresponding velocities ( their -components vanish ) .this equation has an exact solution where is an arbitrary function , which is not affected by pseudo - diffusion .* _ example 3 . _ _ a spherical ` acoustic ' wave : _ * the main term for velocity potential for an outgoing spherical acoustic wave is where , , and are an amplitude , a wavenumber , and a radius in a spherical coordinate system .the velocity is purely radial and has a form ( [ example-1 - 1 ] ) where , and are radial components of corresponding vector - fields .one can find that the fields and $ ] are also purely radial ; the radial component for the commutator is where is the radial component of and subscript stands for the radial derivative .the drift ( [ example-1 - 4 ] ) is purely radial with it is interesting , that formally coincides with the velocity , caused by a point source in an incompressible fluid , and ( for small ) the value of dominates over , so the series is likely to be diverging at .further calculations yield where stands for the only nonzero -component of .one can see , that in this case pseudo - diffusion appears as ordinary diffusion .* _ example 4 ._ _ -drift : _ * it is interesting to consider such flows for which the classical expression for a drift vanishes : , but .let the velocity field be a superposition of two standing waves of frequencies and : =\frac{1}{2}[\overline{{\boldsymbol{p}}},\overline{{\boldsymbol{q}}}](2\cos\tau\sin 2\tau-\cos 2\tau\sin\tau)\label{example-5 - 3}\end{aligned}\ ] ] hence ( [ 4.18 ] ) yields \rangle\equiv 0,\quad \overline{{\boldsymbol{v}}}_1=\frac{1}{3}\langle[[\widetilde{{\boldsymbol{u}}},\widetilde{{\boldsymbol{\xi}}}],\widetilde{{\boldsymbol{\xi}}}]\rangle= \frac{1}{8}[[\overline{{\boldsymbol{p}}},\overline{{\boldsymbol{q}}}],\overline{{\boldsymbol{p}}}]\end{aligned}\ ] ] these expressions produce infinitely many examples of the flows with -drift .at the same time shows that for a single standing wave the -drift is absent .* _ example 5 . 
__ chaotic averaged ` lagrangian ' dynamics for : _ * let the solenoidal / incompressible velocity ( [ example-1 - 1 ] ) be where are cartesian coordinates ; are constants .either of these fields , taken separately , produces simple integrable dynamics of particles .the calculations yield the computations of the averaged ` lagrangian ' dynamics for ( based on ( [ exact-1a ] ) for dl(2 ) ) were performed by prof .a.b.morgulis ( private communications ) .he has shown that this steady averaged flow exhibits chaotic dynamics of particles .in particular , positive lyapunov exponents have been observed .hence , the drift created by a simple oscillatory field can produce complex_ averaged ` lagrangian ' dynamics_._ struggle for natural notations : _ the introducing of convenient and logical notations represents an important part of this paper .this topic is always underestimated , however natural notations are especially helpful in the presence of cumbersome calculations . _ averaged and oscillatory parts of solutions : _ the analysis , presented in the paper , produces not only the averaged parts of solutions but also full solutions .indeed , in appendix one can find the oscillatory parts of solutions in all considered approximations , see , , , , ._ does three - timing method really needed ? _ this paper is aimed to create a general viewpoint on distinguished limits , drifts , and pseudo - diffusion based on the rigorous implementation of the two - timing method .we do not use any additional suggestions and assumptions , hence any contradictions ( either mathematical or physical ) in further results ( which follow from our averaged equations and solutions ) can be explained by insufficiency of two time - scales only .indeed , the presence of scaling parameters , such as , , and , allows one to introduce an infinite number of additional time - scales .from this perspective the considered problem can be seen as a test for sufficiency of the two - timing method . in particular , due to the presence of pseudo - diffusion , some secular ( in ) terms could appear in solutions of , and .if it is proven , then one may suggest that the two - timing method fails at the orders of approximations , where such a secular growth appears . in this case further time - scales ( additional to and ) can be introduced , which requires the systematic development of three - timing _ etc . _methods . however , any mathematically systematic method , which allows to derive the averaged equations with three or more time - scales from an original pde , is still unknown . _mathematical justification of the two - timing method : _ one can rewrite the approximate solution ( along with its tilde - parts given in appendix ) back to the original time variable and substitute it into the exact original equation ( [ exact-1 ] ) : then a small residual ( a nonzero right - hand - side in ( [ exact-1 ] ) ) appears .the two - timing method ( if it used formally ) allows to produce an approximate solution with a residual as small as required by an user .however , the next logical step is more challenging : one has to prove that the solution of the equation with a small residual ( instead of zero in the right - hand side ) is close to the exact solution .for the two - timing procedure such proofs had been performed for the leading approximation of solution by in the problem of vibrational convection .similar justifications for other equations with oscillatory coefficients are not known yet . 
_the interplay between a velocity amplitude and slow time - scale : _ a different ( from the presented in this paper ) approach is given in where _ an inspection procedure _ for calculation of distinguished paths is proposed and the concept of ` strong ' , ` moderate ' , and ` weak ' oscillations ( or vibrations ) is introduced . the advantage of the approach presented in this paper is the additional possibility to vary the slow time - scale .it makes the structure of asymptotic solutions more flexible and allows to consider broader classes of asymptotic solutions .however , from a ` physical viewpoint ' some of our results can appear as ` unexpected ' and ` paradoxical ' .say , in the case in , which looks ` physically natural ' , one should take , , , and .then the appearance of the variable may not be seen as ` natural ' . from another side ,if one takes ( which also looks ` physically natural ' ) , then it must be , that , again , could be viewed as physically ` artificial ' .such an interplay between the scale of velocity and slow time - scale does create some ample confusions and misunderstandings . _ the most striking case purely periodic oscillations : _ the results for dl(2 ) are particularly striking for the case of purely periodic velocity oscillations , when is independent of . herethe most intricate question is : how to choose any particular scale of ?we have an answer : the slow time - scale is uniquely determined by the magnitude of . for every value of obtain the related dl(2 ) slow time .it should be accepted that solutions with different are physically different , since they correspond to different orders of the prescribed velocity field .an additional ` naive ' question , which could be asked here , is : why do we need to introduce any slow time - scale at all ?the answer to this question is : if for the coefficient we consider only a solution of similar structure , then we can overlook a broad class of asymptotic solutions .a well - known and clarifying example here is the solutions of mathieu equation , which represents an ode with purely periodic coefficient , see .the slow time - scale , which appears there ( say , in the slowly growing solutions , related to the parametric resonance ) , is proportional to the amplitude of frequency modulation .our case looks somehow similar : the slow time - scale appears to be related to the amplitude of velocity oscillations .we believe that such a relation between the amplitude of purely oscillating coefficients and the slow time - scale represents a general property of related pdes and odes . _ on the term ` pseudo - diffusion ' : _ this term has been in a diverse use in various applied disciplines , some of them remote from fluid dynamics and mathematical physics , say in chemistry , geology , _etc_. for example , its definition from geology is : `` mixing of thin superposed layers of slowly accumulated marine sediments by the action of water motion or subsurface organisms '' ( see mcgraw - hill dictionary of scientific and technical terms , 6e , 2003 by the mcgraw - hill companies , inc . ) .however , there is one purely mathematical definition , where ` pseudo - diffusion ' appears as the ` hyperbolic type ' spatial operator with , see .this definition is qualitatively close to one case ( stokes drift ) which appears in this paper . at the same time , we have extended the meaning of ` pseudo - diffusion ' by including all possible cases such as with the function changing its sign in space and time . 
_averaged ` lagrangian ' dynamics vs. exact lagrangian dynamics ?_ _ example 5 _ demonstrates that a drift can produce chaotic averaged dynamics which in this case can not be directly affiliated with material particles .. this result brings on a number of questions , such as : ( i ) what is the relationship between the lagrangian chaotic motions for the original dynamical system and ` lagrangian ' chaotic motions for the averaged one ? ( ii )how can chaotic averaged ` lagrangian ' motion and pseudo - diffusion complement each other ?( iii ) how can ` averaged chaos ' , induced by a drift , be used in the theory of mixing ? ( iv ) let the averaged ` lagrangian ' dynamics be chaotic , then how can the related results by , and many others be used in applications .another related example could be the abc - flow by , however , to make it relevant to the topic of this paper we need to find such an oscillating flow which has the abc - flow as its drift ._ additional reading materials : _ for an interested reader : all the presented in this paper calculations as well as the calculations for different distinguished limits and a number of additional ( to sect.7 ) examples are given in the arxiv papers by , which are quoted below as i and ii ._ is our set of distinguished limits dl(n ) complete ?_ the key question can be asked : does the described in this paper set of distinguished limits and parametric solution represent a full set of asymptotic solutions of , obtainable by the two - timing method ?the answer is unknown , but most likely some other solutions do exist ; a few attempts to identify additional solutions are given in i and ii . _ simplification of -periodicity : _ all results of this paper have been obtained for the class of -periodic functions , which is self - consistent .one can consider more general classes of quasi - periodic , non - periodic , or chaotic solutions .the discussion on this topic is given in i , ii .however , it is worth to understand the properties of -periodic oscillations , in order not to relate these properties exclusively to more general solutions . at the same time, the relative simplicity of calculations for the -periodic solutions allows to obtain some advanced results which can serve as a guidance for making assumptions on the properties of more general solutions ._ lagrangian drift vs. eulerian drift ._ it is worth to calculate the classical ( lagrangian ) drift directly by solving ( [ exact-1a ] ) with the use of the same two - timing method .the drift , obtained in this paper , appeared as the result of eulerian average operation ; hence , its comparison with the classical drift represents an open problem .this topic requires a separate study which has been started in i , ii ._ advection of vectorial admixture . _the averaged equations for a passive vectorial admixture are presented in i , ii . these equations are closely linked to the problem of _kinematic -dynamo _ , see .it is physically apparent , that for the majority of shear drift velocities the averaged stretching of ` material lines ' produces the linear in growth of a magnetic field . 
at the same time , for the averaged flows with exponential stretching of averaged ` material lines ' these examples will inevitably show the exponential growth .another closely related research topic is the ` advection ' of an active vectorial admixture ( vorticity ) .the vortex dynamics of oscillating flows has been studied in .an interesting phenomenon here is the _langmuir circulations _ , see , which has been recently analyzed from a new perspective by .all these topics are worth to be studied by the approach presented in this paper .the author is grateful to prof .ilin for the checking of calculations and to prof .morgulis for the computing example 5 .the author wants to express special thanks to profs .craik and h.k .moffatt for reading this manuscript and making useful critical remarks , also many thanks to profs .mcintyre , t.j .pedley , m.r.e .proctor , and d.w .hughes for helpful discussions .this research is supported by the grant ig / sci / doms/16/13 from sultan qaboos university , oman .first , we list the properties of -differentiation and tilde - integration which are intensely used in the calculations below . for -derivativesit is clear that the product of two tilde - functions and forms a general -periodic function : , say . separating the tilde - part from we write where the introducing of braces for the tilde parts of a function is aimed to avoid the cumbersome two - level tilde notation .as the average operation represents the integration over , then for products containing tilde - functions and their derivatives we have which can be seen as different versions of integration by parts .similarly , for the commutators we have \rangle=-\langle[\widetilde{{\boldsymbol{a}}}_\tau,\widetilde{{\boldsymbol{b}}}]\rangle=- \langle[\widetilde{{\boldsymbol{a}}}_\tau , { \boldsymbol{b}}]\rangle,\ \langle[\widetilde{{\boldsymbol{a}}},\widetilde{{\boldsymbol{b}}}^\tau]\rangle=-\langle[\widetilde{{\boldsymbol{a}}}^\tau,\widetilde{{\boldsymbol{b}}}]\rangle=- \langle[\widetilde{{\boldsymbol{a}}}^\tau , { \boldsymbol{b}}]\rangle \label{oper-15}\end{aligned}\ ] ] now , we describe the obtaining of the solution of equation for the substitution of ( [ basic-4aa ] ) into this equation produces the equations of successive approximations the separation of the tilde - parts , for produces the explicit recurrent expressions further calculations and transformation with the use of - show that the bar - parts satisfy the equations - .let us present the derivations of - and - .( [ abasic-5 ] ) is : the substitution of into ( [ a01-appr-1 ] ) gives .its tilde - integration produces the unique ( inside the tilde - class ) solution . at the same time , ( [ a01-appr-1 ] ) does not impose any restrictions on , which must be determined from the next approximations .thus the results derivable from ( [ a01-appr-1 ] ) are : ( [ abasic-6 ] ) is the use of and reduces ( [ a01-appr-3 ] ) to the equation .its tilde - integration gives the unique solution hence , where are not defined .( ( [ abasic-7 ] ) for ) is the use of ( [ a01-appr-1a ] ) and transforms ( [ 2-appr-1 ] ) into its bar - part is where we have used , , and .the substitution of ( [ 01-appr-7 ] ) into ( [ 2-appr-4 ] ) produces the equation one may expect that the right hand side of ( [ 2-appr-5 ] ) contains both the first and the second spatial derivatives of , however _ all the second derivatives vanish_. 
in order to prove it , we introduce the commutator = ( \widetilde{{\boldsymbol{u}}}\cdot\nabla)\widetilde{{\boldsymbol{\xi}}}-(\widetilde{{\boldsymbol{\xi}}}\cdot\nabla)\widetilde{{\boldsymbol{u}}},\label{app1 - 2}\\ & & { \boldsymbol{k}}\cdot\nabla=(\widetilde{{\boldsymbol{u}}}\cdot\nabla)(\widetilde{{\boldsymbol{\xi}}}\cdot\nabla)-(\widetilde{{\boldsymbol{\xi}}}\cdot\nabla)(\widetilde{{\boldsymbol{u}}}\cdot\nabla ) \label{app1 - 1}\end{aligned}\ ] ] the bar - part of ( [ app1 - 1 ] ) is at the same time , the integration by parts over gives combining ( [ app1 - 3 ] ) with ( [ app1 - 4 ] ) we obtain which reduces ( [ 2-appr-5 ] ) to the advection equation ( [ 4.15 ] ) with \rangle=-\frac{1}{2}\overline{{\boldsymbol{k } } } \label{2-appr-8}\end{aligned}\ ] ] which also gives the main term in drift velocity ( [ 4.18 ] ). the tilde - part of ( [ 2-appr-2 ] ) appears after subtracting ( [ 2-appr-4 ] ) from ( [ 2-appr-2 ] ) : its tilde - integration with the use of ( [ 01-appr-7 ] ) gives ( [ a4.12 ] ) : hence , can be written as where and are given by ( [ 4.15 ] ) , ( [ 2-appr-11 ] ) , while are not defined .( ( [ abasic-7 ] ) for ) is : its bar - part is the substitution of ( [ 2-appr-11 ] ) into ( [ 3-appr-2 ] ) , the use of , and the integration by parts yield where has been already simplified in ( [ app1 - 5 ] ) . the second term in right hand side of ( [ 3-appr-3 ] )formally contains the third , the second , and the first spatial derivatives of ; however _ all the third and the second derivatives vanish_. to prove it , first , we use ( [ oper-9a ] ) : then we use ( [ app1 - 2 ] ) , ( [ app1 - 1 ] ) to transform the sequence of operators and in each term in right hand side of ( [ app2 - 1 ] ) into their sequence in left hand side .the result is \label{app2 - 2}\end{aligned}\ ] ] as the result ( [ 3-appr-3 ] ) takes a form ( [ 4.16 ] ) with ( [ 2-appr-8 ] ) and ,\widetilde{{\boldsymbol{\xi}}}]\rangle=-\frac{1}{3}\overline{{\boldsymbol{k } } ' } \label{3-appr-4a}\end{aligned}\ ] ] which gives ( [ 4.18 ] ) .the tilde - part of ( [ 3-appr-1 ] ) after its integration gives where is given by ( [ 2-appr-11 ] ) .hence , can be written as where , , , and are given by ( [ 4.15 ] ) , ( [ 4.16 ] ) , ( [ 2-appr-11 ] ) , and ( [ 3-appr-5 ] ) , while are not defined .( ( [ abasic-7 ] ) for ) is : its bar - part is the substitution of ( [ 3-appr-5 ] ) into ( [ 4-appr-2 ] ) , ( [ 2-appr-11 ] ) into ( [ 3-appr-5 ] ) , the integration by parts ( [ oper-9 ] ) , and the use of ( [ 2-appr-8 ] ) , ( [ 3-appr-4a ] ) yield the ` gothic ' shorthand operator ( as well as operators , , , , and below ) acts on .right hand side of ( [ app3 - 1 ] ) formally contains the fourth , the third , the second , and the first spatial derivatives of ; however _ all the fourth and the third derivatives vanish_. 
in order to prove it we first rewrite as the use of ( [ oper-9a ] ) and ( [ app1 - 2 ] ) , ( [ app1 - 1 ] ) transforms ( [ app3 - 2 ] ) into us now simplify and .for we use ( [ oper-9a ] ) to change into in the last term we use ( [ app1 - 2 ] ) , ( [ app1 - 1 ] ) that yields : \label{app3 - 4}\end{aligned}\ ] ] the operator is simplified by the version of ( [ oper-9a ] ) with four multipliers the multiple use of commutator ( [ app1 - 2 ] ) , ( [ app1 - 1 ] ) allows us to transform the sequence of operators and in each term in right hand side of ( [ app3 - 5 ] ) to the sequence in its lhs .the result is \label{app3 - 6}\end{aligned}\ ] ] now , ( [ app3 - 3 ] ) , ( [ app3 - 4 ] ) , and ( [ app3 - 6 ] ) yield the substitution of this expression into ( [ app3 - 1 ] ) , ( [ 4-appr-2 ] ) gives additional transformations of the last two operators in ( [ app3 - 8 ] ) yield \rangle\cdot\nabla -\frac{1}{2}\langle(\widetilde{{\boldsymbol{u}}}'\cdot\nabla)\widetilde{{\boldsymbol{\xi}}}+ ( \widetilde{{\boldsymbol{\xi}}}\cdot\nabla)\widetilde{{\boldsymbol{u}}}'\rangle\cdot\nabla - \frac{1}{2}\langle \widetilde{u}'_i\widetilde{\xi}_k+\widetilde{u}'_k\widetilde{\xi}_i\rangle\frac{\partial^2}{\partial x_i\partial x_k}=\nonumber\\ & & = \frac{1}{2}\langle[\widetilde{{\boldsymbol{\xi}}},\widetilde{{\boldsymbol{\xi}}}_s]\rangle\cdot\nabla-\frac{\partial}{\partial x_k}\left(\overline{\chi}_{ik}\frac{\partial}{\partialx_i}\right)+\frac{1}{2}\langle\widetilde{{\boldsymbol{\xi}}}{\mathrm{div}\,}\widetilde{{\boldsymbol{u}}}'+\widetilde{{\boldsymbol{u}}}'{\mathrm{div}\,}\widetilde{{\boldsymbol{\xi}}}\rangle \label{ap - trans}\\ & & \widetilde{{\boldsymbol{u}}}'\equiv\widetilde{{\boldsymbol{\xi}}}_s-[\overline{{\boldsymbol{v}}}_0,\widetilde{{\boldsymbol{\xi}}}],\quad \overline{\chi}_{ik}\equiv\frac{1}{2}\langle \widetilde{u}'_i\widetilde{\xi}_k+\widetilde{u}'_k\widetilde{\xi}_i\rangle \label{v - prime}\end{aligned}\ ] ] the substitution of ( [ ap - trans ] ) into ( [ app3 - 8 ] ) leads to the equation for ( [ 4.17 ] ) where the formula ( [ 4.20 ] ) for is obtained from ( [ v - prime ] ) by the use of definition .the tilde - part of ( [ 4-appr-1 ] ) after its tilde - integration gives ( [ a4.14 ] ) where is given by ( [ 3-appr-5 ] ) .hence , we have solved the equation for the first five approximations and obtained the correspondent required terms in ( [ basic-4aa ] ) .simonenko , i.b .1972 justification of averaging method for convection problem in the field of rapidly oscillating forces and for other parabolic equations . _ math .sbornik _ ,* 87(129 ) * , 2 , 236 - 253 ( in russian ) . | the aim of this paper is to study and classify the multiplicity of distinguished limits and asymptotic solutions for the advection equation with a general oscillating velocity field with the systematic use of the two - timing method . 
our results are : \(i ) the dimensionless advection equation contains _ two independent small parameters _ , which represent the ratio of two characteristic time - scales and the spatial amplitudes of oscillations ; the scaling of the variables and parameters contains the strouhal number ; \(ii ) an infinite sequence of _ distinguished limits _ has been identified ; this sequence corresponds to the successive degenerations of a _ drift velocity _ ; \(iii ) we have derived the averaged and oscillatory equations for the first _ four distinguished limits _ ; the derivations are performed up to the fourth order in the small parameters ; \(v ) we have shown that _ each distinguished limit solution _ generates an infinite number of _ parametric solutions _ ; these solutions differ from each other by the slow time - scale and the amplitude of the prescribed velocity ; \(vi ) we have discovered the inevitable appearance of _ pseudo - diffusion _ , which appears as a lie derivative of the averaged tensor of quadratic displacements ; we have clarified the meaning of pseudo - diffusion using a simple example ; \(vii ) our main methodological result is the introduction of a logical order into this area and the classification of an infinite number of asymptotic solutions ; we hope that it can help in the study of similar problems for more complex systems ; \(viii ) since in our calculations we do not employ any additional assumptions , our study can be used as a test for the validity of the two - timing hypothesis ; \(ix ) the averaged equations for five different types of oscillating velocity fields have been considered as examples of different drifts and pseudo - diffusion . |
suppose that a sorting algorithm , knowingly or unknowingly , uses element comparisons that can err .considering sorting algorithms based solely on binary comparisons of the elements to be sorted ( algorithms such as insertion sort , selection sort , quicksort , and so on ) , what problems do we face when those comparisons are unreliable ?for example , gives a clever algorithm to assure , with probability , that a putatively sorted sequence of length is truly sorted .but knowing the structure of the ill - sorted output would likely make error checking easier .also , in situations in which a reliable comparison is the fruit of a long process , one could chose to interupt the comparison process , thus trading reliability of comparisons ( and quality of the output ) for time . as a first step in order to understand the consequences of errors , we propose to analyze the number of inversions in the output of a sorting algorithm ( we choose quicksort ) subject to errors .we assume throughout this paper that the elements of the sequence to be sorted are distinct .we assume further that the only comparisons subject to error are those made between elements being sorted ; that is , comparisons among indices and so on are always correct .errors in element comparisons are random events , spontaneous and independent of each other , of position , and of value , with a common probability , being the length of the list to be sorted .the number of inversions in the output sequence is denoted we assume that the input list is presented in random order , each of the random orders being equiprobable .finally we denote by the random number of inversions in the output sequence of quicksort subject to errors .our result is , roughly speaking , when , meaning that converges to some nondegenerate probability distribution .the surprise " , not so unexpected after the fact , is that there are phase changes in the limit law , depending on the asymptotic behaviour of .the organization of this paper is as follows : the results are stated in section [ s : results ] . in section [ functequns ], we establish a general distributional identity for . in the remaining sections , we prove convergence results for when : * , , * vanishes more slowly than , * where is a positive constant .the case is different and not treated in detail ; see remark [ r : toosmall ] . in section [ fixedpts ], we establish a general result of convergence using contraction methods ( cf . ) , and we use it in section [ proofth1 ] , for the first two cases .these methods do not apply for case , which requires poissonization ( see section [ proofth3 ] , where we use an embedding of quicksort in a poisson point process ) .set we will always let denote a random variable that is uniformly distributed on ] , then converges in distribution to a random variable whose distribution is characterized as the unique solution with finite mean of the equation ^ 2x_c+[(2c-1)u+1-c]^2\widetilde{x}_c+t(c , u),\end{aligned}\ ] ] in which denotes a copy of , are independent , and furthermore , }= \frac{2-c}{2(1 + 2c-2c^2)},\ ] ] and as usual with laws related to quicksort , see e.g. , is approximately the position of the pivot of the first step of the algorithm . 
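the model just described is easy to simulate, which gives a quick finite-size sanity check on the normalisation by n^2 p used throughout. the sketch below is only an illustration of the model (the pivot is the first element of the list, and every element comparison errs independently with probability p); the function names and the monte carlo parameters are ours, not the paper's.

```python
import random

def noisy_less(a, b, p):
    """compare a < b, but return the wrong answer with probability p."""
    truth = a < b
    return (not truth) if random.random() < p else truth

def erratic_quicksort(lst, p):
    """quicksort whose element comparisons err independently with probability p."""
    if len(lst) <= 1:
        return list(lst)
    pivot, rest = lst[0], lst[1:]
    left, right = [], []
    for x in rest:                       # one (possibly erroneous) comparison per element
        (left if noisy_less(x, pivot, p) else right).append(x)
    return erratic_quicksort(left, p) + [pivot] + erratic_quicksort(right, p)

def inversions(seq):
    """number of pairs i < j with seq[i] > seq[j] (quadratic, fine for a demo)."""
    return sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))

# monte carlo estimate of E[ Y_{n,p} / (n^2 p) ] in the regime np -> infinity, p -> 0,
# where the limit law has mean 1; a finite-size sanity check, not a proof.
n, trials = 500, 50
p = n ** -0.5
est = sum(inversions(erratic_quicksort(random.sample(range(n), n), p))
          for _ in range(trials)) / (trials * n * n * p)
print(est)   # should be roughly 1 for large n
```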
as in standard quicksort recurrences , the coefficients of and of its independent copy are related to the sizes of the two sublists on the left and right of the pivot , sizes respectively asymptotic to and .the toll function is approximately times the number of inversions created in the first step : is approximately the number of inversions of the elements , smaller than the pivot but misplaced on the right of it , with the elements smaller than the pivot , that are placed , as they should be , on the left ; is the number of inversions between misplaced elements from the two sides of the pivot .the toll function depends on only one of the two sources of randomness ( the randomly ordered input list , and the places of the errors ) , viz . , the first one , through .the second source of randomness is killed by the law of large numbers : in the average , each of the misplaced numbers from the right of the pivot produces inversions with one half of the elements smaller than the pivot , that are placed , as they should be , on the left . as opposed to the other values of , the choices and lead to deterministic , without any surprise : for the output sequence is a random uniform permutation , with a number of inversions concentrated around ( * ? ? ?5.1.1 ) ; for the output sequence is decreasing , and has inversions .[ npinfini ] if and , converges in distribution to a random variable whose distribution is characterized as the unique solution with finite mean of the equation in , denotes a copy of and are independent .furthermore , }=1\qquad\text{and}\qquad { \textnormal{var}\left(x\right)}=\frac{1}{12}.\end{aligned}\ ] ] note that equation ( [ p = ode1 ] ) is just ( [ p = ode1 ] ) specialized to , but , as opposed to , an additional condition , , is needed to ensure that the law of large numbers still holds .also , as another difference between ( [ p = ode1 ] ) and ( [ p = ode1 ] ) , for the errors do not change the sizes of the sublists in a significant way .the solution equals half the sum of the squares of the widths of the random intervals ] , meaning that , for each , \right)}\vert ] and independent ( see for a general definition of poisson point processes ) ; * is an array of independent uniform random variables on ] , if , and define , for , ( the sum is a.s .finite by lemma [ l : mean ] ) the variables describe a fragmentation process ( see for historical references ) : we start with and recursively break each interval into two at a random point ( uniformly chosen ) . 
in the -th generationwe thus have a partition of into intervals , , with .the interval of generation that contains is cut at step at the point .hence in is the distance from to this cut point .[ p=1/n ] if and , then converges in distribution to .the family of random variables satisfies the distributional identity : in which , conditionally given that , , and are independent , and are distributed as and , respectively , and in which is a poisson random variable with mean , the random variables are uniformly distributed on ] with intensity , and let , .( this is a pure jump lvy process with lvy measure } dt ] .thus , the number of inversions caused by this first step is approximately actually we prove a stronger theorem in each of the three cases , as we prove convergence of laws for the wasserstein metric .it entails convergence of the first moment .the convergence of higher moments is an open problem .as we shall see in section [ proofth3 ] , the distribution tail decreases exponentially fast ( theorem [ tail ] ) .[ r : toosmall ] when very slowly , that is , we conjecture that converges in distribution to , with the consequence that , for any positive . actually , the main contribution to comes from the `` first '' error , in some sense .when , the probability that no error occurs has a positive limit : we conjecture that , conditionally given the occurence of at least one error , the situation is similar to the previous case , that is , converges in distribution to a random variable with values in . when , .[ r : filter ] finally , we would like to stress that in the proof of convergence for one the three regimes considered in this section , we have to deal simultaneously with any sequence converging to according to this regime .this can be observed on the key equation , for instance , in which we would like to argue , roughly speaking , that if is close to according to a given regime , then and are also close to according to the same regime , with a large probability : here the same probability is associated to three different integers , , and , that denote the sizes of the input list , and of the two sublists formed at the first step of quicksort , respectively .thus can not be seen as a sequence indexed by . in order to allow such a loose relation between and , filters turn out to be more handy than sequences ( see ( * ? ? 
?convergences in the three regimes are thus understood as convergences along the three corresponding filters ( see theorem [ th de convergence ] ) .at the first step quicksort compares all elements of the input list with the first element of the list ( usually called _ pivot _ ) .all items less ( resp .larger ) than the pivot are stored in a sublist on the left ( resp .right ) of the pivot .comparisons are not reliable , therefore items that should belong to the left sublist are wrongly stored in the right sublist , and items larger than the pivot are misplaced in the left sublist .since its items are chosen randomly , the input list is a random permutation and the true rank of the pivot can be written , where is uniformly distributed on ] , and is the position of the pivot , * conditionally given , and are distributed as in proposition [ description ] , * , are two independent sequences with the same ( unknown ) distribution , independent of , and therefore of .the errors having a balancing effect : has the same mean , , and a smaller variance than .we prove this in the following form .[ l:2 ] } & \le { \mathbb{e}\left[({\lceilnu\rceil}-1)^2+(n-{\lceilnu\rceil})^2\right ] } = \frac{(n-1)(2n-1)}{3 } \\ & \le \frac23 n^2.\end{aligned}\ ] ] the left hand side is the expected number of ordered pairs that end up on a common side of the pivot .this happens if and originally are on the same side of the pivot and we either compare both correctly or make errors for both of them , or if they are on opposite sides of the pivot and we make an error for exactly one of them . hence } = { \bigl(p^2+(1-p)^2\bigr ) } { \mathbb{e}\left[({\lceilnu\rceil}-1)^2+(n-{\lceilnu\rceil})^2\right]}\quad&\\ { } + 2p(1-p)2 { \mathbb{e}\left[({\lceilnu\rceil}-1)(n-{\lceilnu\rceil})\right ] } & \\ = { \mathbb{e}\left[({\lceilnu\rceil}-1)^2+(n-{\lceilnu\rceil})^2\right ] } -2p(1-p ) { \mathbb{e}\left[({\lceilnu\rceil}-1-(n-{\lceilnu\rceil}))^2\right]}\qquad & \end{aligned}\ ] ] which proves the first inequality . the rest is a simple calculation .let us say that an element of the list , or the comparison in which plays the rle of pivot , has depth if experiences comparisons before playing the rle of pivot .we assume in this section that any comparison with depth is performed after the last comparison with depth .we call step the set of comparisons with depth , and we let denote the number of inversions created at step , that is , the total number of inversions , in the output , between elements that are still in the same sublist before step , but are not in the same sublist after step . we shall need the following bound : [ l : boundk ] for every , }\le \frac12 { \left(\frac23\right)}^k n^2p.\ ] ] for , , and a simple calculation yields }&= p\frac{(n-1)(n+1)}{3}-p^2\frac{(n-1)(n-2)}{6 } \le \frac13 n^2p.\end{aligned}\ ] ] for we find by induction , conditioning on the partition in the first step , } \le { \mathbb{e}\left [ \frac12 { \left(\frac23\right)}^{k-1 } ( z_{n , p}-1)^2p + \frac12 { \left(\frac23\right)}^{k-1 } ( n - z_{n , p})^2p \right]}\ ] ] and the result follows by lemma [ l:2 ] . [ bound ] set } ] ; * the families , , are i.i.d . and independent of , and and . given such we thus define , for any distributions , when , as above , the families , , are i.i.d . and independent of , and further has the distribution . thus ( [ masterequ1 ] ) can be written for ( [ masterequ2 ] ) we similarly assume * is a given random vector ; * the variables , are i.i.d . 
and independent of , and .given such we define when the variables , are i.i.d . with distribution and independent of .then ( [ masterequ2 ] ) can be written let be the space of probability measures on such that .the space is endowed with the wasserstein metric in which and denote the distribution functions of and , ( resp . ) denote the generalized inverses of and and , as in previous sections , is a uniform random variable . since ( resp . ) has distribution ( resp . ) , the infimum is attained in relation ( [ metric ] ) .the metric makes a complete metric space .convergence of to in is equivalent to convergence of to in distribution _ and _ }={\mathbb{e}\left[|x|\right]}.\ ] ] therefore convergence in entails }={\mathbb{e}\left[x\right]}.\ ] ] we refer to for an extensive treatment of wasserstein metrics . in what follows , we shall improperly refer to the convergence of to in , meaning the convergence of their distributions .let us take care first of relation ( [ masterequ2 ] ) : [ pointfixe ] if }<1 ] , then is a strict contraction and ( [ masterequ2 ] ) has a unique solution in .let be a coupling of random variables , with laws and , respectively , such that }= d_1(\mu,\nu).\ ] ] let be independent copies of .furthermore , assume that and are independent. then the probability distribution of is ( resp . ) and }\\ & \le&d_1(\mu,\nu)\ \sum_{i=1}^i{\mathbb{e}\left[\big|a^{(i)}\big|\right]}.\end{aligned}\ ] ] thus is a contraction with contraction constant smaller than 1 .since is a complete metric space , this implies that has a unique fixed point in , by banach s fixed point theorem .we prove now a theorem which is a variant of those used by the previously cited authors : the difference is not deep , but here we deal with family of laws , not sequences , as we have two parameters , and . as a consequence , to cover theorems [ p = c ] and [ npinfini ] , it will be convenient in their proofs to consider convergence with respect to a __ on ] is bounded , * }<1, ] then converges in distribution to , the unique solution of the equation in .more precisely , along .we need a lemma before proving theorem [ th de convergence ] .[ lemme rosler ] assume that three families of nonnegative numbers , , and satisfy the inequalities : let be a filter . under the following assumptions : * is nonnegative and bounded , * for some and some , , * , * we have the proof is a variant of the proof of ( * ? ? ?* proposition 3.3 ) .let be a bound for , and let for any , let be such that for , then for we have taking lim sups , we obtain that for any , thus , and so .we can choose and the family in such a way that }=d_1(g_{k , p},f),\ ] ] and we can also choose the families to be i.i.d . 
then }\\ & \leq\sum_{k=0}^{n-1 } { \mathbb{e}\left[\sum_{i=1}^{i}\big|a_{n , p}^{(i)}\bbone_{z_{n , p}^{(i)}=k}\big|\right ] } { \mathbb{e}\left[\big|x_{k , p}^{(i)}-x^{(i)}\big|\right ] } + b_{n , p}\\ & \leq\sum_{k=0}^{n-1}\gamma_{k , n , p}\,d_1(g_{k , p},f)+b_{n , p}\end{aligned}\ ] ] with }+{\mathbb{e}\left[{\left|t_{n , p}-t\right|}\right]},\\ \gamma_{k , n , p}&=&\sum_{i=1}^{i}{\mathbb{e}\left[\big|a_{n , p}^{(i)}\big| \bbone_{z_{n , p}^{(i)}=k}\right]}.\end{aligned}\ ] ] let be a bound for }\right)}_{n , p} ] and }<\infty ] and }<\infty ] and }+ 2\,{\mathbb{e}\left[x\right ] } { \mathbb{e}\left[t\sum_i a^{(i)}\right ] } + { \mathbb{e}\left[{\left(\sum_i a^{(i)}\right)}^2 - 1\right]}({\mathbb{e}\left[x\right]})^2 } { 1-\sum_i { \mathbb{e}\left[a^{(i)\,2}\right]}}.\ ] ] taking expectations in ( [ masterequ2 ] ) we obtain }= \sum_i { \mathbb{e}\left[a^{(i)}\right ] } { \mathbb{e}\left[x\right ] } + { \mathbb{e}\left[t\right]} ] .it is easy to see that now is a strict contraction in with the metric ; hence has a unique fixed point in . since , this fixed pointmust be , which shows that } < \infty ] .easy computations give ^ 2\right]}+{\mathbb{e}\left[[(2c-1)u+1-c]^2\right]}=\frac23(1-c+c^2)\le \frac 23.\ ] ] we must prove the convergence of , and to , and , in . recall . from proposition [ description ] we know that , conditioned on , and thus hence , taking the expectation , and thus consequently , and , similarly but more sharply , similarly , and from , and follows it follows easily from cauchy schwarz s inequality that multiplication is a continuous bilinear map .hence yields verifying the first assertion .similarly implies too . for first observe that , similarly , from and , moreover , since , and imply and . for the terms and we use proposition [ invb&w ] .we have and thus , uniformly for , which , using cauchy schwarz again , yields moreover , let . then and and thus proposition [ invb&w ] now yields , by and , uniformly for , consequently , using proposition [ description ] , collecting the various terms above , we find .as already noticed at the beginning of the section , the distribution of does not depend on , so in order to prove the two theorems , we only have to check that the fourth assumption holds for , for an arbitrary set in each of the two filters : } = 0 , \qquad \forall v\in\mathcal f_i , \forall i\in\{1,2\ } ; \ ] ] also , the expectation on the left hand side of ( [ equation_filter ] ) is decreasing in , so we need only to check ( [ equation_filter ] ) for typical elements of the filters basis . but for ( resp . for ) , } & \le { \left(\frac{n-1}n\right)}^2,\\ { \mathbb{e}\left[\big|a_{n , p}\big|\textrm { ; } ( z_{n , p},p)\notin \widetilde v_{n,\varepsilon}\right ] } & \le { \left(\frac{n}{np}\right)}^2.\end{aligned}\ ] ] the proof of these theorems is done in four steps : 1 .we prove that defined at ( [ def de i ] ) is almost surely finite , and has exponentially decreasing distribution tail .thus it has moments of all orders .2 . with the help of a poisson point process representation of quicksort, we prove the convergence of certain copies of to a copy of for the norm .this entails the weak convergence .we prove that satisfies the functional equation ( [ functional1 ] ) , and that ( [ functional1 ] ) has a unique solution under the extra assumptions in theorem [ thm : unique ] .4 . we compute the first and second moments of , as required for the proof of theorem [ p=1/n ] , and we also give an induction formula for moments of larger order . 
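as an aside on the metric used in the contraction argument above: for equal-size samples the wasserstein distance d_1 has a very concrete form, since coupling two empirical laws through their inverse distribution functions amounts to matching order statistics. the sketch below is an illustration only and is not part of the paper's proof.

```python
import random

def d1_empirical(xs, ys):
    """d_1 between the empirical laws of two samples of equal size:
    the optimal coupling matches order statistics (inverse distribution functions)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# toy check: d_1 between the law of U and the law of U^2, with U uniform on (0,1),
# equals the integral of (t - t^2) over (0,1), i.e. 1/6
xs = [random.random() for _ in range(10000)]
ys = [random.random() ** 2 for _ in range(10000)]
print(d1_empirical(xs, ys))   # should be close to 1/6
```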
in this section ,we prove some properties of the family of random variables defined by .recall that the increasing sequence , defined by the recurrence relation ( [ recurrence_sur_u ] ) , splits ] .the length is the product of independent random variables , each uniform on ] .[ bigg ] for , is a -martingale , and }=1 ] . also : } & = { \left(\frac{1+\alpha}2\right)}^{k+1}\sum_{i=1}^{2^k}{\mathbb{e}\left[w^{\alpha}_{k+1,2i-1 } + w^{\alpha}_{k+1,2i}|\mathcal{f}_k\right]}\\ & = { \left(\frac{1+\alpha}2\right)}^{k+1}\sum_{i=1}^{2^k}w^{\alpha}_{k , i } { \mathbb{e}\left[{u}_{k , i}^{\alpha}+(1-{u}_{k , i})^{\alpha}\right]}\\ & = { \left(\frac{1+\alpha}2\right)}^{k}\sum_{i=1}^{2^k}w^{\alpha}_{k , i}.\end{aligned}\ ] ] let denote the larger real solution of the equation .lemma [ bigg ] entails that [ l : moment ] }\le \rho^{k} ] . set . inspecting, we see that }=\frac1 2\ \sum_{k\ge 1}{\left(\frac23\right)}^{k}f_{k,2},\ ] ] because , conditionally given , the expected number of points of is and each of them has an expected contribution to . as a consequence of lemma [ l :moment ] , we have [ tail ] for each fixed , the distribution tail decreases exponentially fast .equivalently , we prove this result for . since we have where is a poisson random variable with mean .we split the tail of this bound on as follows : in which we have , by the standard chernoff bound for the poisson distribution , the last inequality holding only for .also using a markov first moment inequality to bound both terms . for any in , the choice leads to an exponential decrease of the tail .we assume that the input list for quicksort contains the integers in random order .we model our error - prone quicksort as follows using the variables and in section [ s : results ] , but with the intensity of replaced by : in the first step , we use the pivot and let for each ( except the pivot ) there be an error in the comparison of and the pivot if \neq\emptyset ]. let .let be the position of the pivot after the comparisons ( as in the first step , may differ from because of errors ) ; let if the sublist was empty .set we expect and to converge to as .this procedure ( stopped when there are no more nonempty sublists ) is an exact simulation of the erratic quicksort , so we may assume that is the number of inversions created by it .as in section [ functequns ] , let be the number of inversions created at step , so we will prove that , using the notation of , for each . since also , by lemmas [ l : boundk ] and [ l : erika ] , } + \frac1{{\lambda}(n , p ) } { \mathbb{e}\left[\sum_{j=1}^{2^k } \sum_{x\in\pi_{k , j}}|x - x_{k , j}|\right]}\\ & = \frac 1{n^2p } { \mathbb{e}\left[i^{(k)}(n , p ) \right ] } + { \mathbb{e}\left [ \frac12 \sum_{j=1}^{2^k}w_{k , j}^2\right ] } \le { \left(\frac23\right)}^k,\end{aligned}\ ] ] it follows by dominated convergence that , using , moreover , , and it follows easily from that .hence we have , which proves the convergence .it remains to verify .set relation is equivalent to for simplicity , we write in the sequel instead of .we begin with a lemma .[ ll ] for each and , recall that , so translates to we use induction on . comparing the definitions of and , we see that it suffices to consider an odd , and in that case there are three sources of a difference : 1 . the differences between and and between and .by the induction hypothesis , this contributes at most .2 . the inside ( and the rounding by ) the ceiling function .this contributes at most .3 . 
the shift of the pivot , from to , caused by the erroneous comparisons .the shift is bounded by the total number of errors at step , so its mean is less than , and the contribution is less than .we return to proving . for , is just studied in section [ functequns ] , and yields let be the set of items such that an error was made in the comparison with .relation entails that we shall denote this last sum .thus , we have furthermore }&=(m-1)p(1-p)+((m-1)p)^2\le np+n^2p^2 . \label{boundagain}\end{aligned}\ ] ] hence , moreover , differs from in in four ways only ( recall that ) : 1 . differs from by at most .since the expected number of terms is not larger than , this gives a contribution .2 . , which by lemma [ ll ] has expectation .thus this too gives a contribution .if there are two or more points in ] is less than , and each point contributes at most 1 to .each point in ] for some , see the case .each point in ] and }\to0 ] as by and dominated convergence . for ,let be a poisson point process of intensity on ] .let and be two solutions of in .let and denote two measurable processes representing respectively and , in the sense of remark [ remmesure ] ( i ) . without loss of generality , we can assume that and share the same underlying probabilistic space , and the same exponent .then , by definition of , for , }\ ] ] is finite .let denote the infimum of over all couples of representations of and , lying on the same probabilistic space , and assume that .let be such a couple of representations , satisfying furthermore consider a probabilistic space on which are defined three _independent _ random variables , and , and being two copies of , being uniform on . finally , for every , set then and are representations of ( resp . ) and satisfy remark [ remmesure ] ( ii ) .moreover , we have , for , } & = \mathbb{e}\left[\lambda^\alpha\left|u^2 y_1({\lambda}u ) + ( 1-u)^2 y_2({\lambda}(1-u))\right.\right.\\ & \hspace{2 cm } \left.\left.- u^2 z_1({\lambda}u ) - ( 1-u)^2 z_2({\lambda}(1-u))\right|\right ] \\ & \leq 2{\mathbb{e}\left[\lambda^\alpha u^2\left|y_1({\lambda}u)-z_1({\lambda}u)\right|\right ] } \\ & \leq 2\int_0 ^ 1 u^{2-\alpha}\ { \mathbb{e}\left[(\lambda u)^\alpha \left|y_1({\lambda}u)-z_1({\lambda}u)\right|\right ] } du \\ &< \delta,\end{aligned}\ ] ] leading to a contradiction .the aim of this section is the computation of moments of , completing the proof of theorem [ p=1/n ] .if one uses directly , the computations of moments by induction are hardly tractable because all three terms on the right of depend on . to circumvent this problem , we consider a new distributional identity in which * is as in section [ s : results ] ; equivalently , * and are independent ; * conditionally , given , and are independent and distributed as and , respectively .the next propositions establish relations between and solutions of , eventually providing an algorithm for the computation of moments of ( see and ) .[ 5to34 ] the family , in which and are assumed independent , is a solution of .[ unique2 ] the -th moment of is a polynomial of degree in the variable . 
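as a quick numerical aside on the fragmentation used above: the normalisation in lemma [ bigg ] can be checked by simulation, since splitting every interval of [0,1] at an independent uniform point gives E[u^a + (1-u)^a] = 2/(1+a) per split, so ((1+a)/2)^k times the sum of the a-th powers of the generation-k widths has mean one. the function names and parameters below are ours.

```python
import random

def generation_widths(k):
    """widths of the 2**k intervals after k rounds of independent uniform binary splitting of [0,1]."""
    widths = [1.0]
    for _ in range(k):
        nxt = []
        for w in widths:
            u = random.random()
            nxt += [w * u, w * (1.0 - u)]
        widths = nxt
    return widths

a, k, trials = 1.5, 6, 2000
m = sum(((1 + a) / 2) ** k * sum(w ** a for w in generation_widths(k))
        for _ in range(trials)) / trials
print(m)   # should be close to 1, the mean of the martingale in lemma [ bigg ]
```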
before proving propositions [ 5to34 ] and [ unique2 ], we need a lemma .[ factsaboutw ] the -th moment } ]is equal to .consider and assume that the properties in the lemma hold for .then , for and smaller than , and , the expression is a polynomial with degree and , due to lemma [ factsaboutw ] , vanishes at 0 .thus , in this case , is a polynomial with degree , vanishing at 0 .it is now easy to check that a polynomial satisfies if and only if , for , }p_n=\frac{n+k+1}{n+k-1}\ { \left[{\lambda}^k\right]}\psi_n.\ ] ] also , by the induction assumptions , } & = { \mathbb{e}\left[\big(\xi({\lambda})+uy(u{\lambda})+(1-u)\widetilde{y}((1-u){\lambda})\big)^n\right]}\\ & = \sum_{r+k+\ell = n } \binom{n}{r , k,\ell}g_r({\lambda } ) { \mathbb{e}\left[u^k(1-u)^{\ell}y(u{\lambda})^k\widetilde{y}((1-u){\lambda})^{\ell}\right]}\\ & = 2{\mathbb{e}\left[u^ny({\lambda}u)^n\right]}+\psi_n({\lambda}).\end{aligned}\ ] ] note that for . by remark[ remmesure ] , } ] is a polynomial of degree that vanishes at . since , with independent summands ,we obtain } = \sum_{0\le k\le m}{m\choose k}{\mathbb{e}\left[\xi({\lambda})^{m - k}\right]}{\lambda}^k{\mathbb{e}\left [ x({\lambda})^k\right]}.\ ] ] the result follows by induction . the moments of , and thus of , can be computed up to arbitrary order with the help of and . for the first two moments , the calculations run as follows . expanding in the proof of lemma [ factsaboutw ]we obtain [ l : xi ] }=\frac12{\lambda}\qquad\text{and}\qquad g_2({\lambda})={\mathbb{e}\left[\xi({\lambda})^2\right]}=\frac13{\lambda}+\frac14{\lambda}^2.\ ] ] }={\mathbb{e}\left[\xi({\lambda})\right]}={\lambda}\qquad\text{and}\qquad { \lambda}^2{\textnormal{var}\left(x({\lambda})\right)}={\textnormal{var}\left(\xi({\lambda})\right)}=\frac13{\lambda}+\frac{1}{12}{\lambda}^2.\ ] ] taking in and , we find , using lemma [ l : xi ] , taking , we similarly find since , with independent summands , }={\mathbb{e}\left[\xi({\lambda})\right]}+{\lambda}{\mathbb{e}\left [ x({\lambda})\right]},\ ] ] which by lemma [ l : xi ] yields }={\lambda}$ ] . similarly , } = p_2({\lambda})-{\mathbb{e}\left[\xi({\lambda})^2\right]}-2{\mathbb{e}\left[\xi({\lambda})\right]}{\mathbb{e}\left[{\lambda}x({\lambda})\right ] } = \tfrac13{\lambda}+\tfrac{13}{12}{\lambda}^2,\end{aligned}\ ] ] which yields the variance formula . the formulas for mean and variance of can also be obtained directly from and lemma [ l : xi ] ; we leave this as an exercise .we have presented a probabilistic analysis of quicksort when some comparisons can err .analysing other sorting algorithms such as merge sort , insertion sort or selection is even more intricate .they do not fit into the model presented in this paper and further more involved probabilistic models / arguments are required .we conjecture that the same normalization holds for the number of inversions in the _ output of merge sort _ for , , and that the limit law satisfies }=\sum_{k\ge 0}\ \frac{2^k}{(2^k+2)(2^k+3)}= 0.454674373\dots<{\mathbb{e}\left[x({\lambda})\right]}.\ ] ]a more rigorous formulation and a shorter proof of theorem [ thm : unique ] are born from discussions with uwe rsler .also , we thank two anonymous referees , whose careful reading led to substantial improvements . | we provide a probabilistic analysis of the output of quicksort when comparisons can err . |
first - order temporal logic ( ) has been shown to be a powerful formalism for expressing sophisticated dynamic properties .unfortunately , this power also leads to strong intractability .recently , however , a fragment of , called _ monodic _ , has been investigated , both in terms of its theoretical and practical properties .essentially , monodicity allows for one free variable in every temporal formula .although clearly restrictive , this fragment has been shown to be useful in expressive description logics , infinite - state verification , and spatio - temporal logics .we here develop a new temporal logic , combining decidable fragments of monodic with recent developments in xor temporal logics , and apply this to the verification of parameterised systems .we use a communicating finite state machine model of computation , and can specify not only basic synchronous , parameterised systems with instantaneous broadcast communication , but the powerful temporal language allows us also to specify asynchronously executing machines and more sophisticated communication properties , such as delayed delivery of messages .in addition , and in contrast to many other approaches , not only safety , but also liveness and fairness properties , can be verified through automatic deductive verification .finally , in contrast to work on regular model checking and constraint based verification using counting abstraction , the logical approach is both complete and decidable .the verification of concurrent systems often comes down to the analysis of multiple finite - state automata , for example of the following form . in describing such automata , both automata - theoretic and logical approaches may be used . while _ temporal logic_ provides a clear , concise and intuitive description of the system , automate - theoretic techniques such as _ model checking _ have been shown to be more useful in practice .recently , however , a propositional , linear - time temporal logic with improved deductive properties has been introduced , providing the possibility of practical deductive verification in the future .the essence of this approach is to provide an xor constraint between key propositions .these constraints state that exactly one proposition from a xor set can be true at any moment in time .thus , the automaton above can be described by the following clauses which are implicitly in the scope of a ` ' ( ` always in the future ' ) operator . here ` ' is a temporal operator denoting ` at the next moment ' and ` ' is a temporal operator which holds only at the initial moment in time .the inherent assumption that at any moment in time exactly one of , , or holds , is denoted by the following . with the complexity of the decision problem ( regarding , , etc ) being polynomial , then the properties of any finite collection of such automata can be tractably verified using this _ propositional _ xor temporal logic . however , one might argue that this deductive approach , although elegant and concise , is still no better than a model checking approach , since it targets just _ finite _ collections of ( finite ) state machines .thus , this naturally leads to the question of whether the xor temporal approach can be extended to _ first - order temporal logics _ and , if so , whether a form of tractability still applies . 
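returning for a moment to the propositional encoding above: producing such a clause set from a transition table is purely mechanical. the sketch below does this for a hypothetical three-state automaton; the state names, the transitions and the ascii rendering of the temporal operators are invented for illustration, since the concrete automaton of the figure is not reproduced here.

```python
# hypothetical automaton -- these names are illustrative only, not the automaton of the figure
states  = ["s0", "s1", "s2"]
initial = "s0"
succ    = {"s0": ["s1"], "s1": ["s2"], "s2": ["s0", "s2"]}

def xor_temporal_clauses(states, initial, succ):
    clauses = ["start => " + initial]                              # holds at the first moment only
    for q in states:                                               # step clauses, implicitly inside 'always'
        clauses.append(q + " => NEXT (" + " | ".join(succ[q]) + ")")
    clauses.append("exactly-one(" + ", ".join(states) + ")")       # the xor constraint on the state propositions
    return clauses

for c in xor_temporal_clauses(states, initial, succ):
    print(c)
```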
in such an approach, we can consider _ infinite _ numbers of finite - state automata ( initially , all of the same structure ) .previously , we have shown that can be used to elegantly specify such a system , simply by assuming the argument to each predicate represents a particular automaton .thus , in the following is true if automaton is in state : thus , can be used to specify and verify broadcast protocols between synchronous components . in this paperwe define a logic , , which allows us to not only to specify and verify systems of the above form , but also to specify and verify more sophisticated asynchronous systems , and to carry out verification with a reasonable complexity . [ [ section ] ]first - order ( discrete , linear time ) temporal logic , , is an extension of classical first - order logic with operators that deal with a discrete and linear model of time ( isomorphic to the natural numbers , ) . [ [ syntax . ] ] syntax .+ + + + + + + the symbols used in are * _ predicate symbols : _ each of which is of a fixed arity ( null - ary predicate symbols are _ propositions _ ) ; * _ variables : _ ; * _ constants : _ ; * _ boolean operators : _ , , , , , ( ` true ' ) , ( ` false ' ) ; * _ first - order quantifiers : _ ( ` for all ' ) and ( ` there exists ' ) ; and * _ temporal operators : _ ( ` always in the future ' ) , ( ` sometime in the future ' ) , ( ` at the next moment ' ) , ( until ) , ( weak until ) , and ( at the first moment in time ) .although the language contains constants , neither equality nor function symbols are allowed .the set of well - formed -formulae is defined in the standard way : * booleans and are atomic -formulae ; * if is an -ary predicate symbol and , , are variables or constants , then is an atomic -formula ; * if and are -formulae , so are , , , , and ; * if is an -formula and is a variable , then and are -formulae ; * if and are -formulae , then so are , , , , , and . a _ literal _ is an atomic -formula or its negation .[ [ semantics ] ] semantics , + + + + + + + + + + intuitively , are interpreted in _ first - order temporal structures _ which are sequences of _ worlds _ , with truth values in different worlds being connected via temporal operators .more formally , for every moment of time , there is a corresponding _ first - order _ structure , , where every is a non - empty set such that whenever , , and is an interpretation of predicate and constant symbols over .we require that the interpretation of constants is _rigid_. thus , for every constant and all moments of time , we have .a _ ( variable ) assignment _ is a function from the set of individual variables to .we denote the set of all assignments by .the set of variable assignments corresponding to is a subset of the set of all assignments , ; clearly , if . the _ truth _ relation in a structure , is defined inductively on the construction of _ only for those assignments that satisfy the condition . see fig . [fig : sem ] for details . 
is a _ model _ for a formula ( or is _ true _ in ) if , and only if , there exists an assignment in such that .a formula is _ satisfiable _ if , and only if , it has a model .a formula is _ valid _ if , and only if , it is true in any temporal structure under any assignment in .the models introduced above are known as _ models with expanding domains _ since .another important class of models consists of _ models with constant domains _ in which the class of first - order temporal structures , where formulae are interpreted , is restricted to structures , , such that for all .the notions of truth and validity are defined similarly to the expanding domain case .it is known that satisfiability over expanding domains can be reduced to satisfiability over constant domains with only a polynomial increase in the size of formulae .the set of valid formulae of is not recursively enumerable .furthermore , it is known that even `` small '' fragments of , such as the _ two - variable monadic _ fragment ( where all predicates are unary ) , are not recursively enumerable .however , the set of valid _ monodic _ formulae is known to be finitely axiomatisable .an -formula is called _ monodic _ if , and only if , any subformula of the form , where is one of , , ( or , where is one of , ) , contains at most one free variable .we note that the addition of either equality or function symbols to the monodic fragment generally leads to the loss of recursive enumerability .thus , monodic is expressive , yet even small extensions lead to serious problems .further , even with its recursive enumerability , monodic is generally undecidable .to recover decidability , the easiest route is to restrict the first order part to some decidable fragment of first - order logic , such as the guarded , two - variable or monadic fragments .we here choose the latter , since monadic predicates fit well with our intended application to parameterised systems .recall that monadicity requires that all predicates have arity of at most ` 1 ' .thus , we use monadic , monodic .a practical approach to proving monodic temporal formulae is to use _ fine - grained temporal resolution _ , which has been implemented in the theorem prover temp . in the past, temp has been successfully applied to problems from several domains , in particular , to examples specified in the temporal logics of knowledge ( the fusion of propositional linear - time temporal logic with multi - modal s5 ) . from this workit is clear that monodic first - order temporal logic is an important tool for specifying complex systems .however , it is also clear that the complexity , even of _ monadic _ monodic first - order temporal logic , makes this approach difficult to use for larger applications .an additional restriction we make to the above logic involves implicit xor constraints over predicates .such restrictions were introduced into temporal logics in , where the correspondence with bchi automata was described , and generalised in . in both cases , the decision problem is of much better ( generally , polynomial ) complexity than that for the standard , unconstrained , logic . however , in these papers only _ propositional _ temporal logic was considered .we now add such an xor constraint to .the set of predicate symbols , is now partitioned into a set of xor - sets , , , , , with one _ non - xor _ set such that 1 .all are disjoint with each other , 2 . is disjoint with every , 3 . , and 4 . 
for each ,exactly _ one _ predicate within is satisfied ( for any element of the domain ) at any moment in time .consider the formula where and .the above formula states that , for any element of the domain , a , then one of or must be satisfied and one of , or must be satisfied . to simplify our description, we will define a _normal form _ into which formulae can be translated . in the following : * denotes a conjunction of negated xor predicates from the set ; * denotes a disjunction of ( positive ) xor predicates from the set ; * denotes a conjunction of non - xor literals ; * denotes a disjunction of non - xor literals .a _ step _ clause is defined as follows : a _ monodic temporal problem in divided separated normal form ( dsnf ) _ is a quadruple , where : 1 . the universal part , , is a finite set of arbitrary closed first - order formulae ; 2 .the initial part , , is , again , a finite set of arbitrary closed first - order formulae ; 3 .the step part , , is a finite set of step clauses ; and 4 .the eventuality part , , is a finite set of eventuality clauses of the form , where is a unary literal . in what follows, we will not distinguish between a finite set of formulae and the conjunction of formulae within the set . with each monodic temporal problem , we associate the formula now , when we talk about particular properties of a temporal problem ( e.g. , satisfiability , validity , logical consequences etc ) we mean properties of the associated formula .every monodic formula can be translated to the normal form in satisfiability preserving way using a renaming and unwinding technique which substitutes non - atomic subformulae and replaces temporal operators by their fixed point definitions as described , for example , in .a step in this transformation is the following : we recursively rename each innermost open subformula , whose main connective is a temporal operator , by , where is a new unary predicate , and rename each innermost closed subformula , whose main connective is a temporal operator , by , where is a new propositional variable . while renaming introduces new , non - xor predicates and propositions , practical problems stemming from verification are nearly in the normal form , see section [ sec : model ] .first - order temporal logics are notorious for being of a high complexity .even decidable sub - fragments of monodic first - order temporal logic can be too complex for practical use . for example , satisfiability of monodic monadic logic is known to be -complete . however , imposing xor restrictions we obtain better complexity bounds .[ th : complexity ] satisfiability of monodic monadic formulae ( in the normal form ) can be decided in time , where , , are cardinalities of the sets of xor predicates , and is the cardinality of the set of non - xor predicates . before we sketch the proof of this result , we show how the xor restrictions influence the complexity of the satisfiability problem for monadic first - order ( non - temporal ) logic .[ lemma : fom ] satisfiability of monadic first - order formulae can be decided in , where is the length of the formula , and , , , are as in theorem [ th : complexity ] . as in , proposition 6.2.9 , the non - deterministic decision procedure first guesses a structure and then verifies that the structure is a model for the given formula .it was shown , , proposition 6.2.1 , exercise 6.2.3 , that if a monadic first - order formula has a model , it also has a model , whose domain is the set of all _ predicate colours_. 
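the count of predicate colours behind theorem [ th : complexity ] can be made concrete: a colour must contain exactly one positive literal from each xor set and may contain either literal of every non-xor predicate, giving n_1 * ... * n_k * 2^m colours rather than 2^(n_1 + ... + n_k + m). a small enumeration sketch follows; the predicate names are invented for illustration.

```python
from itertools import product

def predicate_colours(xor_sets, free_preds):
    """all predicate colours consistent with the xor constraints:
    exactly one positive literal per xor set, both choices for every non-xor predicate."""
    colours = []
    for choice in product(*xor_sets):                        # one positive predicate per xor set
        base = {p: (p == choice[i]) for i, xs in enumerate(xor_sets) for p in xs}
        for bits in product([False, True], repeat=len(free_preds)):
            colour = dict(base)
            colour.update(zip(free_preds, bits))
            colours.append(colour)
    return colours

xor_sets   = [["idle", "wait", "crit"], ["send", "recv"]]    # sizes n_1 = 3, n_2 = 2
free_preds = ["marked"]                                      # m = 1
print(len(predicate_colours(xor_sets, free_preds)))          # 3 * 2 * 2 = 12, against 2**6 = 64 without xor
```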
a predicate colour , , is a set of unary literals such that for every predicate from the set of all predicates , either or belongs to .notice that under the conditions of the lemma , there are at most different predicate colours .hence , the structure to guess is of size. it should be clear that one can evaluate a monadic formula of the size in a structure of the size in deterministic time .therefore , the overall complexity of the decision procedure is . for simplicity of presentation, we assume the formula contains no propositions .satisfiability of a monodic formula is equivalent to a property of the _ behaviour graph _ for the formula , checkable in time polynomial in the product of the number of different predicate colours and the size of the graph , see , theorem 5.15 . for unrestricted formulae ,the size of the behaviour graph is double exponential in the number of predicates .we estimate now the size of the behaviour graph and time needed for its construction for formulae .let be a set of predicate colours and be a map from the set of constants , , to .a couple is called a _colour scheme_. nodes of the behaviour graph are colour schemes . clearly , there are no more than different colour schemes .however , not every colour scheme is a node of the behaviour graph : a colour scheme is a node if , and only if , a monadic formula of first - order ( non - temporal ) logic , constructed from the given formula and the colour scheme itself , is satisfiable ( for details see ) .a similar first - order monadic condition determines which nodes are connected with edges .it can be seen that the size of the formula is polynomial in both cases . by lemma [ lemma : fom ] , satisfiability of monadic first - order formulae can be decided in deterministic time .overall , the behaviour graph , representing all possible models , for an formula can be constructed in time .in previous work , notably a parameterised finite state machine based model , suitable for the specification and verification of protocols over arbitrary numbers of processes was defined .essentially , this uses a family of identical , and synchronously executing , finite state automata with a rudimentary form of communication : if one automaton makes a transition ( an action ) , then it is required that _ all _ other automata simultaneously make a complementary transition ( reaction ) .in we translated this automata model into monodic and used automated theorem proving in that logic to verify parameterised cache coherence protocols .the model assumed not only synchronous behaviour of the communicating automata , but instantaneous broadcast .here we present a more general model suitable for specification of both synchronous and asynchronous systems ( protocols ) with ( possibly ) delayed broadcast and give its faithful translation into .this not only exhibits the power of the logic but , with the improved complexity results of the previous section , provides a route towards the practical verification of temporal properties of infinite state systems .we begin with a description of both the asynchronous model , and the delayed broadcast approach .[ def : protocol - simple ] a protocol , p is a tuple , where * is a finite set of states ; * is a set of initial states ; * , where * * is a finite set of local actions ; * * is a finite set of broadcast actions , + i.e. `` send a message '' ; * * is the set of broadcast reactions , i.e. `` receive a message '' ; * is a transition relation that satisfies the following property i.e. 
, `` readiness to receive a message in any state '' .further , we define a notion of global machine , which is a set of finite automata , where is a parameter , each following the protocol and able to communicate with others via ( possibly delayed ) broadcast .to model asynchrony , we introduce a special automaton action , , meaning the automaton is not active and so its state does not change . at any momentan arbitrary group of automata may be idle and all non - idle automata perform their actions in accordance with the transition function ; different automata may perform different actions .[ def : glob_mach1 ] given a protocol , , the global machine of dimension _ _ is the tuple , where * * * is a transition relation that satisfies the following property \ , .\end{array}\ ] ] * is a communication environment , that is a set of possible sets of messages in transition .an element is said to be a global configuration of the machine .a run of a global machine is a possibly infinite sequence of global configurations of satisfying the properties ( 1)(6 ) listed below . in this formulationwe assume and .1 . + ( `` initially all automata are in initial states '' ) ; 2 . + ( `` initially there are no messages in transition '' ) ; 3 . + ( `` an arbitrary part of the automata can fire '' ; 4 . + ( `` delivery to all participants is guaranteed '' ) ; 5 . $ ] ( `` one can receive only messages kept by the environment , or sent at the same moment of time '' ) in order to formulate further requirements we introduce the following notation : then , the last requirement the run should satisfy is 1 . [ [ example - asynchronous - floodset - protocol . ] ] example : asynchronous floodset protocol .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we illustrate the use of the above model by presenting the specification of an asynchronous floodset protocol in our model .this is a variant of the _ floodset algorithm with alternative decision rule _ ( in terms of , p.105 ) designed for solution of the consensus problem .the setting is as follows .there are processes , each having an _ input bit _ and an _output bit_. the processes work asynchronously , run the same algorithm and use _ broadcast _ for communication .the broadcasted messages are guaranteed to be delivered , though possibly with arbitrary delays .( the process is described graphically in fig .[ fig : flood ] . ) , scaledwidth=40.0% ] the goal of the algorithm is to eventually reach an agreement , i.e. to produce an output bit , which would be the same for all processes .it is required also that if all processes have the same input bit , that bit should be produced as an output bit .the asynchronous floodset protocol we consider here is adapted from .main differences with original protocol are : * the original protocol was synchronous , while our variant is asynchronous ; * the original protocol assumed instantaneous message delivery , while we allow arbitrary delays in delivery ; and * although the original protocol was designed to work in the presence of crash ( or fail - stop ) failures , we assume , for simplicity , that there are no failures . because of the absence of failures the protocol is very simple and unlike the original one does not require `` retransmission '' of any value. 
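the global-machine semantics above (any subset of the automata may idle at a step, and a broadcast message stays in the communication environment until every participant has received it) is straightforward to prototype. the sketch below is a toy simulation, not the temporal translation used for verification; the per-process rule, broadcast the input bit once and keep the minimum value ever seen, is the floodset rule of this section, while the scheduling and delivery policies are arbitrary illustrative choices, as are all names.

```python
import random

def run_global_machine(n_procs, steps=200, seed=1):
    """toy simulation of n identical processes with asynchronous steps and
    delayed-but-guaranteed broadcast; all names and policies are illustrative."""
    rng = random.Random(seed)
    inputs  = [rng.randint(0, 1) for _ in range(n_procs)]
    outputs = list(inputs)                          # tentative output = minimum value seen so far
    sent    = [False] * n_procs
    in_transit = {}                                 # message -> processes still waiting to receive it

    for _ in range(steps):
        active = [i for i in range(n_procs) if rng.random() < 0.5]   # the others idle this step
        for i in active:
            if not sent[i]:                         # broadcast the input bit exactly once
                in_transit[(i, inputs[i])] = set(range(n_procs))
                sent[i] = True
        for (src, val), pending in list(in_transit.items()):
            delivered = {j for j in pending if rng.random() < 0.3}   # nondeterministic delays
            for j in delivered:
                outputs[j] = min(outputs[j], val)   # floodset rule: keep the minimum value ever seen
            pending -= delivered
            if not pending:                         # every participant has now received the message
                del in_transit[(src, val)]
    return inputs, outputs

ins, outs = run_global_machine(10)
print(ins, outs, len(set(outs)) == 1)   # the correctness criterion: all output bits eventually agree
```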
we will show later ( in section [ sec : var ] ) how to include the case of crash failures in the specification ( and verification ) .thus , the asynchronous floodset protocol is defined , informally , as follows .* at the first round of computations , every process broadcasts its input bit .* at every round the ( tentative ) output bit is set to the minimum value ever seen so far .the correctness criterion for this protocol is that , eventually , the output bits of all processes will be the same .now we can specify the asynchronous floodset as a protocol , where ; ; with , , . the transition relation .given a protocol , we define its translation to as follows . for each , introduce a monadic predicate symbol and for each introduce a monadic predicate symbol .for each we introduce also a propositional symbol . intuitively , elements of the domain in the temporal representation will represent exemplars of finite automata , and the formula is intended to represent `` automaton x is in state '' .the formula is going to represent `` automaton performs action '' .proposition will denote the fact `` message is in transition '' ( i.e. it has been sent but not all participants have received it . ) because of intended meaning we define two xor - sets : and .all other predicates belong to the set of non - xor predicates .we define the temporal translation of , called , as a conjunction of the formulae in fig .[ fig : trans ] .note that , in order to define the temporal translation of requirement ( 6 ) above , ( on the dynamics of environment updates ) we introduce the unary predicate symbol for every .we now consider the correctness of the temporal translation .this translation of protocol is faithful in the following sense .[ prop : trans ] given a protocol , , and a global machine , , of dimension , then any temporal model of with the finite domain of size represents some run of as follows : is -th configuration of the run iff , and dually , for any run of there is a temporal model of with a domain of size representing this run . by routine inspection of the definitions of runs , temporal models and the translation .the above model allows various modifications and corresponding version of proposition [ prop : trans ] still holds .[ [ determinism . ] ] determinism .+ + + + + + + + + + + + the basic model allows non - deterministic actions . to specify the case of deterministic actions only, one should replace the `` action effects '' axiom in fig .[ fig : trans ] by the following variant : \ ] ] for all [ [ explicit - bounds - on - delivery . ] ] explicit bounds on delivery .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + in the basic mode , no explicit bounds on delivery time are given .to introduce bounds one has to replace the `` guarantee of delivery '' axiom with the following one : \ ] ] for all and some ( representing the maximal delay ) .[ [ finite - bounds - on - delivery . ] ] finite bounds on delivery . + + + + + + + + + + + + + + + + + + + + + + + + + + one may replace the `` guarantee of delivery '' axiom with the following one \ ] ] for all .[ [ crashes . ] ] crashes .+ + + + + + + + one may replace the `` guarantee of delivery '' axiom by an axiom stating that only the messages sent by normal ( non - crashed ) participants will be delivered to all participants .( see for examples of such specifications in a context . )[ [ guarded - actions . 
] ] guarded actions .+ + + + + + + + + + + + + + + + one can also extend the model with guarded actions , where action can be performed depending on global conditions in global configurations . returning to the floodset protocol, one may consider a variation of the asynchronous protocol suitable for resolving the consensus problem in the presence of _ crash failures_. we can modify the above setting as follows .now , processes may fail and , from that point onward , such processes send no further messages .note , however , that the messages sent by a process _ in the moment of failure _ may be delivered to _ an arbitrary subset _ of the non - faulty processes .the goal of the algorithm also has to be modified , so only _ non - faulty _ processes are required to eventually reach an agreement .thus , the floodset protocol considered above is modified by adding the following rule : * at every round ( later than the first ) , a process broadcasts any value _ the first time it sees it_. now , in order to specify this protocol the variation of the model with crashes should be used .the above rule can be easily encoded in the model and we leave it as an exercise for the reader .an interesting point here is that the protocol is actually correct under the assumption that _ only finitely many processes may fail ._ this assumption is automatically satisfied in our automata model , but not in its temporal translation .instead , one may use the above _ finite bounds on delivery _ axiom to prove the correctness of this variation of the algorithm .now we have all the ingredients to perform the verification of parameterised protocols . given a protocol , we can translate it into a temporal formula .for the temporal representation , of a required correctness condition , we then check whether is valid temporal formula .if it is valid , then the protocol is correct for all possible values of the parameter ( sizes ) .correctness conditions can , of course , be described using any legal formula .for example , for the above floodset protocol(s ) we have a liveness condition to verify : or , alternatively \ ] ] in the case of a protocol working in presence of processor crashes .while space precludes describing many further conditions , we just note that , in , we have demonstrated how this approach can be used to verify safety properties , i.e with . since we have the power of , but with decidability results, we can also automatically verify fairness formulae of the form .in the propositional case , the incorporation of xor constraints within temporal logics has been shown to be advantageous , not only because of the reduced complexity of the decision procedure ( essentially , polynomial rather than exponential ; ) , but also because of the strong fit between the scenarios to be modelled ( for example , finite - state verification ) and the xor logic ) .the xor constraints essentially allow us to select a set of names / propositions that must occur exclusively . in the case of verification for finite state automata, we typically consider the automaton states , or the input symbols , as being represented by such sets .modelling a scenario thus becomes a problem of engineering suitable ( combinations of ) xor sets . 
in this paper, we have developed an xor version of , providing : its syntax and semantics ; conditions for decidability ; and detailed complexity of the decision procedure .as well as being an extension and combination of the work reported in both and , this work forms the basis for tractable temporal reasoning over infinite state problems . in order to motivate this further, we considered a general model concerning the verification of infinite numbers of identical processes .we provide an extension of the work in and , tackling liveness properties of infinite - state systems , verification of asynchronous infinite - state systems , and varieties of communication within infinite - state systems .in particular , we are able to capture some of the more complex aspects of _ asynchrony _ and _ communication _ , together with the verification of more sophisticated _ liveness_ and _ fairness _ properties .the work in on basic temporal specification such as the above have indeed shown that deductive verification can here be attempted but is expensive the incorporation of xor provides significant improvements in complexity .the properties of first - order temporal logics have been studied , for example , in .proof methods for the monodic fragment of first order - temporal logics , based on resolution or tableaux have been proposed in .model checking for parameterised and infinite state - systems is considered in .formulae are translated into to a bchi transducer with regular accepting states .techniques from regular model checking are then used to search for models .this approach has been applied to several algorithms verifying safety properties and some liveness properties .* have theoretically non - primitive recursive upper bounds for decision procedures ( although they work well for small , interesting , examples ) in our case the upper bounds are definitely primitive - recursive ; * are not suitable ( or , have not been used ) for asynchronous systems with delayed broadcast it is not clear how to adapt these methods for such systems ; and * typically lead to undecidable problems if applied to liveness properties .future work involves exploring further the framework described in this paper in particular the development of an implementation to prove properties of protocols in practice .further , we would like to see if we can extend the range of systems we can tackle beyond the monodic fragment .we also note that some of the variations we might desire to include in section [ sec : var ] can lead to undecidable fragments .however , for some of these variations , we have correct although ( inevitably ) incomplete methods , see .we wish to explore these boundaries further .10 p. a. abdulla , b. jonsson , m. nilsson , j. dorso , and m. saksena . .in _ proc .16th international conference on computer aided verification ( cav ) _ , volume 3114 of _ lncs _ , pages 348360 .springer , 2004 .a. artale , e. franconi , f. wolter , and m. zakharyaschev . a temporal description logic for reasoning over conceptual schemas and queries . in _ proc .european conference on logics in artificial intelligence ( jelia ) _ , volume 2424 of _ lncs _ , pages 98110 .springer , 2002 .j. brotherston , a. degtyarev , m. fisher , and a. lisitsa . .in _ proc . international conference on logic for programming , artificial intelligence , and reasoning ( lpar )_ , volume 2514 of _ lncs _ , pages 86101 .springer verlag , 2002 .d. gabelaia , r. kontchakov , a. kurucz , f. wolter , and m. zakharyaschev . 
on the computational complexity of spatio-temporal logics. in _proc. 16th international florida artificial intelligence research society conference (flairs)_, pages 460-464. aaai press, 2003. i. hodkinson, r. kontchakov, a. kurucz, f. wolter, and m. zakharyaschev. on the computational complexity of decidable fragments of first-order linear temporal logics. in _proc. international symposium on temporal representation and reasoning (time)_, pages 91-98. ieee cs press, 2003. | in this paper we consider the specification and verification of infinite-state systems using temporal logic. in particular, we describe parameterised systems using a new variety of first-order temporal logic that is both powerful enough for this form of specification and tractable enough for practical deductive verification. importantly, the power of the temporal language allows us to describe (and verify) asynchronous systems, communication delays and more complex properties such as liveness and fairness properties. these aspects appear difficult for many other approaches to infinite-state verification. |
the interaction of molecular species with surfaces is of critical importance in astrophysical environments .much interstellar chemistry occurs on or in the icy layers which cover dust grains in molecular clouds , the birthplace of stars and planets .these dust grains are believed to be composed of silicates and carbonaceous material .the silicate is at least 95 % amorphous , as determined from the breadth of the 9.7 m band , and although its composition has not been determined precisely , is well approximated by amorphous olivine ( mgfesio ) .the silicates are believed to be fluffy in nature , with a large surface area upon which reactions can occur . at the low temperatures in molecular clouds ( t 20 k ) atoms and molecules freeze out onto these silicate surfaces , forming the icy mantles .amorphous solid water ( asw ) is by far the largest component of this ice , with abundances of 1 10 with respect to the total h column density , equivalent to coverages of up to 100 monolayers ( ml ) .given that the extinction threshold for h mantles is a 3.3 mag , it is reasonable to assume that between the dense pre - stellar cores ( where dust grains are completely coated by this icy mantle ) and the cloud edges ( where competition between ice formation and photodesorption of h yields a population of bare silicate grains ) , there must be a region where icy surfaces and bare silicates co - exist .as interstellar regions evolve , the icy grains are further processed , either by gentle heating or cyclic desorption - deposition events , producing crystalline h .this has been detected in various stellar and pre - planetary environments , including , for example , m giant stars , quaoar in the kuiper belt , trans neptune objects , comets , and in the outer disks around t tauri stars . as gas - grain modelling of interstellar environments becomes more sophisticated ,key questions impeding the full implementation of surface chemistry include : what effect does the underlying grain surface have on the desorption characteristics of key molecules , and to what extent is this desorption affected as we move from the multilayer to sub - monolayer coverage regime ?a third issue is how to realise the transition between the multilayer and monolayer regimes in a gas - grain model without overloading the model with many more layers of complexity .co , o and co are all interstellar molecules that potentially could populate bare interstellar grains at sub - monolayer coverages . 
on ice , only one previous study focussed on sub - monolayer coverages of co , highlighting the spectroscopic rather than desorption characteristics of the porous asw : co system .to date , no desorption studies of co , o or co have been made on a silicate surface , at either multilayer or sub - monolayer coverages .co is the second most abundant interstellar molecule , and is known to form in the gas phase , then freeze - out on to h covered grains to form overlayers of pure co ice , the layer thickness critically depending on gas density rather than grain temperature .consequently , extensive temperature programmed desorption ( tpd ) studies have been made of multilayer co coverages on various surfaces , including au , h , a meteorite sample and highly oriented pyrolytic graphite ( hopg ) .it is widely accepted that both co and h form on dust grains in molecular clouds , and recent studies suggest that at least some of the co and h is formed concurrently .since co is the key precursor to co formation , such mechanisms would require co freeze - out onto bare grain surfaces long before co ice , or even large quantities of h ice , are detected .although tenuous , spectroscopic observational and experimental evidence exists for such a freeze - out process , but an investigation of the desorption behaviour of co at sub - monolayer coverages on ice and silicate surfaces is vital if we want to be certain that it can reside at an interstellar grain surface long enough to form more complex species . similarly , o is a potential precursor in h formation , and is also a species likely to form in the gas phase then freeze - out onto grain surfaces , rather than forming on the grain itself .however , as a homonuclear diatomic o is infrared inactive , so its detection in interstellar ice , though occasionally claimed via a forbidden transition remains elusive .it is not clear whether this is related to the weak transition probability , lack of a significant o population in the ice , or that the o has rapidly reacted ( upon adsorption ) to exclusively form h .nevertheless , the multilayer desorption behaviour of o has been studied previously alongside co and n , as well as on au , porous asw and tio .these studies show that the multilayer desorption characteristics of o are very similar to co , so it is interesting from both a chemical and astrophysical viewpoint to also investigate the desorption behaviour of o at sub - monolayer coverages on ice and silicate surfaces .the desorption characteristics of co have not been extensively studied on any surface , despite it being one of the most abundant solid phase molecular species in the interstellar medium .recently the desorption characteristics of multilayer co were reported from porous asw , and have been previously studied on hopg , porous asw and au .however , given that the observational and experimental evidence shows that a fraction of co ice must form concurrently with the water ice layer , some co molecules must populate both the bare silicate grains and the ice layers at sub - monolayer coverages . herewe present an experimental study of the desorption of co , o and co from three different surfaces ; non - porous asw , crystalline ice , and amorphous olivine - type silicate . 
as these are , for the first time , all undertaken in the same experimental set - up formolism we are able to investigate both the individual effect of each surface on the desorption characteristics , as well as determining whether the molecular composition or morphology of the surface is most relevant in determining the desorption behaviour of molecules . for each of the nine combinations of co , o and co on each surface ,our study has encompassed both the multilayer and sub - monolayer regimes .these data are modelled to determine a simple analytical expression which accurately calculates both the sub - monolayer and the multilayer desorption energies . by changing the model to incorporate interstellar , rather than experimental , heating rates ,we are able to address the key questions above , namely : what effect does the underlying surface have on the desorption characteristics of the molecules adsorbed there , and how is this desorption modified in the sub - monolayer coverage regime ?the experiments were conducted using the formolism set - up ( formation of molecules in the interstellar medium ) , described in detail elsewhere . briefly , the set - up consists of an ultra - high vacuum chamber ( base pressure 10 mbar ) , containing a silicate - coated copper sample surface , operating at temperatures between 18 and 400 k. the system is equipped with a quadrupole mass spectrometer ( qms ) , which is used for the tpd experiments . a sample of either o , co or co was deposited onto the surface at 18 k via the triply differentially pumped beam line ; a linear temperature ramp was then applied to the surface , and the qms used to measure the desorption of each species into the gas phase , as a function of temperature ..description of experimental exposures . [ cols= " < ,< , < " , ] full wetting behaviour is seen for molecules which fill all the lowest energy sites on the surface before forming a multilayer . intermediatebehaviour is when the molecules start to fill the surface , but form islands before it is full .non - wetting behaviour is when islands are almost immediately formed by molecules on the surface .( dot - dashed ) and co ( dashed ) from h ( black ) , h ( dark grey ) and sio ( light grey ) at a heating rate of 1 kcentury . for comparison ,the black dotted line shows the modelled desorption characteristics of co on h if only multilayer desorption is included in the model.,scaledwidth=50.0% ] under astrophysical conditions , the heating rate is likely to be on the order of 1 kcentury ( hot core heating rate ) , many orders of magnitude slower than the laboratory value of 10 kmin employed here . by simulating the desorption profiles of the molecule - surface combinations here ,it is possible to investigate whether surface type and coverage is likely to affect desorption characteristics on astrophysically relevant timescales . figure [ fgr : astro ] presents the results of such a simulation for all molecule - surface combinations , assuming a 2 ml coverage of each species on each grain ( i.e. one multilayer and one monolayer ) .this seems reasonable as none of these species ( except possibly co ) form large pure ice overlayers in the ism . 
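The following sketch illustrates the kind of combined multilayer/sub-monolayer desorption model described above: a zeroth-order Polanyi-Wigner term while the coverage exceeds 1 ML, switching to a first-order term with a coverage-dependent binding energy below 1 ML, integrated over a linear heating ramp of 1 K per century. The pre-exponential factor, the binding energies and the functional form of the coverage-dependent energy are illustrative assumptions, not the fitted values of this study.

```python
import numpy as np

# Illustrative parameters only -- NOT the fitted values from this study.
NU      = 1.0e12                      # pre-exponential factor [s^-1] (typical assumed value)
E_MULTI = 1000.0                      # multilayer desorption energy [K] (assumed)
BETA    = 1.0 / (100.0 * 3.156e7)     # heating rate: 1 K per century, in K s^-1
SEC_PER_YEAR = 3.156e7

def e_sub(theta):
    """Assumed coverage-dependent sub-monolayer binding energy [K]."""
    return 1000.0 + 500.0 * (1.0 - theta)

def tpd_trace(theta0=2.0, T0=10.0, Tmax=60.0, dT=1e-3):
    """Integrate a combined zeroth-order (multilayer) / first-order
    (sub-monolayer, coverage-dependent E) Polanyi-Wigner model over a
    linear heating ramp.  Coverage theta is in monolayers (ML)."""
    theta, T = theta0, T0
    Ts, thetas = [T], [theta]
    while T < Tmax and theta > 1e-6:
        if theta > 1.0:                                   # multilayer: zeroth order
            rate = NU * np.exp(-E_MULTI / T)              # ML s^-1
        else:                                             # sub-monolayer: first order
            rate = NU * theta * np.exp(-e_sub(theta) / T)
        theta = max(theta - rate * dT / BETA, 0.0)        # d(theta) = -(rate / beta) dT
        T += dT
        Ts.append(T)
        thetas.append(theta)
    return np.array(Ts), np.array(thetas)

if __name__ == "__main__":
    T, theta = tpd_trace()
    # temperature (and elapsed time) at which half of the initial 2 ML has desorbed
    i = np.searchsorted(-theta, -1.0)                     # first index where theta <= 1.0
    print("50%% desorbed at T = %.1f K, t = %.0f yr"
          % (T[i], (T[i] - T[0]) / BETA / SEC_PER_YEAR))
```

Replacing the assumed constants with measured pre-exponential factors and the coverage-dependent energies reported for each molecule-surface pair would reproduce curves of the type shown in the figure discussed here.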
in figure[ fgr : astro ] , the dot - dashed line shows the modelled desorption characteristics of 2 ml of co from h if only multilayer desorption is considered .it is evident that without including the sub - monolayer component of the model , the desorption time is vastly underestimated ( by over a few 10 years or 10 15 k ) , particularly for co and co .this is a comparable error to that introduced when gas - grain models treat multilayer desorption as first order , as suggested previously , and illustrates that sub - monolayer desorption characteristics are fundamental to describing the overall desorption profile of species on astrophysical timescales .figure [ fgr : astro ] shows that for o there is the least difference between the time ( or temperature ) at which desorption would be completed in the multilayer only model versus the combined multilayer - monolayer model .however , monolayers of both co and co are able to reside directly at the surface for thousands of years , or at equivalent temperatures 10 15 k higher , than those predicted by multilayer desorption alone , particularly on the amorphous silicate surface . in quiescent regions of the interstellar medium or at the edges of molecular clouds , where grains are warmer , this would suggest that small molecules , like co and co , could easily be present on the grain surfaces , potentially undergoing chemistry to generate larger chemical species , long before detectable ice layers really form .such processes have been postulated previously , and it remains an interesting follow - on from this work to see which chemical species will form in astrochemical models when these sub - monolayer desorption data are considered .what is also clear from figure [ fgr : astro ] is that , while the molecules desorb at different times ( and temperatures ) , there is a clear pattern to the order of desorption from the various substrates .the onset of desorption from the amorphous surfaces , h and sio , occurs almost concurrently for all species , while desorption from h occurs later , at a higher temperature .this delay in desorption from crystalline surfaces indicates that in regions where crystalline water is predicted to dominate , for example on processed ices inside disks around ysos , or even cometary surfaces , volatile species will not start desorbing from grain surfaces for a few thousand years longer ( or over temperature ranges of a few kelvin higher ) than anticipated from zeroth order desorption kinetics models .such differences are subtle , but when one considers that these molecules will be highly mobile at such temperatures , the potential for complex chemistry to occur on the grain surfaces is increased .furthermore , it is exactly these inner disk regions , where crystalline water ice is predicted to be found , from which hot core and hot corino gases are populated by desorbing molecules(e.g . * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?similarly , the simulation results in figure [ fgr : astro ] show the residence time of molecules on a h surface is significantly higher than on the amorphous surfaces . 
starting from a 2 ml coverage ,the time taken for 50 % of the o on sio or h to desorb is 8000 years , while on h this value is 8700 years , an increase of 8.75 % .similarly , the difference between desorption of 50 % of the molecules from h and from h is 8.82 % for co and 6.97 % for co .a key aim of this work was to address what effect the underlying dust grain surface could have on the desorption characteristics of molecules adsorbed there , and how this desorption is modified in the sub - monolayer coverage regime compared to the multilayer regime .this work used temperature programmed desorption to measure the desorption characteristics of o , co and co from h , h and sio over sub - monolayer to multilayer coverages .the experimental data were modelled using the polanyi - wigner equation , combining different approaches to reproduce the sub - monolayer and multilayer tpd traces , in particular by varying e as a function of coverage in the sub - monolayer regime .the desorption can be categorised as full wetting , intermediate or non - wetting behaviour , and the switching point is well described by n , an empirical measure of the coverage at which pure monolayer desorption stops . on both a laboratory and an astrophysically relevant timescale , the desorption characteristics of molecules from the amorphous substrates h and sio found to be very similar , while on the crystalline surface h , molecules desorbed at higher temperatures and on a longer timescale .it would be relatively trivial to implement the data presented here into astrochemical models ; it would require only a subroutine to describe the sub - monolayer , coverage dependent , e , which can be calculated using the parameters presented in this paper , and the coverage n , at which the true monolayer desorption ends and beyond which a fixed e , and zeroth order desorption can be implemented .previously , concluded that surface type is relatively unimportant , and rather it is the heating rate and grain size that dominate desorption kinetics ; we do not disagree that both these factors are important , but by not considering the sub - monolayer case , this previous work overlooks certain intricacies .firstly , the results presented here suggest that the surface type is the dominant factor controlling desorption under sub - monolayer coverage conditions .this definition of surface type must include both the surface material and the degree of crystallinity , and not simply the material alone .from the data presented here , it is clear that the desorption characteristics of molecular species from h and sio , two amorphous surfaces , are much more similar than those from h and h , which are composed of the same underlying material .secondly , while the size of the grain is critical to desorption characteristics , it is both the surface area and the surface - adsorbate interaction which must be considered . without including information on whether the surface - adsorbate interaction is wetting or non - wetting , it is not possible to know the coverage at which behaviour switches from sub - monolayer ( to intermediate ) to the multilayer regime , i.e. the value of n . as it is notoriously difficult to calculate physisorption surface - adsorbate energies theoretically , this is necessarily an empirically determined value , highlighting the importance of experiments such as those presented here .j.a.n . 
gratefully acknowledges the financial support of the leverhulme trust , the scottish international education trust , the university of strathclyde and the scottish universities physics alliance , without whom this work would not have been carried out .the research leading to these results has received funding from the european community s seventh framework programme fp7/2007 - 2013 under grant agreement no .we acknowledge the support of the national pcmi programme founded by the cnrs , the conseil regional dile de france through sesame programmes ( contract i-07 - 597r ) , the conseil gnral du val doise and the agence nationale de recherche ( contract anr 07-blan-0129 ) . the authors would like to thank h. mokrane for her assistance in the laboratory , as well as louis dhendecourt and zahia djouadi for preparing the silicate surface used in these experiments .we are also immensely grateful to the anonymous referee , whose insightful comments have greatly improved this manuscript. 99 accolla , m. , et al . 2011 , physical chemistry chemical physics ( incorporating faraday transactions ) , 13 , 8037 acharyya , k. , fuchs , g. w. , fraser , h. j. , van dishoeck , e. f. , & linnartz , h. 2007 , , 466 , 1005 acharyya , k. , hassel , g. e. , & herbst , e. 2011 , , 732 , 73 amiaud , l. , fillion , j. h. , baouche , s. , dulieu , f. , momeni , a. , & lemaire , j. l. 2006 , , 124 , 094702 andersson , p. u. , nagard , m. b. , witt , g. , & pettersson , j. b. c. 2004 , j. phys . chem . a , 108 , 4627 bisschop , s. e. , fraser , h. j. , berg , k. i. , van dishoeck , e. f. , & schlemmer , s. 2006 , , 449 , 1297 bojan , m. j. , & steele , w. a. 1987 , langmuir , 3 , 1123 brown , w. a. , & bolina , a. s. 2007 , , 374 , 1006 burke , d. j. , & brown , w. a. 2010 , , 12 , 5947 cazaux , s. , tielens , a. g. g. m. , ceccarelli , c. , castets , a. , wakelam , v. , caux , e. , parise , b. , & teyssier , d. 2003 , , 593 , l51 collings , m. p. , dever , j. w. , fraser , h. j. , mccoustra , m. r. s. , & williams , d. a. 2003a , , 583 , 1058 collings , m. p. , dever , j. w. , fraser , h. j. , & mccoustra , m. r. s. 2003b , , 285 , 633 collings , m. p. , anderson , m. a. , chen , r. , dever , j. w. , viti , s. , williams , d. a. , & mccoustra , m. r. s. 2004 , , 354 , 1133 collings , m. p. , dever , j. w. , & mccoustra , m. r. s. 2005 , chem ., 415 , 40 cuppen , h. m. , & garrod , r. t. 2011 , , 529 , a151 dohnalek , z. , ciolli , r. l. , kimmel , g. a. , stevenson , k. p. , smith , r. s. , & kay , b. d. 1999 , j. chem .phys . , 110 , 5489 dohnalek , z. , kimmel , g. a. , joyce , s. a. , ayotte , p. , smith , r. s. , & kay , b. d. 2001 , j. phys . chem .b , 105 , 3747 dohnalek , z. , kim , j. , bondarchuk , o. , white , j. m. , & kay , b. d. 2006 , j. phys .b , 110 , 6229 draine , b. t. 2003 , , 41 , 241 elsila , j. , allamandola , l. j. , & sandford , s. a. 1997 , , 479 , 818 ertl , g. , neumann , m. , & streit , k. m. 1977 , surf ., 64 , 393 favre , c. , despois , d. , brouillet , n. , baudry , a. , combes , f. , gulin , m. , wootten , a. , & wlodarczak , g. 2011 , , 532 , a32 fayolle , e. c. , berg , k. i. , cuppen , h. m. , visser , r. , & linnartz , h. 2011 , , 529 , a74 fraser , h. j. , collings , m. p. , mccoustra , m. r. s. , & williams , d. a. 2001 , , 327 , 1165 fraser , h. j. , mccoustra , m. r. s. , & williams , d. a. 2002 , astronomy and geophysics , 43 , 020000 fraser , h. j. , collings , m. p. , dever , j. w. , & mccoustra , m. r. s. 2004 , , 353 , 59 fraser , h. j. , bisschop , s. e. 
, pontoppidan , k. m. , tielens , a. g. g. m. , & van dishoeck , e. f. 2005 , , 356 , 1283 frenklach , m. , huang , d. , thomas , r. e. , rudder , r. a. , & markunas , r. j. 1993 , appl .lett . , 63 , 3090 fuchs , g. w. , et al .2006 , faraday discussions , 133 , 331 glvez , o. , ortega , i. k. , mat , b. , moreno , m. a. , martn - llorente , b.,herrero , v. j. , escribano , r. , & gutirrez , p. j. 2007 , , 472 , 691 goumans , t. p. m. , uppal , m. a. , & brown , w. a. 2008 , , 384 , 1158 green , s. d. , bolina , a. s. , chen , r. , collings , m. p. , brown , w. a. , & mccoustra , m. r. s. 2009 , , 398 , 357 ioppolo , s. , cuppen , h. m.,romanzin , c. , van dishoeck , e. f. , & linnartz , h. 2008 , , 686 , 1474 ioppolo , s. , van boheemen , y. , cuppen , h. m. , van dishoeck , e. f. , & linnartz , h. 2011 , , 413 , 2281 jewitt , d. c. , & luu , j. 2004 , , 432 , 731 lakhlifi , a. , & killingbeck , j. p. 2010, 604 , 38 lattelais , m. , et al .2011 , , 532 , a12 lemaire , j. l. , vidali , g. , baouche , s. , chehrouri , m. , chaabouni , h. , & mokrane , h. 2010 , , 725 , l156 li , a. , & draine , b. t. 2002 , , 564 , 803 lisse , c. m. , et al .2006 , science , 313 , 635 matar , e. , bergeron , h. , dulieu , f. , chaabouni , h. , accolla , m. , & lemaire , j. l. 2010 , , 133 , 104507 mathis , j. s. 1998 , , 497 , 824 mautner , m. n. , abdelsayed , v. , el - shall , m. s. , thrower , j. d. , green , s. d. , collings , m. p. , & mccoustra , m. r. s. 2006 , faraday discussions , 133 , 103 merlin , f. , guilbert , a. , dumas , c. , barucci , m. a. , de bergh , c. , & vernazza , p. 2007, , 466 , 1185 noble , j. a. , dulieu , f. , congiu , e. , & fraser , h. j. 2011 , , 735 , 121 oba , y. , miyauchi , n. , hidaka , h. , chigai , t. , watanabe , n. , & kouchi , a. 2009 , , 701 , 464 berg , k. i. , van broekhuizen , f. , fraser , h. j. , bisschop , s. e. , van dishoeck , e. f. , & schlemmer , s. 2005 , , 621 , l33 omont , a. , forveille , t. , moseley , s. h. , glaccum , w. j. , harvey , p. m. , likkel , l. , loewenstein , r. f. , & lisse , c. m. 1990 , , 355 , l27 pontoppidan , k. m. , et al .2003 , , 408 , 98 pontoppidan , k. m. 2006 , , 453 , l47 roush , t. l. 2001 , , 106 , 33315 raval , r. , haq , s. , harrison , m. a. , blyholder , g. , & king , d. a. 1990 , chem .lett . , 167 , 391 sandford , s. a. , & allamandola , l. j. 1990 , , 355 , 357 schegerer , a. a. , & wolf , s. 2010 , , 517 , a87 semenov , d. , et al .2010 , , 522 , a42 smith , r. g. , sellgren , k. , & brooke , t. y. 1993 , , 263 , 749 sofia , u. j. , & meyer , d. m. 2001 , , 554 , l221 stirniman , m. j. , huang , c. , smith , r. s. , joyce , s. a. , & kay , b. d. 1996 , , 105 , 1295 terlain , a. , & larher , y. 1983 , surf ., 125 , 304 tielens , a. g. g. m. 2005 , the physics and chemistry of the interstellar medium , cambridge university press ulbricht , h. , zacharia , r. , cindir , n. , & hertel , t. 2006 , carbon , 44 , 2931 van der tak , f. f. s. , van dishoeck , e. f. , & caselli , p. 2000 , , 361 , 327 visser , r. , van dishoeck , e. f. , doty , s. d. , & dullemond , c. p. 2009, , 495 , 881 viti , s. , & williams , d. a. 1999 , , 305 , 755 wakelam , v. , et al .2010 , , 156 , 13 wang , s .- x ., yang , y. , sun , b. , li , r .- w . , liu , s .- j ., & zhang , p. 2009, , 80 , 115434 whittet , d. c. b. , bode , m. f. , longmore , a. j. , adamson , a. j. , mcfadzean , a. d. , aitken , d. k. , & roche , p. f. 1988 , , 233 , 321 williams , d. , & herbst , e. 
2002, surface science, 500, 823 | the desorption characteristics of molecules on interstellar dust grains are important for modelling the behaviour of molecules in icy mantles and, critically, for describing the solid-gas interface. in this study, a series of laboratory experiments exploring the desorption of three small molecules from three astrophysically relevant surfaces is presented. the desorption of co, o2 and co2 at both sub-monolayer and multilayer coverages was investigated from non-porous water, crystalline ice, and silicate surfaces. the experimental data were modelled using the polanyi-wigner equation to produce a mathematical description of the desorption of each molecular species from each type of surface, uniquely describing both the monolayer and multilayer desorption in a single combined model. the implications of desorption behaviour over astrophysically relevant timescales are discussed. [ firstpage ] astrochemistry ism: molecules methods: laboratory. |
classically , wave motion in elastic media is a phenomenon associated with either / both the transverse or longitudinal vibrations ; if there is a wave , then something material should be waving .this notion led 19th century scientists to introduce the concept of the luminiferous medium ( field , aether , etc . ) . the first attempt to explain the propagation of light as a field phenomenon was by cauchy circa 1827 ( see the account in ) , who postulated the existence of an elastic continuum through which light propagates as a shear wave . subsequently came the contributions of faraday and ampere , which eventually led to the formulation of the electromagnetic model as known todaythe crucial advance in understanding the phenomena of electromagnetism phenomena was achieved , however , when added the ` displacement current ' , , to ampere s law .this term was similar to the time - derivative of the stress in his constitutive relation for elastic gases .since the electric field vector is a clear analogue of the stress vector in continuum mechanics , one can say that maxwell postulated an elastic constitutive relation by adding the displacement current to ampere s law . indeed , the new term transformed the system of equations ( already established at that time in electrostatics ) into a hyperbolic system with a characteristic speed of shear wave propagation .maxwell and hertz identified the characteristic speed with the speed of light .however , soon after maxwell formulated his equations , it was discovered that his model was not invariant with respect to translational motion of the coordinate frame . realized that the cause of non - invariance was the use of partial time derivatives .he proposed to use the convective time derivative _ in lieu _ of the former .the maxwell hertz equations ( called also ` progressive wave equations ' ) are clearly the correct model for electromagnetic phenomena in moving bodies .but the primordial question is whether the progressive - wave equations can be construed to hold also _ in vacuo_. the answer is obviously in the affirmative if one accepts that what is currently called ` physical vacuum ' must be regarded as a material continuum .voigt ( see ) , , and independently lorentz , spotted the fact that the wave equation can be made invariant if , in the moving frame , the time variable is changed together with the spatial variables .nowadays , this is known as the lorentz transformation .the success of the latter stems from the fact that it tacitly restores some parts of the convective derivative , i.e. it emulates the material invariance for non - deformable frames in rectilinear motion .although researchers usually speak about lorentz covariance as a general covariance , it has to be pointed out that the lorentz transformation has no meaning for accelerating frames ( nor for generally deforming frames ) .hence the search for the truly covariant formulation should continue .the present paper summarizes the efforts from the last decade and a half towards identifying the mechanical construct behind the phenomenon of what is termed ` electromagnetic field ' .we show here that the maxwell equations and the laws of electromagnetism ( biot savart , oersted ampere , and lorentz force laws ) , all can be derived from the governing equations of maxwell s elastic fluid , that includes relaxation of the stress . 
in doing so, we also present a concise frame indifferent formulation of the maxwell elastic fluid model .the linearized governing equations of elastic medium are valid only for infinitesimal deformations , when the referential description and spatial configuration coincide .the linear constitutive relationship for an elastic body relates the stress tensor ( we reserve the notation for the deviatoric stress tensor ) to the deformation tensor via the generalized hooke s law ,\ ] ] where is the trace of the deformation tensor , and the superscript denotes the transpose .the above constitutive law yields the so - called navier equations ( see , e.g. , ( * ? ? ?117 ) for the displacement vector : where and are the lam coefficients . hereone can use the ` nabla ' operator because the equations are written in the current configuration .the speeds of propagation of shear and compressional disturbances , are given respectively by where the ratio is introduced for convenience . in a compressible elastic medium, both the shear and the dilational / compressional waves should be observable . since the groundbreaking works of young and fresnel, it is well established that electromagnetic waves ( e.g. , light ) are a purely transverse ( shear ) phenomenon .this observation requires us to reduce the complexity of the model and to find a way to eliminate the dilational modulus .cauchy assumed that and ended up with the theory of so - called ` volatile aether ' ( see ) . upon a closer examination, we found that such an approach can not explain maxwell s equations .let us now assume that the dilational waves are not observable because the other extreme situation is at hand : which is equivalent to or .it is convenient to rewrite eq . in terms of the speed of light , , and parameter , namely and to expand the speed of light , displacement , and velocity into asymptotic power series with respect to , namely introducing into and combining the terms with like powers we obtain for the first two terms where is introduced for convenience and is the coefficient of the spherical part of the internal stresses .the variable has dimension of and plays the same role as the pressure in an incompressible medium : is an implicit function in eq . that provides the necessary degree of freedom to enforce the satisfaction of the ` incompressibility ' condition , eq .. the latter can also be rewritten as which requires that the velocity field be solenoidal within the zeroth - order of approximation of the small parameter . from now on ,the subscript ` ' will be omitted form the variables without fear of confusion .now , eq . can be rewritten as where is the tangential part of the stress vector in the elastic continuum .the normal part of the stress vector is given by the pressure gradient .let us now introduce the following notations [ eq : elmag_definitions ] and call them the ` electric field ' and ` magnetic induction ' vectors .thus the electric field is the negative tangential stress vector , while magnetic field is the ` dynamic ' vorticity .naturally , is the ` kinematic ' vorticity . in the virtue of the above definition, one has the following equation for the magnetic field now taking the of eq . and acknowledging the definitions given in eqs . 
, we arrive at faraday s law on the other hand , taking the time derivative of eq ., we obtain the second of the dynamic maxwell s equations the fact that the governing equations of any elastic medium in the linear limit admit a maxwell form also can be considered as an indication that the electromagnetic field is in itself an elastic medium . inwhat follows , we shall call the mechanical object equivalent mathematically to the electromagnetic field the _ metacontinuum _ , to distinguish it from continuous media in technical applications , such as fluids and elastic solids . note that the inverse of the elastic shear coefficient plays the role of the electric permittivity _ in vacuo _ , while the density of the metacontinuum acts as the ` magnetic permeability . 'the results of this section unequivocally show that the ` field ' described by maxwell s equations is equivalent to an elastic material .to best of author s knowledge , the connection of maxwell s equation to the equations governing elastic media was first established in .the common theme of these earlier papers is that the metacontinuum is considered as an elastic _ solid_. in such a model , no _ stationary _ magnetic fields can exist , since no steady velocities are possible for a solid continuum without discontinuities .in this section we outline the next decisive step in developing the model : we consider an elastic liquid _ in lieu _ of an elastic solid .this means that for shear deformations , the metacontinuum must be an elastic _ fluid _ for which the time derivative of the deviatoric stress tensor is related to the deviatoric rate of deformation tensor via the relation this constituitve relationship ( rheology ) can be rewritten for the negative stress vector and deformation vector , ) , since both of these vectors are the divergences of the respective tensors involved in the elastic rheology , eq . .then where is called ` elastic viscosity ' by , is the relaxation time of the stresses , and the apparent elastic shear modulus is given by .note that the above elastic - liquid rheology concerns just the shear deformations . for compressional / dilational motions, the metacontinuum can still behave as a virtually incompressible solid .a more general formulation of the shear part of the constitutive relation would be as in the viscoelastic liquid where can be called the ` conductivity ' of the viscoelastic liquid .note that in , the conductivity is set to unity , which precludes treating purely elastic ( non - viscous ) liquids . here, we prefer to keep the flexibility offered by the presence of the coefficient .setting the appropriate terminology is an uneasy task because , when , then does indeed have a meaning of a viscosity coefficient , while for , it loses its independent meaning and enters the picture through the coefficient of apparent shear elasticity . the case gives a viscoelastic rheology , but it does _ not _ lead to a model governed by the navier - stokes equations with additional elasticity , because no retardation term ( time derivative of the deformation tensor / vector ) is present . in this sense , adding the conductivity does not introduce dispersive dissipation , but rather a linear attenuation parameterized by the conductivity coefficient . for the effects connected with the attenuation /conductivity we refer the reader to and . the constitutive relationship given in eq .can be interpreted as ohm s law for _vacuo_. although , this stipulation is made in mainstream texts ( e.g. 
, ) , the cause of ohm s law in matter is not necessarily the intrinsic resistance of the metacontinuum : it is the result of the thermal fluctuations of the atoms that obstruct the free passage of charges through a conductor .clearly , a more in - depth argument is needed to justify having ohm s law in _vacuo _ , which goes beyond the scope of the present paper .it should be pointed out here that eq .is concerned merely with the rheology for the shear motions . at this pointit is not of importance if for dilational / compressional motions the metacontinuum is solid or liquid , provided that the dilational coefficient is much larger than the shear coefficient .then , the intuitive argument is that if the metacontinuum is a liquid with respect to the dilational motions , it may lose its integrity during the motion . since there is no information on the electromagnetic field being ` ruptured , ' we are guided by the above argument and assume that the solid rheology applies to the dilational / compressional motions .note that in the above notations , is the dilational viscosity coefficient , and is the shear viscosity coefficient defined in eq . .the system , specify the combined constitutive relation for the metacontinuum ( electromagnetic field ) .the closest analogy to an ubiquitous continuous medium is that of jelly or pine pitch .if compression / dilation waves are sent through the metacontinuum , it behaves as a elastic solid with very large dilational modulus , while if a shear deformation is applied , it flows as a incompressible liquid .in terms of the velocity vector , the cauchy balance can be rewritten as follows : where the left - hand side is the material ( convective ) derivative of the velocity vector in the current configuration ( called ` convective ' or ` total ' derivative ) .remember that in the referential description , it is just the partial time derivative . note also that for an incompressible metacontinuum , the density is the same constant in both the referential and spatial descriptions and we denote it by .the concept of frame indifference ( general covariance of the system ) requires that the partial derivative of the stress variable ( in our case the stress vector ) in eq .is replaced by the appropriate invariant rate . since it is a vector density ,see the argument by , the rate has to leave the integral of the stress vector invariant .it is argued in that the pertinent invariant rate is the so - called oldroyd upper - convected derivative , namely here we come to one of the most crucial assumptions of the present work , namely , the way the constitutive relation has to be written when a relaxation of the stress is present .it is usually assumed that the invariant rate to be added to the constitutive law of viscous liquids should be of the stress tensor , i.e. the upper - convected oldroyd derivative ( ) : the problem with this constitutive conjecture is that the tensor does not play an independent role in the cauchy balance equation .rather , the deviatoric stress vector appears there .then , does eq . ensure that the time rate of is invariant ? to find out , we take the operation div of eq . , arriving at which differs from the invariant time rate of by the term , the latter being the contraction of the third rank tensor of the repeated gradient ( the hessian ) of and the second rank stress tensor . 
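As an aside, a scalar caricature may help fix the roles of the relaxation time and the "elastic viscosity" in the rheology introduced above: under a suddenly applied constant shear rate the stress grows towards its viscous value with the relaxation time as time constant, while at short times the response is elastic with apparent modulus equal to the elastic viscosity divided by the relaxation time. The convected (frame-indifferent) corrections that are the subject of this section are deliberately omitted, and all parameter values are arbitrary.

```python
import numpy as np

# Scalar caricature of the Maxwell-fluid rheology:
#   tau * dS/dt + S = eta * gamma_dot      (linear; convected terms omitted)
eta, tau = 2.0, 0.5            # illustrative "elastic viscosity" and relaxation time
mu = eta / tau                 # apparent elastic shear modulus
gamma_dot = 1.0                # suddenly applied, constant shear rate

dt, t_end = 1e-4, 3.0
t = np.arange(0.0, t_end, dt)
S = np.empty_like(t)
S[0] = 0.0
for n in range(len(t) - 1):                                  # explicit Euler
    S[n + 1] = S[n] + dt * (eta * gamma_dot - S[n]) / tau

S_exact = eta * gamma_dot * (1.0 - np.exp(-t / tau))
print("max integration error:", np.max(np.abs(S - S_exact)))
print("short-time (elastic) response:", S[int(0.01 / dt)], "vs mu*gamma_dot*t =",
      mu * gamma_dot * 0.01)
print("long-time (viscous) plateau:  ", S[-1], "->", eta * gamma_dot)
```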
in order to establish which constitutive relationis correct ( the one involving the stress tensor or stress vector ) , one has to have data for flows in which the hessian of the velocity vector field is measured .while for the case of elastic liquids it is still possible to devise such an experiment , electromagnetism clearly indicates that the constitutive relation at play is the one involving the stress vector .here we propose an alternative formulation for models involving stress relaxation by replacing the partial time derivative of the stress vector with the invariant rate which was proposed in , and then successfully applied to the generalization of the maxwell cattaneo model of second sound in .apart from providing insight into the possible constitution of the electromagnetic field , the above model has practical significance for the theory of maxwell elastic fluids .note that we add the condition of incompressibility ( recall eq .with the subscript 0 " omitted ) : collectively , we can term the system eqs . , and as the ` frame indifferent incompressible maxwell fluid model ' .the alternative formulation of the constitutive relation based on the stress vector concept proposed here has a very important consequence : it allows the stress vector to be eliminated between eqs . and ( when ) , to obtain a system that does not contain the stress variable : where the following notations are used note that is the upper convected derivative of a vector defined in eq . .. contains the implicit function that has to ensure the satisfaction of the incompressibility condition eq . .the advantage here is that , the only unknown functions are and , and the hyperbolicity of the model is now easily seen .according to the principal of frame indifference , the laws of physics ( including the laws of continuum physics ) must have the same form in any reference frame ( coordinate system ) .unlike what is called ` lorentz covariance ' , the laws in the referential description are _ frame indifferent _ , i.e. they are truly covariant . however , experimental measurements are _ always _ connected with a current frame in the geometric space .this means that an observational frame can not detect the material variables ( the referential description ) , but rather can merely measure their counterparts in the current ( geometric ) frame .this is a typical situation in mechanics of continuous media where the reference configuration is often not related to any measurable frame . for this reasonwe need to reformulate the model from section [ sec : maxwell ] in the current description making use of euler variables .this is the objective of the present section . the cauchy balance , eq ., can be rewritten in the so - called ` gromeka lamb form ' : now , taking the _ curl _ of eq . , and using our definitions eq . , we get : = - { \frac{\partial \bm{b}}{\partial t } } , \label{eq : faraday_lorentz}\ ] ] which is faraday s law with an additional term representing the force exerted by a moving magnetic field on each point of the medium. it is induced by the convective part of the acceleration at that point .the reaction to this force is the force acting on a moving point ( charge ) , , known as the ` lorentz force ' . 
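The step from the Gromeka-Lamb form to the Faraday-Lorentz law above rests on the standard rearrangement of the convective acceleration. A short symbolic check of that vector identity is given below; it assumes only smooth fields and does not reproduce the paper's equation numbers or symbols.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
v = [sp.Function(f'v{i}')(x, y, z) for i in range(3)]    # arbitrary smooth velocity field

def grad(f):
    return [sp.diff(f, s) for s in X]

def curl(u):
    return [sp.diff(u[2], y) - sp.diff(u[1], z),
            sp.diff(u[0], z) - sp.diff(u[2], x),
            sp.diff(u[1], x) - sp.diff(u[0], y)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# left-hand side: convective acceleration (v . grad) v
lhs = [sum(v[j] * sp.diff(v[i], X[j]) for j in range(3)) for i in range(3)]

# right-hand side: grad(|v|^2 / 2) - v x (curl v)  (the Gromeka-Lamb rearrangement)
kinetic = sum(vi**2 for vi in v) / 2
rhs = [grad(kinetic)[i] - cross(v, curl(v))[i] for i in range(3)]

assert all(sp.expand(lhs[i] - rhs[i]) == 0 for i in range(3))
print("verified: (v.grad)v = grad(|v|^2/2) - v x curl(v)")
```

Taking the curl of the momentum balance then removes the gradient term, which is why the remaining convective contribution appears as the extra term in the Faraday-Lorentz law quoted above.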
in other words ,the material invariant version of faraday s law presented here automatically accounts for the physical mechanism that causes the lorentz force .the latter is nothing more than the inertial force given by the convective part of the total derivative .this is a very important result because it tells us that the lorentz force is not an additional , empirically observed force that has to be grafted onto maxwell s equations , but rather , it is connected to the material time derivative , specifical , to its convective part . under the incompressibility condition ,eq . can be recast as which we can call the ` hertz form ' of the faraday lorentz law . note the presence of the third term on the left - hand - side .it is not in hertz s formulation , nor does it appear in .evidently , eq . does not give any special advantage over eq ., but the form of eq . shows that one can not just add the convective part of the derivative to faraday s law to make it frame indifferent .this follows from the fact that the magnetic field is not a primary variable ( primary variables in fluids are the velocity components and the pressure ) .rather , it is proportional to the of the velocity vector , . by using the vector identity ( see , e.g. , ( * ? ? ?180 ) eq .yields the following generalization of the second of the dynamical maxwell s equation : where the definition given in eq .is already acknowledged .now , eqs ., , , form a system which can be termed the _ frame indifferent electromagnetodynamics _ ( fiem ) .this system generalizes hertz s program from 1890 and _ rigorously _ fulfills the requirements for ` general covariance , ' because fiem is frame - indifferent ; it is invariant when changing to another coordinate frame that can accelerate and can even deform .a very simple , limiting case of frame indifference is that of galilean invariance . the vector of absolute velocity is the primary variable , but the absolute velocity in the referential descriptioncan not be measured ( only relative velocities can be observed ) . in principleit can be restored from the magnetic field eq ., provided a boundary condition is known .clearly , in the limit of small velocities the convective and convected terms can be neglected and in the limit one obtains maxwell s system .a remarkable feature of eq .is that it incorporates terms ( with under - braces ) that , collectively , form the progenitor of the biot savart law . indeed , setting aside the possible singularities connected to point charges , then we can neglect the term with .let us also consider the case of stationary electric field , , and no resistance . then , considering a surface inside a closed contour , we can integrate eq . over and use the stokes theorem to obtain \operatorname{d\!}s= 0,\ ] ] where is an areal element on the surface .since the surface is arbitrary , then the only possibility is that the integrand must be equal to zero .thus one arrives at the form of the biot savart law as it is stipulated in relativistic electrodynamics , where is the velocity of the point of the field at which is measured .the underlined terms in eq . give that in _ vacuo _ which is discussed in ( * ? ? ?* ch.15 ) in dimensional form .this can be interpreted as a combination of ohm s and ampere s laws of electromagnetism in _ vacuo_. 
the terms with hats over them in eq .give as a corollary the ampere law if the following definition of a charge in _ vacuo _ is introduced : in particular , the ` chargedness ' , , of the displacement / velocity field is defined as the divergence of the electric field at the point . in order not to confuse this property of the field _ in vacuo _ with the localized pattern , called electron or a proton , we call the above defined function the _ metacharge_. for the latter , a continuity equation is readily derived upon applying the operation to eq . , namely , for the case , is the standard continuity equation for charge . here , is to be mentioned that eqwas derived by directly from the oldroyd form , eq . , but the derivation here is much more straightforward because of the application of the identity eq . .we shall refer to eq . as the continuity equation for the _metacharge_. in terms of the above introduced __ metacharge _ _ , we obtain which can be called ampere s law in _ vacuo_. the important conclusion formthe frame - indifferent formulation of the displacement current is that similarly to the lorentz force law , the convective / convected terms are related to phenomena that are embodied in ampere s and biot savart s laws , thus unifying them with maxwell s electrodynamics .all three electromagnetic - force laws ( called alternatively the ` laws of motional electromagnetism ' ) are manifestations of the inertial forces in the metacontinuum . put forward the idea that both of these laws may actually follow from a single law , similar to what is presented in eq . .a debate is still ongoing in the literature about wether these two laws are identical ( see , e.g. , ) or independent ( see , e.g. , ) .our results seem to favor whittaker s original idea that both laws have to be interwowen in the correct formulation . in our workthey are merely the corollaries form the inertial terms embodied in the convected derivative .the frame indifferent model of the electromagnetic field ( called here the metacontinuum ) , succeeds in unifying , in a single nexus , all known phenomena of electromagnetism : faraday s law , displacement - current law , lorentz - force law , oersted ampere s law , and biot savart law .it is a significant step forward from maxwell s model , in which only the first two were explained by the equations themselves , and the latter three appeared as additional empirically observed relations between the main characteristics of the field . since in our model these new terms are valid _ in vacuo _ , they are clearly the progenitors of the related phenomena in moving media .as shown above , the model that yields maxwell s equations is one of an incompressible elastic fluid . the obvious way to extend the validity of the model is to assume that the fluid is compressible .this means that it has to be a medium with vanishing compressibility , which happens when the dilational elasticity or viscosity coefficient is extremely large in comparison with the shear viscosity coefficient .as already mentioned , at this stage it is not clear whether the medium behaves as a liquid or solid when the compressional / dilational motions are considered .this question can not be answered without staging an experiment in which the fabric of the metacontinuum could eventually be ` ripped ' in order to settle this question . actually ,if the compressional / dilational motion is oscillatory , then it does not really make much difference if the metacontinuum is a solid or a fluid . 
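A quick numerical illustration of the "vanishing compressibility" regime invoked above: with the shear speed identified with the speed of light, the ratio of compressional to shear speeds depends only on the ratio of the dilational to the shear coefficient, and grows without bound as that ratio does. The values of the ratio below are arbitrary.

```python
import numpy as np

# c_s = sqrt(mu/rho) is identified with the speed of light in the text;
# c_p = sqrt((lam + 2*mu)/rho), so c_p/c_s depends only on lam/mu.
for ratio in [1e0, 1e2, 1e4, 1e6, 1e8]:       # lam/mu (illustrative values)
    cp_over_cs = np.sqrt(ratio + 2.0)
    print(f"lambda/mu = {ratio:8.0e}  ->  c_p/c_s = {cp_over_cs:10.1f}")
```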
as shown in the precedence , it has to be a fluid under shearing , but this does not impose any restriction on its behavior in the oscillatory compression / dilation motions .now , combining the two parts of the constitutive relation as given by eqs .the momentum equation can be written as which is to replace eq . .. remains unchanged , while the incompressibility condition , eq ., has to be replaced by the following : alternatively , one can use the continuity equation in the from ( see ) , where is the determinant of the gradient of deformation tensor and is the _ constant _ density in the referential description . note that , the form chosen above , is consistent with the eulerian description . note also that if the solid rheology is assumed in eq . , then one has to add the defining equation for the velocity components : , eqs . , , and form a coupled system for the compressible metacontinuum , which is highly nonlinear ( even for the linear rheology ) because of the dependence of the density on the motion .there are no conceptual difficulties to limit the model to the case of slight compressibility of the metacontinuum , and look for the propagation of harmonic compression waves therein .this raises the question about speed of the compressional waves ( ` sound ' ) of the metacontinuum . in order to avoid ambiguous terminology we will not use the term _ sound _ when referring to these waves .rather we will borrow a coinage from the ancient school of stoa ( see ) and will call the compressional / dilational motions the _ pneuma_. one obvious implication of the existence of waves of a different kind than the electromagnetic ( shear ) waves is that , in fact , there is more energy in the physical vacuum than that detected from electromagnetic interactions . mechanically speaking ,the _ pneuma _ waves are ` orthogonal ' to the shear ( em ) waves and they may not be detectable by devices based on electromagnetic interactions . indeed , they perfectly fit the bill of what is currently called ` dark energy ' ( see , e.g. , ) .one can begin thinking of how to detect pneuma waves only after some solutions for coupled compressional and shear waves of the system eqs ., , and become available .let us mention in the conclusion of this section that the above derived system is suitable for any ( visco)elastic medium , such as the ager - gelatin base phantom ( see ) .the approach proposed here presents a self - consistent point of view based on the continuum mechanics of the electromagnetic field whose shear deformations are perceived as the phenomena of electromagnetism .it is shown that the linearized governing equations of any incompressible elastic medium admit a ` maxwell form ' .conversely , the maxwell equations of electrodynamics are a _ strict _ mathematical corollary of the linearized governing equations of the incompressible maxwell ( visco)elastic fluid .this idea is further elaborated upon by deriving the frame indifferent formulation of the model of elastic fluids .it is argued that in some cases ( as in the formulation of electromagnetism considered here and many other technologically significant applications ) the constitutive relation can be written for the stress vector rather than for the stress tensor , because what actually enters the momentum equations is the point - wise stress vector , which is the divergence of the stress tensor . 
from the frame indifferent governing system of elastic fluids, a `maxwell form' is derived that includes the terms of the original maxwell equations along with terms stemming from the convective and convected invariant time rates. these inertial terms are the progenitors of the so-called laws of `motional electrodynamics': the biot savart, oersted ampere, and lorentz force laws. the latter are usually assumed as additional empirical hypotheses to maxwell's equations. this makes for a unified model of electromagnetism, which is truly covariant by virtue of the fact that it is frame indifferent. in other words, the new model is invariant to changes to frames that accelerate and deform. the continuum-mechanics formulation of the electromagnetic field proposed here opens a new avenue of research connected with the possible compressibility of elastic fluids. consequently, two kinds of linear propagating waves can co-exist: shear waves (light, when in the visible spectrum) and compressional waves (called _pneuma_ in this paper). it must be stressed that the speed of compressional waves is necessarily much larger than the speed of light. as a consequence, the metacontinuum appears virtually incompressible to observers using tools designed to detect its shear motions (i.e., electromagnetic phenomena). | we show that the linearized equations of the incompressible elastic medium admit a `maxwell form' in which the shear component of the stress vector plays the role of the electric field, and the vorticity plays the role of the magnetic field. conversely, the set of dynamic maxwell equations are strict mathematical corollaries of the governing equations of the incompressible elastic medium. this suggests that the nature of the `electromagnetic field' may actually be related to an elastic continuous medium. the analogy is complete if the medium is assumed to behave as a fluid in shear motions, while it may still behave as an elastic solid under compressional motions. then the governing equations of the elastic fluid are re-derived in the eulerian frame by replacing the partial time derivatives by the properly invariant (frame indifferent) time rates. the `maxwell form' of the frame indifferent formulation gives the frame indifferent system that is to replace the maxwell system. this new system includes terms already present in the classical maxwell equations, alongside terms that are the progenitors of the biot savart, oersted ampere, and lorentz force laws. thus, a frame indifferent (truly covariant) formulation of electromagnetism is achieved from a single postulate: the electromagnetic field is a kind of elastic (partly liquid, partly solid) continuum. frame indifference, maxwell's elastic fluid, maxwell's equations of electrodynamics, lorentz force, biot savart law |
multi - object tracking refers to the problem of jointly estimating the number of objects and their trajectories from sensor data .driven by aerospace applications in the 1960 s , today multi - object tracking lies at the heart of a diverse range of application areas , see for example the texts bsf88,bp99 , mah07 , mallickbook12 , mahler2014advances .the most popular approaches to multi - object tracking are the joint probabilistic data association filter , multiple hypothesis tracking bp99 , and more recently , random finite set ( rfs ) .the rfs approach has attracted significant attention as a general systematic treatment of multi - object systems and provides the foundation for the development of novel filters such as the probability hypothesis density ( phd ) filter , cardinalized phd ( cphd ) filter , and multi - bernoulli filters mah07,vvc09,vvps10 .while these filters were not designed to estimate the trajectories of objects , they have been successfully deployed in many applications including radar / sonar , tobiaslanterman08 , , computer vision mtc_csvt08,hvvs_pr12,hvv_tsp13 , cell biology , autonomous vehicle , automotive safety , sensor scheduling rv10,rvc11,hv14sencon , gostar13 and sensor network zhang_tac11,bcf_stsp13,ucj_stsp13 .the introduction of the generalized labeled multi - bernoulli ( glmb ) rfs in has led to the development of the first tractable rfs - based multi - object tracker - the -glmb filter .the -glmb filter is attractive in that it exploits the conjugacy of the glmb family to propagate forward in time the ( labeled ) multi - object filtering density exactly .each iteration of this filter involves an update operation and a prediction operation , both of which result in weighted sums of multi - target exponentials with intractably large number of terms .the first implementation of the -glmb filter truncate these sums by using the -shortest path and ranked assignment algorithms , respectively , in the prediction and update to determine the most significant components . while the original two - staged implementation is intuitive and highly parallelizable , it is structurally inefficient as it requires many intermediate truncations of the -glmb densities . specifically , in the update , truncation is performed by solving a ranked assignment problem for each predicted -glmb component . since truncation of the predicted -glmb sum is performed separately from the update , in general , a significant portion of the predicted components would generate updated components with negligible weights .hence , computations are wasted in solving a large number of ranked assignment problems , each of which has cubic complexity in the number of measurements . 
in this paper , we present a new implementation by formulating a joint prediction and update that eliminates inefficient truncation procedures in the original approach .the key innovation is the exploitation of the direct relationship between the components of the -glmb filtering densities at consecutive iterations to circumvent solving a ranked assignment problem for each predicted component .in contrast to the original implementation , the proposed joint implementation only requires one truncation per component in the filtering density .naturally , the joint prediction and update allows truncation of the -glmb filtering density ( without explicitly enumerating all the components ) using the ranked assignment algorithm , , .more importantly , it admits a very efficient approximation of the -glmb filtering density based on markov chain monte carlo methods .the key innovation is the use of gibbs sampling to generate significant updated -glmb components , instead of deterministically generating them in order of non - increasing weights .the advantages of the proposed stochastic solution compared to the rank assignment algorithm are two - fold .first , it eliminates unnecessary computations incurred by sorting the components , and reduces the complexity from cubic to linear in the number of measurements .second , it automatically adjusts the number of significant components generated by exploiting the statistical characteristics of the component weights .the paper is organized as follows .background on labeled rfs and the -glmb filter is provided in section [ sec : bg ] .section sec : fast_impl presents the joint prediction and update formulation and the gibbs sampler based implementation of the -glmb filter .numerical results are presented in section [ sec : sim ] and concluding remarks are given in section [ sec : sum ] .this section summarizes the labeled rfs and the glmb filter implementation .we refer the reader to the original work for detailed expositions . for the rest of the paper , single - object statesare represented by lowercase letters , e.g. , while multi - object states are represented by uppercase letters , e.g. , , symbols for labeled states and their distributions are bolded to distinguish them from unlabeled ones , e.g. , , , etc , spaces are represented by blackboard bold e.g. , , , , etc , and the class of finite subsets of a space is denoted by .we use the standard inner product notation and the following multi - object exponential notation , where is a real - valued function , with by convention .we denote a generalization of the kronecker delta that takes arbitrary arguments such as sets , vectors , etc , by the inclusion function , a generalization of the indicator function , by also write in place of when = .a labeled rfs is simply a finite set - valued random variable where each single - object dynamical state is augmented with a unique label that can be stated concisely as follows a labeled rfs with state space and ( discrete ) label space is an rfs on such that each realization has distinct labels .let be the projection , then a finite subset set of has distinct labels if and only if and its labels have the same cardinality , i.e. . the function is called the _ distinct label indicator_. 
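because the mathematical symbols were stripped from this extract, the conventions just listed can be restated in the standard notation of the labeled-rfs literature (a reconstruction; the symbols in the original may differ slightly):
\[
\langle f,g\rangle \equiv \int f(x)\,g(x)\,dx ,\qquad
h^{X}\equiv\prod_{x\in X}h(x),\quad h^{\emptyset}=1 ,
\]
\[
\delta_{Y}(X)\equiv\begin{cases}1, & X=Y\\ 0, & \text{otherwise,}\end{cases}\qquad
1_{Y}(X)\equiv\begin{cases}1, & X\subseteq Y\\ 0, & \text{otherwise,}\end{cases}\qquad
\Delta(\mathbf{X})\equiv\delta_{|\mathbf{X}|}\big(|\mathcal{L}(\mathbf{X})|\big),
\]
where \(\mathcal{L}\) is the label projection, so that the distinct label indicator \(\Delta(\mathbf{X})\) equals 1 exactly when all labels in \(\mathbf{X}\) are distinct.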
the set integral defined for any function is given by where the integral of a function is : the notion of labeled rfs enables the incorporation of individual object identity into multi - object system and the bayes filter to be used as a tracker of these multi - object states .suppose that at time , there are target states , each taking values in the ( labeled ) state space . in the random finite set formulationthe set of targets is treated as the _ multi - object state _ each state either survives with probability and evolves to a new state or dies with probability .the dynamics of the survived targets are encapsulated in the multi - object transition density ) . for a given multi - object state , each state at time either detected with probability and generates an observation with likelihood or missed with probability .the _ multi - object observation _ at time , , is the superposition of the observations from detected states and poisson clutters with intensity . assuming that , conditional on , detections are independent , and that clutter is independent of the detections and is distributed as a poisson rfs , the multi - object likelihood is given by voglmb13,vvp_glmb13 ^{\mathbf{x}_k } \label{eq : rfsmeaslikelihood0}\]]where is a function such that implies , and is called an _ association map _ since it provides the mapping between tracks and observations , i.e. which track generates which observation , with undetected tracks assigned to 0 .the condition implies ensures that a track can generate at most one measurement at a point of time . the image of a set through the map is denoted by , i.e. while the notation is used to denote the collection of all eligible association maps on domain , i.e. if the clutter is distributed as an iid cluster rfs , i.e. ^k\mathbf{\pi } _ { k+1}(\mathbf{x }_ { } measurements upto time ] , which captures all information on the number of targets and individual target states at time . in multi - object baysian filtering , the multi - object filtering density is computed recursively in time according to the following prediction and update , commonly referred to as _ multi - object bayes recursion _ note , however , that the bayes filter is intractable since the set integrals in - have no analytic solution in general .the -glmb rfs , a special class of labeled rfs , provides an exact solution to - .this is because the -glmb rfs is closed under the multi - object chapman - kolmogorov equation with respect to the multi - object transition kernel and is conjugate with respect to the multi - object likelihood function .a -glmb rfs is a labeled rfs with state space and ( discrete ) label space , distributed according to ^{\mathbf{x}},\]]where is a discrete space while and satisfy the -glmb density is essentially a mixture of multi - object exponentials , in which each components is identified by a pair .each is a set of tracks labels while represents a history of association maps .the pair can be interpreted as the hypothesis that the set of tracks has a history of association maps and corresponding kinematic state densities .the weight , therefore , can be considered as the probability of the hypothesis .denote the collection of all label sets with unique elements by , the cardinality distribution of a -glmb rfs is given by a -glmb is completely characterized by the set of parameters . 
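for reference, the density described verbally above takes the following standard form in the \(\delta\)-glmb literature (a reconstruction, since the formula itself was lost in extraction):
\[
\boldsymbol{\pi}(\mathbf{X})=\Delta(\mathbf{X})\sum_{(I,\xi)\in\mathcal{F}(\mathbb{L})\times\Xi}
\omega^{(I,\xi)}\,\delta_{I}\big(\mathcal{L}(\mathbf{X})\big)\big[p^{(\xi)}\big]^{\mathbf{X}},
\qquad
\sum_{(I,\xi)}\omega^{(I,\xi)}=1,\quad \int p^{(\xi)}(x,\ell)\,dx=1 ,
\]
and the cardinality distribution follows by summing the weights of all hypotheses whose label set has the required size, \(\rho(n)=\sum_{(I,\xi):|I|=n}\omega^{(I,\xi)}\).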
for implementationit is convenient to consider the set of parameters as an enumeration of all -glmb components ( with positive weight ) together with their associated weights and track densities , as shown in figure [ fig : paramtable ] , where and .-glmb parameter set with each component indexed by an integer .the hypothesis for component is while its weight and associated track densities are and , . ] given a -glmb initial density , all subsequent multi - object densities are -glmbs and can be computed exactly by a tractable filter called the -glmb filter .the -glmb filter recursively propagates a -glmb density forward in time via the bayes recursion equations and .closed form solutions to the prediction and update of the -glmb filter are given in the following propositions .[ prop_ck_strong ] if the multi - target posterior at time is a -glmb of the form , i.e. ^{\!\mathbf{x}_{}}\!\!\!,\end{aligned}\ ] ] and the birth density is defined on according to ^{\!\mathbf{y}},\ ] ] then the multi - target prediction density to the next time is a -glmb given by ^{\!\mathbf{x}_{}}\!\!\!,\end{aligned}\ ] ] where^{\!l}\!\!\!\sum_{i_{_{\!}k_{\!}-_{\!}1_{\!}}\supseteq_{_{\!}}l}\!\!\left[_{\!}1\!-\!\eta_{_{\!}s\!}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}\right]^{\!i_{_{\!}k_{\!}-_{\!}1\!}-l}\label{eq : propckstrongws } \\\!\!\!\!\!\eta_{_{\!}s\!}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}(_{\!}\ell)\!\!\!\!\!&=&\!\!\!\!\!\left\langle p_{_{\!}s}(\cdot , \ell ) , p_{_{\!}k_{\!}-_{\!}1}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}(_{\!}\cdot , \ell)\right\rangle \label{eq : propckstrong_eta } \\ \!\!\!\!\!p_{_{\!}k_{_{\!}}|_{_{\!}}k_{\!}-_{\!}1_{\!}}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})_{\!}}(_{\!}x,\ell_{_{\!}})\!\!\!\!\!&=&\!\!\!\!\!1_{\mathbb{l}_{\!}}(_{\!}\ell_{_{\!}})p_{_{\!}s}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})_{\!}}(_{\!}x,\ell_{_{\!}})+1_{_{\!}\mathbb{b}_{\!}}(_{\!}\ell_{_{\!}})p_{_{\!}b_{_{\!}}}(_{\!}x,\ell_{_{\!}})\label{eq : propckstrongpp } \\\!\!\!\!\!p_{_{\!}s}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})_{\!}}(_{\!}x,\ell_{_{\!}})\!\!\!\!\!&=&\!\!\!\!\!\frac{\left\langle p_{_{\!}s}(\cdot , \ell)f_{_{\!}k_{_{\!}}|_{_{\!}}k_{\!}-_{\!}1_{\!}}(x|\cdot , \ell ) , p_{_{\!}k_{\!}-_{\!}1}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}(\cdot , \ell ) \right\rangle}{\eta_{_{\!}s\!}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}(\ell ) } \label{eq : propckstrongps}\end{aligned}\ ] ] [ propbayes_strong ] given the prediction density in , the multi - target posterior is a -glmb given by ^{\!\mathbf{x}_{}}\!\!\ ! , \label{eq : propbayes_strong0}\end{aligned}\ ] ] where ^{\!i_{_{\!}k } } , \label{eq : propbayes_strong1 } \\\eta_{_{\!}z_{_{\!}k}}^{_{\!}(_{\!}\theta_{_{\!}k\!})}(\ell)\!\!\ ! & = & \!\!\!\left\langle p_{_{\!}k_{_{\!}}|_{_{\!}}k_{\!}-_{\!}1{\!}}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}(\cdot , \ell ) , \psi_{_{\!}z_{_{\!}k}\!}(\cdot , \ell ; \theta_{_{\!}k\ ! } ) \right\rangle , \label{eq : propbayes_strong2 } \\p_{k}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!},\theta_{_{\!}k\!})}(x,\ell |z_k)\!\!\ ! 
& = & \!\!\!\frac{p_{_{\!}k_{_{\!}}|_{_{\!}}k_{\!}-_{\!}1{\!}}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}(x,\ell)\psi_{_{\!}z_{_{\!}k}\!}(x,\ell;\theta_{_{\!}k\!})}{\eta_{_{\!}z_{_{\!}k}}^{_{\!}(_{\!}\theta_{_{\!}k\!})}(\ell ) } .\label{eq : propbayes_strong3}\end{aligned}\ ] ] the propagations of -glmb components through prediction and update are illustrated in fig .[ fig : pred_propag ] and fig .[ fig : upd_propag ] , respectively .it is clear that the the number of components grows exponentially with time .specifically , a component in the filtering density at time generates a large number of predicted components , of which each one in turn produces a new set of multiple -glmb components in the filtering density at time .hence , it is necessary to reduce the number of -glmb components in both prediction and update densities at every time step .-glmb prediction : component in the prior generates a large set of predicted components with , i.e. , , , and . ] -glmb update : component in the predicted density generates a ( large ) set of update components with and weights , , . ]the simplest way to truncate a -glmb density is discarding components with smallest weights .the following proposition asserts that this strategy minimizes the -distance between the true density and the truncated one [ prop_l1_error]let denote the -norm of , and for a given let ^{\mathbf{x}}\]]be an unnormalized -glmb density .if then this section , we briefly review the original implementation of the -glmb filter in subsection [ subsec : orig_scheme ] and propose a new implementation strategy with joint prediction and update in subsection [ subsec : new_scheme ] .the first implementation of the -glmb filter , detailed in , recursively calculates the filtering density by sequentially computing the predicted and update densities at each iteration based on proposition [ prop_ck_strong ] and proposition [ propbayes_strong ] . since direct implementation of equations andis difficult due to the sum over supersets in , the predicted and update densities are rewritten as and , respectively , with ^{\!i_{_{\!}k_{\!}-_{\!}1\!}-_{\!}l\!}\!\left[_{\!}\eta_{_{\!}s}^{_{\!}(_{\!}\xi_{_{\!}k_{\!}-_{\!}1\!})\!}\right]^{\!l\!}.\ ] ] c _ _ k__|__k_-_1(___|__z__k_-_1)=_(__)_i__k_-_1,__k_-_1 , l , j 1___(_i__k_-_1)(_l_)1___(___)(_j)__k_-_1^_(_i__k_-_1,__k_-_1)__s^_(_i__k_-_1,__k_-_1)(_l_)__b(_j)___lj__((__))^_[eq : propck_strong3 ] + _ _ k(__z__k)= [ eq : propbayes_strong4 ] in the prediction stage , each component with weight generates a set of prediction components with weight where and represent two disjoint label sets for survival and birth tracks , respectively .since the weight of the prediction component can be factorized into two factors , and , which depend on two mutually exclusive sets ; truncating the predicted density is performed by solving two separate -shortest path problems for each set of tracks .this is because running only one instance of the -shortest path based on the augmented set of existing and birth tracks generally favours the selection of survival tracks over new births and typically results in poor track initiation . 
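before moving on to the update stage, it is worth making the truncation strategy of proposition [ prop_l1_error ] concrete: keep the components with the largest weights and renormalize. a minimal illustrative sketch (not the authors' code; components are stored as in the parameter table of figure [ fig : paramtable ]):

```python
import numpy as np

def truncate_glmb(weights, hypotheses, max_components):
    """Keep the max_components highest-weight GLMB components and renormalize.

    weights    : 1-D array of component weights omega^(I, xi)
    hypotheses : list of per-component data (label set I, association-map
                 history xi, and the associated track densities)
    """
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(weights)[::-1][:max_components]  # largest weights first
    kept_w = weights[order]
    kept_w /= kept_w.sum()                              # renormalize to sum to one
    kept_h = [hypotheses[i] for i in order]
    return kept_w, kept_h
```

by proposition [ prop_l1_error ], this choice minimizes the \(l_1\) error among all truncations retaining the same number of components.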
in the update stage ,each prediction component generates a ( large ) set of update components .these update components are truncated without having to exhaustively compute all the components by solving a ranked assignment problem .although the original two - staged implementation is intuitive and highly parallelizable , it is has several drawbacks .first , since truncation of the predicted -glmb density is performed separately from the update based purely on _ a priori _ knowledge ( e.g. survival and birth probabilities ) , in general , a significant portion of the predicted components would generate updated components with negligible weights .hence , computations are wasted in solving a large number of ranked assignment problems , each of which has cubic complexity in the number of measurements .second , it would be very difficult to determine the final approximation error of the truncated filtering density as the implementation involves least three separate truncating processes : one for existing tracks , one for birth tracks , and one for predicted tracks . in the following subsections, we will introduce the joint prediction and update as a better alternative to the original two - staged approach .the joint strategy eliminates the need for separate prediction truncating procedures , thus involves only one truncation per iteration .consequently , the new implementation yields considerable computational savings while preserving the filtering performance as well as the parallelizability of the original implementation . instead of computing the filtering density in two steps ,the new strategy aims to generate the components of the filtering density in one combined step by formulating a direct relationship between the component of the current filtering density with those of the previous density .specifically , we will derive a new formulation for that does not involve prediction induced variables and .this can be done via an _ extended association map _ , denoted by , and defined as follows the extended association map is a function such that for implies .the new map , in essence , only extends the original map to include a new association , .in particular , is identical to except for non - survival and unconfirmed birth tracks , i.e. the image of a set through the extended map and the collection of all eligible extended association maps on domain are denoted by and , respectively .based on the notion of extended association map , the following proposition establishes the direct relationship between two consecutive filtering densities at time and . for simplicity, we assume that target births are modeled by ( labeled ) multi - bernoulli rfs s , i.e. 
^{\mathbb{b}-j}[r(\cdot)]^j ] .the key idea of the stochastic based approach is that can be considered as realizations of a random variable in the space with the following distribution ,\label{eq : theta_req}\\ \omega _ { k}^{(i_{k-1}^{(h)},\xi_{k-1}^{(h)},\tilde{\theta})}&\propto \omega _ { k\!-\!1}^{(i_{k\!-\!1}^{(h)},\xi _ { k\!-\!1}^{(h)})}\!\left[\gamma_{_{\!}z_{_{\!}k}\!}^{_{\!}(_{\!}\tilde{\theta}(\cdot)_{\!})\!}(\cdot ) \right ] ^{\!i_{k\!-\!1}\cup \mathbb{b}},\label{eq : omeg_theta}\end{aligned}\]]with and is given in .thus the probability of a valid extended association is proportional to the weight of the corresponding -glmb component in the next filtering density while zero probability is allocated to extended associations which do not satisfy the constraint that each measurement is assigned to at most one track .however , sampling directly from the distribution is very difficult since we can not exhaustively compute all of the values of .a common solution to this kind of problem is to use markov chain monte carlo ( mcmc ) methods such as the gibbs sampler to obtain samples from without having to directly compute .the gibbs sampler is a very efficient method to sample a difficult distribution if its conditional marginals can be computed in a simple closed form with proven convergence under generally standard assumptions .the main theoretical contribution in this section is stated in the following proposition , which allows conditional marginals to be computed via the entries of the matrix .[ marg_cond ] denote by the -th element of , i.e. , and all the other elements except , i.e. ^t ] compute according to ] , to be generated .alternatively , we can sample a long gibbs sequence of length and then extract every -th sample .the length of the gibbs sequence , roughly speaking , depends on the convergence rate of the gibbs sampler and the distance from the initial point to the true sample space .if we start with a good initialization right in the true sample space , we can use all the samples from the gibbs sequence . in practice ,one example of good initialization that allows us to use all of the samples from the resulting gibbs sequence is the optimal assignment , which can be obtained via either munkres or jonker - volgenant algorithm .otherwise , we can start with all zeros assignment ( i.e. all tracks are misdetected ) that is also valid sample and requires no additional computation . in terms of computational complexity ,sampling from a discrete distribution is linear with the weight s length , therefore the total complexity of the gibbs sampling procedure presented in algorithm alg : gibbs is . in comparison ,the fastest ranked optimal assignment algorithm is . 
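to make algorithm [ alg : gibbs ] concrete, the following sketch samples association vectors by gibbs sweeps. it is only illustrative: the array `eta` stands in for the conditional weights obtained from the matrix in proposition [ marg_cond ], and the only constraint enforced is that a measurement (index \(\geq 1\)) is used by at most one track; index 0 denotes misdetection.

```python
import numpy as np

def gibbs_associations(eta, num_sweeps, rng=None):
    """Sample track-to-measurement association vectors by Gibbs sampling.

    eta : (P, M+1) array; eta[i, j] is a positive conditional weight for
          assigning track i to measurement j (column 0 = misdetection).
    Returns one association vector per sweep.
    """
    rng = np.random.default_rng() if rng is None else rng
    P, M1 = eta.shape
    gamma = np.zeros(P, dtype=int)              # all-misdetection start (always valid)
    samples = []
    for _ in range(num_sweeps):
        for i in range(P):
            w = eta[i].astype(float)
            used = {int(g) for k, g in enumerate(gamma) if k != i and g > 0}
            for j in used:                      # forbid measurements taken by other tracks
                w[j] = 0.0
            w /= w.sum()
            gamma[i] = rng.choice(M1, p=w)      # draw from the conditional marginal
        samples.append(gamma.copy())
    return samples
```

starting from the all-zeros (all-misdetected) assignment is the cheap initialization mentioned above; the optimal assignment from the munkres or jonker-volgenant algorithm could be used instead when a warm start inside the true sample space is preferred.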
for general multi - target tracking problems in practice, we usually have , thus the gibbs sampling algorithm will generally be much faster than the ranked assignment given the same .in this section we first compare the performance of the joint prediction and update approach with its traditional separated counterpart , both employ the ranked assignment algorithm for fair comparison .then , we illustrate the superior performance of the gibbs sampler based truncation to the conventional ranked assignment via a difficult tracking scenario with low detection probability and very high clutter rate .the first numerical example is based on a scenario adapted from vvp_glmb13 in which a varying number targets travel in straight paths and with different but constant velocities on the two dimensional region \times [ -1000,1000]m ] .measurements are noisy vectors of planar position only ^t ] .the detection probability is and clutter follows a poisson rfs with an average intensity of giving an average of false alarms per scan .first , we compare the performance of the traditional separated and the proposed joint prediction and update approaches . for a fair comparison , both approaches are capped to the same maximum components .results are shown over 100 monte carlo trials .figures [ fig : card ] shows the mean and standard deviation of the estimated cardinality versus time .figures fig : ospa_comb and [ fig : ospa_sep ] show the ospa distance and its localization and cardinality components for and .it can be seen that both approaches estimate the cardinality equally well .similarly , in terms of ospa distance , the performance of the two approach is virtually the same .second , we demonstrate the fast implementation via the gibbs sampler . in this example , we keep all parameters the same as in the previous example except that the clutter rate is now increased to average false alarms per scan .the performance of the gibbs sampler implementation is compared with that of a ranked assignment based implementation with the same maximum number of posterior hypotheses .the average ospa distances over 100 monte carlo trials are presented in fig .[ fig : ospa_100 ] . ) . ]it is obvious that the gibbs sampler has a better ospa from around time onward .the reason is in difficult scenario ( e.g. high clutter rate , low detection probability ) , if the number of existing targets are high the gibbs sampling technique is expected to pick up the new born target better than the ranked assignment algorithm given the same number of samples / hypotheses due to its randomized behaviour .this is clearly illustrated in the cardinality statistics for both approaches in fig .fig : card_gibbs and fig .[ fig : card_murty ] . as expected , however , the joint approach averaged run time is significantly lower that that of the original approach .reductions in execution time of 1 to 2 orders of magnitude are typical .in this paper we propose a new implementation scheme for the -glmb filter that allows joint prediction and update .in contrast to the conventional two - staged implementation , the joint approach use _ a posteriori _ information to construct cost matrices for every individual track , thereby requires only one truncation in each iteration due to the elimination of inefficient intermediate steps . 
more importantly, this joint strategy provides the platform for the development of an accelerated randomized truncation procedure that achieves superior performance as compared to that of its traditional deterministic counterpart .the proposed method is also applicable to approximations of the -glmb filter such as the labeled multi - bernoulli ( lmb ) filter .m. tobias and a. d. lanterman , `` probability hypothesis density - based multitarget tracking with bistatic range and doppler observations , '' _ iee proceedings - radar , sonar and navigation _ , vol .152 , no . 3 , pp .195205 , june 2005 . d. e. clark and j. bell , `` bayesian multiple target tracking in forward scan sonar images using the phd filter , '' _ iee proceedings - radar , sonar and navigation _ , vol .152 , no . 5 , pp .327334 , october 2005 .r. hoseinnezhad , b .-vo , and b. t. vo , `` visual tracking in background subtracted image sequences via multi - bernoulli filtering , '' _ ieee trans .signal process ._ , vol .61 , no . 2 ,392397 , jan 2013 .s. rezatofighi , s. gould , b. vo , b. vo , k. mele , and r. hartley , `` multi - target tracking with time - varying clutter rate and detection profile : application to time - lapse cell microscopy sequences , '' _ ieee trans . med ._ , vol .34 , no . 6 , pp .13361348 , 2015 . c. lundquist , l. hammarstrand , and f. gustafsson , `` road intensity based mapping using radar measurements with a probability hypothesis density filter , '' _ ieee trans . signal process ._ , vol .59 , no . 4 , pp . 13971408 , april 2011 .g. battistelli , l. chisci , s. morrocchi , f. papi , a. benavoli , a. di lallo , a. farina , and a. graziano , `` traffic intensity estimation via phd filtering , '' in _ proc .2008 european radar conference ( eurad ) _ , oct 2008 , pp . 340343 .a. gostar , r. hoseinnezhad , and a. bab - hadiashar , `` robust multi - bernoulli sensor selection for multi - target tracking in sensor networks , '' _ ieee signal process ._ , vol . 20 , no . 12 , pp .11671170 , dec 2013 .g. battistelli , l. chisci , c. fantacci , a. farina , and a. graziano , `` consensus cphd filter for distributed multitarget tracking , '' _ ieee j. sel .topics signal process . _ , vol . 7 , no . 3 , pp .508520 , june 2013 .m. pascoal , m. captivo , and j. clmaco , `` a note on a new variant of murty s ranking assignments algorithm , '' _4or : quarterly journal of the belgian , french and italian operations research societies _ , vol . 1 , no . 3 , pp .243255 , 2003 .i. cox and s. hingorani , `` an efficient implementation of reid s multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking , '' _ ieee trans . pattern anal ._ , vol . 18 , no . 2 , pp .138150 , 1996 .a. frigessi , p. d. stefano , c .- r .hwang , and s .- j .sheu , `` convergence rates of the gibbs sampler , the metropolis algorithm and other single - site updating dynamics , '' _ journal of the royal statistical society .series b ( methodological ) _ , vol .55 , no . 1 ,pp . 205219 , 1993 .g. roberts and a. smith , `` simple conditions for the convergence of the gibbs sampler and metropolis - hastings algorithms , '' _ stochastic processes and their applications _ ,49 , no . 2 ,pp . 207216 , 1994 .c. j. geyer and e. a. thompson , `` constrained monte carlo maximum likelihood for dependent data , '' _ journal of the royal statistical society .series b ( methodological ) _ , vol .54 , no . 3 , pp .657699 , 1992 . 
| this paper proposes an efficient implementation of the generalized labeled multi - bernoulli ( glmb ) filter by combining the prediction and update into a single step . in contrast to the original approach which involves separate truncations in the prediction and update steps , the proposed implementation requires only one single truncation for each iteration , which can be performed using a standard ranked optimal assignment algorithm . furthermore , we propose a new truncation technique based on markov chain monte carlo methods such as gibbs sampling , which drastically reduces the complexity of the filter . the superior performance of the proposed approach is demonstrated through extensive numerical studies . random finite sets , delta generalized labeled multi - bernoulli filter |
we are witnessing an explosion in visual content .significant recent advances in machine learning and computer vision , especially via deep neural networks , have relied on supervised learning and availability of copious annotated data .however , manually labelling data is a time - consuming , laborious , and often expensive process . in order to make better use of available unlabeled images , clustering and/or unsupervised learning is a promising direction . in this work, we aim to address image clustering and representation learning on unlabeled images in a unified framework .it is a natural idea to leverage cluster ids of images as supervisory signals to learn representations and in turn the representations would be beneficial to image clustering . at a high - level view , given a collection of unlabeled images , the global objective function for learning image representations and clusters can be written as : where is a loss function , denotes the cluster ids for all images , and denotes the parameters for representations .if we hold one in to be fixed , the optimization can be decomposed into two alternating steps : 0.32 0.32 0.32 intuitively , can be cast as a conventional clustering problem based on fixed representations , while is a standard supervised representation learning process . in this paper , we propose an approach that alternates between the two steps updating the cluster ids given the current representation parameters and updating the representation parameters given the current clustering result .specifically , we cluster images using agglomerative clustering and represent images via activations of a convolutional neural network ( cnn ) . the reason to choose agglomerativeclustering is three - fold : 1 ) it begins with an over - clustering , which is more reliable in the beginning when a good representation has not yet been learned .intuitively , clustering with representations from a cnn initialized with random weights are not reliable , but nearest neighbors and over - clusterings are often acceptable ; 2 ) these over - clusterings can be merged as better representations are learned ; 3 ) agglomerative clustering is a recurrent process and can naturally be interpreted in a recurrent framework .our final algorithm is farily intuitive .we start with an intial over - clustering , update cnn parameters ( 2b ) using image cluster labels as supervisory signals , then merge clusters ( 2a ) and iterate until we reach a stopping criterion .an outcome of the proposed framework is illustrated in fig .[ fig_introduction ] .initially , there are 1,762 clusters for mnist test set ( 10k samples ) , and the representations ( image intensities ) are not that discriminative . after several iterations, we obtain 17 clusters and more discriminative representations .finally , we obtain 10 clusters which are well - separated by the learned representations and interestingly correspond primarily to the groundtruth category labels in the dataset , even though the representation is learnt in an unsupervised manner . 
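the alternation between (2a) and (2b) can be summarized in a few lines of pseudocode. this is only a schematic outline of the idea sketched above, not the recurrent formulation developed later in the paper; `cluster`, `extract_features`, and `train_cnn` are placeholder callables.

```python
def joint_learning(images, target_clusters, cnn,
                   cluster, extract_features, train_cnn):
    """Alternate between updating image cluster labels (2a) and CNN weights (2b).

    cluster(images, features)      -> list of cluster ids (merges clusters when
                                      called with learned features)
    extract_features(cnn, images)  -> deep representations of the images
    train_cnn(cnn, images, labels) -> CNN fine-tuned with labels as supervision
    """
    labels = cluster(images, features=None)        # initial over-clustering
    while len(set(labels)) > target_clusters:
        cnn = train_cnn(cnn, images, labels)       # step (2b): representation learning
        feats = extract_features(cnn, images)
        labels = cluster(images, features=feats)   # step (2a): merge clusters
    return labels, cnn
```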
to summarize , the major contributions of our work are : we propose a simple but effective end - to - end learning framework to jointly learn deep representations and image clusters from an unlabeled image set ; we formulate the joint learning in a recurrent framework , where merging operations of agglomerative clusteringare expressed as a forward pass , and representation learning of cnn as a backward pass ; we derive _ a single loss function _ to guide agglomerative clustering and deep representation learning , which makes optimization over the two tasks seamless ; our experimental results show that the proposed framework outperforms previous methods on image clustering and learns deep representations that can be transferred to other tasks and datasets .* clustering * clustering algorithms can be broadly categorized into hierarchical and partitional approaches .agglomerative clustering is a hierarchical clustering algorithm that begins with many small clusters , and then merges clusters gradually . as for partitional clustering methods ,the most well - known is k - means , which minimizes the sum of square errors between data points and their nearest cluster centers .related ideas form the basis of a number of methods , such as expectation maximization ( em ) , spectral clustering , and non - negative matrix factorization ( nmf ) based clustering . * deep representation learning * many works use raw image intensity or hand - crafted features combined with conventional clustering methods .recently , representations learned using deep neural networks have presented significant improvements over hand - designed features on many computer vision tasks , such as image classification , object detection , etc .however , these approaches rely on supervised learning with large amounts of labeled data to learn rich representations .a number of works have focused on learning representations from unlabled image data .one class of approaches cater to reconstruction tasks , such as auto - encoders , deep belief networks ( dbn ) , etc .another group of techniques learn discriminative representations after fabricating supervisory signals for images , and then finetune them supervisedly for downstream applications . unlike our approach ,the fabricated supervisory signal in these previous works is not updated during representation learning .* combination * a number of works have explored combining image clustering with representation learning . in , the authors proposed to learn a non - linear embedding of the undirected affinity graph using stacked autoencoder , and then ran k - means in the embedding space to obtain clusters . in , a deep semi - nmf model was used to factorize the input into multiple stacking factors which are initialized and updated layer by layer . using the representations on the top layer ,k - means was implemented to get the final results .unlike our work , they do not jointly optimize for the representation learning and clustering . to connect image clustering and representation learning more closely , conducted image clustering and codebook learning iteratively .however , they learned codebook over sift feature , and did not learn deep representations . instead of using hand - crafted features ,chen used dbn to learn representations , and then conducted a nonparametric maximum margin clustering upon the outputs of dbn . afterwards , they fine - tuned the top layer of dbn based on clustering results . 
a more recentwork on jointly optimizing two tasks is found in , where the authors trained a task - specific deep architecture for clustering .the deep architecture is composed of sparse coding modules which can be jointly trained through back propagation from a cluster - oriented loss .however , they used sparse coding to extract representations for images , while we use a cnn .instead of fixing the number of clusters to be the number of categories and predicted labels based on softmax outputs , we predict the labels using agglomerative clustering based on the learned representations . in our experimentswe show that our approach outperforms .we denote an image set with images by .the cluster labels for this image set are . are the cnn parameters , based on which we obtain deep representations from .given the predicted image cluster labels , we organize them into clusters , where . are the nearest neighbours of , and is the set of nearest neighbour clusters of . for convenience ,we sort clusters in in descending order of affinity with so that the nearest neighbour is the first entry ] is the corresponding timesteps in period . for optimization , we follow a greedy search similar to conventional agglomerative clustering . starting from the time step , it finds one cluster and its nearest neighbour to merge so that is minimized over all possible cluster pairs . in fig .[ fig_approach_toyexample ] , we present a toy example to explain the reason why we employ the term .as shown , it is often the case that the clusters are densely populated in some regions while sparse in some other regions . in conventional agglomerative clustering, it will choose two clusters with largest affinity ( or smallest loss ) at each time no mater where the clusters are located .in this specific case , it will choose cluster and its nearest neighbour to merge .in contrast , as shown in fig . [ fig_approach_toyexample](b ) , our algorithm by adding will find cluster , because it is not only close to it nearest neighbour , but also relatively far away from its other neighbours , i.e. , the local structure is considered around one cluster .another merit of introducing is that it will allow us to write the loss in terms of triplets as explained next .0.48 0.48 in forward pass of the -th partially unrolled period , we have merged a number of clusters .let the sequence of optimal image cluster labels be given by , and clusters merged in forward pass are denoted by \}$ ] , . in the backward pass ,we aim to derive the optimal to minimize the losses generated in forward pass .because the clustering in current period is conditioned on the clustering results of all previous periods , we accumulate the losses of all periods , i.e. , minimizing w.r.t leads to representation learning on supervised by or . 
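the greedy forward step described above (and motivated by the toy example) can be sketched as follows. the precise loss is not recoverable from this extract, so the score below simply rewards a cluster that is close to its nearest neighbour while penalizing closeness to its remaining neighbours; it is meant only to illustrate the role of the local-structure term \(\lambda\).

```python
import numpy as np

def select_merge(affinity, k_c, lam):
    """Pick the cluster to merge with its nearest neighbour in the forward pass.

    affinity : (C, C) matrix of cluster-to-cluster affinities (larger = closer)
    k_c      : number of neighbour clusters entering the local-structure term
    lam      : weight of the local-structure term (lambda in the text)
    """
    A = affinity.astype(float).copy()
    np.fill_diagonal(A, -np.inf)                     # ignore self-affinity
    C = A.shape[0]
    best_score, best_pair = -np.inf, None
    for i in range(C):
        kk = min(k_c, C - 1)
        nbrs = np.argsort(A[i])[::-1][:kk]           # kk nearest neighbour clusters
        local = A[i, nbrs[1:]].mean() if kk > 1 else 0.0
        score = A[i, nbrs[0]] - lam * local          # close to NN, far from the rest
        if score > best_score:
            best_score, best_pair = score, (i, int(nbrs[0]))
    return best_pair                                 # clusters to merge at this timestep
```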
based on and , the loss in eq .[ eq_loss_overall_wst_theta ] is reformulated to ) - \bm{\mathcal{a}}(\mathcal{c}^t _ * , \mathcal{n}_{\mathcal{c}^t_*}^{k_c}[k])\right ) \end{aligned } \label{eq_loss_triplet_time_step}\ ] ] where .is a loss defined on clusters of points , which needs the entire dataset to estimate , making it difficult to use batch - based optimization .however , we show that this loss can be approximated by a sample - based loss , enabling us to compute unbiased estimators for the gradients using batch - statistics .the intuition behind reformulation of the loss is that agglomerative clustering starts with each datapoint as a cluster , and clusters at a higher level in the hierarchy are formed by merging lower level clusters .thus , affinities between clusters can be expressed in terms of affinities between datapoints .we show in the supplement that the loss in can be approximately reformulated as where is a weight whose value depends on and how clusters are merged during the forward pass . and are from the same cluster , while is from the neighbouring clusters , and their cluster labels are merely determined by the final clustering result . to further simplify the optimization , we instead search in at most neighbour samples of from other clusters in a training batch . hence , the batch - wise optimization can be performed using conventional stochastic gradient descent method .note that such triplet losses have appeared in other works . because it is associated with a weight, we call the weighted triplet loss. + : = collection of image data ; + : = target number of clusters ; + + : = final image labels and cnn parameters ; + ; initialize and update to by merging two clusters cluster number reaches ; [ alg_optimization ] given an image dataset with samples , we assume the number of desired clusters is given to us as is standard in clustering. then we can build up a recurrent process with timesteps , starting by regarding each sample as a cluster .however , such initialization makes the optimization time - consuming , especially when datasets contain a large number of samples . to address this problem, we can first run a fast clustering algorithm to get the initial clusters . here , we adopt the initialization algorithm proposed in for fair comparison with their experiment results .note that other kind of initializations can also be used , e.g. k - means .based on the algorithm in , we obtain a number of clusters which contain a few samples for each ( average is about 4 in our experiments ) .given these initial clusters , our optimization algorithm learns deep representations and clusters .the algorithm is outlined in alg .[ alg_optimization ] . in each partially unrolled period , we perform forward and backward passes to update and , respectively . specifically , in the forward pass , we merge two clusters at each timestep . 
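for completeness, a generic batch version of a weighted triplet objective is sketched below. the exact functional form and weights used in the paper are not recoverable from this extract, so cosine affinity with a margin is used purely as a stand-in; anchors and positives come from the same cluster, negatives from neighbouring clusters found within the batch.

```python
import numpy as np

def weighted_triplet_loss(feats, triplets, weights, margin=0.2):
    """Weighted triplet loss over a mini-batch (illustrative stand-in).

    feats    : (N, D) array of L2-normalized representations
    triplets : iterable of (anchor, positive, negative) index triples
    weights  : per-triplet weights (playing the role of the forward-pass factors)
    """
    total = 0.0
    count = 0
    for (a, p, n), w in zip(triplets, weights):
        aff_pos = float(feats[a] @ feats[p])   # affinity to a same-cluster sample
        aff_neg = float(feats[a] @ feats[n])   # affinity to a neighbouring-cluster sample
        total += w * max(0.0, margin + aff_neg - aff_pos)
        count += 1
    return total / max(count, 1)
```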
in the backward pass ,we run about 20 epochs to update , and the affinity matrix is also updated based on the new representation .the duration of the -th period is timesteps , where is the number of clusters at the beginning of current period , and is a parameter called _ unrolling rate _ to control the number of timesteps .the less is , the more frequently we update .we compare our approach with 12 clustering algorithms , including k - means , njw spectral clustering ( sc - njw ) , self - tuning spectral clustering ( sc - st) , large - scale spectral clustering ( sc - ls ) , agglomerative clustering with average linkage ( ac - link) , zeta function based agglomerative clustering ( ac - zell ) , graph degree linkage - based agglomerative clustering ( ac - gdl ) , agglomerative clustering via path integral ( ac - pic ) , normalized cuts ( n - cuts ) , locality preserving non - negative matrix factorization ( nmf - lp ) , nmf with deep model ( nmf - d ) , task - specific clustering with deep model ( tsc - d ) .we show the first three principle components of learned representations in fig .[ fig_pca_display_5 ] and fig .[ fig_pca_display_4 ] at different stages . for comparison , we show the image intensities at the first column .we use different colors for representing different clusters that we predict during the algorithm . at the bottom of each plot, we give the number of clusters at the corresponding stage . at the final stage ,the number of cluster is same to the number of categories in the dataset . after a number of iterations, we can learn more discriminative representations for the datasets , and thus facilitate more precise clustering results . | in this paper , we propose a recurrent framework for * * j**oint * * u**nsupervised * * le**arning ( * jule * ) of deep representations and image clusters . in our framework , successive operations in a clustering algorithm are expressed as _ steps in a recurrent process _ , stacked on top of representations output by a convolutional neural network ( cnn ) . during training , image clusters and representations are updated jointly : image clustering is conducted in the forward pass , while representation learning in the backward pass . our key idea behind this framework is that good representations are beneficial to image clustering and clustering results provide supervisory signals to representation learning . by integrating two processes into a single model with a unified weighted triplet loss and optimizing it end - to - end , we can obtain not only more powerful representations , but also more precise image clusters . extensive experiments show that our method outperforms the state - of - the - art on image clustering across a variety of image datasets . moreover , the learned representations generalize well when transferred to other tasks . the source code can be downloaded from https://github.com/jwyang/joint-unsupervised-learning . |
the most complete kinematical information obtainable for a distant stellar system is the distribution of line - of - sight velocities at every point in the image .velocity distributions are crucial for understanding the dynamical states of slowly - rotating stellar systems like elliptical galaxies and globular clusters , since velocity dispersions alone place almost no constraints on the form of the potential unless one is willing to make ad hoc assumptions about the shape of the velocity ellipsoid ( ) .velocity distributions are also useful when searching for kinematically distinct subcomponents ( e.g. ; ) .the velocity distribution at point in the image of a stellar system , , can be related to the data in different ways depending on the nature of the observations . in a system like a globular cluster , for which the data usually consist of individual stellar velocities ,the velocity distribution is just the frequency function of stellar velocities defined by those stars with apparent positions near to .since measured velocities are always in error , the observed and true s are related via a convolution integral . in a distant , unresolved galaxy ,one typically measures the integrated spectrum of many stars along a line of sight .the observed spectrum is then a convolution of the velocity distribution of these stars with the broadening function of the spectrograph , and the spectrum of a typical star . with both sorts of data ,the goal is to find a function , at some set of points , such that the log likelihood of observing the data given is large . maximizing this quantity over the space of all possible functions is unlikely to yield useful results , however , since any that maximizes the likelihood ( assuming it exists , which it often will not ) is almost certain to be extremely noisy .this is obviously true if the data are related to the model via a convolution , since the process of deconvolution will amplify the errors in the data .but it is equally true if is simply the frequency function of observed velocities , since the most likely distribution corresponding to an observed set of s is just a sum of delta functions at each of the measured velocities .one is therefore forced to place smoothness constraints on the solution . butsmoothing always introduces a bias , i.e. 
a systematic deviation of the solution from the true .the nature of the bias is obvious when the smoothing is carried out by imposing a rigid functional form on , since the true function will almost certainly be different from this assumed form .but even nonparametric smoothing generates a bias since it effectively averages the data over some region .furthermore , because the required degree of smoothing increases with the amplitude of the noise in the data , the error from the bias goes up as the quality of the data falls .an ideal algorithm for estimating would therefore be one in which the bias introduced by the smoothing was effectively minimized , so that the derived was close to the true function even when the data were so poor that a great deal of smoothing was required .one way to accomplish this is to make use of prior knowledge about the likely form of .many studies of stellar and galactic systems have shown that is often close to a gaussian .this fact suggests that we infer by maximizing a quantity like the `` penalized log likelihood , '' where the penalty functional is large for any that is noisy and zero for any that is gaussian .a natural choice for such a penalty functional has been suggested by : ^ 2 dv.\ ] ] this functional assigns zero penalty to any ] is the minimizer of eq .( [ sildisc ] ) with the data point left out . in effect , the cv measures the degree to which the spectral intensities predicted by are consistent with the observed intensities .this is not quite the same as asking how close the estimated is to the true , but it is probably the best that one can do in practice ( ) .figure 4 shows the result of two attempts to recover the optimal from fake spectra generated using the merrifield - kuijken broadening function with s / n and .the values of that minimize the cv are tolerably close to the values that actually minimize the ise .more to the point , the ise of the s generated using the cv estimates of differ only negligibly from those obtained using the optimal s .these examples suggest that one can indeed hope to recover a useful estimate of the optimal smoothing parameter from the data alone . at the very least ,such an estimate would provide a starting point when searching for the the produces the physically most appealing estimate of .in many stellar systems , information about ) is most naturally obtained in the form of discrete velocities .if the velocities are measured with negligible error , is simply their frequency function , which can be defined as the function that maximizes the penalized log - likelihood subject to the constraints ( ) .the penalty functional is needed since , in its absence , the optimal estimate would be a set of delta - functions at the measured velocities .but the uncertainty in the measured velocities is sometimes comparable to the width of .for instance , radial velocities of faint stars in globular clusters may have measurement errors of a few km s compared to intrinsic velocity dispersions of km s .the observable function is then not but rather its convolution with the error distribution , . assuming that the errors have a normal distribution , with dispersion , we have \ dv'\ ] ] and one accordingly seeks the that maximizes subject to the same constraints on . 
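written out, the error-broadened frequency function referred to just above is the convolution (restated here in explicit form, with \(N(v)\) the intrinsic distribution and \(\delta\) the measurement-error dispersion; the symbols are ours, since the equation was garbled in extraction):
\[
\hat N(v)=\int N(v')\,\frac{1}{\sqrt{2\pi}\,\delta}\,
\exp\!\left[-\frac{(v-v')^{2}}{2\delta^{2}}\right]dv' ,
\]
and it is this broadened function, not \(N(v)\) itself, that is constrained directly by the measured velocities.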
following silverman ( 1982 ) , we find the solution to this constrained optimization problem as the unconstrained maximizer of the functional this problem is formally very similar to the deconvolution problem solved above , and a discrete representation of ( [ sildisc2 ] ) on a grid in velocity space is ^ 2 - n\sum_{j=1}^m \epsilon_jn_j,\ ] ] with , .the convolution of with is again represented as a matrix , eq .( [ matrix ] ) , with dv,\ ] ] dv,\ ] ] which can be expressed in terms of the error function .noise in the data now comes from two sources : measurement errors , as described by ; and finite - sample fluctuations due to the limited number of measured velocities .figure 5 shows how the mise depends on for data sets generated from the velocity distribution + \exp[{-(v - v_2)^2/2\sigma^2}]\right ) , \label{flat}\ ] ] with km s , km s and km s .this flat - topped velocity distribution was designed to mimic near the projected center of a globular cluster containing an abundance of nearly - radial orbits .( examples of such models and their velocity distributions may be found in ) .the two sets of points in figure 5 correspond to pseudo - data generated with zero velocity errors ( circles ) and with km s ( squares ) ; the latter value is roughly one - third of the intrinsic velocity dispersion .the mise falls off roughly as for both types of data , almost as steep as the dependence of parametric estimators .of course the mean error is larger , at a given , for the sample with nonzero ; however an increase in sample size from to produces the same decrease in the mise as a reduction in the measurement uncertainties from to km s .thus even relatively large measurement errors can be overcome by a modest increase in sample size ( assuming , of course , that the distribution of errors is well understood ) .figure 6 shows the average estimates obtained with the optimal smoothing parameters and their 95% variability bands .the non - gaussian nature of is surprisingly well reproduced even for , but the two peaks only begin to be clearly resolved for . various techniques , including a version of the cross - validation score described above , can be used to estimate optimal smoothing parameters for data like these .the `` unbiased '' or `` least - squares '' cross - validation score ( scott 1992 , p. 166 ; silverman 1986 , p. 48 ) is defined as }\circ p)_i \label{ucv}\ ] ] where }$ ] is an estimate of obtained by omitting the velocity .the value of that minimizes the ucv is an estimate of the value that minimizes the ise of .figure 7 shows the dependence of the ucv on for two data sets , with and , generated from the velocity distribution of eq .( [ flat ] ) with .the minimum in both curves occurs at a value of close to the value that actually minimizes the ise .the results just presented suggest that increasing the number of measured velocities in globular clusters may be a greater priority than reducing measurement errors if the goal is to determine , since existing techniques can already extract stellar radial velocities with greater precision than the uncertainty km s adopted here . because one would like to estimate at several different points in an image , the total number of velocities required for a single globular cluster would be in the thousands at least .fortunately , data sets of this size are now becoming available for a number of stellar systems . 
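the monte carlo experiments just described are straightforward to reproduce in outline. the sketch below draws velocities from the flat-topped two-gaussian distribution of eq. ( [flat] ) and adds gaussian measurement errors; the numerical values are placeholders (the text states only that the adopted error dispersion was roughly one-third of the intrinsic dispersion), so they should not be read as the values actually used.

```python
import numpy as np

def draw_pseudo_velocities(n, v1=-5.0, v2=5.0, sigma=5.0, err_disp=0.0, rng=None):
    """Draw n line-of-sight velocities from an equal-weight two-Gaussian mixture
    (the 'flat-topped' distribution of eq. (flat)) and add Gaussian errors.

    v1, v2, sigma, err_disp are placeholder values in km/s.
    """
    rng = np.random.default_rng() if rng is None else rng
    centers = rng.choice([v1, v2], size=n)            # pick a mixture component
    v_true = rng.normal(loc=centers, scale=sigma)     # intrinsic velocities
    if err_disp > 0:
        v_true = v_true + rng.normal(scale=err_disp, size=n)  # measurement errors
    return v_true
```

samples of this kind, with and without errors, underlie the mise curves of figure 5 and the averaged estimates of figure 6.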
herewe analyze radial velocities of a new sample of 4200 stars near the center of the globular cluster centauri .the velocities were measured using the rutgers fabry - perot interferometer on the ctio 1.5 m telescope and were kindly made available by c. pryor , who also carried out an analysis of the measurement errors .figure 8 shows the spatial distribution of the observed stars .the effective field of view of the fabry - perot is about 2.75 arc minutes in radius and is offset by about 1.1 arc minutes e / se from the cluster center as determined by meylan et al .the core radius of centauri is around 2.5 arc minutes ( ) so the observed velocities lie mostly within the projected core .the core of a globular cluster is perhaps not a very auspicious place to look for non - gaussian velocity distributions .however centauri is fairly young in a collisional - relaxation sense , with an estimated central relaxation time of a few billion years ( ) .thus its velocity distribution might still be non - maxwellian .furthermore there is evidence for two , chemically distinct populations in centauri ( norris et al . 1996 ) which may have formed at different epochs and hence may have different kinematics .finally , we note that the theoretical expectation of maxwellian velocity distributions in globular clusters has rarely been tested by direct determination of line - of - sight velocity distributions . for this reason alone, it seems worthwhile to estimate in centauri .about 20 stars were removed from the original sample because their fabry - perot line profiles showed contamination by h emission .velocity uncertainties of the remaining stars were estimated using standard procedures ( e.g. ) ; a typical estimated error was 3 - 5 km s .although the estimated errors were found to correlate with stellar magnitude , this fact was ignored in the analysis and was simply set equal to the rms estimated uncertainty of all the stars in each subsample , about 4 km s in each case . 
for comparison , the central velocity dispersion of centauri is 15 - 20 km s ( meylan et al .1995 ) .one would like to estimate independently at many different points in this observed field of view .however figure 6 suggests that sample sizes less than a few hundred are not very useful for detecting departures from normality .the compromise , as illustrated in figure 8 , was to identify five , partially overlapping fields of one arc minute radius containing about one - half of the observed stars .two fields lie along the estimated rotation axis of the cluster , at a position angle of east from north with their centers displaced one arc minute from the cluster center .the three additional fields were situated along a perpendicular line .the orientation of the rotation axis was estimated from this sample , but is consistent with a determination based on 500 stars with velocities measured by coravel ( ) .the offset between the fabry - perot field of view and the cluster center was used to advantage by centering the fifth field at a distance of 2.5 arc minutes from the cluster center along the direction of maximal rotation .the number of stars in each of the subsamples , along with their mean velocity and velocity dispersion ( with estimated errors removed ) in km s , are given in table 1 .the average number of stars per field was 653 for the four inner fields , with 456 stars in field 5 .the velocity distributions in fields 1 and 5 exhibit the greatest apparent departures from normality .figure 9 shows the dependence of the inferred in these two fields on the value of the smoothing parameter .figure 10 displays estimates of in all five fields , for one choice of , and their estimated 95% confidence bands .the confidence bands were computed in the usual way via the bootstrap ( scott 1992 , p. 259 ) ; the choice of is justified below . also shownare the normal distributions with the same mean and variance as the inferred s .none of the recovered velocity distributions is strikingly non - gaussian , although the confidence bands in fields 1 and 5 do barely exclude the normal distribution at one or more points .the inferred for field 5 is more centrally peaked than a gaussian and has what might be described as a tail or bump at large positive velocities .it is tempting to interpret this curve as resulting from the superposition of two normal distributions with different means and variances , though such an interpretation seems extravagant given the relatively small amplitude of the deviations .the choice of the optimal for these estimates presented certain difficulties .when dealing with random samples drawn from the normal distribution , the value of that minimizes the error in an estimate of would clearly be very large , since an infinite always returns a normal distribution . because the velocity distributions in centauri do appear to be quite gaussian, we might expect the optimal s to be `` nearly infinite '' and hence difficult to estimate from the data alone .in fact the ucv score ( eq . [ ucv ] ) was found not to have a minimum at any value of in any of these five sub - samples ; instead the function was always found to asymptote to a constant value as was increased .a similar result was obtained using the `` likelihood cross - validation '' score ( silverman 1986 , p. 
53 ) .these results certainly do not imply that the distribution of velocities in centauri must be exactly gaussian , since cross - validation is not a precise prescription for determining and often fails to give an extremum .but it appears that these velocity distributions are close enough to gaussian that the cross - validation technique can not find a significant difference between the estimates made with finite and infinite .the following alternative scheme was adopted for selecting the optimal .a crude estimate of can be obtained by replacing each of the measured velocities by a kernel function of fixed width .if is exactly gaussian , and if the kernel is also gaussian , the optimal window width ( i.e. dispersion ) for the kernel may be shown to be ( silverman 1986 , p. 45 ) .fixed - kernel estimates of in each of the five fields were generated from the data using this optimal , and compared to estimates of using the penalized - likelihood algorithm with various values of .these comparisons could not be made precise since the fixed - kernel estimates have a larger bias and do not compensate for the velocity measurement errors .nevertheless it is reasonable to assume that the degree of `` roughness '' in the optimal kernel - based estimates ought to be similar to that in the penalized - likelihood estimates made with the optimal value of .the plots of in fig .10 were made using these estimates of the optimal s .we conclude from this analysis that the evidence for non - gaussian velocity distributions in our five fields near the center of centauri is marginal at best .we note that the strongest deviations appear in the field that is farthest from the center .perhaps a sample of stars even farther from the core where the relaxation time exceeds the age of the universe would show even larger departures from normality .the centauri radial velocities analyzed here were obtained by k. gebhardt , j. hesser , c. pryor and t. williams using the rutgers fabry - perot interferometer . c. pryor devoted considerable time to reducing the observations and estimating the measurement uncertainties in this sample .conversations with c. joseph , m. merrifield , h .- w .rix , p. saha , r. van der marel and t. williams were very helpful for understanding the ins and outs of spectral deconvolution .c. pryor read parts of the manuscript and made useful suggestions for improvements .this work was supported by nsf grant ast 90 - 16515 and nasa grant nag 5 - 2803 .bender , r. 1990 , , 229 , 441 dejonghe , h. 1987 , , 224 , 13 dejonghe , h. & merritt , d. 1992 , , 391 , 531 franx , m. & illingworth , g. 1988 , , 327 , l55 gebhardt , k. , pryor , c. , williams , t. & hesser , j. e. 1994 , , 107 , 2067 green , p. j. & silverman , b. w. 1994 , nonparametric regression and generalized linear models , london : chapman & hall , ch . 8 kuijken , k. & merrifield , m. r. 1993 , , 264 , 712 merrifield , m. r. & kuijken , k. 1994 , , 432 , 575 merritt , d. 1989 , in dynamics of dense stellar systems , edited by d. merritt ( cambridge university press , cambridge ) , p. 75meylan , g. 1996 , private communication meylan , g. , mayor , m. , duquennoy , a. & dubath , p. 1995, , 303 , 761 norris , j. e. , da costa , g. s. , freeman , k. c. & mighell , k. j. 1996 , asp conference series vol .92 , formation of the galactic halo ... inside and out , edited by h. morrison & a. sarajedini , p. 375peterson , c. j. & king , i. r. 1975 , , 80 , 427 rix , h .- w . ,franx , m. fisher , d. & illingworth , g. 
1992 , , 400 , l5 rix , h .- w .& white , s. d. m. 1992 , , 254 , 389 saha , p. & williams , t. 1994 , , 107 , 1295 scott , d. w. 1992 , multivariate density estimation , new york : john wiley & sons silverman , b. w. 1982 , ann .10 , 795 silverman , b. w. 1986 , density estimation for statistics and data analysis , london : chapman & hall thompson , j. r. & tapia , r. a. 1990 , nonparametric function estimation , modeling , and simulation ( siam , philadelphia ) van der marel , r. & franx , m. 1993 , , 407 , 525 wahba , g. 1980 , tech .595 , dept . of statistics ,univ . of wisconsin ,madison , wi wahba , g. 1990 , spline models for observational data , philadelphia : siam bias - variance tradeoff in estimates of non - gaussian s from absorption line spectra .input spectra were generated using two different broadening functions ( thin curves ) with added noise of amplitude s / n=20 .( a ) - ( c ) : merrifield - kuijken broadening function , eq .( [ mkbf ] ) ; ( d ) - ( f ) : broadening function of eq . ( [ peak ] ) . was then estimated ( thick curves ) using three different values of .( a ) , ( d ) : undersmoothed ; ( b ) , ( e ) : optimally smoothed ; ( c ) , ( f ) : oversmoothed . in the limit of large ,i.e. infinite smoothing , the estimates tend toward a gaussian with approximately the same mean and dispersion as the true .dependence of the mean integrated square error of the recovered broadening function on the signal - to - noise ratio of the spectrum , for spectra generated from the merrifield - kuijken broadening function ( eq .( [ mkbf ] ) . for each value of s / n , 300 noise realizations of the same spectrum were generated and the value of that minimized the average square deviation between the true and estimated s was found .the ordinate is the mise of the estimates using this optimal .average estimates of , and their 95% variance bands , based on 300 noise realizations of spectra generated from the merrifield - kuijken broadening function ( [ mkbf ] ) . 
as the signal - to - noise ratio increases, the average estimate tends toward the true broadening function and the variance of the estimates decreases .dependence of the cross - validation score ( cv ) on the smoothing parameter for two spectra generated using the merrifield - kuijken broadening function ( [ mkbf ] ) , with and .the value of at the minimum of the cv curve is an estimate of the value of the smoothing parameter that minimizes the integrated square error of the estimated .arrows indicate the values of that actually minimize the ise of the broadening functions inferred from these two spectra .dependence of the mean integrated square error of the recovered on the number of velocities , for pseudo - data generated from the flat - topped velocity distribution of eq .( [ flat ] ) .squares : km s ; circles : .dependence of the unbiased cross validation score ( ucv ) on the smoothing parameter for data generated from the velocity distribution of eq .( [ flat ] ) , with .the value of at the minimum of the ucv curve is an estimate of the value that minimizes the integrated square error of .arrows indicate the values of that actually minimize the ise of the velocity distributions inferred from these two data sets .map of stars with observed radial velocities in the globular cluster centauri .the origin of the coordinate system is the center of the cluster as defined by meylan & mayor ( 1986 ) ; major tick marks are separated by one arc minute .the fabry - perot field of view is offset by approximately 1.1 arc minute from the origin .the dashed line is an estimate of the projected rotation axis of the cluster .velocity distributions were computed for stars in the five circular fields shown .penalized - likelihood estimates of in two fields in centauri .the value of the smoothing parameter increases from ( a ) to ( c ) .thin lines are normal s with the same mean velocity and velocity dispersion as the estimated s .penalized - likelihood estimates of in five fields in centauri .heavy solid lines are the estimates ; dashed lines are 95% bootstrap confidence bands ; thin solid lines are the normal distributions with the same mean velocity and velocity dispersion as the estimated s .arrows indicate mean velocities . | line - of - sight velocity distributions are crucial for unravelling the dynamics of hot stellar systems . we present a new formalism based on penalized likelihood for deriving such distributions from kinematical data , and evaluate the performance of two algorithms that extract from absorption - line spectra and from sets of individual velocities . both algorithms are superior to existing ones in that the solutions are nearly unbiased even when the data are so poor that a great deal of smoothing is required . in addition , the discrete - velocity algorithm is able to remove a known distribution of measurement errors from the estimate of . the formalism is used to recover the velocity distribution of stars in five fields near the center of the globular cluster centauri . rutgers astrophysics preprint series no . 191 |
we build up a matlab model to simulate a direct current ( dc ) network shown in fig .[ fig.3.dc for simulink ] to illustrate the flow tracing process .the flow quantity is given by the electric current in this model .nodes 1 and 2 are two nodes with current sources where and , respectively .the resistances of resistors are randomly chosen within the set of integer numbers [ 1,10 ] , shown in tab .[ tab.3.resistance of resistors ] . the sink flows leaving from the sink nodes 9 and 10 are measured by the current scopes as and .the current directions are shown in fig .[ fig.3.dc for simulink abstract ] .next , we show how to calculate the source - to - sink hidden currents from the current sources to the sinks by different methods . ( table [ tab.3.resistance of resistors ] caption : resistances of the resistors in fig . [ fig.3.dc for simulink ] . ) from the matlab simulation results of the dc network , the downstream extended incidence matrix , , is and the downstream contribution matrix , , is we also obtain , from the experiments , that , , , , and . thus , we calculate for and by , , , and .we note that all these numbers coincide with those in tab .[ tab.3.flow tracing result proportion sharing ] .define the _ upstream extended incidence matrix _ , , by we know , implying , since , we have equations ( [ eq.3.upstream extended incidence matrix ] ) and ( [ eq.3.upstream equaiton ] ) imply $\mathbf{k}'\mathbf{f}^{out} = \mathbf{f}^{t}$ , where $\mathbf{f}^{out} = [ f_1^{out} , \ldots , f_n^{out} ]^t$ . from $\mathbf{f}^{out} = \mathbf{k}'^{-1}\mathbf{f}^{t}$ , we have $$f_i^{out} = \sum_{j=1}^{n} \left[ \mathbf{k}'^{-1} \right]_{ij} f_j^{t} = \sum_{j=1}^{n} \left[ \mathbf{k}'^{-1} \right]_{ij} f_j^{out} \cdot \iota_j^{t} .$$ let be the _ upstream contribution matrix _ whose ( i , j ) element is an _ upstream contribution factor _ indicating what proportion of the total outflow at node is coming from node , i.e. , .then , .the upstream extended incidence matrix , , of the dc network is and the upstream contribution matrix , , is we also obtain , , , , and .then , , , , and .the results are the same as those in tab .[ tab.3.flow tracing result proportion sharing ] .figure [ fig.comparison of local and global before fs ] shows the experiment results of the local interaction strength and non - local interaction strength in different types of networks , including the er , ws and ba networks .final results are taken by averaging the results of 100 time - points that are uniformly chosen in the time scale [ 10,20 ] , i.e. , and , where and are the values of and at the time - point .the dynamic behaviour of the oscillators in these networks is described by the kuramoto model by assigning a small coupling strength , such that the oscillators are in an incoherent state . comparing the results in fig .[ fig.comparison of local and global before fs ] with those in the paper when fs is present , we find that those pairs of nodes which are non - locally interacting when fs is not present also have non - local interactions when fs is present .this suggests that the existence of non - local interaction between a pair of nodes strongly depends on the topological features of the network rather than the coupling strength . | understanding the interactions among nodes in complex networks is of great importance , since it shows how these nodes cooperatively support the functioning of the systems . scientists have developed numerous methods and approaches to uncover the underlying physical connectivity based on measurements of functional quantities of the nodes states .
however , little is known about how this local connectivity impacts on the non - local interactions and exchanges of physical flows between arbitrary nodes . in this paper , we show how to determine the non - local interchange of physical flows between any pair of nodes in a complex network , even if they are not physically connected by an edge . we show that such non - local interactions can happen in a steady or dynamic state of either a linear or non - linear network . our approach can be used to conservative flow networks and , under certain conditions , to bidirectional flow networks . research on complex networks has been attracting the attention of many scientists for several decades . if the topology of physical connections between any nodes in a complex network is known ( such as in a power grid ) , one wishes to understand how this topology of the connected nodes drives large - scale non - local behaviour of the network . to understand large - scale behaviour of complex networks , it is imperative to develop an approach capable of calculating how much physical flow goes from one node to another one through all possible existing paths , a quantity that we refer in this work as hidden " flow , since this quantity is usually inaccessible from measurements and it is unknown . in this paper , we introduce the flow tracing method which is known in electrical engineering to track power flows . this method applied to power grids not only provides the information of how much power is supplied by a particular generator to a particular consumer , but also helps energy companies to cost proper revenue to energy suppliers and to charge proper wheeling fee from consumers . we avail from this approach to demonstrate how to calculate the hidden flow between any two nodes in conservative flow networks , by only requiring information about the adjacent flows between any two connected nodes . this work thus provides a rigorous way to gauge the non - local interactions among nodes in a network . the applicability of the method is enormous since flow networks can be used to model many complex networks , such as transportation networks , water networks , gas and oil networks and power grids . furthermore , we extend the method to provide an instantaneous picture of how nodes interact non - locally in non - linear networks by constructing linear equivalent model to these networks . we also discuss the application of the method to bidirectional networks , such as transportation networks . a flow network is a digraph , , where and are the sets of nodes and edges , respectively . the direction of an arbitrary edge in corresponds to the direction of the physical flow on it . a flow network normally contains three types of nodes : ( 1 ) the source node [ e.g. , node 1 or 2 in fig . [ fig:3.flow ] ( a ) ] , which has a source injecting flow into the network ; ( 2 ) the sink node [ e.g. , node 3 or 4 in fig . [ fig:3.flow ] ( a ) ] , which has a sink taking flow away from the network ; ( 3 ) the junction node [ e.g. , node 5 in fig . [ fig:3.flow ] ( a ) ] , which distributes the flow . we define to be the _ adjacent flow _ , or simply the _ flow _ , between nodes and , which is the measurable flow coming from nodes to through edge . particularly , if node is not physically connected with node . in this paper , we consider the conservative flow networks , which satisfy the following properties : ( 1 ) the adjacent flow from node to node is the opposite of that from node to node , i.e. 
, ; ( 2 ) the adjacent flow is conserved at a junction node , i.e. , , where node is a junction node ; ( 3 ) there is no loop flow representing a closed path in a flow network , where a loop flow is shown in fig . [ fig:3.flow ] ( b ) ; ( 4 ) there is no isolated node in the network , i.e. , every node must be connected to at least one other node in the network . a path in a digraph from node to node , , is an alternating sequence of distinct nodes and edges starting from node and ending at node , in which the directions of all edges must coincide with their original directions in . the _ hidden flow _ , , is defined to be the summation of the flows starting from node and arriving at node through all possible paths from node to . normally , we can measure or calculate the adjacent flows in a flow network , but it is not easy to obtain the hidden flows , a quantity typically not accessible through measurements . we find the calculation of hidden flows based on the information of adjacent flows , in a conservative flow network , by the flow tracing " method . define the _ node - net exchanging flow _ at node by if node is a source node , we have ; we denote by as the amount of the _ source flow _ being injected into the network from a source at node . we set if node is a sink node or a junction node . if node is a sink node we have ; we denote to indicate the amount of the _ sink flow _ leaving the network from the sink at node . we set if node is a source node or a junction node . assume there is a positive flow from node to node , denoted by . we use to indicate as an _ outflow _ from node arriving at node , and to represent as an _ inflow _ at node coming from node . thus , . can be positive , negative or zero in a flow network . however , we restrict any outflow or inflow at a node to be a non - negative number . this means that , if , we force and to be zeros . analogously , means , we have to denote the outflow from node to node and to be the inflow at node from node . define the _ total inflow _ at node by and the _ total outflow _ at node by in a conservative flow network , the total inflow of a node is equal to its total outflow , i.e. , . we assume , , meaning that each node in a flow network must exchange flow with other nodes , i.e. , no node is isolated . the _ proportional sharing principle ( psp ) _ states that for an arbitrary node , , with inflows and outflows ( fig . [ fig.3.proportion node ] ) in a conservative flow network , ( 1 ) the outflow on each outflow edge is proportionally fed by all inflows , and ( 2 ) by assuming that node injects a flow to node , and node takes a flow out of node , we have that the _ node - to - node hidden flow _ from node to node via node is calculated by or by equations ( [ eq.3.proportional sharing principle down ] ) and ( [ eq.3.proportional sharing principle up ] ) result in the same value of , since . equation ( [ eq.3.proportional sharing principle down ] ) represents the _ downstream flow tracing _ method , where we start tracing the hidden flow from a source node to a sink node , by using the percentage , , to indicate the percentage of that goes to . equation ( [ eq.3.proportional sharing principle up ] ) denotes the _ upstream flow tracing _ method , where we trace the flow from a sink node to a source node , by knowing the proportion of is provided by . we just deal with the downstream flow tracing in the main context of this paper and explain the upstream flow tracing by an example in the supplementary material . 
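as a concrete illustration of the downstream form of the proportional sharing principle , the sketch below computes the node - to - node hidden flow routed through a single junction node from its measured inflows and outflows . the junction and its flow values are invented for the example ; they are not taken from fig . [ fig:3.flow ] .

```python
def hidden_flow_through_node(inflows, outflows, src, dst):
    """node-to-node hidden flow from `src` to `dst` routed through a single
    junction node, using the proportional sharing principle: every outflow is
    fed by each inflow in proportion to that inflow's share of the total.

    `inflows` / `outflows` map neighbour node -> flow into / out of the
    junction; the values below are invented for illustration."""
    total_in = sum(inflows.values())
    total_out = sum(outflows.values())
    assert abs(total_in - total_out) < 1e-9, "a junction node conserves flow"
    return inflows[src] * outflows[dst] / total_out

# hypothetical junction: 4 units in from node 1, 2 units in from node 2,
# 5 units out to node 3, 1 unit out to node 4
inflows = {1: 4.0, 2: 2.0}
outflows = {3: 5.0, 4: 1.0}
print(hidden_flow_through_node(inflows, outflows, src=1, dst=3))  # 4*5/6 = 3.33
```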
[ figure : a node with inflows and outflows . ] define the _ downstream coefficient _ at node for the outflow by to indicate the proportion of the outflow at edge to the total outflow at node . define the _ upstream coefficient _ at node for the inflow by denoting the proportion of the inflow at edge to the total inflow at node . then the calculation of can be simply expressed by or . define the _ sink proportion _ and _ source proportion _ at node by respectively , where the sink proportion , , indicates the proportion of the sink flow to the total outflow at node , and the source proportion , , indicates the proportion of the source flow to the total inflow at node . by defining the sink proportion and source proportion , we are now able to calculate the source - to - sink hidden flow from a source at node to a sink at node , denoted by . from eq . ( [ total inflow ] ) , we know that is a part of , where is the source flow at node . from eq . ( [ sink and source proportion ] ) , we know the proportion of to . according to the psp , we can then calculate the source - to - sink hidden flow by . the source - to - sink hidden flow is different from the node - to - node hidden flow between two nodes . for example , in fig . [ fig:3.flow ] ( a ) , indicates how much hidden flow goes from the 4-unit source at node 1 to the 3-unit sink at node 4 , which is different from that indicating how much hidden flow goes from node to node . in the supplementary material , we demonstrate how to use the downstream flow tracing method to trace hidden current flows in a dc network . however , the downstream flow tracing is most suitable for small networks , because finding all the paths with different lengths between any pair of nodes is a huge amount of work in large networks . the extended incidence matrix , proposed in refs . , solves this problem . the _ downstream extended incidence matrix _ , , in a flow network including $n$ nodes is an $n \times n$ matrix , defined by . transforming eq . ( [ total inflow ] ) to and considering , we have . from eqs . ( [ eq.3.downstream extended incidence matrix ] ) and ( [ eq.3.downstream equaiton ] ) , we have $\mathbf{k}\mathbf{f}^{in} = \mathbf{f}^{s}$ , where $\mathbf{f}^{in} = [ f_1^{in} , \ldots , f_n^{in} ]^t$ . $\mathbf{k}$ is an invertible matrix , thus $\mathbf{f}^{in} = \mathbf{k}^{-1}\mathbf{f}^{s}$ , implying that $$f_i^{in} = \sum_{j=1}^{n} \left[ \mathbf{k}^{-1} \right]_{ij} f_j^{s} ,$$ where the ( i , j ) element of the downstream contribution matrix is a _ downstream contribution factor _ indicating how much hidden flow goes from node to node , i.e. , for any pair of nodes . then , indicates the source - to - sink hidden flow from node to node . an example is given in the supplementary material explaining how to apply the downstream extended incidence matrix to carry out flow tracing . we now formally summarise and extend our approach to the analysis of non - locality in complex networks . let , , , , , be different nodes in a conservative flow network , where node has a source , node has a sink , nodes , are connected by edge with an adjacent flow , and nodes , are connected by edge with an adjacent flow . the non - local interaction calculation includes the following parts : ( 1 ) the node - to - node hidden flow from node to node , , is calculated by ; ( 2 ) the source - to - sink hidden flow from node to node , , is calculated by ; ( 3 ) the node - to - edge hidden flow from node to edge , , is calculated by ; ( 4 ) the edge - to - node hidden flow from edge to node , , is calculated by ; and ( 5 ) the edge - to - edge hidden flow from edge to edge , , is calculated by . next , we extend these concepts to show how nodes interact non - locally in non - linear systems by constructing a linear equivalent model of the non - linear networks .
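the construction just described is easy to put into code . the sketch below builds a downstream extended incidence matrix from measured adjacent flows , solves for the total inflows , and forms the source - to - sink hidden flows . because the defining equation of the matrix is not reproduced in the text above , the entry convention used here ( identity minus the inflow shares ) is an assumption of this sketch , chosen so that $f_i^{in} = \sum_j [\mathbf{k}^{-1}]_{ij} f_j^{s}$ holds ; the toy four - node network at the bottom is invented .

```python
import numpy as np

def downstream_trace(adj_flow, source, sink):
    """source-to-sink hidden flows in a conservative flow network.

    `adj_flow[i][j]` is the measured adjacent flow from node i to node j
    (positive, directed entries only); `source` and `sink` give the source
    and sink flow at each node.  the entry convention used for the extended
    incidence matrix k (identity minus inflow shares) is an assumption of
    this sketch, chosen so that f_in = k^{-1} f_s holds."""
    n = len(source)
    f = np.zeros((n, n))
    for i, row in adj_flow.items():
        for j, flow in row.items():
            f[i, j] = flow
    throughput = f.sum(axis=1) + sink        # total outflow (= total inflow)
    k = np.eye(n)
    for i in range(n):
        for j in range(n):
            if f[j, i] > 0.0:
                k[i, j] -= f[j, i] / throughput[j]
    k_inv = np.linalg.inv(k)
    f_in = k_inv @ source                    # total inflow fed by all sources
    # hidden flow from the source at node j to the sink at node i:
    # (part of node i's inflow that came from source j) * (sink share at i)
    hidden = k_inv * source[None, :] * (sink / throughput)[:, None]
    return f_in, hidden                      # hidden[i, j] : source j -> sink i

# toy four-node chain: sources of 4 and 2 units at nodes 0 and 1 feed junction
# node 2, which feeds a 6-unit sink at node 3 (all values invented)
adj_flow = {0: {2: 4.0}, 1: {2: 2.0}, 2: {3: 6.0}}
source = np.array([4.0, 2.0, 0.0, 0.0])
sink = np.array([0.0, 0.0, 0.0, 6.0])
f_in, hidden = downstream_trace(adj_flow, source, sink)
print(hidden[3])   # -> [4. 2. 0. 0.] : 4 units from source 0, 2 from source 1
```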
let the equation indicate a dynamic scheme describing the behaviour of coupled nodes , where is the dynamical variable of each node , is the isolated dynamic function , is the element of the laplacian matrix , and is an arbitrary coupled dynamic function . we treat the system as a flow network by interpreting as the node - net exchanging flow at node . the value and sign of may change over time . if , we treat node as a source node at time and the source flow is . if , we treat node as a sink node at time and the sink flow is . if , we treat node as a junction node at time . let be the adjacent flow from node to node . if , we have as the outflow from node and as the inflow at node at time . if , we have as the outflow from node and as the inflow at node at time . by doing this interpretation , we are constructing an equivalent linear conservative flow network that instantaneously behaves in the same way as the non - linear network described by eq . ( [ eq.3.dynamic system ] ) . this enables us to calculate the non - local interactions in the equivalent linear flow network which informs us about the non - local interactions in the original non - linear network . we demonstrate the non - local interaction process in non - linear networks by tracing hidden flows in networks where the dynamics of nodes is described by a revised kuramoto model , as given by where is the coupling strength , is the entry of the laplacian matrix , and indicate the phase angle and natural frequency in a rotating frame , respectively . in this rotating frame , , when the oscillators emerge into _ frequency synchronisation ( fs ) _ for a large enough . in the fs state , all the node - net exchanging flows and all the adjacent flows are constants , since are constants . let be a normalised variable in [ 0,1 ] indicating the local interaction strength between oscillator and , where is the maximum of all absolute values of adjacent flows . since , we have . every hidden flow is traced by considering that flows are directed . this implies that all the calculated hidden flows are non - negative and at least one of and is 0 . we let be the non - local interaction strength between oscillator and , where is the non - zero one between and , and is the maximum of all hidden flows . this definition of the non - local interaction strength allows us to compare and for the same pair of nodes in a network . we construct three types of networks with 25 nodes , namely the erds - rnyi ( er ) , watts - strogatz ( ws ) and barabsi - albert ( ba ) models . the dynamic behaviour of the nodes in these networks follows eq . ( [ eq.4 kuramoto model ] ) . the natural frequencies of oscillators are set to be random numbers in [ 0,1 ] . figure [ fig.comparison of local and non - local ] shows the comparison of the local interactions and the non - local interactions when the oscillators emerge into fs with a large enough . figures [ fig.comparison of local and non - local ] ( a ) , ( b ) and ( c ) show the local interaction strengths , , for er , ws and ba networks , respectively . figures [ fig.comparison of local and non - local ] ( d ) , ( e ) and ( f ) demonstrate the non - local interaction strengths , , for er , ws and ba networks , respectively . figure [ fig.comparison of local and non - local ] ( d ) exposes some hidden interactions that fig . [ fig.comparison of local and non - local ] ( a ) does not show to exist in an er network . by comparing figs . 
[ fig.comparison of local and non - local ] ( b ) and ( e ) , we see that a randomly rewired edge in a ws network not only produces interaction between the two adjacent nodes connected by this edge , but also creates functional clusters among nodes close to the two adjacent nodes . so , complex systems such as social networks that are modelled by a ws network can in fact be better connected than previously thought . we constructed the ba network by assigning smaller labels to nodes with larger degrees . both figs . [ fig.comparison of local and non - local ] ( c ) and ( f ) illustrate the strong interactions among the nodes with large degrees ( small labels ) . figure [ fig.comparison of local and non - local ] ( c ) shows that the interactions between unconnected nodes with small degrees ( large labels ) are weak or nonexistent , though such interactions are revealed in fig . [ fig.comparison of local and non - local ] ( f ) . through this comparison , we understand that two nodes in a network may strongly interact with each other even if they are not connected by an edge . in the supplementary material , we present a non - local interaction study for these networks when fs is not present . comparing the results of the experiments with and without the existence of fs , we find that those pairs of nodes which are non - locally interacting when fs is not present also have non - local interactions in the result when fs is present . this suggests that the existence of non - local interaction between a pair of nodes strongly depends on the topological features of the network rather than the coupling strength . our method can also be applied to a bidirectional flow network if the network can be separated into two independent unidirectional networks . for example , under the assumption that all roads are bidirectional , we can separate the transportation network of a city into two networks . one network includes all the left hand roads and the other one contains all the right hand roads . using our method , we can trace the transportation flow between any two nodes in these two networks .
the cleo experiment , located on the cornell electron storage ring ( cesr ) , consists of a general - purpose particle physics detector used to reconstruct the products of symmetric collisions with centre - of - mass energies up to approximately 10 gev , corresponding to the resonance .luminosity - increasing improvements in the cesr optics , which required the placement of superconducting quadrupole magnets nearer to the interaction region , precipitated upgrades to several inner subsystems of the detector prior to the cleo iii datataking run .the main outer components of the cleo detector are a csi electromagnetic calorimeter , a 1.5 t solenoidal superconducting magnet , and muon detectors .the upgraded inner subsystems consist of , in order of decreasing radius with respect to the beam line , the following four devices : a 230 channel ring - imaging erenkov ( rich ) detector for charged - hadron identification ; a new central drift chamber , with 9796 cells arranged in 47 layers , for charged - particle tracking ; a four - layer double - sided silicon microstrip vertex detector with 125 readout channels for precise tracking and decay vertexing ; and a thinly walled beryllium beam pipe to minimize tracking uncertainties due to multiple - scattering effects .an active coolant - control farm , which is the focus of this article , was used to provide heat removal and mechanical stability ( via temperature control ) for the above four upgraded inner cleo subsystems . in the case of the rich detector ,the principal heat sources were chains of viking front - end signal processing chips , with a combined power output of w. the drift chamber s heat sources consisted of a total of w of power dissipated by pre - amplifiers mounted on each of the two end plates ; temperature stability across the end plates was crucial to prevent wire breakage .the silicon detector produced w of power from 122 beo hybrid boards of front - end electronics chips , each dissipating 4 w ; spatial tracking resolution of the silicon layers depended on the alignment precision of the sensors and therefore the temperature stability .the be beam pipe was itself passive , but required a design heat - removal capability up to the kw range due to the possibility of higher - order mode heating in the cesr collider . in this article, we describe the approach taken in designing the cleo coolant - control farm , studies of the hydrocarbon pf-200ig and its suitability as a heat - transfer fluid , mechanical construction of the farm platforms , the sensor elements , the process - control and diagnostic systems , and the farm s performance in the cleo iii datataking run .the principal challenge posed in the design of the cleo cooling system lay in the requirement that multiple detector subsystems , each with markedly different plumbing layouts , power loads , pressure tolerances , and physical locations , were to be actively cooled by a ` farm ' of independent coolant - control platforms governed and monitored by centralized control and diagnostic systems , respectively . 
rather than design separate custom coolant circuits tailored to the specifications of each detector subsystem , our approach was to found the system on a single simple generic design , one that possessed the flexibility to satisfy the cooling requirements of any of the subsystems without extensive modification .the advantages of a scalable generic design were numerous .each system had the same basic set of fittings and sensors , introducing economies of scale and easing spare - component inventories .the modularity of the design was intended to allow for the rapid swapping of an entire coolant - control platform with a spare unit in case of a problem .the offending platform could then be removed _ en masse _ from the radiation area for diagnosis and repair , without the accrual of excessive downtime .technician training , serviceability , and the management of on - call experts were simpler in this farm paradigm ; separate cooling experts were not required for different detector subsystems , and the total number of experts needed was reduced .our use of a modular farm of nearly identical cooling platforms lent itself well to a unified process - control system , which we describe in section [ sect : control ] .scaling of the control electronics to accommodate additional coolant platforms was designed to be a facile matter of adding more input / output channels and updating the control logic .all electronic sensors , including those not used for process - control variables , were read out by the control electronics . a separate diagnostic system , described in section [ sect : diagnostics ] , therefore had no need to interact with the farm sensors directly ; instead , all the diagnostic information was acquired through the control infrastructure .the diagnostic system was designed to provide globally accessible , minute - by - minute performance information browsable on the world wide web ( www ) .beneficial to the farm concept was a simple , rapid , and independent channel of communication between the coolant - control electronics and the detector subsystems proper .instead of using a subsystem - based temperature reading , which would have required significant customization on each platform , the cooling platforms used the temperatures of their coolant supply as process variables , thereby precluding a need for the detector subsystems to communicate with any aspect of the cooling system .communication in the other direction , however , namely from the process - control electronics to the detector subsystems , was possible in the form of interlocks that the cooling system could breach in the event of serious performance problems .each cooling platform was designed to maintain a specific , user - defined , fixed coolant - supply set - point temperature at approximately a constant flow rate of heat - transfer fluid , with an active feedback system that automatically compensated for changes in subsystem thermal power load , ambient temperature and humidity changes , and variations in heat - sink water temperatures and flow rates .supply set - point temperatures were remotely selectable , and were designed to remain at most within k of the requested temperature .all the farm platforms were designed to deliver flow rates up to approximately 23 l / min and to handle heat loads of up to approximately 1 kw .table [ tab : specs ] provides a summary of the principal operating parameters of a generic cleo coolant - control platform design ; subsystem - specific requirements are described in section [ 
sect : special ] below . ( table [ tab : specs ] caption : summary of principal design parameters of a generic cleo cooling platform . ) the pf-200ig solvent was described by the manufacturer as non - corrosive to metals such as aluminium , copper , magnesium , and stainless steel .motivated by the type of metal used to construct the cleo beam pipe , we undertook a study to examine the compatibility of pf-200ig with beryllium .three beryllium plates with dimensions 5.3 cm × 2.5 cm × 0.3 cm were each coated on one side with a corrosion - resistant primer ( br127 ) and immersed in volumes of de - ionized water , pf-200ig , and air ( the control sample ) , respectively .following a period of 3 months at ambient temperature , pressure , and light exposure , there were no visibly discernible changes ; the samples were subsequently placed near cesr where they absorbed a radiation dose of krad . with still no visually apparent deterioration in the exposed or primer - coated sides of the beryllium plates , we used a scanning electron microscope to examine the sample surfaces , micrographs of which are depicted in figure [ fig : sem ] .we observed that the sample that was in contact with pf-200ig ( figure [ fig : sem](c ) ) exhibited less surface modification than the sample that was in contact with de - ionized water ( figure [ fig : sem](b ) ) , as compared to the control sample ( figure [ fig : sem](a ) ) .based on these observations , we concluded that pf-200ig was at least as compatible with beryllium as de - ionized water . in addition to our studies with beryllium , we tested the compatibility of pf-200ig with other materials : cesium iodide , buna and viton elastomers , push - lok rubber hose , brass and stainless - steel fittings , copper , aluminium , plastic , and both polypropylene and nylon tubing .our tests consisted of recording the masses and dimensions of the material samples and immersing them in containers of pf-200ig under ambient conditions for a period of weeks , whereupon we measured the mass changes and the fractions of linear swell . for each of the materials tested , the observed changes in mass and size were negligible .the mechanical design of the coolant - control platforms was driven primarily by the following criteria : reliability and serviceability ; modularity ; elevation and footprint ; mobility ; and ease of access to gauges , valves , filters , and reservoirs .the limited space in the `` pit '' beneath the cleo detector , an approximately 16 m area of cm high crawl space , dictated a footprint of 76 cm × 76 cm and an elevation of cm for each of the coolant - control platforms , including the rich active - manifold platform .each member of the farm was constructed on a rubber - footed skid of 0.6 cm thick aluminium , enabling a degree of mobility and easy access to platform facilities .figure [ fig : platform_schematic ] depicts a schematic of the coolant flow circuit on board one of the platforms in the farm .
upon leaving the reservoir due to suction from the pump , pf-200ig coolant reached the pump inlet by way of either a branch through the hot side of a heat exchanger or via one of two bypass shunts .the cold side of the heat exchanger , the primary heat sink for the platform consisting of a 20-brazed - plate cetetherm ( cetetherm ab , ronneby , sweden ) honeycomb unit , was connected to a closed water circuit driven by a chiller system ( refer to figure [ fig : farm_schematic ] ) .the amount of flow through the heat exchanger was regulated by a proportioning valve powered by a computer - controlled step motor ; the fraction that the valve was open constituted the control variable in the feedback system .unregulated 12 vdc power supplies were mounted on each platform to power the proportioning - valve step motors .the two heat - exchanger shunt branches each contained a ball valve and a normally - open solenoid valve .one design goal of the bypass system was to guarantee that there be flow to the detector subsystem at all times , assuming that both solenoid valves could never be simultaneously closed while the proportioning valve was 0% open .dual bypass branches were implemented in order to expand the dynamic range of the system using binary flow logic . for a given fixed fractional flow rate in the bypass branches , large variations in the subsystem's heat load meant that the proportioning valve alone would not provide enough compensation to maintain a stable set - point temperature . using solenoid valves to switch the two bypass branches , each with a fixed ball - valve setting , resulted in four discrete bypass flow configurations . with ball - valve settings appropriately chosen , the proportioning valve could provide full analog coverage for continuous flow changes intermediate to the four binary combinations of the two solenoid valves .we note that an alternative to this scheme to maximize the dynamic range would be simply to use a second proportioning valve in the heat - exchanger bypass shunt ; we did not adopt this approach for reasons of reliability and economy .once through the outlet port of the pump , the pf-200ig could either bypass back through the filter and into the reservoir or depart the platform for transport to the detector subsystem , as indicated in figure [ fig : platform_schematic ] .the global rate of pf-200ig flow leaving the platform was configured by partially closing the ball valve on the bypass branch leading back to the reservoir .
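the coarse / fine idea behind the dual bypass can be made concrete with a small sketch : pick the binary solenoid combination that brings the requested heat - exchanger flow within reach , then trim with the proportioning valve . this is purely illustrative logic : the branch flow fractions are invented numbers , the true split of course depends on the hydraulic resistances of the branches , and in practice the platforms never needed to switch the solenoids during running ( as noted later in the text ) .

```python
def choose_bypass_configuration(requested_hx_fraction, branch_fractions=(0.15, 0.30)):
    """pick the solenoid states (coarse, binary) and the proportioning-valve
    opening (fine, analog) that best realize a requested fraction of the total
    platform flow through the heat exchanger.

    purely illustrative: the bypass branch fractions fixed by the ball valves
    are invented, the flows are assumed to combine linearly, and in practice
    the platforms never needed to switch the solenoids (see text)."""
    best = None
    for s1_closed in (False, True):
        for s2_closed in (False, True):
            bypass = ((0.0 if s1_closed else branch_fractions[0]) +
                      (0.0 if s2_closed else branch_fractions[1]))
            hx_reach = 1.0 - bypass          # flow available to the heat exchanger
            if hx_reach <= 0.0:
                continue
            valve = min(requested_hx_fraction / hx_reach, 1.0)
            error = abs(valve * hx_reach - requested_hx_fraction)
            if best is None or error < best[0]:
                best = (error, s1_closed, s2_closed, valve)
    _, s1_closed, s2_closed, valve = best
    return {"solenoid_1_closed": s1_closed, "solenoid_2_closed": s2_closed,
            "proportioning_valve_fraction": round(valve, 3)}

print(choose_bypass_configuration(0.5))
```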
maximizing the fraction of flow through the bypass branch leading back to the reservoir greatly assisted the feedback process by pre - cooling the temperature of the reservoir contents to near the set - point value .the plumbing on the coolant - control platforms consisted primarily of brass 3/4 npt threaded pipe fittings connected using approximately 79 nipples ( sealed with loctite pst 567 ) per platform .each platform also had ball valves , elbows , couplings , tees , and several bushings .reservoirs were constructed from stainless steel , and the filter housings , also stainless steel , contained 75 μm polypropylene filters .the self - priming centrifugal pumps had stainless - steel casings , 750 w ( 2.2 kw for the drift - chamber platform ) single - phase electric motors , and a 34 m ( 45 m for the drift - chamber platform ) water head rating .our determinations of these pump - head specifications took into account the flow requirements through the detector subsystems , the reduced density of pf-200ig ( refer to table [ tab : props ] ) , the diameters and elevations of the plumbing runs , and the need for adequate flow in the bypass branches for optimum temperature control of the heat - transfer fluid . throughout the plumbing of each platform , 12 stainless - steel unions were used to aid in the servicing of different components .an additional emergency bypass shunt containing a 690 kpa pressure relief valve linked the pump outlet port with the reservoir in the event of an overpressure situation .refer to figure [ fig : platform_drawing ] for an assembly drawing of a typical coolant - control platform . at the maximum 23 l / min flow rates required by the design , the vapour pressure of pf-200ig ( refer to table [ tab : props ] ) was low enough to ensure that the cavitation number was well in excess of the incipient cavitation value .no pressurization of the coolant circuits beyond ambient was therefore deemed necessary to avoid cavitation , permitting us to vent any of the reservoirs to the atmosphere during operation . in practice , we only vented the reservoir in the case of the beryllium beampipe cooling platform , where the differential pressure limit on the thin walls of the beryllium cooling channels was required not to exceed kpa ( refer to section [ sect : special ] ) , a criterion that we also explicitly imposed on the system pressure near the outlet port of the pump on the platform proper . in lieu of the fixed ( 690 kpa ) pressure relief valve , we installed an adjustable unit with the range 103 - 165 kpa .as an extra precaution , the beampipe coolant - control platform also had a graphite rupture disc rated at 207 kpa .
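the cavitation - number argument is easy to reproduce as a back - of - the - envelope check . in the sketch below every fluid property and pipe dimension is an assumed value chosen only to illustrate the order of magnitude ; the actual pf-200ig properties are those of the paper's table , which is not reproduced here .

```python
import math

# illustrative cavitation-number check for the pump suction line.  every
# fluid property and pipe dimension below is an assumed value; the actual
# pf-200ig properties are in table [tab:props], which is not reproduced here.
flow_lpm = 23.0        # maximum design flow rate, l/min (from the text)
pipe_id_m = 0.021      # assumed inner diameter of a 3/4 inch line, m
rho = 760.0            # assumed pf-200ig density, kg/m^3
p_ambient = 101.3e3    # vented reservoir, so ambient pressure, pa
p_vapour = 1.0e3       # assumed pf-200ig vapour pressure near 300 k, pa

q = flow_lpm / 1000.0 / 60.0                           # volumetric flow, m^3/s
v = q / (math.pi * (pipe_id_m / 2.0) ** 2)             # mean flow speed, m/s
sigma = (p_ambient - p_vapour) / (0.5 * rho * v ** 2)  # cavitation number
print(f"flow speed ~ {v:.2f} m/s, cavitation number ~ {sigma:.0f}")
# a value of order 10^2 is far above incipient-cavitation values of order
# unity, consistent with running the circuit unpressurized.
```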
during prototype testing , we observed that this rupture disc would rupture at what appeared to be lower than the rated pressure , as indicated by a glycerin - damped analog visual gauge .we ascribed this to transient pressure pulses from the pump's impeller and relief - valve oscillations ; we subsequently installed a water - hammer suppressor in an effort to minimize the effect of these pulses .figure [ fig : farm_schematic ] depicts the main elements of the coolant - control farm , located in the pit beneath the cleo detector , and the flow of cooling fluids ( air , liquid water , and pf-200ig hydrocarbon ) between them .wherever possible , brass quick - disconnect fittings were used to link up the liquid connections , which were insulated with ap armaflex .a closed - circuit water chiller system provided water near a temperature of 282 k to the cold sides of the heat exchangers residing on each of the coolant - control platforms ( refer to figure [ fig : platform_schematic ] ) . in order to assist the water chillers , a heat exchanger to 280 k building water was inserted to pre - cool the returning coolant .three air handlers , their compressor coils cooled using 280 k building water , maintained a flow of cool dry air across the farm platforms . also shown in figure [ fig : farm_schematic ] is the rich active - manifold platform , a dedicated module that consisted of two one - to - five manifolds , the supply manifold outfitted with five computer - controlled proportioning valves and the return manifold instrumented with five flow meters and transmitters .on or near the coolant - control platforms , the relatively harsh environment due to dust , water , hydrocarbons , emi from pump and fan motors , and vibration demanded a design that made use of industrial sensor technologies . in order to minimize the amount of delicate electronics in the cleo pit , the analog - to - digital conversion of sensor signals took place in the control - system crate , described in section [ sect : control ] .standard industrial 4 - 20 ma transmitter technology was used to condition and send analog sensor signals from the farm platforms to the control system .current transmitters had the advantages of greater noise immunity and an ability to send analog signals over relatively long distances . in addition , the remote sensors in the cleo pit could be powered ( usually using .5 ma of excitation current ) and read out using a single shielded two - wire current loop , with no local power supply requirements . because of the possibility of harsh environmental conditions in the cleo pit , all sensors in the design were required to have sealed weatherproof enclosures that fulfilled the specifications of the nema-4x standard .
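as a simple illustration of how such a two - wire current - loop reading becomes an engineering value in the control crate , the sketch below applies the usual linear 4 - 20 ma scaling . the calibrated range end - points in the example are assumed for illustration and are not the ranges of the actual transmitters .

```python
def loop_current_to_engineering(current_ma, lo, hi, i_min=4.0, i_max=20.0):
    """convert a two-wire 4-20 ma current-loop reading into engineering units.

    `lo` and `hi` are the transmitter's calibrated range end-points; the
    example span below (an rtd transmitter assumed to cover 273-323 k) is
    illustrative and not taken from the actual hardware."""
    if current_ma < i_min - 0.5:
        raise ValueError("loop current below ~3.5 ma: open loop or failed sensor")
    fraction = (current_ma - i_min) / (i_max - i_min)
    return lo + fraction * (hi - lo)

print(loop_current_to_engineering(11.2, lo=273.0, hi=323.0))   # -> 295.5 k
```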
as indicated in figure[ fig : platform_schematic ] , each coolant - control platform was instrumented with two temperature sensors , one supply and one return , to provide a measurement of the temperature rise due to the combination of subsystem power load , frictional heating of the moving coolant , and ambient heat transfer .the coolant supply temperature measurement was particularly critical since it served as the process variable in each of the main feedback control systems .this required the use of temperature sensors that were accurate and stable to less than .1 k with low noise and good linearity characteristics .we investigated four temperature measurement technologies : thermocouples , resistance temperature detectors ( rtds ) , thermistors , and solid - state integrated circuits .thermistors were too unreliable and nonlinear .we used solid - state devices , based on the ad592 ( analog devices , inc . ) integrated circuit , in a prototype coolant - control platform , finding them to be attractive due to their relatively good linearity and well - defined characteristics , but difficult to insert into the coolant flow stream reliably and necessitating the design and production of a custom , weatherproof , diode - protected , 4 ma transmitter circuit ( using , _e.g. _ , the ad693 device ) .we concluded that the ad592 integrated - circuit sensors were better suited for measuring temperatures at the surfaces of solid objects rather than inside flowing fluids .we disfavoured a thermocouple solution because of the costs of the cold - junction compensation and linearization circuitry needed to achieve the desired accuracy .our final design used tip - sensitive 100 ( industry standard iec 751 ) platinum rtds manufactured by minco products , inc . ; once combined with some relatively simple off - the - shelf linearization and 4 ma transmitter circuitry , rtds were more linear , accurate , and sensitive than thermocouple implementations of comparable cost .a weatherproof 316 stainless - steel thermowell assembly safely housed the rtd and the front - end electronics , allowed for the effective insertion of the rtd directly into the flow of coolant , and permitted the replacement of the platinum element without having to breach the volume of heat - transfer fluid .in addition to the pairs of temperature sensors instrumenting each of the farm platforms ( as shown in figure [ fig : platform_schematic ] ) , the rich system had a dedicated array of 32 tip - sensitive 100 platinum rtds distributed azimuthally and on both the electron and positron sides of the detector in coolant - return manifolds located near the rich modules .for these rtds , ` hockey - puck ' style ma transmitters were mounted together in a single array located atop the cleo detector ; shielded wire leads , 18.3 m in length , connected the rtds to these dedicated transmitters , which were calibrated to compensate for the net lead resistance .a custom - built multiplexer circuit facilitated addressed read - out of any one of the 32 temperature transmitters at a time . each coolant - control platform had visual pressure and flow - rate gauges to aid in manual adjustments to the flow configuration . in addition , every platform transmitted two pressure measurements , _i.e. 
_ , the pressure drop through the cleo subsystem and its plumbing network ( refer to figure [ fig : platform_schematic ] ) , to the control crate using two 0 kpa series 634e pressure sensors from dwyer instruments , inc .flow rates of heat - transfer fluid were transmitted from density - compensated vane - type flow sensors manufactured by universal flow monitors , inc .level transmitters in the coolant reservoirs were manufactured by omega engineering , inc . , and consisted of a magnet that was mounted inside a stainless - steel float that tracked up and down an insertion stem while setting a series of reed switches in a voltage divider resistor network .the accuracies of the pressure , flow , and level sensors and transmitters used were kpa , l / min , and .3 cm , respectively .the heart of the coolant - control system consisted of a small logic controller ( slc ) module , a member of the slc 500 family of programmable controllers manufactured by allen - bradley company , inc ., that resided in a dedicated 13-slot chassis with an integrated 1 a ( 24 vdc ) power supply module .the slc 5/04 module used was capable of up to 4096 inputs plus 4096 outputs and had a memory of 32 k words .the remainder of the process - control system consisted of three other module varieties mounted in the chassis : a 32-channel current - sourcing digital dc output module ( 1746-ob32 ) , three 4-channel 0 ma analog output modules ( 1746-no4i ) with 14-bit resolution , and eight 4-channel ma ( vdc ) analog input modules ( 1746-ni4 ) with 16-bit resolution .the 13-slot allen - bradley chassis was mounted inside a crate enclosure positioned in the experimental hall outside the main radiation area . also residing in the enclosure was a 24 vdc/12 a regulated power supply used to energize the entire set of ma sensors remotely deployed both on the coolant farm platforms underneath the cleo detector ( refer to figure [ fig : farm_schematic ] ) , m away , and in the array of 32 rich temperature transmitters on top of the detector , m away .terminal blocks mounted on a din rail in the enclosure served to interconnect the 24 vdc power supply , the individual sensors , and the appropriate terminals of the analog input modules in the allen - bradley chassis . in the case of the 32 rich transmitter signals , since the array was multiplexed , only a single analog input channel was required .the slc clocked through the 32 addresses in 0.25 s intervals by using five of the digital output channels in the 1746-ob32 module to switch the 24 vdc of the main sensor power supply .the slc processor used a ladder - logic programming language in which subroutines were organized into ladders , their rungs each acting as if - then conditional statements . on the left side of every rung one or moreconditions were defined ; the corresponding right side executed one or more actions provided that all the conditions were met for the given rung .editing of the ladder - logic code was achieved with rslogix 500 ( rockwell automation ) programming software running on a networked intel - based computer located in a clean computing environment and connected to the allen - bradley control crate by a 30 m rs-232 serial connection .the ladder - logic code was compiled with the rslogix 500 software and was communicated to the slc 5/04 module using a dedicated utility ( rslinx ) . 
in a similar manner , this serial communication configuration had the capability to allow online edits , parameter adjustments , and diagnostic readout of inputs , outputs , and internal memory structures _ during _ programme execution in the slc .the allen - bradley slc control code consisted of rungs organized into a main ladder that cycled through a series of calls to 14 subroutine ladders .for each of the beampipe , drift - chamber , rich , and silicon subsystems there were subroutines for platform sensor data acquisition , interlock decisions and output , and process control .the rich system had two extra subroutines , one to read out the multiplexed rich temperature signals and one to perform process - control duties and flow - sensor data acquisition for the rich active - manifold platform . during every cycle of control code execution ,an interlock decision was taken for each of the farm platforms .the four criteria forming this decision consisted of a minimum coolant flow rate , minimum and maximum set - point temperatures , and a minimum level of heat - transfer fluid in the reservoir ( for leak detection ) . supply andreturn pressure criteria were not included in the interlock decisions . for a given farm platform , if these conditions were all satisfied , the interlock ladder logic would use a channel in the 1746-ob32 output module to energize a normally open relay switch mounted to the din rail near the terminal blocks in the chassis enclosure .the cooling - interlock relays were connected to a higher - level interlock crate that could switch off power to the subsystem electronics crates in the event of a failure .the control system was designed such that there would also be an interlock breach if the allen - bradley slc had any interruption in power .specific to the drift - chamber coolant - control platform , an additional interlock was used to reduce the possibility of cooling the chamber end plates unevenly , a situation that potentially posed deleterious consequences to the chamber s mechanical integrity .if part or all of the drift - chamber electronics underwent an unexpected loss of power , the power to the centrifugal pump on the coolant - control platform was switched off by means of a relay on a 10-minute delay . in turn, this would render the minimum - flow - rate criterion unsatisfied , thereby breaking the cooling interlock and removing power from all of the drift - chamber electronics crates . at the core of the process - control logic in each closed feedback loopwas a proportional integral derivative , or pid , instruction tuned to maintain a desired setting of an input process variable by computing appropriate real - time adjustments to an output control variable . for each coolant - control platform , the process variable consisted of an input from the platinum rtd sensor measuring the temperature of the heat - transfer fluid supplied to the detector subsystem ; the control variable consisted of the fractional opening of the heat - exchanger proportioning valve , as set by the step motor positioned using 4 ma signals from the 1746-no4i analog output modules ( refer to figure [ fig : platform_schematic ] ) . 
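to make the ladder - logic description concrete , the sketch below expresses one platform interlock `` rung '' and one deadband pid update as ordinary code . the structure follows the text ( four interlock criteria ; supply temperature as process variable ; proportioning - valve fraction as control variable ) , but every threshold , gain , deadband and sample time in it is a placeholder rather than a value taken from the real system .

```python
def interlock_ok(flow_lpm, supply_temp_k, reservoir_level_cm, limits):
    """one interlock 'rung': all four conditions on the left of the rung must
    hold for the interlock relay (the action on the right) to stay energized.
    threshold values in `limits` are placeholders, not the real settings."""
    return (flow_lpm           >= limits["min_flow_lpm"] and
            supply_temp_k      >= limits["min_temp_k"]   and
            supply_temp_k      <= limits["max_temp_k"]   and
            reservoir_level_cm >= limits["min_level_cm"])


def pid_step(setpoint_k, supply_temp_k, state, kp=0.2, ki=0.02, kd=0.0,
             deadband_k=0.1, dt_s=1.0):
    """one deadband pid update: returns the new proportioning-valve fraction.
    `state` carries the integral, previous error and last output between calls.
    gains, deadband and sample time are illustrative placeholders."""
    error = supply_temp_k - setpoint_k           # warm coolant -> positive error
    if abs(error) <= deadband_k:                 # inside the deadband:
        return state["output"]                   # leave the valve where it is
    state["integral"] += error * dt_s
    derivative = (error - state["prev_error"]) / dt_s
    state["prev_error"] = error
    raw = kp * error + ki * state["integral"] + kd * derivative
    state["output"] = min(max(0.5 + raw, 0.0), 1.0)   # clamp to 0-100 % open
    return state["output"]


limits = {"min_flow_lpm": 5.0, "min_temp_k": 285.0,
          "max_temp_k": 300.0, "min_level_cm": 10.0}
state = {"integral": 0.0, "prev_error": 0.0, "output": 0.5}
relay = interlock_ok(18.2, 291.4, 22.5, limits)   # True: keep subsystem power on
valve = pid_step(291.0, 291.4, state)             # coolant warm -> open the valve
print(relay, round(valve, 3))
```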
in order to increase step - motor life ,a deadband of 0.1 k was used in the pid algorithm ; in this way , the control variable was left unchanged once the process variable passed through the set point and until it was different from the set point by the deadband amount .for reasons of reliability and flexibility , the diagnostic and process - control systems for the cleo coolant - control farm were kept relatively independent through the use of a two - tiered configuration .notwithstanding , the same networked computer that was used to upload the compiled ladder - logic code into the allen - bradley slc , as described in section [ sect : control ] , served as the run - time implementation platform for the diagnostic software .a graphical programming language was used to develop a user interface , also graphical , to the diagnostic system parameters .the software was based on virtual instruments ( vis ) in the labview environment provided by national instruments corporation . during normal farm operation , _i.e. _ , when the slc was in a ` standalone ' mode and assumed sole control of the farm , the rs-232 serial connection between the slc and the networked computer ( refer to section [ sect : control ] ) could be used by the labview software to read out regions of the slc memory for diagnostic purposes .code from proprietary driver and vi libraries ( highwayview , from seg , watertown ma , usa ) transacted the data between the slc and the labview software .although the diagnostic interface primarily read data from the slc for the purpose of displaying them to users , it did provide password - privileged cooling experts with the ability to adjust the platform set - point temperatures maintained by the control system .such set - point changes were effected , again using a highwayview vi , by writing data into slc memory via the serial interface .users of the local area network could therefore `` window in '' to the labview computer to view real - time diagnostics and , if necessary , make limited adjustments to the control parameters . diagnostic information was made available to other subsystem experts and data - acquisition personnel by hypertext transfer - protocol ( http ) servers running on the labview computer . 
for each platform in the farm , there was an http server capable of delivering streaming quasi - real - time images of the main labview front panel to a client web browser .an example of one of these web - based front - panel displays is given in figure [ fig : labviewfp_subsystem ] .other web accessible displays included special diagnostics for the rich temperature - sensor array ( section [ sect : sensors ] ) , rich active - manifold parameters ( section [ sect : mech ] ) , and virtual strip charts giving a graphical representation of the 12-hour history of some of the more important system parameters .in addition , every minute , the labview machinery updated a detailed text - only html file containing an expert diagnostics digest that listed the status of all farm parameters .this diagnostic file was written to public web space and was accessible remotely using the www .in particular , the digest file format lent itself well to remote small - screened portable devices like personal digital assistants using mobile infrastructure software such as that available from avantgo , inc .long - term archiving of the coolant - control farm operating history was achieved by taking half - hourly snapshots of the expert diagnostics digest file , compressing them , and storing them in a database .maintenance of the farm platforms proved to be minimal , consisting of weekly visual inspections and annual cleanings and polypropylene filter replacements .some minor coolant leaks , requiring occasional top ups , were detected by the observation of changes in reservoir level readings with time . in the beampipe and silicon - detector systems ,which each had longer runs of push - lok rubber hose connecting the platforms to the subsystems , the pf-200ig acquired an orange colour , contrary to earlier compatibility studies ( refer to section [ sect : becompat ] ) .we attributed this discolouration to the dissolution of a powdered protectant with which the inner surfaces of the push - lok rubber had been treated ; although no degradation of the hose was observed , we recommend using polypropylene or nylon tubing in future applications . operationally , the dynamic ranges of the proportioning valves alone precluded a need to switch the solenoids in the heat - exchanger bypass shunts in order to maintain pid control . with fixed ball - valve openings in the two shunt branches and a deadband setting of .1 k in all the pid loops , supply temperatures were asserted with a stability of .2 k with respect to the set - point values for the duration of the cleo iii datataking period .this degree of control was exploited during cleo iii commissioning in pedestal sensitivity studies of the cleo beampipe pin - diode radiation monitors , for which the beampipe coolant - control platform was employed to vary the coolant supply temperature between 293 k and 299 k. the cleo coolant - control farm commenced operation in november 1999 with the beginning of the cleo iii commissioning period .as described in section [ sect : special ] , prior to march 2000 the silicon farm platform was deployed remotely and was successfully operated and monitored by the central farm control and diagnostics systems , respectively . on 16 march 2000 , a building cooling - water pipe burst , flooding the cleo pit and submerging the farm platforms under cm of water ; all sensors , power supplies , and pumps , after drying and cleaning , survived the event . 
on a few other occasions ,many of the farm s interlocks were exercised in power failures or due to temperature variations in the 290 k water supply that caused trips in the chiller compressors ( refer to figure [ fig : farm_schematic ] ) .the allen - bradley slc 5/04 module ( described in section [ sect : control ] ) , which was equipped with a battery back - up system , was rendered immune to brief power glitches and , it was found , could be relied upon as though it were a firmware device .the cleo iii detector began taking physics quality data in july 2000 for a run that ended in june 2001 , with a scheduled three - week down in september 2000 .the coolant - control farm induced no cleo or cesr downtime in this period , during which a time - integrated luminosity of fb was accumulated .we have described a novel approach to active particle - detector cooling that is based upon a farm of modular coolant - control platforms charged with the hydrocarbon solvent pf-200ig , uniquely used as a heat - transfer fluid . during the cleo iii datataking run , the farm provided reliable cooling support to the rich detector , the drift chamber , the silicon vertex detector , and the beryllium beam pipe , with a temperature stability that exceeded design specifications .the cleo hydrocarbon coolant farm will see continued service in an upcoming programme of operation to explore the resonances , charm physics , and quantum chromodynamics ( cesr - c / cleo - c ) .as mentioned in section [ sect : special ] , the drift - chamber platform , by virtue of its greater flow capacity , is being upgraded to provide additional cooling for passive permanent - magnet ( ndfeb ) quadrupole elements that are being installed prior to the resonance running period .other aspects of the farm will remain the same .we have shown that a centrally controlled and monitored farm of generic active coolant - control platforms , running the liquid hydrocarbon solvent pf-200ig as a heat - transfer fluid , can provide independent and regulated heat removal from several different subsystems in a particle - physics detector .aspects of this design are applicable to future high - energy physics apparatus where flexibility , minimal maintenance , and the ability to monitor and operate detector and accelerator systems remotely will be progressively important .we would like to thank k. powers and g. trutt of the wilson synchrotron laboratory , cornell university , for their excellent technical support .this work was supported by the u.s .national science foundation , the u.s .department of energy , and the natural sciences and engineering research council of canada .p - t technologies , a division of lps laboratories , 4647 hugh howell rd ., p. o. box 105052 , tucker , georgia 30085 - 5052 , usa ; material safety data sheet and private communication .note that the substance pf-200ig is a discontinued product ; possible replacement heat - transfer fluids are being considered , including pf-145hp , also manufactured by p - t technologies .cleo collaboration , cleo - c and cesr - c taskforce , `` cleo - c and cesr - c : a new frontier of weak and strong interactions '' , cornell report no .clns 01/1742 , 2001 ( unpublished ) ; l.k .gibbons , [ hep - ex/0107079 ] . 
| we describe a novel approach to particle - detector cooling in which a modular farm of active coolant - control platforms provides independent and regulated heat removal from four recently upgraded subsystems of the cleo detector : the ring - imaging erenkov detector , the drift chamber , the silicon vertex detector , and the beryllium beam pipe . we report on several aspects of the system : the suitability of using the aliphatic - hydrocarbon solvent pf-200ig as a heat - transfer fluid , the sensor elements and the mechanical design of the farm platforms , a control system that is founded upon a commercial programmable logic controller employed in industrial process - control applications , and a diagnostic system based on virtual instrumentation . we summarize the system s performance and point out the potential application of the design to future high - energy physics apparatus . clns 01/1754 , , , , , , , , , , , , , and . cleo , detector , cooling , process control , hydrocarbon heat - transfer fluids 29.90.+r , 07.07. , 07.05.dz , 65.20.+w |
there has been high interest in recent years in the development and analysis of adaptive finite element methods ( afem ) for approximating eigenvalues and eigenfunctions of elliptic operators .we consider the following model eigenvalue problem : find such that and here , , is a polyhedral domain , and .there is then a sequence of eigenvalues and corresponding -orthonormal eigenfunctions satisfying .given a nested sequence of adaptively generated simplicial meshes with associated finite element spaces ( ) , the corresponding discrete eigenvalue problem is : find such that and we seek to approximate an eigenvalue cluster and associated invariant subspace .our index set is given by for some , .the corresponding discrete sets are and .afem for eigenvalues are typically based on the standard loop to the finite element error , afem employs local error indicators , where is a standard residual error indicator for the residual .let be a given parameter . is used in to select a smallest set satisfying the drfler ( bulk ) criterion proofs of convergence and optimality of afem for approximating have been given in several papers .the first proof of optimality of afem for controlling simple eigenvalues and eigenfunctions was given in .other papers concerning convergence of afem for simple eigenvalues include .the paper contains a proof of optimal convergence of standard afem for an eigenvalue with multiplicity greater than one , while proves a similar result for clustered eigenvalues .these papers mirror afem convergence theory for source problems ( cf . ) in that they first prove that the afem contracts at each step .an optimal convergence rate dependent on membership of the eigenfunctions in suitable approximation classes is then obtained by standard methods .all require that the maximum mesh diameter in the initial mesh be sufficiently small to suitably resolve the nonlinearity of the problem .the behavior of afem for eigenvalues in the pre - asymptotic regime was studied in , where the authors proved plain convergence results ( with no rates ) starting from any initial mesh .these results guarantee convergence of afem for general elliptic eigenproblems to some eigenpair , but not generically to the correct pair . the works of dai et al . and gallistl are most relevant to ours .in the authors establish convergence of an afem for a multiple eigenvalue of a symmetric second - order linear elliptic operator for arbitrary - degree finite element spaces .approximation of eigenclusters of the laplacian using piecewise linear elements is considered in . .this leaves open the question of convergence results for afem for eigenvalue clusters using lagrange spaces of arbitrary polynomial degree .we fill this gap by showing that standard afem for eigenvalue clusters using polynomials of arbitrary degree also converge optimally . while we consider only conforming simplicial meshes , we also provide a key step in extending such analysis to quadrilateral elements of any degree , meshes with hanging nodes , and discontinuous galerkin methods ; cf . for analysis of the source problem .the analysis of additionally does not immediately apply in the case of non - constant diffusion coefficients .in contrast , our results extend as in to general symmetric second - order linear elliptic operators ( see remark [ rem : genops ] ) .we briefly explain the difficulty which we resolve . a key step in standard afem convergence proofs establishing a certain continuity between error indicators on adjacent mesh levels . 
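as a side note on the marking step introduced above : the dörfler ( bulk ) criterion asks for a smallest set of elements whose squared indicators account for at least a fraction of the total , and a greedy sort realizes it . the sketch below is a generic illustration , with `eta` standing for the local indicators and `theta` for the bulk parameter ; it is not tied to any particular finite element code .

```python
import numpy as np

def doerfler_mark(eta, theta):
    """Return indices of a (near-)minimal element set M with
    sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2 (greedy realization)."""
    eta2 = np.asarray(eta, dtype=float) ** 2
    order = np.argsort(eta2)[::-1]            # largest indicators first
    target = theta * eta2.sum()
    cumulative = np.cumsum(eta2[order])
    k = int(np.searchsorted(cumulative, target)) + 1
    return order[:k]

# typical use inside the solve -> estimate -> mark -> refine loop:
# marked = doerfler_mark(local_indicators, theta); mesh = refine(mesh, marked)
```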
in the case of multiple or clustered eigenvalues ,the ordering and alignment of the discrete eigenfunctions may change between mesh levels even on fine meshes .thus and may not approximate the same eigenpair , making the comparison between and used in standard afem convergence proofs irrelevant .a critical contribution was made in , where this problem was circumvented by first analyzing a theoretical ( non - computable ) afem based on error indicators aligned to the _ fixed _ continuous cluster . may be viewed as an indicator for a `` pseudo - residual '' , where and are projections onto defined later .given ] , ] , and . herethese quantities vary with ( for and ) or lying on the mesh skeleton ( for and ) , but this fact has little significance for the time being .we also define the matrix ] .a standard result from linear algebra is that and are isospectral for square , so is isospectral with . therefore we have that , and both quantities are equal to the maximum eigenvalue of .the matrix was analyzed extensively in the proof of lemma 5.1 of .in particular , is nonsingular under the condition , and by ( 5.2 ) and following of , we have gershgorin s theorem thus gives that the eigenvalues of satisfy thus , which is the first inequality in .the invertibility of guarantees the invertibility of .computing as above yields .because is positive and diagonalizable , we have from that , thus completing the proof of the second inequality in . in is shown for piecewise linear elements that under the assumption , the first bound in holds with no restriction on .our proof thus yields an improved cluster - independent bound under the fineness assumption .also gives an improved bound for in terms of .significantly loosening the restriction appears to be substantially more difficult . is invertible for , but the obtained upper bound for degenerates as .the condition is not substantially weaker than , and control over the equivalency constants is lost as , so we retain from . in lemma 3.3 of it is shown that if and is of arbitrary polynomial degree , where as and is similar to . in our analysisthe obtained bounds are independent of and the double sum term on the right is completely absent .the absence of the double sum term on the right is important , as ; cf .remark 9 ( f ) of .[ rem : genops ] in afem optimality is analyzed for eigenvalues of arbitrary second - order symmetric operators . letting , our equivalence analysis extends immediately to the general case .we consider only , because analyzing optimality in the general case involves consideration of data oscillation , which would make our presentation much more involved ; cf .let given lemma [ lem : main ] , the analysis of mostly holds verbatim for higher - order finite element spaces .in particular , the following counterpart to theorem 3.1 of holds .[ t : opt ] assume that the bulk parameter and initial mesh size are sufficiently small .assume that for some , . then there exists depending possibly on but independent of other essential quantities such that several remarks are in order . in optimalityis expressed with respect to the class where is the -projection onto , the space of piecewise polynomials of degree subordinate to .employing this class requires proving equivalence between and , that is , approximation by functions in is equivalent to approximation by gradients of functions in . 
in ,the needed result is obtained by citing , in which the equivalence is shown up to data oscillation terms .however , the necessary equivalence has recently been shown to hold on arbitrary meshes without data oscillation terms in , which could simplify the proof of proposition 3.1 of .[ r : theta ] the dependence of and on various quantities is given in detail in .the threshold condition for depends on the second equivalence constant in .thus requires ^{-1} ] with independent of essential quantities . for our particular computations ,this yields : thus following precisely the theory of would lead to a thousand - fold reduction in when moving from our first to our second computational example .this would potentially require a massive increase in the number of afem iterations required in order to achieve a given error reduction .we have demonstrated theoretically and confirmed computationally that this increase in computational expense is unnecessary . | proofs of convergence of adaptive finite element methods for approximating eigenvalues and eigenfunctions of linear elliptic problems have been given in a several recent papers . a key step in establishing such results for multiple and clustered eigenvalues was provided by dai et . al . in , who proved convergence and optimality of afem for eigenvalues of multiplicity greater than one . there it was shown that a theoretical ( non - computable ) error estimator for which standard convergence proofs apply is equivalent to a standard computable estimator on sufficiently fine grids . in , gallistl used a similar tool to prove that a standard adaptive fem for controlling eigenvalue clusters for the laplacian using continuous piecewise linear finite element spaces converges with optimal rate . when considering either higher - order finite element spaces or non - constant diffusion coefficients , however , the arguments of and do not yield equivalence of the practical and theoretical estimators for clustered eigenvalues . in this note we provide this missing key step , thus showing that standard adaptive fem for clustered eigenvalues employing elements of arbitrary polynomial degree converge with optimal rate . we additionally establish that a key user - defined input parameter in the afem , the bulk marking parameter , may be chosen entirely independently of the properties of the target eigenvalue cluster . all of these results assume a fineness condition on the initial mesh in order to ensure that the nonlinearity is sufficiently resolved . eigenvalue problems , spectral computations , a posteriori error estimates , adaptivity , optimality 65n12 , 65n15 , 65n25 , 65n30 |
the methods of statistical physics have shown to be very fruitful in physics , but in the last decades they also have become increasingly important in interdisciplinary research . for example , the _ master equation _ has found many applications in thermodynamics , chemical kinetics , laser theory and biology .moreover , weidlich and haag have successfully introduced it for the description of social processes like opinion formation , migration , agglomeration and settlement processes .another kind of wide - spread equations are the boltzmann equations , which have been developed for the description of the kinetics of gases and of chemical reactions .however , boltzmann-_like equations _ play also an important role for quantitative models in the social sciences : it turns out ( cf .[ s2.2 ] ) that the _ logistic equation _ for the description of limited growth processes , the socalled _ gravity model _ for spatial exchange processes , and the _ game dynamical equations _ modelling competition and cooperation processes are special cases of boltzmann - like equations .moreover , boltzmann - like models have recently been suggested for avoidance processes of pedestrians and for attitude formation by direct pair interactions of individuals occuring in discussions . in this paperwe shall show that boltzmann - like equations and boltzmann - fokker - planck equations are suited as a foundation of quantitative behavioral models .for this purpose , we shall proceed in the following way : in section [ s2 ] the boltzmann - like equations will be introduced and applied to the description of behavioral changes .the model includes _ spontaneous _ ( or _ externally induced _ ) behavioral changes and behavioral changes by _ pair interactions _ of individuals .these changes are described by _transition rates_. they reflect the results of mental and psychical processes , which could be simulated with help of osgood and tannenbaum s _ congruity principle _ , heider s _ balance theory _ or festingers _ dissonance theory _ . however , it is sufficient for our model to determine the transition rates empirically ( sect .the _ ansatz _ used for the transition rates distinguishes _ imitative _ and _ avoidance processes _ , and assumes _ utility maximization _ of the individuals ( sect .[ s2.1 ] ) .it is shown , that the resulting boltzmann - like model for imitative processes implies as special cases many generally accepted theoretical approaches in the social sciences ( sect .[ s2.2 ] ) . in section [ s3 ] a consequent mathematical formulation related to an idea of lewin is developed , according to which the behavior of individuals is guided by a _social field_. this formulation is achieved by a kramers - moyal expansion of the boltzmann - like equations leading to a kind of _ diffusion equations _ : the socalled boltzmann - fokker - planck _ equations _ . in these equationsthe most probable behavioral change is given by a vectorial quantity that can be interpreted as _social force _[ s3.1 ] ) .the social force results from external influences ( the environment ) as well as from individual interactions . in special casesthe social force is the gradient of a potential .this potential reflects the public opinion , social norms and trends , and will be called the _ social field_. by _ diffusion coefficients _ an individual variation of the behavior ( the `` freedom of will '' ) is taken into account . 
in section [ s4 ] representative casesare illustrated by computer simulations .the boltzmann - fokker - planck modell for the behavior of individuals under the influence of a social field shows some analogies with the physical model for the behavior of electrons in an electric field ( e.g. of an atomic nucleus ) ( cf .hartree s _ selfconsistent field ansatz _ ) .especially , individuals and electrons influence the concrete form of the _ effective _ social resp . electric field .however , the behavior of electrons is governed by a _different _ equation : the schrdinger _equation_. in physics , the boltzmann - fokker - planck equations can be used for the description of _ diffusion _ processes .let us consider a _ system _ consisting of a great number of _ subsystems_. these subsystems are in one _ state _ of several possible states combined in the set .due to _ fluctuations _ one can not expect a deterministic theory for the temporal change of the state to be realistic .however , one can construct a _stochastic _ model for the change of the _ probability distribution _ of states within the given system ( , ) . by introducing an index we may distinguish different _ types _ of subsystems . if denotes the number of subsystems of type , we have , and the following relation holds : our goal is now to find a suitable equation for the probability distribution of states for subsystems of type ( , ) . if we neglect memory effects ( cf .[ s6.1 ] ) , the desired equation has the form of a _ master equation _ : \ , .\label{boltz}\ ] ] is the _ effective transition rate _ from state to and takes into account the fluctuations .restricting the model to spontaneous ( or externally induced ) transitions and transitions due to pair interactions , we have : describes the rate of spontaneous ( resp .externally induced ) transitions from to for subsystems of type . is the transition rate for two subsystems of types and to change their states from and to and due to pair interactions . inserting ( [ effrate ] ) into ( [ boltz ] ) , we now obtain the socalled boltzmann-_like equations _ [ boltzlike ] \\ & + & \sum_{b=1}^a \sum_{{x}'\in \omega } \sum_{{y}\in \omega } \sum_{{y}'\in \omega } w_{ab}({\mbox{\boldmath }},{\mbox{\boldmath }}'|{\mbox{\boldmath }}',{\mbox{\boldmath }};t ) p_b({\mbox{\boldmath}},t)p_a({\mbox{\boldmath }}',t ) \nonumber \\ & - & \sum_{b=1}^a \sum_{{x}'\in \omega } \sum_{{y}\in \omega } \sum_{{y}'\in \omega } w_{ab}({\mbox{\boldmath }}',{\mbox{\boldmath }}'|{\mbox{\boldmath }},{\mbox{\boldmath }};t ) p_b({\mbox{\boldmath }},t)p_a({\mbox{\boldmath }},t)\end{aligned}\ ] ] with obviously , ( [ boltzlike]b ) depends nonlinearly on the probability distributions , which is due to the interaction processes . neglecting spontaneous transitions ( i.e. , ) the boltzmann - like equations agree with the boltzmann equations , that originally have been developed for the description of the kinetics of gases .a more detailed discussion can be found in . in order to apply the boltzmann - like equations to behavioral changes we have now to take the following specifications given by table 1 ( cf . ) : .specification of the notions used in statistical physics for an application to behavioral models . [ cols="<,<",options="header " , ] it is possible to generalize the resulting behavioral model to simultaneous interactions of an arbitrary number of individuals ( i.e. 
, higher order interactions ) .however , in most cases behavioral changes are dominated by pair interactions .many of the phenomena occuring in social interaction processes can already be understood by the discussion of pair interactions . in the following we have to find a concrete form of the effective transition rates [ eff ] ( cf .( [ effrate ] ) ) that is suitable for the description of behavioral changes . is the rate of pair interactions where an individual of subpopulation changes the behavior from to under the influence of an individual of subpopulation showing the behavior .there are only two important kinds of social pair interactions : [ ints ] obviously , the interpretation of the above kinds of pair interactions is the following : * the interactions ( [ ints]a ) describe _ imitative processes _ ( processes of persuasion ) , that means , the tendency to take over the behavior of another individual . * the interactions ( [ ints]b )describe _ avoidance processes _ , where an individual changes the behavior when meeting another individual showing the same behavior . processes of this kind are known as aversive behavior , defiant behavior or snob effect .the corresponding transition rates are of the general form [ totrate ] where the term ( [ totrate]a ) describes imitative processes and the term ( [ totrate]b ) describes avoidance processes . has the meaning of the kronecker function . by inserting ( [ totrate ] ) into ( [ eff ] )we arrive at the following general form of the effective transition rates : [ concrates ] \ , .\ ] ] for behavioral models one often assumes in ( [ concrates ] ) , * is a measure for the rate of spontaneous ( or externally induced ) behavioral changes within subpopulation . * [ resp . is the _ readiness _ for an individual of subpopulation to change the behavior from to spontaneously [ resp . in pair interactions ] .* is the _ interaction rate _ of an individual of subpopulation with individuals of subpopulation .* is a measure for the frequency of imitative processes .* is a measure for the frequency of avoidance processes . a more detailled discussion of the different kinds of interaction processes and of _ ansatz _ ( [ concrates ] ) is given in . for take the quite general form [ util ] with ( cf .then , the readiness for an individual of subpopulation to change the behavior from to will be the greater , * the greater the _ difference _ of the _ utilities _ of behaviors and is , * the smaller the _ incompatibility ( `` distance '' ) _ between the behaviors and is .similar to ( [ util ] ) we use and , therefore , allow the utility function for spontaneous ( or externally induced ) behavioral changes to differ from the utility function for behavioral changes in pair interactions ._ ansatz _ ( [ util ] ) is related to the _ multinomial logit model _ , and assumes _ utility maximization _ with incomplete information about the exact utility of a behavioral change from to , which is , therefore , estimated and stochastically varying ( cf . ) .computer simulations of the boltzmann - like equations ( [ boltz ] ) , ( [ concrates ] ) , ( [ util ] ) are discussed and illustrated in ( cf . also sect .[ s4 ] ) .the boltzmann - like equations ( [ boltz ] ) , ( [ concrates ] ) include a variety of special cases , which have become very important in the social sciences : * the _ logistic equation _ describes limited growth processes .let us consider the situation of two behaviors ( i.e. , ) and one subpopulation ( ) . 
may , for example , have the meaning to apply a certain strategy , and not to do so .if only imitative processes and processes of spontaneous replacement are considered , one arrives at the _ logistic equation _ * the _ gravity model _ describes processes of exchange between different places .it results for , , , and : p({\mbox{\boldmath }},t)p({\mbox{\boldmath }}',t ) \ , .\ ] ] here , we have dropped the index because of . is the probability of being at place . the absolute rate of exchange from to is proportional to the probabilities and at the places and . is often chosen as a function of the metric distance between and : . *the _ behavioral model _ of weidlich and haag assumes spontaneous transitions due to _ indirect interactions _ , which are , for example , induced by the media ( tv , radio , or newspapers ) .we obtain this model for and is the _ preference _ of subpopulation for behavior . are _ coupling parameters _ describing the influence of the behaviorial distribution within subpopulation on the behavior of subpoplation . for , the _ social pressure _ of behavioral majorities .* the _ game dynamical equations _ result for , , and ( cf .their explicit form is + [ game ] + \\ & + & \nu_{aa}(t ) p_a({\mbox{\boldmath }},t ) \big [ e_a({\mbox{\boldmath }},t ) - \langle e_a \rangle \big ] \ , .\end{aligned}\ ] ] + whereas ( [ game]a ) again describes spontaneous behavioral changes ( _ `` mutations '' _ , innovations ) , ( [ game]b ) reflects competition processes leading to a _ `` selection '' _ of behaviors with a _ success _ that exceeds the _ average success _ the success is connected with the socalled _ payoff matrices _ by . means the success of behavior with respect to the environment .+ since the game dynamical equations ( [ game ] ) agree with the _ selection mutation equations _ they are not only a powerful tool in social sciences and economy , but also in evolutionary biology .we shall now assume the set of possible behaviors to build a _ continuous _ space .the dimensions of this space correspond to different characteristic _ aspects _ of the considered behaviors . in the continuous formulation ,the sums in ( [ boltz ] ) , ( [ effrate ] ) have to be replaced by integrals : [ cont ] \nonumber \\ & = & \int d^n x ' \ , \big [ w^a[{\mbox{\boldmath }}'|{\mbox{\boldmath }}-{\mbox{\boldmath }}';t ] p_a({\mbox{\boldmath }}-{\mbox{\boldmath }}',t ) - w^a[{\mbox{\boldmath }}'|{\mbox{\boldmath }};t ] p_a({\mbox{\boldmath }},t ) \big ] \, , \end{aligned}\ ] ] where : = w^a({\mbox{\boldmath }}'|{\mbox{\boldmath }};t ) : = w_a({\mbox{\boldmath }}'|{\mbox{\boldmath }};t ) + \sum_{b=1}^a \int\limits_\omega d^n y \int\limits_\omega d^n y ' \ , n_b \ , \widetilde{w}_{ab } ( { \mbox{\boldmath }}',{\mbox{\boldmath }}'|{\mbox{\boldmath }},{\mbox{\boldmath }};t ) p_b({\mbox{\boldmath }},t ) \ , .\ ] ] a reformulation of the boltzmann - like equations ( [ cont ] ) via a kramers - moyal expansion ( second order taylor approximation ) leads to a kind of _ diffusion equations _ : the socalled boltzmann - fokker - planck _ equations_ [ bfp ] + \frac{1}{2 } \sum_{i , j=1}^n \frac{\partial}{\partial x_{i}}\frac{\partial } { \partial x_{j } } \big[q_{a i j}({\mbox{\boldmath }},t ) p_a({\mbox{\boldmath }},t)\big ] \label{bfp}\ ] ] with the effective _ drift coefficients _ and the effective _ diffusion coefficients _ contains additional terms due to another derivation of ( [ bfp ] ) .however , they make no contributions , since they result in vanishing surface integrals ( cf . ) . 
] whereas the drift coefficients govern the systematic change of the distribution , the diffusion coefficients describe the spread of the distribution due to fluctuations resulting from the individual variation of behavioral changes . for _ ansatz _ ( [ concrates ] ), the effective drift and diffusion coefficients can be splitted into contributions due to spontaneous ( or externally induced ) transitions ( ) , imitative processes ( ) , and avoidance processes ( ) : [ sum ] where and the behavioral changes induced by the _ environment _ are included in and .the boltzmann - fokker - planck equations ( [ bfp ] ) are equivalent to the stochastic equations ( langevin equations ) [ langevin ] with g_{ajk}({\mbox{\boldmath }},t ) \label{drift}\ ] ] and ( cf . ) . for an individual of subpopulation the vector with the components describes the contribution to the change of behavior that is caused by behavioral fluctuations ( which are assumed to be delta - correlated and gaussian ) . since the diffusion coefficients and the coefficients are usually small quantities , we have ( cf .( [ drift ] ) ) , and ( [ langevin ] ) can be put into the form whereas the fluctuation term describes individual behavioral variations , the vectorial quantity drives the systematic change of the behavior of individuals of subpopulation .therefore , it is justified to denote as _ social force _ acting on individuals of subpopulation .the social force influences the behavior of the individuals , but , conversely , due to interactions , the behavior of the individuals also influences the social force via the behavioral distributions ( cf .( [ cont]b ) , ( [ effdr ] ) ) .that means , is a function of the social processes within the given population . under the integrability conditions exists a time - dependent _ potential _ so that the social force is given by its gradient : the potential can be understood as _ social field_. it reflects the social influences and interactions relevant for behavioral changes : the public opinion , trends , social norms , etc . clearly , the social force is no force obeying the newtonian laws of mechanics. instead , the social force is a vectorial quantity with the following properties : * drives the temporal change of another vectorial quantity : the behavior of an individual of subpopulation . *the component r^a({\mbox{\boldmath }}'|{\mbox{\boldmath }};t)\ ] ] of the social force describes the reaction of subpopulation on the behavioral distribution within subpopulation and usually differs from , which describes the influence of subpopulation on subpopulation . * neglecting fluctuations , the behavior does not change if vanishes . corresponds to an _ extremum _ of the social field , because it means we can now formulate our results in the following form related to lewin s _ `` field theory '' _ : * let us assume that an individuals objective is to behave in an optimal way with respect to the social field , that means , he or she tends to a behavior corresponding to a _minimum _ of the social field .* if the behavior does not agree with a minimum of the social field this evokes a _psychical tension ( force ) _ that is given by the gradient of the social field . *the psychical tension is a vectorial quantity that induces a behavioral change according to * the behavioral change drives the behavior towards a minimum of the social field .* when the behavior has reached a minimum of the social field , it holds and , therefore , , that means , the psychical tension vanishes . 
*if the psychical tension vanishes , except for fluctuations no behavioral changes take place in accordance with ( [ accor ] ) . in the special case , where an individual s objective is the behavior , one would expect behavioral changes according to which corresponds to a social field with a minimum at . examples for this case are discussed in . note , that the social fields of different subpopulations usually have _ different _ minima . that means , individuals of different subpopulations will normally feel different psychical tensions . this shows the psychical tension to be a _ `` subjective '' _ quantity . in the following , the boltzmann - fokker - planck equations for behavioral changes will be illustrated by representative computer simulations . we shall examine the case of subpopulations , and situations for which the interesting aspect of the individual behavior can be described by a certain _ position _ [ ... ] obviously , in these formulas there only appears an additional integration over past times . the influence of the past results in a dependence of , , and on . the boltzmann - like equations ( [ boltzlike ] ) resp . the boltzmann - fokker - planck equations ( [ bfp ] ) used in the previous sections result from ( [ memb ] ) resp . ( [ membfp ] ) in the markovian limit of short memory ( where is the dirac delta function ) . the boltzmann - like equations ( [ boltzlike ] ) can also be used for the description of chemical reactions , where the states denote the different sorts of molecules ( or atoms ) , and distinguishes different isotopes or conformeres . imitative and avoidance processes correspond in chemistry to self - activatory and self - inhibitory reactions . although the concrete transition rates will be different from ( [ concrates ] ) , ( [ util ] ) in detail , there may be found analogous results for chemical reactions . note , that the arrhenius formula for the rate of chemical reactions can be put into a form similar to ( [ util ] ) . + this work has been financially supported by the _ volkswagen stiftung _ and the _ deutsche forschungsgemeinschaft _ ( sfb 230 ) . the author is grateful to prof . w. weidlich and dr . r. reiner for valuable discussions and commenting on the manuscript . 99 r. zwanzig . on the identity of three generalized master equations . _ physica _ * 30 * , 1109 - 1123 , 1964 . i. oppenheim , k. e. schuler and g. h. weiss , eds . _ stochastic processes in chemical physics : the master equation_. mit press , cambridge , mass . , 1977 . h. haken . _ laser theory_. springer , berlin , 1984 . l. arnold and r. lefever , eds . _ stochastic nonlinear systems in physics , chemistry and biology_. springer , berlin , 1981 . w. weidlich & g. haag . _ concepts and models of a quantitative sociology . the dynamics of interacting populations_. springer , berlin , 1983 . w. weidlich . physics and social science the approach of synergetics . _ physics reports _ * 204 * , 1 - 163 , 1991 . w. weidlich . the use of statistical models in sociology . _ collective phenomena _ * 1 * , 51 - 59 , 1972 . w. weidlich & g. haag , eds . _ interregional migration_. springer , berlin , 1988 . w. weidlich & g. haag . a dynamic phase transition model for spatial agglomeration processes . _ journal of regional science _ * 27*(4 ) , 529 - 569 , 1987 . w. weidlich & m. munz . settlement formation , part i : a dynamic theory . _ annals of regional science _ * 24 * , 83 - 106 , 1990 . l. boltzmann . _ lectures on gas theory_.
university of california , berkeley , 1964 .f. wilkinson _ chemical kinetics and reaction mechanisms_. van nostrand reinhold co. , new york , 1980 .d. helbing .interrelations between stochastic equations for systems with pair interactions . _physica a _ * 181 * , 2952 , 1992 ._ studies in human biology_. williams & wilkins , baltimore , 1924 .p. f. verhulst .bruxelles _ * 18 * , 1 , 1845 . g. k. zipf . the p1p2/d hypothesis on the intercity movement of persons. _ american sociological review _ * 11 * , 677686 , 1946 . j. hofbauer & k. sigmund ._ the theory of evolution and dynamical systems_. cambridge university press , cambridge , 1988 .p. schuster , k. sigmund , j. hofbauer , r. wolff , r. gottlieb & ph .selfregulation of behavior in animal societies i iii .cybern . _ * 40 * , 125 , 1981 .d. helbing ._ stochastische methoden , nichtlineare dynamik und quantitative modelle sozialer prozesse_. phd thesis , university of stuttgart , 1992 , _ submitted to _oldenbourg publishers , mnchen .d. helbing . a fluid dynamic model for the movement of pedestrians .submitted to _complex systems_. d. helbing . a mathematical model for attitude formation by pair interactions ._ behavioral science _ * 37 * , 190214 , 1992 .e. osgood & p. h. tannenbaum .the principle of congruity in the prediction of attitude change ._ psychological review _ * 62 * , 4255 , 1955 . f. heider .attitudes and cognitive organization . _ journal of psychology _ * 21 * , 107112 , 1946 . l. festinger . _a theory of cognitive dissonance_. row & peterson , evanston , il , 1957 ._ field theory in social science_. harper & brothers , new york , 1951 .d. r. hartree .cambridge philos .* 24 * , 111 , 1928 .d. montgomery .brownian motion from boltzmann s equation . _the physics of fluids _ * 14*(10 ) , 20882090 , 1971 .d. helbing . a mathematical model for behavioral changes by pair interactions and its relation to game theory ._ angewandte sozialforschung _ * 17 * ( 3/4 ) , 179194 , 1991/92 .a. domencich & d. mcfadden ._ urban travel demand .a behavioral analysis _ , pp .north - holland , amsterdam , 1975 .j. de d. ortzar & l. g. willumsen ._ modelling transport_. wiley , chichester , 1990 .d. helbing .stochastic and boltzmann - like models for behavioral changes , and their relation to game theory ._ physica a _ , 1993 ( in press ) .r. axelrod ._ the evolution of cooperation_. basic books , new york , 1984. j. von neumann & o. morgenstern ._ theory of games and economic behavior_. princeton university press , princeton , 1944 .the selforganization of matter and the evolution of biological macromolecules ._ naturwissenschaften _ * 58 * , 465 , 1971 .r. a. fisher . _the genetical theory of natural selection_. oxford university press , oxford , 1930 .m. eigen & p. schuster ._ the hypercycle_. springer , berlin , 1979 .r. feistel & w. ebeling ._ evolution of complex systems_. kluwer academic , dordrecht , 1989 .h. a. kramers ._ physica _ * 7 * , 284 , 1940. j. e. moyal ._ j. r. stat .soc . _ * 11 * , 151210 , 1949 .d. helbing . a mathematical model for the behavior of pedestrians ._ behavioral science _ * 36 * , 298310 , 1991 . g. e. forsythe , m. a. malcolm & c. b. moler ._ computer methods for mathematical computations_. prentice hall , englewood cliffs , n.j ., 1977 . j. b. kruskal & m. wish ._ multidimensional scaling_. sage , beverly hills , 1978 .f. w. young & r. m. hamer ._ multidimensional scaling : history , theory , and applications_. lawrence erlbaum associates , hillsdale , n.j . , 1987 .c. w. 
gardiner ._ handbook of stochastic methods_. springer , berlin , 2nd edition , 1985 . | + it is shown , that the boltzmann - like equations allow the formulation of a very general model for behavioral changes . this model takes into account spontaneous ( or externally induced ) behavioral changes and behavioral changes by pair interactions . as most important social pair interactions imitative and avoidance processes are distinguished . the resulting model turns out to include as special cases many theoretical concepts of the social sciences . + a kramers - moyal expansion of the boltzmann - like equations leads to the boltzmann - fokker - planck equations , which allows the introduction of `` social forces '' and `` social fields '' . a social field reflects the influence of the public opinion , social norms and trends on behaviorial changes . it is not only given by external factors ( the environment ) but also by the interactions of the individuals . variations of the individual behavior are taken into account by diffusion coefficients . -32pt |
strong replica consistency is an essential property for replication - based fault tolerant distributed systems .it can be achieved via a number of different techniques . in this paper , we investigate the challenges in achieving integrity - preserving strong replica consistency and present our solutions for state - machine based byzantine fault tolerant systems . while it is widely known that strong replica consistency can also be achieved through the systematic - checkpointing technique for nondeterministic applications in the benign fault model , it is generally regarded as too expensive and it is not suitable for byzantine fault tolerance .the state - machine based approach is one of the fundamental techniques in building fault tolerant systems . in this approach ,replicas are assumed to be either deterministic or rendered - deterministic .there has been a large body of work on how to render replicas deterministic in the presence of replica nondeterminism under the benign fault model ( _ e.g., _ ) . however , when the replicas can be subject to byzantine faults , which is the case for many internet - based systems , most of the previous work is no longer effective .furthermore , the determinism ( or rendered - determinism ) of the replicas is often considered harmful from the security perspective ( _ e.g., replication , an adversary can compromise any of the replicas to obtain confidential information ) and for many applications , their integrity is strongly dependent on the randomness of some of their internal operations ( _ e.g., numbers are used for unique identifier generation in transactional systems and for shuffling cards in online poker games , and if the randomness is taken away by a deterministic algorithm to ensure replica consistency , the identifiers or the hands of cards can be made predictable , which can easily lead to exploit ) .this calls for new approaches towards achieving strong replica consistency while preserving the randomness of each replica s operations . in this paper, we present two alternative approaches towards our goal . the first one is based on byzantine agreement ( referred to as the ba - algorithm in this paper ) and the other on a threshold coin - tossing scheme ( referred to as the ct - algorithm ) .both approaches rely on a collective determination for decisions involving randomness , and the determination is based on the contributions made by a set of replicas ( at least one of which must be correct ) , to avoid the problems mentioned above .they differ mainly by how the collective determination is carried out . 
in the ba - algorithm ,the replicas first reach a byzantine agreement on the set of contributions from replicas , and then apply a deterministic algorithm ( for all practical purposes , the bitwise exclusive - or operation ) to compute the final random value .the ct - algorithm uses the threshold coin - tossing scheme introduced in to derive the final random value , without the need of a byzantine agreement step .even though the ct - algorithm saves on communication cost , it does incur significant computation overhead due to the cpu - intensive exponentiation calculations .consequently , as we will show in section [ implsec ] , the ba - algorithm performs the best in a local - area network ( lan ) environment , where the ct - algorithm is more appropriate for the wide - area network ( wan ) environment where message passing is expensive .furthermore , to ensure the freshness of the random numbers generated , the replicas using the ba - algorithm should have access to high entropy sources ( which is relatively easy to satisfy ) and the replicas should be able to refresh their key shares periodically in the ct - algorithm . for the latter , we envisage that a proactive threshold signature scheme could be used . however , the discussion of proactive threshold signature techniques is out of the scope of this paper . to summarize ,we make the following research contributions in this paper : * we point out the danger and pitfalls of controlling replica randomness for the purpose of ensuring replica consistency . removing randomness from replica operations ( when it is needed ) could seriously compromise the system integrity .* we propose the use of collective determination of random numbers contributed from replicas , as a practical way to reconcile the requirement of strong replica consistency and the preservation of replica randomness .* we present a light - weight , byzantine agreement based algorithm to carry out the collective determination .the ba - algorithm only introduces two additional communication steps because the byzantine agreement for the collective determination of random numbers can be integrated into that for message total ordering , as needed by the state - machine replication .the ba - algorithm is particularly suited for byzantine fault tolerant systems operating in the lan environment , or where replicas are connected by high - speed low - latency networks .* we further present an algorithm that uses the threshold coin - tossing scheme as an alternative method for collective determination of random numbers .the coin - tossing scheme is introduced in as an instrumental mechanism for a group of replicas to reach byzantine agreement in asynchronous systems . 
to the best of our knowledge ,our work is the first to show its usefulness in helping to ensure strong replica consistency without compromising the system integrity .* we conduct extensive experiments , in both a lan testbed and an emulated wan environment , to thoroughly characterize the performance of the two approaches .in this section , we introduce the system model for our work , and the practical byzantine fault tolerance algorithm ( bft algorithm , for short ) developed by castro and liskov as necessary background information .byzantine fault tolerance refers to the capability of a system to tolerate byzantine faults .it can be achieved by replicating the server and by ensuring that all server replicas reach an agreement on the total ordering of clients requests despite the existence of byzantine faulty replicas and clients .such an agreement is often referred to as byzantine agreement . in recent several years , a number of efficient byzantine agreement algorithms have been proposed . in this work ,we focus on the bft algorithm and use the same system model as that in .the bft algorithm operates in an asynchronous distributed environment .the safety property of the algorithm , _i.e., correct replicas agree on the total ordering of requests , is ensured without any assumption of synchrony .however , to guarantee liveness , _ i.e., the algorithm to make progress towards the byzantine agreement , certain synchrony is needed .basically , it is assumed that the message transmission and processing delay has an asymptotic upper bound .this bound is dynamically explored in the algorithm in that each time a view change occurs , the timeout for the new view is doubled .the bft algorithm is executed by a set of replicas to tolerate up to byzantine faulty replicas .one of the replicas is designated as the primary while the rest are backups .each replica is assigned a unique i d , where varies from to . for view , the replica whose i d satisfies would serve as the primarythe view starts from 0 .for each view change , the view number is increased by one and a new primary is selected .the normal operation of the bft algorithm involves three phases . during the pre - prepare phase , the primary multicasts a pre - prepare message containing the client s request , the current view and a sequence number assigned to the request to all backups . a backup verifies the request and the ordering information .if the backup accepts the pre - prepare message , it multicasts a prepare message containing the ordering information and the digest of the request being ordered .this starts the prepare phase .a replica waits until it has collected matching prepare messages from different replicas , and the pre - prepare message , before it multicasts a commit message to other replicas , which starts the commit phase .the commit phase ends when a replica has collected matching commit messages from different replicas ( possibly including the one sent or would have been sent by itself ) . 
at this point ,the request message has been totally ordered and it is ready to be delivered to the server application once all previous requests have been delivered .all messages exchanged among the replicas , and those between the replicas and the clients are protected by an authenticator ( for multicast messages ) , or by a message authentication code ( mac ) ( for point - to - point communications ) .an authenticator is formed by a number of macs , one for each target of the multicast .we assume that the replicas and the clients each has a public / private key pair , and the public keys are known to everyone .these keys are used to generate symmetric keys needed to produce / verify authenticators and macs . to ensure freshness ,the symmetric keys are periodically refreshed by the mechanism described in .we assume that the adversaries have limited computing power so that they can not break the security mechanisms described above .furthermore , we assume that a faulty replica can not transmit the confidential state , such as the random numbers collectively determined , to its colluding clients in real time .this can be achieved by using an application - level gateway , or a privacy firewall as described by yin et al. , to filter out illegal replies .a compromised replica may , however , replace a high entropy source to which it retrieves random numbers with a deterministic algorithm , and convey such an algorithm via out - of - band or covert channels to its colluding clients .in this section , we analyze a few well - known approaches possibly be used to ensure replica consistency in the presence of replica randomness .we show that they are not robust against byzantine faulty replicas and clients . for replicas that use a pseudo - random number generator ,they can be easily rendered deterministic by ensuring that they use the same seed value to initialize the generator .one might attempt to use the sequence number assigned to the request as the seed . even though this approach is perhaps the most economical way to render replicas deterministic ( since no extra communication step is needed and no extra information is to be included in the control messages for total ordering of requests ) , it virtually takes the randomness away from the fault tolerant systems . in the presence of byzantine clients, the vulnerability can be exploited to compromise the integrity of the system .for example , a byzantine faulty client in an online poker game can simply try out different integer values as the seed to the pseudo - random generator ( if it is known to the client ) to guess the hands of the cards in the dealer and compare with the ones it has gotten .the client can then place its bets accordingly and gain unfair advantage .a seemingly more robust approach is to use the timestamp as the seed to the pseudo - random number generator .as shown in , the use of timestamp does not offer more robustness to the system because it can also be guessed by byzantine faulty clients .furthermore , the use of timestamp imposes serious challenges in asynchronous distributed systems because of the requirement that all replicas must use the same timestamp to seed the pseudo - random number generator . in , a mechanism is proposed to handle this problem by asking the primary to piggyback its timestamp , to be used by backups as well , with the pre - prepare message . 
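the quorum thresholds used in the three - phase protocol above can be written as two small predicates . in castro and liskov s algorithm a replica is `prepared` once it holds the pre - prepare plus 2f matching prepare messages from distinct replicas , and `committed - local` once it additionally holds 2f + 1 matching commits ; the `log` record below is a hypothetical simplification of the protocol state , and the primary - selection rule is the standard rotation rule .

```python
def primary(view, n):
    """Standard rotation rule: the primary for view v is replica v mod n."""
    return view % n

def prepared(log, view, seq, digest, f):
    """Pre-prepare accepted plus 2f matching prepares from distinct replicas."""
    has_preprepare = (view, seq, digest) in log.preprepares
    senders = {m.sender for m in log.prepares
               if (m.view, m.seq, m.digest) == (view, seq, digest)}
    return has_preprepare and len(senders) >= 2 * f

def committed_local(log, view, seq, digest, f):
    """Prepared, plus 2f + 1 matching commits (possibly including this replica's own)."""
    senders = {m.sender for m in log.commits
               if (m.view, m.seq, m.digest) == (view, seq, digest)}
    return prepared(log, view, seq, digest, f) and len(senders) >= 2 * f + 1
```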
however , the issue is that the backups have very limited ways of verifying the timestamp proposed ( other than that the timestamp must be monotonically increasing ) without resorting to strong synchrony assumptions ( such as bounds on processing and message passing ) . the only option remaining seems to be the use of a truly random number to seed the pseudo - random number generator ( or to obtain random numbers entirely from a high entropy source ) .we note that the elegant mechanism described in can not be used in this case because backups have no means to verify whether the number proposed by the primary is taken from a high - entropy source , or is generated according to a deterministic algorithm .if the latter is the case , the byzantine faulty primary could continue colluding with byzantine faulty clients without being detected .therefore , we believe the most effective way in countering such threats is to collectively determine the random number , based on the contributions from a set of replicas so that byzantine faulty replicas can not influence the final outcome .the set size depends on the algorithms used , as we will show in the next two sections , but it must be greater than the number of faulty replicas tolerated ( ) by the system .the normal operation of the ba - algorithm is illustrated in figure [ bafig ] . as can be seen , the collective - determination mechanism is seamlessly integrated into the original bft algorithm . on ordering a request ,the primary determines the order of the request ( _ i.e., a sequence number to the request ) , and queries the application for the type of operation associated with the request .if the operation involves with a random number as input , the primary activates the mechanism for the ba - algorithm .the primary then obtains its share of random number by extracting from its own entropy source , and piggybacks the share with the pre - prepare message multicast to all backups .the pre - prepare message has the form - prepare, , where is the view number , is the sequence number assigned to the request , is the digest of the request , is the random number generated by the primary , and is the authenticator for the message . on receiving the pre - prepare message , a backup performs the usual chores such as the verification of the authenticator before it accepts the message .it also checks if the request will indeed trigger a randomized operation , to prevent a faulty primary from putting unnecessary loads on correct replicas ( which could lead to a denial of service attack ) .if the pre - prepare message is acceptable , the replica creates a pre - prepare certificate for storing the relevant information , generates a share of random number from its entropy source , and multicasts to all replicas a pp - update message , in the form - update, , where is the sending replica identifier , is the random number contributed by replica .when the primary has collected pp - update messages , it combines the random numbers received according to a deterministic algorithm ( referred to as the entropy combination step in figure [ bafig ] ) , and builds a pp - update message with slightly different content than those sent by backups . in the pp - update message sent by the primary ,the component is replaced by a set of tuples containing the random numbers contributed by replicas ( possibly including its own share ) , .each tuple has the form .the replica identifier is included in the tuple to ease the verification of the set at backups . 
on receiving a pp - update message , a backup accepts and stores the message in its data structure provided that the message has a correct authenticator , it is in view , and it has accepted a pre - prepare message to order the request with the digest and sequence number . a backup proceeds to the entropy combination step only if ( 1 ) it has accepted a pp - update message from the primary , and ( 2 ) it has received the pp - update messages sent by the replicas referenced in the set . the backup requests a retransmission from the primary for any missing pp - update message . after the entropy combination step is completed , a backup multicasts a prepare message in the form , where is the digest of the request concatenated with the combined random number . when a replica has completed the entropy combination step , and it has collected valid prepare messages from different replicas ( possibly including the message it has sent , or would have sent , itself ) , it multicasts to all replicas a commit message in the form . when a replica receives valid commit messages , it decides on the sequence number and the collectively determined random number . at the time of delivery to the application , both the request and the random number are passed to the application . in figure [ bafig ] , the duration of the entropy extraction and combination steps has been intentionally exaggerated for clarity . in practice , the entropy combination can be achieved by applying a bitwise exclusive - or operation on the set of random numbers collected , which is very fast . the cost of entropy extraction depends on the scheme used . some schemes , such as the truerand method , allow very prompt entropy extraction . truerand works by gathering the underlying randomness from a computer by measuring the drift between the system clock and the interrupt - generation rate on the processor . the normal operation of the ct - algorithm is shown in figure [ ctfig ] . the ct - algorithm is the same as the bft algorithm in the first two phases ( _ i.e., the pre - prepare and prepare phases ) . the commit phase is modified by incorporating threshold coin - tossing operations . most existing threshold signature schemes can be used for the ct - algorithm , where is the threshold number of signature shares needed to produce the group signature , and is the total number of players ( _ i.e., in our case ) participating in the threshold signing . in most threshold signature schemes , a correct group signature can be derived by combining shares from players , where is the maximum number of corrupted players tolerated . some schemes , such as the rsa - based scheme in , allow the flexibility of using up to as the minimum number of shares required to produce the group signature . since in our work , can be set as high as . this property offers additional protection against byzantine faulty replicas .
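the following sketch illustrates the entropy combination step of the ba - algorithm : each replica contributes a share from its entropy source , and the shares are combined with a bitwise exclusive - or , so the result stays unpredictable as long as at least one contribution is truly random . the share length , the number of shares collected and the function names are assumptions made for illustration and are not taken from the paper 's implementation .

```python
import os

NUM_REPLICAS = 4       # n = 3f + 1 with f = 1, as in the paper's testbed
NUM_SHARES = 3         # shares collected before combining (an assumption; must exceed f)
SHARE_BYTES = 16       # length of each random-number share (an assumption)

def extract_share():
    # stand-in for a high-entropy source such as /dev/urandom or truerand
    return os.urandom(SHARE_BYTES)

def combine_shares(shares):
    # bitwise exclusive-or of all collected shares; the result is
    # unpredictable as long as at least one contribution is truly random
    combined = bytes(SHARE_BYTES)
    for share in shares:
        combined = bytes(a ^ b for a, b in zip(combined, share))
    return combined

# the primary piggybacks its share on the pre-prepare message; backups
# reply with pp-update messages carrying their own shares (simplified here)
shares = [extract_share() for _ in range(NUM_SHARES)]
random_number = combine_shares(shares)
print("collectively determined random number:", random_number.hex())
```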
at the beginning of the commit phase, each replica generates its share of threshold signature by signing using its private key share , where is the digest of the request message and is the sequence number assigned to the request .this operation is referred to as the share - generation step in figure [ ctfig ] .the signature share is piggybacked with the commit message , in the form , where is the replica s share of threshold signature .when a replica has collected valid commit messages from different replicas , it executes the shares - combination step by combining threshold signature shares piggybacked with the commit messages .after the shares have been combined into a group signature , it is mapped into a random number , first by hashing the group signature with a secure hash function ( _ e.g., ) , and then by taking the first group of most significant bits from the hash according to the type of numbers needed ,_ e.g., .the random number will be delivered together with the request to the application , when all previous requests have been delivered .in this section , we provide an informal argument on the correctness of our two algorithms .the correctness criteria for the algorithms are : * all correct replicas deliver the same random number to the application together with the associated request , and * the random number is secure ( _ i.e., is truly random ) in the presence of up to byzantine faulty replicas .we first argue for the ba - algorithm .c1 is guaranteed by the use of byzantine agreement algorithm .c2 is ensured by the collection of shares contributed by different replicas , and by a sound entropy combination algorithm ( _ e.g., using the bitwise exclusive - or operation on the set to produce the combined random number ) . by collecting contributions, it is guaranteed that at least of them are from correct replicas , so faulty replicas can not completely control the set .shares are all that needed for this purpose . however , collecting more shares is more robust in cases when some correct replicas use low - entropy sources .this is analogous to the benefit of shoup s threshold signature scheme . ]the entropy combination algorithm ensures that the combined random number is secure as long as at least one share is secure .the bitwise exclusive - or operation could be used to combine the set and it is provably secure for this purpose .therefore , the ba - algorithm satisfies both c1 and c2 .next we argue for the ct - algorithm .c1 is guaranteed by the following fact : ( 1 ) the same message ( ) is signed by all correct replicas , according to the ct - algorithm .( 2 ) the threshold signature algorithm guarantees the production of the same group signature by combining shares .different replicas could obtain different set of shares and yet they all lead to the same group signature .( 3 ) the same secure hash function is used to hash the group signature .c2 is guaranteed by the threshold signature algorithm . for the threshold signature algorithm used in our implementation, its security is ensured by the random oracle model .therefore , the ct - algorithm is correct as well .this completes our proof .the ba - algorithm and the ct - algorithm have been implemented and incorporated into a java - based bft framework .the java - based bft framework is developed in house and it is ported from the c++ based bft framework of castro and liskov . 
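as a sketch of the final mapping step of the ct - algorithm , the snippet below hashes a ( deterministic ) group signature with a secure hash function and takes the first group of most significant bits as the random number . sha - 256 , the placeholder signature bytes and the 32 - bit output width are assumptions ; the generation and combination of the threshold signature shares themselves ( e.g. with shoup 's rsa scheme ) are not shown .

```python
import hashlib

def signature_to_random(group_signature: bytes, num_bits: int) -> int:
    # hash the deterministic group signature with a secure hash function
    digest = hashlib.sha256(group_signature).digest()
    # take the first (most significant) num_bits of the digest as the
    # random number delivered to the application together with the request
    value = int.from_bytes(digest, "big")
    return value >> (len(digest) * 8 - num_bits)

# the group signature would be produced by combining a threshold number of
# signature shares; a fixed byte string stands in for it here
group_signature = b"...combined threshold signature bytes..."
print(signature_to_random(group_signature, 32))   # e.g. a 32-bit random draw
```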
due to space limitation , the details of the framework implementation are omitted . the ct - algorithm uses shoup 's threshold signature scheme , implemented by steve weis and made available at sourceforge . the development and test platform consists of a group of dell sc440 servers , each equipped with a 2.8ghz pentium d processor and 1 gb of ram , running suse 10.2 linux . the nodes are connected via a 100mbps lan . as we noted earlier , the wan experiments are emulated by introducing artificial delays in communication , without injecting message loss . to characterize the cost of the two algorithms , we use an echo application with fixed 1kb - long requests and replies . the server is replicated at four nodes , and hence , in all our measurements . up to 12 concurrent clients are launched across the remaining nodes ( at most one client per node ) . each client issues consecutive requests without any think time . for the ct - algorithm , we vary a number of parameters , including the threshold value and the key length . we also experiment with certain optimizations . for all measurements , the end - to - end latency is measured at the client and the throughput is measured at the replicas . the java system.nanotime ( ) api is used for all timing - related measurements . we first report the mean execution latency of basic cryptographic operations involved in the ba - algorithm and the ct - algorithm because such information is beneficial to the understanding of the behaviors we observe . the latency cost is obtained when running a single client and 4 server replicas in the lan testbed . the results are summarized in table [ cryptocost ] . as can be seen , the threshold signature operations are quite expensive , and it is impractical to use a key as large as 1024 bits . ( table [ cryptocost ] caption : execution time for basic cryptographic operations involved with our algorithms ; the data shown for ct signing is for a single share . ) without any optimization ( and without faults ) , an end - to - end remote call from a client to the replicated server using the original bft algorithm involves a total of 4 authenticator generation operations ( ) , 5 authenticator verification operations ( ) ( one does not need to verify the message sent by itself ) , 1 mac generation operation ( ) and 2 mac verification operations ( ) on the critical execution path ( _ i.e., for request sending and receiving , for the pre - prepare phase , for the prepare phase , for the commit phase , and for the reply sending and receiving ) . the ba - algorithm introduces two additional communication steps and 2 and 3 additional operations on the critical path . the ct - algorithm does not require any additional communication step , but introduces 1 threshold signing operation ( ) and 1 operation for threshold shares verification and combination ( ) .
from this analysis ,the minimum end - to - end latency achievable using the ba - algorithm is ( a replica can proceed to the next step as soon as it receives 1 valid prepare message from other replica in the prepare phase , and 2 valid commit messages from other replicas in the commit phase , and the client can proceed to deliver the reply as soon as it has gotten 2 consistent replies ) .similarly , the minimum latency using the ct - algorithm is .based on the values given in table [ cryptocost ] , and for and 64bit - long key .the minimum overhead incurred by the ba - algorithm is and that by the ct - algorithm is for and 64bit - long key .figure [ lanperf ] shows the summary of the experimental results obtained in the lan testbed .the end - to - end latency ( plotted in log - scale ) measured at a single client under various configurations is shown in figure [ lanperf](a ) . as a reference ,the latency for the bft system without the additional mechanisms described in this paper is shown as `` base '' . in the figure ,the result for the ba - algorithm is shown as `` ba '' , and the results for the ct - algorithm with different parameter settings are labeled as ct#-i , where # is the value , and is the key length . as can be seen , only if a very short key is used , the ct - algorithm incurs significant overhead .furthermore , the observed end - to - end latency results are in - line with the analysis provided in the previous subsection .the throughput measurement results shown in figure [ lanperf](b ) are consistent with those in the end - to - end latency measurements .the results labeled with `` no batching '' are obtained for the original ct - algorithm described in section [ ctsec ] , _i.e., coin - tossing operation ( _ i.e., share signing , combination and verification of shares ) is used for _ every _ request . those labeled with `` with batching '' are measured when the requests are batched ( for total ordering , they all share the same sequence number ) and only one coin - tossing operation is used for the entire batch of requests . as can be seen from figure [ lanperf](b ), the gain in throughput is significant with the batching optimization .however , if sharing the same random number among several requests is a concern , this optimization must be disabled . for the ba - algorithm ,the communication steps for reaching a byzantine agreement on the set of random numbers are automatically batched together with that for requests total - ordering . batching the byzantine agreement fora set of random numbers does not seem to introduce any vulnerability .the additional optimization of one set of entropy extraction and combination per batch of requests does not have any noticeable performance benefit .therefore , it is advised that this further optimization not to be considered in practice due to possible security concerns .figure [ lanperf](c ) shows the end - to - end latency as a function of the load on the system in the presence of concurrent clients .we use the system throughput as a metric for the system load because it better reflects the actual load on the system than the number of clients .it is also useful to compare with the results in the wan experiments . 
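the critical - path accounting earlier in this section can be turned into a crude latency estimator , sketched below . all operation costs are hypothetical placeholders rather than the values from table [ cryptocost ] , and the assumption that the ba - algorithm adds 2 authenticator generations and 3 verifications fills in counts that are elided in the text .

```python
def min_latency(a_g, a_v, m_g, m_v, t_s=0.0, t_c=0.0,
                extra_a_g=0, extra_a_v=0, comm_steps=0, hop_delay=0.0):
    """crude lower bound on end-to-end latency built from per-operation
    costs on the critical path described in the text (all in milliseconds)."""
    base = 4 * a_g + 5 * a_v + 1 * m_g + 2 * m_v        # original bft path
    extra = extra_a_g * a_g + extra_a_v * a_v + t_s + t_c
    return base + extra + comm_steps * hop_delay

# illustrative costs only -- the real numbers are reported in table [cryptocost]
A_G, A_V, M_G, M_V = 0.03, 0.02, 0.01, 0.01            # ms, hypothetical
T_S, T_C = 1.5, 2.0                                     # ms, hypothetical

ba = min_latency(A_G, A_V, M_G, M_V, extra_a_g=2, extra_a_v=3,
                 comm_steps=2, hop_delay=0.05)          # extra counts assumed
ct = min_latency(A_G, A_V, M_G, M_V, t_s=T_S, t_c=T_C)
print(f"ba-algorithm lower bound: {ba:.3f} ms, ct-algorithm: {ct:.3f} ms")
```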
as can be seen , for the ct - algorithm , without the batching optimization , the latency increases very sharply with the load , due to the cpu intensive threshold signature computations .the results for the ct - algorithm with keys larger than 64bits are omitted in figure [ lanperf](b ) and ( c ) to avoid cluttering .the throughput is significantly lower and the end - to - end latency is much higher than those of the ba - algorithm in these configurations , especially when the load is high .the experimental results obtained in an emulated wan environment are shown in figure [ wanperf ] .the observed metrics and the parameters used are identical to those in the lan experiments . as can be seen in figure [ wanperf](a ) , the end - to - end latency as perceived by a single client is similar for the ba - algorithm and the ct - algorithm with a key size up to 256bits ( for either or ) .this can be easily understood because the end - to - end latency is dominated by the communication delays , as indicated by the end - to - end latency for the base system included in the figure .figure [ wanperf](b ) shows part of the measurement results on system throughput under different number of concurrent clients . to avoid cluttering ,only the results for and key sizes of up to 256bits are shown .the throughput for the base system is included as a reference .as can be seen , when batching for the coin - tossing operation is enabled , the ct - algorithm with short - to - medium sized keys out - performs the ba - algorithm .when batching is disabled , however , the ba - algorithm performs better unless very small key is used for the ct - algorithm .the end - to - end latency results shown in figure [ wanperf](c ) confirm the trend .how to ensure strong replica consistency in the presence of replica nondeterminism has been of research interest for a long time , especially for fault tolerant systems using the benign fault model .however , while the importance of the use of good random numbers has long been recognized in building secure systems , we have yet to see substantial research work on how to preserve the randomized operations necessary to ensure the system integrity in a fault tolerant system . for the type of systems where the use of random numbers is crucial to their service integrity ,the benign fault model is obviously inadequate and the byzantine fault model must be employed if fault tolerance is required . in the recent several years, significant progress has been made towards building practical byzantine fault tolerant systems , as shown in the series of seminal papers such as .this makes it possible to address the problem of reconciliation of the requirement of strong replica consistency and the preservation of each replica s randomness for real - world applications that requires both high availability and high degree of security .we believe the work presented in this paper is an important step towards solving this challenging problem .we should note that some form of replica nondeterminism ( in particular , replica nondeterminism related to timestamp operations ) has been studied in the context byzantine fault tolerant systems .however , we have argued in previous sections that the existing approach is vulnerable to the presence of colluding byzantine faulty replicas and clients . the main idea of this work , _i.e., determination of random values based on the contributions made by the replicas , is borrowed from the design principles for secure communication protocols . 
however , the application of this principle in solving the strong replica consistency problem is novel .the ct - algorithm is inspired by the work of cachin , kursawe and shoup , in particular , the idea of exploiting threshold signature techniques for agreement .however , we have adapted this idea to solve a totally different problem , _i.e., is used towards reaching integrity - preserving strong replica consistency .furthermore , we carefully studied what to sign for each request so that the final random number obtained is not vulnerable to attacks .in this paper , we presented our work on reconciling the requirement of strong replica consistency and the desire of maintaining each replica s individual randomness .based on the central idea of collective determination of random values needed by the applications for their service integrity , we designed and implemented two algorithms . the first one , the ba - algorithm , is based on reaching a byzantine agreement on a set of random number shares provided by replicas . the second one , the ct - algorithm , is based on threshold signature techniques .we thoroughly characterized the performance of the two algorithms in both a lan testbed and an emulated wan environment .we show that the ba - algorithm in general out - performs the ct - algorithm in most cases except in wan operations under relatively light load .furthermore , the overhead incurred by the ba - algorithm with respect to the base bft system is relatively small , making it possible for practical use .future research work will focus on the threshold key share refreshment issue for the ct - algorithm . to ensure long - term robustness of the system, the key shares must be proactively refreshed periodically .otherwise , the random numbers generated this way may age over time , which may open the door for attacks . the threshold signature algorithm used in this work does not have built - in mechanism for key share refreshment .we will explore other threshold signature algorithms that offer this capability .j. slember and p. narasimhan . living with nondeterminism in replicated middleware applications . in _ proceedings of the acm / ifip / usenix 7th international middleware conference _ , pages 81100 , melbourne , australia , 2006 .j. yin , j .-martin , a. venkataramani , l. alvisi , and m. dahlin . separating agreement from execution for byzantine fault tolerant services . in _ proceedings of the acm symposium on operating systems principles _ , pages 253267 , bolton landing , ny , usa , 2003 . | strong replica consistency is often achieved by writing deterministic applications , or by using a variety of mechanisms to render replicas deterministic . there exists a large body of work on how to render replicas deterministic under the benign fault model . however , when replicas can be subject to malicious faults , most of the previous work is no longer effective . furthermore , the determinism of the replicas is often considered harmful from the security perspective and for many applications , their integrity strongly depends on the randomness of some of their internal operations . this calls for new approaches towards achieving replica consistency while preserving the replica randomness . in this paper , we present two such approaches . one is based on byzantine agreement and the other on threshold coin - tossing . each approach has its strength and weaknesses . we compare the performance of the two approaches and outline their respective best use scenarios . 
* keywords * : replica consistency , byzantine fault tolerance , middleware , threshold signature , coin - tossing |
* keywords * : fingerprint recognition , orientation field estimation , orientation field models , orientation field compression , orientation field marking , latent fingerprint recognition , fingerprint image enhancement , fingerprint matching . the orientation field ( of ) is a crucial ingredient for most fingerprint recognition systems . an of is an image ( or matrix ) which encodes at each pixel of the fingerprint foreground the orientation , in degrees ( or in radians ) , of a tangent to the ridge and valley flow at that location . modelling and estimating ofs is a fundamental task for automatic processing of fingerprints . we introduce a locally adaptive global model called the _ extended quadratic differential _ ( xqd ) model . we show that xqds can model the of of a fingerprint perfectly in the limit . the major advantage of the xqd model lies in its small number of parameters , each of which has a simple and obvious geometric meaning . ofs have many important areas of application at various stages of processing fingerprints . in the following part of this section , we discuss some of the most relevant of these applications . the rest of this manuscript is organized as follows . in section [ sec : ofliterature ] , we review related work from the literature for estimating , modelling and marking ofs of fingerprints . in section [ sec : xqdm ] , we describe the novel xqd model . in section [ convergence ] , we prove that the xqd model adapts perfectly to the of of a real fingerprint in the limit . in section [ sec : results ] , we present practical results for compressing real fingerprint ofs by xqd models . we conclude in section [ sec : conclusion ] with a discussion and we point out topics for future work . most systems for fingerprint verification and identification are based on minutiae templates . automatic extraction of minutiae from fingerprints can be a very challenging task for images of low and very low quality . quality loss of fingerprint images acquired on optical scanners can occur if a finger is too dry , too wet , or contains scars . putting a finger with too much or too little pressure on the sensor can have similar negative effects on the image quality . poor image quality can cause a minutiae extraction module to miss some true minutiae and to introduce some spurious minutiae . the goal of fingerprint image enhancement is to avoid these two types of errors by improving the image quality prior to minutiae extraction . the most effective approach for fingerprint image enhancement is contextual filtering , and the most important type of local context is the of . for example , oriented diffusion filtering uses only the of to perform anisotropic smoothing along the ridge and valley flow . curved regions are computed based on the of ; first , they are used for estimating the local ridge frequency , and subsequently , the of and ridge frequency estimates are joint inputs for curved gabor filtering . these two methods estimate the local context and perform filtering in the spatial domain . alternatively , methods for contextual filtering of fingerprints can also operate in the fourier domain , see e.g.
methods proposed by chikkerur _ et al._ and by bartunek _et al._ .a hybrid approach with processing steps in the spatial and fourier domain has been suggested by ghafoor_ et al._ .software - based fingerprint liveness detection strives to classify an input fingerprint as belonging to one of two classes : an image of an alive , real finger or an image of a fake or spoof finger made from artificial material like wood glue , gelatine or silicone .developing countermeasures against spoof attacks is a very active research area .two methods apply the of to compute invariant descriptors : for histograms of invariant gradients ( hig ) , the gradient direction at each pixel is normalized relative to the local orientation .convolution comparison patterns ( ccp ) are obtained from small image patches . to that end ,rotation - invariant patches are computed by locally rotating each window according to the local orientation at that pixel .fingerprint alteration is another type of presentation attack in which the attacker has the goal of avoiding identification ( e.g. attempting to not being found in a watchlist search during border crossing or not being identified in a forensic investigation ) .altered fingerprints often have a disturbed of .therefore it is not surprising that in recent comparisons of features for alteration detection , some of the most effective features are related to the of . in a nutshell ,dofts and ofa are based on the difference between an estimated of and a smoother version of it , coh relies on the coherence of gradients and spda makes use of the fact that alterations tend to introduce additional singularities into a fingerprint .orientation descriptors have been proposed by tico and kuosmanen for computing the similarity between two minutiae from two templates .these local similarities are aggregated into a global score which summarizes the similarity between both templates .improvements of fingerprint recognition performance have been observed by using differences between two aligned ofs for score revaluation : first , two minutiae templates are matched and the output is a global similarity score and a minutiae pairing .second , the corresponding ofs are aligned , based on the paired minutiae and , the similarity between the ofs is evaluated . on the one hand , if both ofs fit well together , the aligned ofs confirm the minutiae pairing and the global score is increased . on the other hand ,if major discrepancies between the aligned ofs are observed , this is considered as an indication of a potential impostor recognition attempt and the score is decreased accordingly .ofs are used for fingerprint alignment ( also known as registration ) , i.e. finding a global rotation and translation of one of with respect to other which is obtained by optimizing a cost function .et al._ considered in their work the alignment of partial fingerprints from fingermarks to full fingerprints , so called rolled fingerprints which are acquired with the help of e.g. 
a police officer who rolls the finger of subject to capture the full surface from nail to nail .similar to above described score revaluation , they found that of alignment improves the recognition performance , in their case , the rank-1 identification rate .tams studied the problem of absolute pre - alignment of a single fingerprint in the context of fingerprint - based cryptosystems and suggested an of based method .the goal of classification and indexing is to speed up fingerprint identification ( 1 to n comparisons , where n can be in the magnitude of millions for forensic databases , see chapter 5 in ) .for example , the class tented arch is observed in about 3% of all fingerprints . hence , if a query fingerprint belongs to the class tented arch , then the search space can reduced by 97% .the majority of approaches for classification and indexing relies on ofs , e.g. cappelli _ et al._ proposed a method for fingerprint classification by directional image partitioning . a recent survey by galar _et al._ lists 128 references and most of them use the of ( or its singular points ) for classification .the generation of artificial fingerprint images has the advantage that it is possible to create arbitrarily large databases for research purposes e.g. of a million or a billion fingerprints at virtually no cost and without legal constraints .methods for producing synthetic fingerprints include .a detailed discussion of approaches for constructing and reconstructing fingerprint images can be found in .all methods have in common that they require an of for the image creation process . the methods by cappelli __ and by araque _ rely on the global of model by vizcaya and gerhardt .in contrast , the realistic fingerprint creator ( rfc ) uses of estimations by a combination of gradient - based and line sensor methods from a database of real fingerprints . during a forensic investigation , it can occur that traces at a crime scene are detected where two or more latent fingerprints overlap on a surface .the task is to separate these fingerprints , so that the separated single fingerprints can individually be utilized for identification .several research groups have addressed this problem in their work , and the key to the solution are in each work the ofs , see e.g. . a different forensic problem studied by hildebrandt and dittmann is latent fingerprint forgery detection . for this applicationone may well compute rotationally invariant features such as hig or ccp by taking the orientation flow at the latent fingerprint into account .considering the importance of the of , it is no surprise that a large body of literature is treating the topic of automatic of estimation .a classic approach is to estimate the of by some form of averaging ( squared ) image gradients ( computed e.g. using the sobel filter ) or symmetry features , see e.g. .however , this works only for good quality images .further approaches include complex 2d energy operators . 
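as an illustration of the classic gradient - based approach mentioned above , the sketch below averages squared sobel gradients in the doubled - angle representation and returns a pixel - wise orientation estimate . the smoothing width and the synthetic test image are arbitrary choices made for the example .

```python
import numpy as np
from scipy import ndimage

def estimate_orientation_field(image, block_sigma=7.0):
    """classic gradient-based of estimation by averaging squared gradients;
    returns an orientation (radians, in [0, pi)) per pixel."""
    gx = ndimage.sobel(image.astype(float), axis=1)   # horizontal gradient
    gy = ndimage.sobel(image.astype(float), axis=0)   # vertical gradient
    # doubled-angle representation so that opposite gradients reinforce
    gxx = ndimage.gaussian_filter(gx * gx, block_sigma)
    gyy = ndimage.gaussian_filter(gy * gy, block_sigma)
    gxy = ndimage.gaussian_filter(gx * gy, block_sigma)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    # gradients are perpendicular to the ridge flow, hence the +pi/2 shift
    return np.mod(theta + 0.5 * np.pi, np.pi)

# toy usage on synthetic horizontal ridges (real input would be a fingerprint)
y = np.arange(64)[:, None] * np.ones((1, 64))
ridges = np.sin(2 * np.pi * y / 8.0)
of = estimate_orientation_field(ridges)
# close to 0 or 180 degrees; both denote horizontal ridge flow
print(np.degrees(of).round(1)[32, 32])
```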
for dealing also with medium and low - quality fingerprint images ,the line sensor method was developed which recently has been adapted to detect the oriented filaments in microscopy images .a dictionary based method has been proposed for estimating the of in latent fingerprints .many additional references can be found in and chapter 3 of .the zero - pole model has been introduced by sherlock and monro in 1993 .the flow fields generated by the zero - pole model resemble in some generality ofs of fingerprints , however they deviate significantly from ofs of a real finger .vizcaya and gerhardt improved the simple zero - pole model in 1996 by suggesting an additional nonlinear bending scheme to better fit the of generated by their model to real ofs .a global model based on quadratic differentials ( qd ) has been proposed by huckemann _et al . _ in 2008 .the zero - pole model is a special case of this more general model which has five geometrically interpretable parameters .the qd model better fits real ofs especially for the fingerprint of the type arch .further global models include the work by ram _ et al._ who apply legendre polynomials for of modelling .there are two main motivations for manually marking information in fingerprints .the first is the creation of ground truth information which can be used for evaluating the performance of human experts and algorithms regarding the estimation or extraction of the target information .and the second use case is the labeling for ( semi)automated retrival of information such as the foreground region , singular points , orientations or minutiae for fingerprint images which are too difficult for automatic processing by current state - of - the - art automatic fingerprint identification systems ( afis ) software .forensic examiners mark such information in latent fingerprints to identify suspects in criminal investigations .advancements in latent fingerprint recognition have the goal of minimizing the time and effort required by human experts for successful identifications . in order to compare the performance of different of estimation methods ,1782 orientations at specific locations in various fingerprints in have been manually marked by gottschlich __ in with a focus on low - quality regions affected by noise .et al._ addressed the problem of enhancing very low - quality fingerprints and suggested to manually mark the ofs used for contextual filtering .they proposed to mark local orientations , compute the delaunay triangulation and interpolate the orientation at unmarked pixel locations inside a triangle from the marked orientations at the three vertices of triangle .a disadvantage of this approach is that a large number of small triangles is required to approximate the true orientation in highly curved regions around singular points .et al._ and turroni _et al._ created a ground truth benchmark called foe following the same marking strategy ( 10 good and 50 bad quality prints ) .they compared the of estimation performance of several algorithms from literature on this benchmark .the foe benchmark has recently also been used for evaluating the performance of methods which reconstruct ofs from minutiae . 
in our work , of compression results using the foe benchmarkare reported in section [ sec : results ] .we note that xqd models can be used as an alternate interpolation method not suffering from the need of large numbers of support points at high curvature near singularities .latent fingerprint recognition is still considered to be a difficult problem .the level of noise for some fingermarks from crime scenes can be high and depending the surface from which fingermarks are lifted ( or directly photographed ) , a complex background can make the recognition task far more difficult in comparison with the processing fingerprints captured by a fingerprint sensor .typical first steps , among them fingerprint segmentation and of estimation , are challenging .recently , a novel image decomposition technique called dg3pd has been introduced which can better cope with these challenging images , see figures 9 and 10 in .the goal of fully automatic latent fingerprint identification has not yet been achieved .even state - of - the - art commercial latent identification software fails for a considerable amount of images and information still has to be manually marked by forensic experts in these cases .e.g. in a work by yoon _et al._ , information about the region - of - interest ( roi ) , the location of singular points and the orientation at some sparse locations is still assumed to be manually marked . in the light of these problems ,a subordinate target is to minimize the time and effort required by a human experts and the xqd model proposed in our work approach can be instrumental in achieving this .forensic and governmental databases can contain millions of fingerprint images .storing large volumes of data efficiently is a key issue which can be addressed by image compression .et al._ suggested to utilize the of for improving lossless fingerprint image compression .more specifically , they suggest to increase redundancy by scanning pixels along the orientation , instead of standard procedures like horizontal ( row by row ) or vertical scanning of images .larkin and fletcher proposed a method for lossy fingerprint image compression by decomposing an image into four elemental images which are highly compressible .one of these four images , called the continuous phase , can be converted into an of and vice versa . both approaches by thrn _et al._ and by larkin and fletcher can profit from improvements of the of compression by our xqd models .if in an application e.g. by a law enforcement agency , fingerprint images and their minutiae templates are stored together , an straightforward idea would be to reconstruct the of from the minutiae template . however , a recent evaluation of of reconstruction methods showed that all existing methods have weaknesses , and especially in proximity to singular points , all methods tend to be very inaccurate . in an analogy, minutiae templates can viewed as a form of lossy fingerprint image compressions .a survey of methods for reconstructing fingerprint images from minutiae templates can be found in .recently , shao _ et al . 
_ studied fingerprint image compression by use of dictionaries of fingerprint image patches .an additional discussion of texture image compression can be found in section 7.3 in .the efficiency of xqd models for of compression will be detailed in sections [ sec : ofcompression ] and [ sec : results ] .our methods for manually marking and automatically compressing fingerprint ofs are based on the _ quadratic differential _ ( qd ) model of huckemann _et al._ .consequently , we shall outline that model first .the basis of this model is given by a model for the arch type fingerprint .adding given singular point coordinates , _ i.e. , _ cores and delta , this can be generalized to model the other fingerprint types .the of of an arch type fingerprint is roughly controlled by two parameters : given a cartesian coordinate system in complex coordinates , the of is linked to the following complex function for and , otherwise , , as follows .the orientation angle at the coordinate can be obtained by with the main branch of the argument of a complex number taking values in .the parameter controls the coordinates of two singularities and ( 2nd order zeroes of ) along the abscissa and is a factor controlling vertical stretching . in fig .[ fig : arch ] a reasonable fit of the qd model to an arch type fingerprint is visualized where , , and the rotation and translation of the coordinate system have been adjusted .fingerprints of other types ( such as loops , double loops , and whorls ) contain an equal number of deltas and cores where a fingerprint can not contain more than two deltas / cores ; note that a whorl can be considered as a double loop in which the two cores agree or are of small distance .the following formula extends eq .( [ eq : arch ] ) to also model an of of a loop type fingerprint of which core and delta coordinates are encoded by the complex and , respectively : for and , otherwise , . here denotes the complex conjugate of . an of of a loop type fingerprint modeled by the qd model is visualized in fig .[ fig : loop ] .similarly , a double loop with complex core coordinates and complex delta coordinates , is modeled by the following : for and , otherwise , . for both models , eq .( [ eq : loop ] ) and eq .( [ eq : doubleloop ] ) , orientation angles are computed via eq .( [ eq : qdmangles ] ) . for a more comprehensive treatment of the qd model, we refer the reader to and the literature therein on geometric function theory ; there the inverse of is considered giving the _ quadratic differential _( qd ) the solution curves of having the orientations from eq .( [ eq : qdmangles ] ) .then , in particular the `` zeroes '' of are in fact poles of the qd and `` poles '' of are zeroes of the qd . as can be seen in fig .[ fig : qdm ] , the qd model can be used to quite well approximate the general ridge flow using few parameters only. however , the reader quickly recognizes areas in which the model significantly deviates from the evident ground - truth ridge flow which is an unavoidable effect due to the fact that in the qd model has only few degrees of freedom .consequently , we need to change or extend the model . in this paper, we propose to attach a variable number of local correction points to which we refer as _ _ thereby obtaining an _ extended quadratic differential _ ( xqd ) model . 
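before turning to the anchor - point extension , the following sketch illustrates the kind of background field the qd model produces . since the inline formulas of the qd model are not reproduced here , the code implements only the classical zero - pole field , which the text identifies as a special case of the qd model : each core adds , and each delta subtracts , half the argument of the displaced coordinate . the core and delta coordinates are made up for the example .

```python
import numpy as np

def zero_pole_orientation(z, cores, deltas):
    """sherlock-monro zero-pole field: a special case of the qd model.
    z, cores, deltas are complex numbers (x + iy image coordinates);
    returns an orientation angle in [0, pi)."""
    angle = 0.0
    for c in cores:
        angle += np.angle(z - c)     # each core contributes +1/2 winding
    for d in deltas:
        angle -= np.angle(z - d)     # each delta contributes -1/2 winding
    return np.mod(0.5 * angle, np.pi)

# loop-type example: one core above one delta (hypothetical coordinates)
core, delta = 0 + 1j, 0 - 1j
grid = [x + 1j * y for y in range(-2, 3) for x in range(-2, 3)]
field = [zero_pole_orientation(z, [core], [delta]) for z in grid]
print(np.degrees(field).round(0).reshape(5, 5))
```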
with these points the local of modeled by a qd can be corrected to better match with the ridge flow of a fingerprint .[ [ section ] ] + an _ _ is a -tuple where is a two - dimensional coordinate , an orientation angle , and and are two postive numbers .more precisely , denotes a coordinate at which the orientation given by a qd model is to be corrected ; denotes the orientation angle of the true field at which is to become the new orientation angle there ; finally , and control how significantly the orientation correction influences the neighboring orientations around .even more specifically , given the orientation angles of a qd ( see eq .( [ eq : qdmangles ] ) ) , a true orientation at an the new orientation angles at any coordinate is computed as where $ ] denotes a correction angle .the correction angle is defined as where denotes a function that assumes the value at and decays quickly to zero away from it . here is the coordinate represented w.r.t .a coordinate system defined by ( origin ) and ( rotation ) ; specifically , for example can be a tent function as in eq .( [ tent - fcnt : eq ] ) with . to obtain a higher degree of smoothness , in the applications we use the two - dimensional gaussian , given the of of a qd , the correction angle at can be defined recursively from a multiple number of as for and as in eq .( [ eq : correctionangle0 ] ) for .this yields our final xqd model in fig .[ fig : xqdm ] the effect of correcting a qd model s of using an increasing number of is visualized . one important application of our xqd model is to manually mark semi - automatically a fingerprint s of by an expert . from the many choices of orders of tasks , by preliminary experiments, we found the following strategy useful , the steps of which are visualized in fig .[ fig : mark ] . 1 .[ step : marksps ] manually mark the position of all cores and deltas of the fingerprint ( fig .[ fig : marksps ] ) .[ step : markinitialof ] manually mark an initial of ( possibly at sparse locations only , see fig .[ fig : markinitialof ] ) .[ step : adjustqdm ] adjust the qd model to the initial of by minimizing a suitable objective function , given by eq .( [ eq : objectivefunction ] ) , say ( fig .[ fig : adjustqdm ] ) .[ step : insertaps ] successively insert to the xqd model further minimizing the objective function ( fig .[ fig : insertaps ] ) .the final xqd model agrees , within a preselected error bound , say , with the manually marked of .this and other stopping strategies are discussed and illustrated in section [ sec : results ] .+ given the of of an xqd model , _ , and an initial of , we can measure the deviation of the xqd model to the initial of by the following objective function , which , depends on all parameters of the xqd model : we note that , if steps [ step : marksps ] and [ step : markinitialof ] have been performed manually by an expert , the remaining steps can be implemented to run ( semi-)automatically by utilizing a steepest descent method applied to the objective function eq .( [ eq : objectivefunction ] ) .the key property of the xqd model is its ability to compactly represent , while having the power of arbitrarly well approximating , a fingerprint s of .more specifically , recall that we count a total of at most real parameters describing a qd model : the parameters and ( see eq .( [ eq : arch ] ) ) describing size and stretching , a two - fold parameter for translation , and one more parameter for rotation ; further , a fingerprint can contain at most two cores and two deltas . 
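the following sketch is a simplified , single - pass version of the anchor - point correction ( the text defines the corrections recursively ) : a gaussian bump , rotated according to the anchor orientation and scaled by the two width parameters , blends a local angle correction into the background qd field . the constant background field and all numeric values are illustrative assumptions only .

```python
import numpy as np

def gaussian_weight(z, a, phi, s1, s2):
    """2d gaussian bump centred at anchor a, axes rotated by phi,
    widths s1 (along flow) and s2 (across flow); equals 1 at z = a."""
    d = z - a
    u = np.cos(phi) * d.real + np.sin(phi) * d.imag
    v = -np.sin(phi) * d.real + np.cos(phi) * d.imag
    return np.exp(-0.5 * ((u / s1) ** 2 + (v / s2) ** 2))

def xqd_orientation(z, qd_field, anchors):
    """add localized corrections to a qd background field.
    anchors: list of (a, theta, s1, s2); angles handled modulo pi."""
    theta = qd_field(z)
    for a, target, s1, s2 in anchors:
        # smallest signed difference between target and the model angle at a
        diff = np.mod(target - qd_field(a) + 0.5 * np.pi, np.pi) - 0.5 * np.pi
        theta += gaussian_weight(z, a, target, s1, s2) * diff
    return np.mod(theta, np.pi)

# toy background field (constant 30 degrees) with one anchor forcing
# 80 degrees at location 1 + 1j; values and sigmas are illustrative only
background = lambda z: np.deg2rad(30.0)
anchors = [(1 + 1j, np.deg2rad(80.0), 2.0, 1.0)]
for z in (1 + 1j, 3 + 1j, 10 + 10j):
    print(z, round(np.rad2deg(xqd_orientation(z, background, anchors)), 1))
```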
as an xqd modelis influenced by a variable number of each described by real parameters , an xqd model with consumes a total of at most real parameters , where is the number of singular points .given an uncompressed of , an xqd model can ideally be approximated automatically with a small number of to compress the field . in fig .[ fig : mark ] a manually marked fingerprint ( from foe , here assuming no ground truth of available ) has been modeled by a xqd with anchor points . at this pointwe stress that the xqd model requires a reasonable estimation of the singular points even if they lie outside of the fingerprint s region of interest .unfortunately , to date there is no method known that robustly estimates all singular points . beyond that, however , we are able to automatically obtain an xqd model from an of thereby obtaining an effective method for compressing ofs .it is general consent that fingerprint ofs are smooth except for the singularities at cores and deltas ( e.g. ( * ? ? ?* section 3.6 ) ) . in consequence , denoting with the complex orientation of the true field at pixel location and denoting by the complex orientation of a qd model at with the same cores and deltas we may assume that there is a lipschitz constant such that for all in the observation window , while of course is unbound near the singularities . for the followingwe assume that we have fit a qd model to a fingerprint s of with same singularities , such that we can assume ( [ lipschitz : eq ] ) for according to the algorithm introduced above , given an approximation to and a correction function , for our convergence considerations here we use not the one given by eq .( [ eq : correctionangle0 ] ) but a tent function for suitable , we first show that the next iterate is closer to than the previous .building on that we then propose an algorithm , theoretically assuring an asymptotically perfect adaption to the of . for fixed location and radius , with the lipschitz constant from ( [ lipschitz : eq ] ), the following hold : 1 . whenever , 2 .if for some and with then 3 .choosing we have the first assertion follows from construction . for the second set . then taking the maximum of the last expression over yields ( ii ) .( iii ) : with the choice for , setting , the right hand side of ( [ proof - lem1:eq ] ) attains its maximum at and the value at . with the algorithm of the following proof every fingerprint of can be perfectly adapted in the limit ,i.e. we detail one iteration step of the algorithm and then show its convergence .suppose after the -st iteration , , we have an approximation with .set and place a finite number of anchor points such that covers the fingerprint area , here .setting , define and . in every step , due to ( iii ) ( b ) of the above lemma, the approximation error within is below , everywhere else , due to ( iii ) ( a ) , the error will still be bound by . according to ( ii ) , the next iteration will change the error within to below .since the mapping maps the interval to itself , after at most iterations , with some constant independent of , we have now suppose that the sequence would not converge to zero but to to . then due to and pointwise monotonicity in iterates , yielding a contradiction .this proves that every of can be assymptotically perfectly adapted .here we report compression results using the ten good quality ofs provided by as ground truths . 
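a small helper makes the parameter counting explicit . it assumes the breakdown given in the text , i.e. at most 13 real parameters for the underlying qd model and 5 per anchor point ( two coordinates , one orientation angle and two widths ) , and compares this with storing a block - wise orientation grid directly ; the grid size and the 4 - byte encoding are assumptions .

```python
def xqd_parameter_count(num_anchor_points):
    # at most 13 real parameters for the underlying qd model, plus 5 per
    # anchor point (x, y, orientation angle and two widths), as in the text
    return 13 + 5 * num_anchor_points

def compression_factor(grid_shape, num_anchor_points, bytes_per_value=4):
    rows, cols = grid_shape
    raw = rows * cols * bytes_per_value                   # stored orientation grid
    model = xqd_parameter_count(num_anchor_points) * bytes_per_value
    return raw / model

# e.g. a 40 x 40 block-wise of stored as floats versus a 20-anchor xqd model
print(round(compression_factor((40, 40), 20), 1))
```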
as detailed in section [ sec : ofcompression ] , we have first manually marked singular points and afterwards automatically fit xqd models , employing the following optimization strategies . stopping criteria and specific improvement steps in each iteration depend on the choice of the main goal , which can be : * as fast as possible ( minimal runtime ) in order to achieve a small deviation of the reconstructed of from the ground truth . * as exact as possible ( minimal deviation from the ground truth of ) , where we allow e.g. at most anchor points . * as compressed as possible ( minimal file size of the stored xqd ) . * as sparse as possible ( minimal number of anchor points ) . note that in consequence of ( [ eq : numberparameters ] ) the model 's sparsity relates directly to the compression rate : minimizing the number of anchor points is equivalent to aiming for high compression , see table [ table : compress ] and [ table : compressfilesize ] . at every iteration step several choices are possible . one may optimize speed by simply adding a few anchor points without optimizing all possible parameters ( e.g. strategy s1 ) . alternatively , when accuracy is optimized ( e.g. strategy s4 ) , in every iteration step not only are all present anchor points optimized , but the choice of singular points and the other parameters of the underlying qd model are reconsidered as well . balancing the three main goals of speed , compression rate and accuracy of the reconstructed of allows for a range of intermediate strategies ( e.g. s2 and s3 ) . results for four example strategies using a grid spacing of 12 pixels for ground truth orientation locations are reported in table [ table : compress ] . in this work we have presented a semi - automatic tool based on a comprehensive xqd model for the orientation field of fingerprints , which achieves arbitrary precision at a very high compression rate in rather short time . a compression by a factor of , say , at an accuracy of a few degrees is achieved in a few seconds ( see table [ table : compress ] and [ table : compressfilesize ] ) . this semi - automatic tool can also be used for fast marking of orientation fields of fingerprints , be it for forensic application or in order to generate large orientation field benchmark databases . after labeling ( or accepting the tool 's proposals for ) singular points , location , orientation and scaling of a fingerprint image , a very sparse representation of the orientation field with arbitrary precision is built quickly and automatically . while we have used the benchmark foe dataset in order to give a proof of concept , in future work orientation field estimation methods ( e.g. ) can be combined with our xqd model , allowing for a next - generation comprehensive low - dimensional fingerprint template consisting of minutiae plus segmentation ( e.g. ) plus anchor points ( xqd ) plus at most 13 parameters ( qd ) . one may even consider placing the anchor points at minutiae locations ; then only their remaining parameters need to be recorded . additionally , ideal locations of anchor points ( these give the deviation from a conformal qd model ) deserve to be studied over large databases . the authors gratefully acknowledge the support of the felix - bernstein - institute for mathematical statistics in the biosciences and the niedersachsen vorab of the volkswagen foundation . stephan huckemann expresses gratitude for the support by the samsi forensics workshop 2015/16 . r. cappelli , d. maltoni , and f. turroni . benchmarking local orientation extraction in fingerprint recognition .
in _ proc . int . conf . on pattern recognition ( icpr ) _ , pages 1144 - 1147 , istanbul , turkey , august 2010 . | fingerprint recognition is widely used for verification and identification in many commercial , governmental and forensic applications . the orientation field ( of ) plays an important role at various processing stages in fingerprint recognition systems . ofs are used for image enhancement , fingerprint alignment , fingerprint liveness detection , fingerprint alteration detection and fingerprint matching . in this paper , a novel approach is presented to globally model an of combined with locally adaptive methods . we show that this model adapts perfectly to the true of in the limit . this perfect of is described by a small number of parameters with straightforward geometric interpretation . applications are manifold : quick expert marking of very poor quality ( for instance latent ) ofs , high fidelity low parameter of compression and a direct road to ground truth of markings for large databases , say . in this contribution we describe an algorithm to perfectly estimate of parameters automatically or semi - automatically , depending on image quality , and we establish the main underlying claim of high fidelity low parameter of compression . |
advances in computational power and increasingly accurate techniques for estimating the current state of the earth s atmosphere have significantly improved numerical weather prediction ( nwp ) . as state estimation erroris reduced due to improved methods of data assimilation , error in the model tendency plays an increasing role in the uncertainty of predictions at every temporal and physical scale . in 1978 , leith introduced a statistical technique to correct model tendency error , in which short model forecasts are compared to a time series of reference `` truth '' to estimate both state - independent model bias , and state - dependent error components which are approximated by a least - squares linear function of the model state .more recently , empirical correction has been employed with success in atmospheric models with relatively few degrees of freedom ( e.g. in ) , and low - dimensional modifications of the technique involving , for example , singular value decomposition ( svd ) of the state - dependent correction operator ( the least - squares linear function proposed by leith ) have proven successful in models with as many has degrees of freedom . in this study , we apply the original technique developed by leith to simple three - dimensional lorenz models , where in addition to testing the effectiveness of empirical correction , we aim to understand the dynamical ramifications of a statistical approach to the correction of model tendency error . the model tendency is defined as the change in state variables over one timestep of numerical integration , which we denote as a time derivative : where is the atmospheric state - vector , typically with degrees of freedom for nwp . note that represents the model state , whereas will represent the true system state , in terms of model variables .given the state of a physical system , consists of all the known physics , forcings , and parameterizations of sub grid - scale processes .to make a one - timestep model forecast , we approximate the true change in state variables over that time by the model tendency , , and the error in that approximation is called the _ tendency error _ : even with perfect estimates of the current state of the atmosphere , the model tendency error would quickly separate forecasts from the truth , due to the atmosphere s chaotic dynamics . as a result , techniques for reducing the model tendency error represent a current and active research area , and those that are applicable independent of the specific model are of special interest . clearly , one would like to improve the physics represented by from first principles . in what follows, we assume this improvement has met the limit of diminishing returns , and move towards a statistical approach .the general strategy of empirical correction is to compare short forecasts generated by a model to observations of the system being modeled over some training period .if the state - space of the system is well represented by the training period , and the model is a reasonable approximation of the true system , the forecast error statistics can be used to create an empirical correction that pushes the model closer to the truth _ at each timestep _ of numerical integration . 
adjusting the modelevery timestep reduces the nonlinear growth of tendency error , providing more effective error reduction than a posteriori statistical correction .this strategy is similar to nudging or newtonian relaxation in a data assimilation ( da ) context , where one is assimilating observations , except that here we are nudging with predicted , rather than observed , forecast error .the present study is an investigation of a three - step empirical correction procedure inspired by the work of leith , delsole and hou , and more recently danforth et .al . .we first test its effectiveness in synchronizing lorenz systems with varied parameter - values in a perfect model scenario .we then apply the correction to an alternative model derived by ehrhard and mller tuned to approximate the evolution of a toroidal thermosyphon , an experimental analogue to the original lorenz system .the `` true '' climate is represented by a long - time , high - dimensional computational fluid dynamics ( cfd ) simulation of the thermosyphon .an _ analysis _, which is an approximation of the true system state in terms of model variables , is then created by three - dimensional variational ( 3dvar ) data assimilation and used for training and verification of the empirical correction .this process mimics the application of empirical correction in an operational nwp setting .we also verify the corrected model by direct comparison with observations of the truth .the results in each experiment suggest that the correction procedure is effective for reducing error .however , there is an associated cost of this short - term error reduction , which is evidenced by substantial qualitative differences between the dynamics of the corrected model and the true system , differences that were not present in the uncorrected model .introduction of system - specific knowledge into the correction procedure is shown to mitigate some of that cost , while also improving error statistics further than the entirely general procedure .the paper is structured as follows . in sec .[ emcor ] we define the procedure for a general model .the application of the technique in the perfect model scenario is addressed in sec .[ pms ] , and in sec .[ tcm ] we present the thermosyphon model correction .finally , we discuss the results and conclude the paper in sec .the correction procedure employed in this experiment consists of three steps : ( a ) training , ( b ) state - independent correction and ( c ) state - dependent correction .the state - independent correction can be thought of as aligning the time - average of the model state with that of the true state .likewise , the state - dependent correction can be considered an alignment of the model variance with that of the truth . to determine the correction terms , we compare short model forecasts to observations of the true system over a training period in a process called_ direct insertion_. 
in general , comparing model forecasts to a true physical system requires estimates of the true system state in terms of the model state - variables .consider a vector time - series of such estimates , which we will call the `` reference truth '' .the amount of time , measured in model timesteps , between estimates is called the _ analysis window _ ; we assume it to be constant .the process of direct insertion begins with the generation of a time - series of duration- model forecasts , where each forecast in the time - series is initialized from the previous state in the reference truth .the first vector in the series , for example , will be , which is the model state resulting from an -timestep forecast started with initial condition . subtracting each of the model forecast states from the corresponding reference true state produces a third time - series which represents the forecast errors after timesteps .these errors result from differences between the model rate of change for each variable and the true rate of change , and they are commonly referred to as analysis corrections ( or increments ) .see fig .[ fig : dirins ] for a schematic of the procedure . represents a time series of the reference truth , and the analysis window represents the number of timesteps between estimations of the true system state . represents a time series of forecasts with duration equal to the analysis window , each of which is started from the previous true state .the time - average of the analysis corrections divided by the number of timesteps in the analysis window approximates the average ( state - independent ) model bias . ] finally , we separate each of the time series into anomalous and time - average components : where the expectation operator denotes averaging over the training period , and the primes denote _ anomalies _ , which are differences from the mean .the time - average components will be used for state - independent correction as described in sec .[ sec : si ] and the anomaly time - series will be used for state - dependent correction as detailed in sec .[ sec : sd ] .we turn our attention first to a state - independent correction of the form where the constant vector is the average model error ( bias ) to be determined .recall that our goal here is to empirically align the time - averages of the state - variables in the model to those of the true system .we call the time - averaged true system state the _ climatology _ , and we approximate it by , the average of the reference true state over the entire training period . the average of the analysis corrections over the training period provides an estimate for the systematic , state - independent error generated by the model during the analysis window , as explained in fig .[ fig : dirins ] . dividing by the number of timesteps in the analysis window , then , we approximate the model bias by , and the bias - corrected model tendency is thus given by note that at this point we are approximating the model tendency error by the model bias alone .we also wish to estimate any component of error that may depend on the system state , by approximating , where is a matrix operator to be described in the next section . to generate a linear state - dependent correction operator , we follow leith and delsole and hou , by first recomputing the forecast and correction time series in fig . 
[fig : dirins ] using the state - independent corrected model , , and then decomposing into mean and anomalous components .we seek an improved model of the form where includes both stages of correction .letting ] for each time . is the average true - state covariance matrix , and * l * is known as leith's state - dependent correction operator .when * l * operates on the current anomalous state , we can think of it as doing two things : ( 1 ) operates on , effectively relating the current state to the reference truth in the dependent sample , i.e. giving the best representation of the current state in terms of past states ; and then ( 2 ) operates on the result , determining what correction should be made .this allows the model correction to adjust to different regions of state - space , and explains why the state - dependent correction can be thought of as attempting to align the model and true - state variances . in the next section , we describe the application of this three - stage procedure to align lorenz systems with different parameter values in a perfect model scenario , and in sec .[ tcm ] we describe its application to couple a low dimensional model to a high dimensional toy climate simulation .we also note here that the term _ corrected _ will imply the application of both state - independent and state - dependent correction , unless explicitly stated otherwise .as a first step in the investigation of the correction technique , we consider its application to a model originally studied by lorenz .the system of equations ( [ eq : lorenz ] ) represents fluid flow between two plates , rayleigh - bénard convection , in which convection cells form for certain parameter ranges .however , with only slight modification ( the details of which appear in sec .[ tcmed ] ) , they also describe the flow in a natural convection loop .lorenz systems are covered exhaustively in the literature , and thus provide a familiar platform on which to perform preliminary tests of strategies for predicting the future state of chaotic systems . in this perfect model scenario ,the true system and the models share the structure of ( [ eq : lorenz ] ) and only differ in parameter values .specifically , a true or _ nature _ run was created by integrating the lorenz system with the standard parameter set : .models with the same and , but with -values varying from 25 to 31 in increments of 0.5 ( except for ) were the subjects for correction .for each of these 12 models , the correction algorithm was performed using the 4 different analysis windows and 8 timesteps , resulting in 48 distinct model - correction pairs in an exponential design .the training and testing of the corrected models is detailed in the following sections , and a picture showing one particularly positive outcome appears in fig .[ fig : traj1 ] . ) , a model with -perturbation -2 ( ) , and the corrected model using an analysis window of timesteps .all start from the same initial condition ( circle ) , and represent 2 time - units ( 200 timesteps ) of integration ( concluding with squares ) .note that the corrected model trajectory is well aligned with the true trajectory for much longer than the uncorrected model trajectory . even after it deviates noticeably ,the corrected model trajectory changes flow regimes ( switches lobes ) with the true trajectory .in contrast , the uncorrected model trajectory deviates from the true trajectory almost immediately , and remains in the initial lobe . ]
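for concreteness , here is one way the static leith operator could be assembled from the training time series ; a sketch under the assumptions that the anomalies are stored as rows and that a small ridge term is acceptable for the matrix inversion ( the variable names are not the authors ' ) .

```python
import numpy as np

def leith_operator(corrections, truth_states, window, eps=1e-6):
    """corrections  : (n, 3) analysis corrections of the bias-corrected model
    truth_states : (n, 3) reference-truth states the forecasts started from"""
    dxp = corrections - corrections.mean(axis=0)     # correction anomalies
    xp = truth_states - truth_states.mean(axis=0)    # true-state anomalies
    c_cross = dxp.T @ xp / len(xp)                   # cross-covariance matrix
    c_state = xp.T @ xp / len(xp)                    # true-state covariance matrix
    # the small ridge term (an assumption) keeps the inversion well conditioned
    l = c_cross @ np.linalg.inv(c_state + eps * np.eye(c_state.shape[0]))
    return l / window                                # per-timestep operator

# the corrected tendency is then, schematically,
#   dx/dt = f(x) + b + l @ (x - x_mean)
# with x_mean the training-period mean of the reference truth.
```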
a 100 time - unit nature run was generated by integrating system ( [ eq : lorenz ] ) from the initial condition ( see ) for 10000 timesteps of time units each , using fourth order runge - kutta .for each of the 12 models and 4 analysis windows , a time - series of short forecasts was generated by direct insertion ( see fig . [fig : dirins ] ) . as an illustrative example , consider training with an analysis window of .the first forecast , , is a 4-timestep model forecast started with the true initial condition .the second forecast , , is a 4-timestep model forecast started with the true state , and so on , resulting in 10000/4 = 2500 total short forecasts .the state - independent correction was then computed as described in fig .[ fig : dirins ] and sec .[ sec : si ] .next , the correction time - series was recomputed using a state - independent corrected version of the model , namely eq .( [ eq : sicortend2 ] ) . correction and true - state anomalies were calculated as in eq .( [ eq : tsdecomp ] ) , and the cross covariance and covariance matrices were determined for each time and averaged over the training sample to obtain and as outlined in sec .[ sec : sd ] . finally , the static leith operator * l * was computed and the training procedure was complete .note that the training design imparts a statistical disadvantage upon the use of wider analysis windows .specifically , doubling the analysis window halves the number of samples in the training period .the design was chosen , despite this prejudice , to more accurately reflect an operational implementation in which the training data is likely to be drawn from a fixed period of time . however , to further support the validity of comparisons between models corrected with different analysis windows , we note that letting the training period be , ensuring that the number of samples is held constant at 10000 , yields results that are qualitatively indistinguishable from those presented here .( color online ) plots of average rmse over 1000 trials for the uncorrected ( thick solid blue ) and corrected models ( all with ) using analysis windows of 8 , 4 , 2 and 1 , black dot , red dash - dot , green dash , solid magenta respectively . the exact model ( thin solid black ) is the same as the true system , but started from an initial condition perturbed randomly on the order of in each state variable .time units are on the -axis , where one time unit is 100 timesteps in the numerical integration . the technique's performance clearly degrades with widening analysis window . ] a new nature run , 10000 time - units ( one million timesteps ) in length , was generated starting from the last true state in the training period .the purpose of beginning at the end of the training period was to obtain an _ independent _ truth with which to test the effectiveness of the corrected models . for each of the 48 corrected models ,1000 randomly selected states from this new nature run were used as the initial state , and both the uncorrected and corrected models were integrated for 20 time units .in addition , an _ exact _ model , with the same parameters as the truth , was integrated from a random perturbation of that same initial condition , on the order of in each state variable .the purpose of the exact model is to represent an upper bound for the effectiveness of the empirical model error correction , i.e.
demonstrating a case where the impact of the -perturbation was corrected exactly , and forecast error comes only from the initial condition discrepancy .[ fig : traj1 ] depicts the results of a single test with a particularly positive outcome .trajectories of the truth ( solid black ) , uncorrected model ( blue dot ) , and corrected model ( red dash ) , all start from the same initial state and progress for two time - units .the corrected model trajectory stays close to the true trajectory for much longer than the uncorrected model trajectory , and even after deviating markedly , the corrected model trajectory still switches lobes with the true trajectory .two metrics were used to measure forecast accuracy : root mean square error ( rmse ) , and anomaly correlation ( ac ) .the rmse is given at time $t$ by $\mathrm{rmse}(t ) = \sqrt{ \tfrac{1}{3 } \sum_{i=1}^{3 } [ x^{f}_{i}(t ) - x^{t}_{i}(t ) ]^{2 } }$ , where $x^{f}$ is the model state and $x^{t}$ is the true state , and we are summing over the 3 state variables in the lorenz system .[ fig : allrmse ] plots the average rmse over 1000 trials performed for correction of the ( -perturbation of -2 ) model using the 4 different analysis windows .anomaly correlation is a metric frequently used in weather and climate modeling to determine the length of time for which a model forecast is useful .the ac is given by $\mathrm{ac}(t ) = \frac{ \mathbf{x}'^{f}(t ) \cdot \mathbf{x}'^{t}(t ) }{ | \mathbf{x}'^{f}(t ) | \ , | \mathbf{x}'^{t}(t ) | }$ , where $\mathbf{x}'^{f}$ and $\mathbf{x}'^{t}$ are the anomalous model state and anomalous true state , respectively , at a particular time .ac is essentially the dot product of the anomalous model state with the anomalous true state , normalized such that ac $= 1$ for a perfect model .a forecast is typically considered useful for as long as its ac remains above 0.6 . as with rmse ,the ac scores for each model , corrected and uncorrected , were averaged over 1000 trials to provide a good representation of model performance .see fig .[ fig : acbyraw ] for ac plots demonstrating the effects of changing analysis window length and parameter perturbation in the original model on the duration of useful forecasts .+ fig .[ fig : allrmse ] demonstrates that the tested empirical correction technique succeeds in reducing error in the perfect model scenario .however , the importance of frequent observations of the truth during the training period is highlighted by the approach of the average corrected model rmse towards the uncorrected model rmse with widening of the analysis window .it is also noteworthy that the corrected model remains almost as good as the exact model for a full time unit when observations are made every timestep . in this case the rmse remains below 2 more than 5 times as long as the uncorrected model .[ fig : acbyraw ] demonstrates that along with reduced error , empirical correction has the potential to provide forecasts that are useful for much longer .training with an analysis window of 1 timestep , the corrected model forecasts are useful for nearly 4 times longer than the uncorrected model forecasts . in light of the sensitivity of ac to analysis window length , the bottom panel of fig .[ fig : acbyraw ] suggests that the accuracy of parameter values matters less for the effectiveness of the corrected model , as measured by error statistics , than does the frequency of observations in training .however , it should be noted that error statistics are not the whole story .the ability of the corrected models to reproduce the qualitative dynamical _ behavior _ of the true system accurately , like switching of lobes in the attractor , is only indirectly indicated by reduced forecast error .we address this issue in sec .[ sec : mqd ] .
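both verification metrics , and the `` useful duration '' read off the ac curve , are straightforward to compute ; a short sketch ( the 0.6 threshold follows the convention stated above , everything else is an illustrative assumption ) :

```python
import numpy as np

def rmse(model_traj, true_traj):
    # root mean square error over the state variables at each time
    return np.sqrt(np.mean((model_traj - true_traj) ** 2, axis=1))

def anomaly_correlation(model_traj, true_traj, climatology):
    # normalized dot product of model and true anomalies at each time
    mp = model_traj - climatology
    tp = true_traj - climatology
    num = np.sum(mp * tp, axis=1)
    den = np.linalg.norm(mp, axis=1) * np.linalg.norm(tp, axis=1)
    return num / den

def useful_duration(ac_series, dt, threshold=0.6):
    # time until the anomaly correlation first drops below the threshold
    below = np.where(ac_series < threshold)[0]
    steps = below[0] if below.size else len(ac_series)
    return steps * dt
```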
+ a summary representation of the data in fig .[ fig : acbyraw ] is shown in fig .[ fig : skdurbyraw ] , where the duration of forecast usefulness is plotted vs. analysis window in the top panel , and vs. -perturbation in the bottom panel . however , these plots still focus only on two cross sections of the 48 total model - correction pairs that were tested in the perfect model scenario , specifically the 4 corrected models in the top and the 12 corrections across all -perturbations using an analysis window of 1 timestep in the bottom . to get a better sense of the relative importance of the two factors , fig .[ fig : skillful ] shows a surface plot of duration of useful forecasts for all combinations of analysis window and -perturbation tested .surprisingly , with an analysis window of 2 timesteps , the corrected models with the largest r - perturbation ( parameter error greater than 10% ) produce forecasts that are useful longer than those made by the uncorrected model with the smallest r - perturbation ( parameter error less than 2% ) . for systems with reasonably small model errors , this is an indication that empirical correction may improve forecasts more readily than parameter tuning .-perturbation tested , top surface corrected , bottom surface uncorrected . on the left face we can see the cross section represented by fig .[ fig : skdurbyrpert ] . using an analysis window of 2 timesteps ,corrected models with largest r - perturbation provide forecasts that are useful longer than those of uncorrected models with smallest r - perturbation . ] +we next investigate the effectiveness of the correction procedure in a more realistic situation , where the forecast model is structurally different ( in dynamics , dimension , parameterization , etc ... ) from the true system .consider a fluid - filled , vertically - oriented natural convection loop , or thermosyphon , with circular geometry , a schematic of which appears in fig .[ fig : loop ] . the constant temperature imposed on the wall of the lower half of the loop , , is greater than the constant temperature imposed on the wall of the upper half , , resulting in a temperature inversion . for large enough temperature differences , convection dominates , and the flow undergoes chaotic reversals of direction referred to as _ flow regime changes _ , while remaining laminar .these dynamics produce forecasting difficulties similar to those encountered in weather and climate prediction , and thus the thermosyphon provides a useful platform on which to test potential improvements to forecasting methods .
, while the bottom half is heated to , creating a temperature inversion .the three state variables in the low - dimensional model are ( 1 ) , the mass flow rate ( mean fluid velocity ) ; ( 2 ) , the horizontal temperature difference ; and ( 3 ) , the difference between the vertical temperature profile and the value it takes during conduction.,title="fig : " ] + the true system was represented by numerical simulation using the 2-d laminar navier - stokes equations along with the energy equation , and a finite - volume - based flow modeling software package ( fluent 6.3 ) was used to perform the numerical integration , see for details .almost 90 days of fluid behavior was generated , in which flow reversals occurred .we also note that for the rayleigh number used in this experiment , ra , the thermosyphon has two unstable convective equilibrium solutions corresponding to steady clockwise and steady counter - clockwise flow .the low - dimensional model used to make forecasts of the cfd simulation is the ehrhard - mller ( em ) system : where is proportional to the mean fluid velocity , is proportional to the horizontal temperature difference in the loop , and is proportional to the deviation of the vertical temperature difference from the value it takes during conduction .this system was derived from physical principles to model a natural convection loop , and for this study the parameters were tuned empirically to best match the flow reversal behavior of the cfd simulated thermosyphon .the primary difference between em and lorenz systems is , a function that determines the velocity dependence of the heat transfer between the fluid and the wall .this characteristic of the flow is ignored by the lorenz equations ( i.e. ) . varies as the third root of the magnitude of the mean fluid velocity for , and as a fourth degree polynomial in for to remain differentiable ; the reader may see for more detail .we note that when in the em equations ( [ eq : em ] ) , they are identical to the lorenz system ( [ eq : lorenz ] ) with .physically , the unitary geometric factor ( i.e. ) in em results from the forced single circular convection cell in a thermosyphon , as opposed to the unconstrained flow producing multiple cells between two plates .using a background forecast created with the em model , and observations of the cfd mean fluid velocity with gaussian noise added to simulate error , 3dvar data assimilation was performed to generate an analysis , or best guess of the true state of the system in terms of the variables of the forecast model .one segment of the analysis is used as the reference truth for training , and the remainder is used for testing .approximately 3 days of 3dvar analysis , corresponding to 432 time - units in the em forecast model , was used as the training period reference truth . 
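the em tendency can be written down compactly ; the sketch below uses one simplified parameterization chosen only so that , with the heat - transfer factor identically 1 , it reduces to the lorenz system with a unit geometric factor as stated above . the exact form , the tuned parameter values and the published quartic smoothing of the cube - root law are not reproduced in this excerpt , so everything here is an illustrative assumption .

```python
import numpy as np

def h(x1):
    """velocity dependence of the fluid-wall heat transfer: cube-root growth
    away from zero, with a quartic stand-in below a unit cutoff chosen to match
    value and slope at the cutoff (not the published fit)."""
    u = abs(x1)
    if u >= 1.0:
        return u ** (1.0 / 3.0)
    return (11.0 * u ** 2 - 5.0 * u ** 4) / 6.0

def em_rhs(state, alpha=5.0, beta=25.0):
    """assumed ehrhard-mueller-like tendency; alpha and beta are placeholders,
    not the empirically tuned values used in the paper."""
    x1, x2, x3 = state
    return np.array([
        alpha * (x2 - x1),                  # mean flow rate
        beta * x1 - h(x1) * x2 - x1 * x3,   # horizontal temperature difference
        x1 * x2 - h(x1) * x3,               # vertical temperature-profile deviation
    ])
```

with h(x1) set to 1 this collapses to the lorenz equations with a unit geometric factor , which is the property the text uses to relate the two systems .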
as in the perfect model scenario , fourth - order runge - kutta was used with a timestep of time - units .an analysis window of timesteps , corresponding to about 30 seconds of simulated flow , was used to match the frequency of observation in the data assimilation scheme .thus , 8640 short forecasts were used to compute the model bias and leith operator for the model , as outlined in sec .[ emcor ] .three forecast models were compared by verification against both the 3dvar analysis and direct observation of the mass flow - rate in the cfd simulated thermosyphon : ( 1 ) the uncorrected em model with parameters tuned to best represent observations of the cfd simulated mass flow rate ; ( 2 ) the tuned model with correction applied ; and ( 3 ) an em model whose parameters differ from the tuned model by 10% , _ with _ correction applied .the purpose of the third test - model is to gauge the relative capabilities of empirical correction and parameter tuning for error reduction and prolonging the usefulness of forecasts .a set of 1000 trial forecasts was performed , starting from randomly chosen points in the analysis _ after _ the training period .for each forecast , rmse and ac time - series were computed with respect to the analysis and averaged over all trials .see the top panel of fig .[ fig : emcor ] for the resulting average ac plot ( rmse not shown ) .we also verify the model forecasts against observed scalar mass flow - rate , for which time - series of relative error were averaged over the 1000 trials , pictured in the bottom panel of fig .[ fig : emcor ] .two details are important in the computation of the relative error .first , to compare model output to observations , it is necessary to convert the model state - variables to `` observation - space '' variables . in other words ,an observation operator determined by data assimilation was used to convert the model state - vector to an observation - space value , which is the predicted mass flow - rate of the system .second , the error is taken relative to the saturation point , which we define as the average absolute difference between the mass flow - rates of the system at randomly chosen points in time .thus , an average relative error near 1 means that the forecast model is no better than a random guess . + the results presented in fig .[ fig : emcor ] indicate that corrected models , tuned or not , produce smaller short - term forecast error on average than the uncorrected , optimally tuned model .corrected model forecasts are thus typically useful for longer .though this is an important benefit , average short - term error statistics may conceal considerable qualitative differences between model dynamics and those underlying the true system .stability of equilibrium solutions , and changes of flow regime characterized by aperiodic switching between otherwise confined regions of state - space are examples of qualitative characteristics for which it may be crucial that the model dynamics match those of the truth . in the next section we address the effect of empirical correction on the dynamical matching capability of the em forecast model .
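the two verification details just mentioned can be sketched compactly ; here the observation operator is assumed to be a simple callable supplied by the data assimilation step , and the number of random pairs used for the saturation value is an arbitrary choice .

```python
import numpy as np

def saturation_value(observed_flow, n_pairs=10000, seed=0):
    # average absolute difference between mass flow rates at randomly chosen times
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(observed_flow), n_pairs)
    j = rng.integers(0, len(observed_flow), n_pairs)
    return np.mean(np.abs(observed_flow[i] - observed_flow[j]))

def relative_error(model_states, observed_flow, obs_operator, saturation):
    # map model states to observation space (the predicted mass flow rate),
    # then take the error relative to the saturation point
    predicted = np.array([obs_operator(x) for x in model_states])
    return np.abs(predicted - observed_flow) / saturation
```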
to measure the accuracy of models with regard to matching the flow reversal behavior of the true system , forecasts were generated with both the corrected and uncorrected em models for 5000 initial states throughout the attractor , from the testing portion of the 90-day 3dvar analysis , and the time of the first predicted flow reversal was recorded for each one .we investigate the difference between the predicted times and the actual times ( from the analysis ) of first flow reversal , taken , so that positive values indicate late predictions while negative values indicate early predictions .see fig .[ fig : rcdiffbyic ] for plots of the results .+ + three unforeseen costs of empirical correction for this system , in terms of lost dynamical matching capability , can be summarized as follows : * stabilization of convective equilibrium solutions * elimination of flow reversal behavior for states in a neighborhood of either convective equilibrium * spurious dynamical asymmetry between lobes we address these costs in the sections that follow , examining the first two related phenomena in sec .[ sec : eqstab ] , and then explaining the spurious asymmetry in sec .[ sec : brokensym ] .jacobian analysis of the em equations ( [ eq : em ] ) provides analytical confirmation of the instability of the convective equilibria in the uncorrected model .specifically , the jacobian evaluated at each equilibrium has one negative real eigenvalue , whose eigenvectors are in the local direction of ( tangent to ) the stable manifold of the equilibrium , and a conjugate pair of complex eigenvalues with positive real part , whose 2-d eigenspace is locally tangent to the unstable manifold of the equilibrium .in fact , for both convective equilibria the positive real parts of these unstable eigenvalues are quite small , on the order of , indicating weakly repelling instability . in the following we explain analytically the mechanism by which empirical correction overcomes this weak repulsion , producing a forecast model with _ attracting _ , and thus stable , convective equilibria , see fig .[ fig : falsestab ] .empirical correction of the em model effectively alters the right - hand side of the differential equations ( [ eq : em ] ) by first adding a constant related to the bias term , and then adding a term that depends linearly on the model state , i.e. , something related to . letting the vector - valued em differential equation , and be the corrected equation , we write where can be thought of as the computed bias term and leith operator , respectively , for an infinitesimal timestep . because of the nonlinearity of the system , we can not determine the exact relationship between the infinitesimal correctors and , and the bias term and leith operator , respectively , that we compute using a timestep of and analysis window of timesteps .however , since we discretize in numerical integration , we _ can _ determine the and that we actually apply .we effectively approximate the correction terms within the fourth order runge - kutta scheme .now , armed with an analytical representation of the differential equations , we note the following relationship between the corrected model jacobian , and that of the uncorrected em model : since the constant bias term disappears and operates on a translation of the model state .
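this relation between the two jacobians is easy to probe numerically : the constant bias drops out and the ( infinitesimal ) leith operator simply adds to the uncorrected jacobian . a sketch , assuming the per - timestep operator has been converted to a continuous - time rate l_inf and using a finite - difference jacobian :

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    # finite-difference jacobian of the tendency f at state x
    n = len(x)
    jac = np.zeros((n, n))
    f0 = f(x)
    for k in range(n):
        dx = np.zeros(n)
        dx[k] = eps
        jac[:, k] = (f(x + dx) - f0) / eps
    return jac

def corrected_jacobian(f, x, l_inf):
    # jacobian of f(x) + b + l_inf @ (x - x_mean): the bias disappears,
    # the linear correction contributes its own matrix
    return numerical_jacobian(f, x) + l_inf

def is_attracting(f, equilibrium, l_inf):
    # true if every eigenvalue of the corrected jacobian has negative real part
    eigvals = np.linalg.eigvals(corrected_jacobian(f, equilibrium, l_inf))
    return bool(np.all(eigvals.real < 0.0))
```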
we evaluate the jacobian of the corrected model at each of the convective equilibria , and determine its eigenvalues .indeed , the real part of the complex conjugate eigenvalues , for each convective equilibrium , is _ negative _ for the corrected model , and thus the convective equilibria have become stable .the equilibria attract all states inside neighborhoods around them , which thereby separate state - space near the attractor into regions whose trajectories will either change flow regime at least once , or not at all . see fig .[ fig : falsestab ] for an example of this effect .furthermore , any trajectories that land in one of these neighborhoods after only one or two flow reversals , which might occur within the expected 17-minute duration of useful forecasts for the corrected model reported in fig .[ fig : emcor ] , will then approach steady convection .this behavior is in qualitative _ opposition _ to that of the true system , _ and _ that of the original uncorrected em model , for which steady convection in a single direction is an unstable equilibrium . the size discrepancy between left and right - lobe regions attracted to the convective equilibria of the corrected model revealed in fig .[ fig : rcdiffbyic ] demonstrates that empirical correction breaks the symmetry of the em system .as in the conventional lorenz system ( [ eq : lorenz ] ) , the em system ( [ eq : em ] ) is symmetric under the mapping . again letting the vector - valued em differential equation , this symmetry implies that commutes with a certain matrix , i.e. \ ] in fact , empirical correction breaks this symmetry in two ways .first , recall that after bias correction alone we have changed the em system by adding a constant vector to the right - hand side of the differential equation . letting the bias - corrected differential equation , _ does not _ commute with unless there is zero bias in and , i.e. \ ] where can be any constant , and we recall that can be thought of as the computed bias term for an infinitesimal timestep . note that even if no bias in or existed , the probability of statistically computing a bias term that would preserve the symmetry of the em system is zero . in the unlikely case that a bias term is computed that preserves symmetry , or such a bias term is forced , state - dependent correction will break it .
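the symmetry - breaking claim can also be checked directly . for the lorenz / em systems the relevant mapping is ( x1 , x2 , x3 ) -> ( -x1 , -x2 , x3 ) , so the sign matrix below is an assumption consistent with that ; the corrected tendency keeps the symmetry only if the bias vanishes in the first two components and the leith operator commutes with the matrix .

```python
import numpy as np

S = np.diag([-1.0, -1.0, 1.0])   # assumed symmetry matrix for the em/lorenz systems

def tendency_equivariant(f, x):
    # check s f(x) == f(s x) for the uncorrected tendency at a sample state
    return bool(np.allclose(S @ f(x), f(S @ x)))

def correction_preserves_symmetry(bias, leith, atol=1e-12):
    # the bias must vanish in the first two components and the leith
    # operator must commute with s for the corrected model to stay symmetric
    bias_ok = np.allclose(bias[:2], 0.0, atol=atol)
    commute_ok = np.allclose(S @ leith - leith @ S, 0.0, atol=atol)
    return bool(bias_ok and commute_ok)
```

as the text notes , a statistically estimated bias and operator will essentially never satisfy these conditions , so both stages of correction break the symmetry .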
assuming that does commute with , and letting be the fully corrected differential equation , then in other words, commutes with if and only if commutes with .this forces the computed to be of the form \ ] ] where the can be any constants .of course the probability of computing such an statistically is also zero .therefore , both bias and state - dependent correction break the symmetry of the em model .undesirable effects of empirical correction include the breaking of system symmetry , and altered stability of equilibrium solutions which results in large regions of state - space for which flow - reversals never occur in corrected model forecasts .we emphasize that the qualitative behavior of the corrected model is substantively different from that of the uncorrected model , which matches the behavior of the cfd simulated truth .this is true despite the fact that the corrected model shows improved average error statistics .however , it is possible to adjust the correction procedure to mitigate this effect by directly incorporating dynamical knowledge of the true system , which is the subject of the next section .note that in doing so , we sacrifice the general applicability of the technique . ) .the uncorrected em model trajectory ( blue ) behaves as it should , winding away from the equilibrium at the center of the lobe , whereas the trajectory of the corrected model ( red ) collapses toward the equilibrium solution , indicating the false stability produced by the correction .the source of the false stability is the overcorrective nudging visible as jumps ( every 30 seconds , the length of the analysis window ) on the red curve .the correction is attempting to lengthen flow regimes , a consequence of the blue regions near the equilibria in the top of fig .[ fig : rcdiffbyic ] . ] to encode dynamical knowledge of the system in the empirical correction procedure , the state - space is partitioned into regions based on the qualitative behavior of the system , and then a separate bias term and state - dependent correction operator is computed _ for each region_. for example , in the context of weather forecasting , the state - space of the atmosphere could be divided by stage in the el nio oscillation , or by day and night , or local season for regional models .[ fig : rcdiffbyic ] suggests two ways to partition the state - space in the present context : ( 1 ) by flow regime direction ( lobe ) ; or ( 2 ) by distance from the nearest convective equilibrium solution . in each case , the state - space is decomposed into two regions , left / right lobe , or near / far from equilibrium , respectively .in addition to testing the correction procedure using each of these strategies individually , a procedure applying them simultaneously , which results in a partition of the state - space into four regions , is also tested . to generate lobe - dependent bias correction terms and state - dependent correction operators , two regions , corresponding to flow regimes of opposite direction , are defined by noting that their union is the entire state - space .physically , represents all states undergoing clockwise ( counter - clockwise ) convection .next , the direct insertion procedure illustrated in fig . 
[fig : dirins ] is modified to produce two subsequences of the analysis correction time series , and , that correspond to two subsequences of the analysis time series , and , respectively , where and is defined similarly .these subsequences are separated into mean and anomalous components , as in eqs .( [ eq : tsdecomp ] ) , using means over each individual subsequence . finally , the separate correction terms and operators are computed by : for . to apply the lobe - dependent correction , at every timestep of numerical integration the current state is determined to be in either or , and the appropriate bias correction term and state - dependent operator are applied to advance the model .+ a procedure analogous to that for lobe - dependent correction is applied here .two regions , corresponding to near and far from equilibrium , respectively , are defined by where and are the convective equilibria ( estimated from the parameters of the uncorrected model , ) , and the critical value is a parameter for the procedure .we are effectively approximating the neighborhoods attracted to the convective equilibria by spheres of radius .for all results shown in the paper , the critical value was used , though error statistics and dynamical matching capability were virtually unaffected by changing this parameter by 25% in either direction .this range of critical values was tested as estimates of the average radius of the dark red region in the left lobe of fig .[ fig : rcdiffbyic ] ( bottom ) . continuing with the correction scheme ,direct insertion is modified as in the lobe - dependent correction , and the region - specific bias terms and state - dependent operators are calculated as in eqs .( [ eq : lkcorterms ] ) , substituting for .application of the correction to a forecast model also proceeds in the same fashion . defining the lobe regions as in the lobe - dependent section , and the equilibrium regions as in the previous section , we define the four regions for simultaneous lobe and equilibrium - dependent correction by so that we modify direct insertion to produce four subsequences of analysis increments , each paired with the appropriate subsequence of the analysis time series .note that the critical value used in defining the regions does not depend on the lobe in this scheme . allowing a different for each lobe is a possible modification that was not tested .we compute the bias terms and leith operators as in eqs .( [ eq : lkcorterms ] ) , substituting for , where now . again , application of the correction proceeds as in the individual dynamically informed schemes , where the current state is determined to be in one of the four defined regions , and the appropriate bias term and leith operator are used to advance the model .first , we discuss the results of applying the dynamically uninformed , and entirely general correction procedure as described in sec .[ emcor ] , to couple the three - dimensional em forecast model to the high - dimensional cfd simulated thermosyphon . then we report the results of modifying the correction procedure to directly incorporate dynamical knowledge of the system , as detailed in sec .[ sec : idk ] .the ability of the correction procedure to overcome parameter inaccuracies , as suggested by the results in the perfect model scenario , inspired the comparison of three forecast models .* tuned , uncorrected * untuned , corrected * tuned and corrected fig .
[fig : emcor ] shows that empirical correction produces forecasts that remain useful longer , and demonstrate reduced error in this mock - operational setting , as well as in the perfect model scenario .verification against both analysis and observations allows more confidence in the success of the procedure in aligning forecasts made by the low - dimensional model with the cfd simulated thermosyphon .additionally , the results constitute evidence that the correction procedure is more effective than fine - tuning of parameters for improving error statistics in this more realistic setting , as well as in the perfect model scenario .this is not to say that parameters should not be tuned , but rather that empirical correction is a cheaper and more effective avenue for reducing forecast uncertainty in the present context. empirical correction of the em forecast model comes at the cost of decoupling qualitative dynamics from those of the true system , however . as shown in fig .[ fig : rcdiffbyic ] , a large region is created in which initial states have trajectories under the forecast model that behave completely differently from how they would in the true system .an example of such a trajectory appears in fig .[ fig : falsestab ] . while the general empirical correction procedure reduces forecast error for many initial states , it produces a forecast model that is entirely useless for others . by introducing dynamical knowledge into the correction procedure ,the results of which appear in the next section , we greatly reduce the number of such initial states , and also further reduce average forecast error .+ in an effort to produce forecast models that more accurately reflect the qualitative dynamics of the true system , two types of system - specific dynamical knowledge were directly incorporated into the empirical correction procedure : ( 1 ) flow regime direction of the current system - state , and ( 2 ) distance between the state and the nearest convective equilibrium solution .these dynamical cues were incorporated individually , and also simultaneously , resulting in three dynamically informed , empirically corrected forecast models , as outlined in sec .[ sec : idk ] .[ fig : acmfreall ] shows average ac and mass flow - rate relative error over 5000 trials for the three dynamically informed and corrected models , as compared to the original biased model and the dynamically uninformed , corrected model . as described in the caption to the figure , encoding the current flow regime into the correction procedure results in a forecast model that is useful for almost twice as long as the original biased model , and doubles the improvement that was gained by dynamically uninformed empirical correction . encoding the distance to the nearest equilibrium solution ,on the other hand , does not greatly prolong usefulness , and in fact reduces it slightly when applied simultaneously with lobe - dependent correction .
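a compact way to implement the dynamically informed variants outlined in sec . [ sec : idk ] is to label each training state by region and fit a separate bias and leith operator per label ; the lobe is read off the sign of the flow - rate variable and the equilibrium cue from the distance to the nearest convective equilibrium . the labelling rule , the pseudo - inverse regularization and the function names below are illustrative assumptions .

```python
import numpy as np

def region_label(x, equilibria, radius, use_lobe=True, use_equilibrium=True):
    # map a state to one of up to four regions: (lobe, near/far from equilibrium)
    lobe = int(x[0] < 0.0) if use_lobe else 0
    near = 0
    if use_equilibrium:
        dist = min(np.linalg.norm(x - q) for q in equilibria)
        near = int(dist < radius)
    return (lobe, near)

def per_region_training(truth_states, corrections, labels, window):
    # one bias vector, leith operator and reference mean per region label
    out = {}
    for lab in set(labels):
        idx = [i for i, l in enumerate(labels) if l == lab]
        xs, cs = truth_states[idx], corrections[idx]
        xp = xs - xs.mean(axis=0)
        cp = cs - cs.mean(axis=0)
        bias = cs.mean(axis=0) / window
        leith = (cp.T @ xp / len(idx)) @ np.linalg.pinv(xp.T @ xp / len(idx))
        out[lab] = (bias, leith / window, xs.mean(axis=0))
    return out
```

at forecast time the current state is labelled with the same rule at every timestep and the matching bias and operator are applied , exactly as in the single - region scheme .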
in fig .[ fig : rcdiffbyicall ] , we see how the three forecast models resulting from dynamically informed empirical correction compare with the dynamically uninformed , corrected model , with regard to matching the qualitative behavior of the true system .we see a trend of improvement , characterized by smaller regions of states whose dynamics are different in the models than in the cfd simulated thermosyphon ( dark red regions ) , as we apply first lobe - dependent , then equilibrium - dependent , and finally simultaneously lobe and equilibrium - dependent correction .results shown in figures [ fig : acmfreall ] and [ fig : rcdiffbyicall ] suggest that encoding flow regime direction into the correction procedure primarily enhances forecast statistics , while encoding distance from equilibrium primarily enhances dynamical matching. simultaneous inclusion of the two types of dynamical cues results in the best dynamical matching , with only a slight cost in average forecast accuracy . for further evidence of this summary conclusion ,consider table [ tab:1 ] ..[tab:1 ] median absolute differences between predicted and actual times of first flow reversal ( row 1 ) , along with percentage of trials for which the first flow reversal was predicted within 1 and 2 minutes , ( rows 2 and 3 , respectively ) , for the uncorrected model ( m ) , dynamically uninformed corrected model ( cm ) , lobe - dependent corrected model ( ld ) , equilibrium - dependent corrected model ( ed ) , and simultaneously lobe and equilibrium - dependent corrected model ( ld - ed ) . in the first row , the median absolute differences between predicted and actual times of first flow reversal over the 5000 trials are listed for the uncorrected model ( m ) , dynamically uninformed corrected model ( cm ) , lobe - dependent corrected model ( ld ) , equilibrium - dependent corrected model ( ed ) and simultaneously lobe and equilibrium - dependent model ( ld - ed ) .the means are highly skewed for the cm , ld and ed models , due primarily to the large basins of attraction for the left - lobe equilibrium solution , see fig . [fig : rcdiffbyicall ] top left , right and bottom left , respectively , and thus we show only the medians .the medians show that both the ld and the ld - ed corrected models predict the first flow reversal more accurately than the uncorrected model .the bottom two rows show the percentage of the 5000 trials for which the first flow reversal was predicted within 1 and 2 minutes , respectively , for each of the models . again the ld and ld - ed corrected models show the best performance .note that the ld - ed model boosts both the 1 and 2-minute success rates by approximately 10% over the uncorrected model .we note , however , that even the ld - ed model exhibits a spuriously stable convective equilibrium in each lobe , fig .
incorporating dynamical knowledge of the true system in the correction procedure , at least through the partitioning of state - space as we have done here , is not enough to avoid the stabilizing effect of empirical correction on the equilibria of the em model explained in sec .[ sec : eqstab ] .it is plausible that models of other systems with weakly repelling ( attracting ) equilibria might be subject to similar stabilizing ( destabilizing ) effects under empirical correction .such effects might be mitigated by the strategy employed in the present work , but probably not avoided .( color online ) difference ( in minutes ) between predicted and actual time of first flow reversal , plotted by initial state , for the ( a ) corrected , ( b ) lobe - dependent corrected , ( c ) equilibrium - dependent , and ( d ) lobe equilibrium - dependent models . as in fig .[ fig : rcdiffbyic ] , the difference was taken , so that positive values ( towards red ) indicate late predictions while negative values ( towards blue ) indicate early predictions .inset partial histograms show the number of forecasts ( out of 5000 ) predicting the first flow reversal within 3 minutes of the truth ( green ) and predicting that a flow reversal will never happen ( red ) ; the axes have the same scale for comparison , and the bar colors correspond to the dot colors .( b ) applying lobe - dependent correction ( as opposed to ( a ) dynamically uninformed correction ) increases the number of initial states for which the first flow reversal is predicted accurately , visible as a larger number of green dots ( bigger green bar ) .also , although the region of dark red ( initial states attracted to convective equilibrium ) in the left lobe has decreased in size , the one in the right lobe has inflated , again as compared to the top left .( bottom left ) equilibrium - dependent correction shrinks the left - lobe red region without inflating the one in the right lobe .however , it maintains the region of initial states for which flow reversal predictions are slightly early ( light blue ) , which was reduced by the lobe - dependent correction .( bottom right ) applying the simultaneously lobe and equilibrium dependent correction , in which there are four different correction regions , the forecast model demonstrates the smallest region of initial states whose qualitative dynamics are different from the cfd simulated thermosyphon . ] it is apparent from the results of this work that the empirical correction technique tested here is successful in reducing forecast error in the low - dimensional setting . in both the perfect model scenario and the mock - operational experiment , improved error statistics and prolonged usefulness of forecasts were demonstrated by the corrected models .furthermore , the empirical correction procedure was shown to provide greater improvement in average forecast accuracy than fine - tuning of parameters . however , this positive result comes at the cost of altering some important dynamical characteristics of the model .details and implications of these results are discussed in the next two sections .
in the third section , we make an observation about the relative impact of state - independent and state - dependent correction , in the present context , to conclude the paper .in the perfect model scenario , empirical correction of models with the greatest parameter error provided forecasts with smaller short - term average error , and that were useful for longer , than those made by the uncorrected models with least parameter error . similarly , empirically correcting the em model with 10% error in _ every _ parameter ( measured from tuned values ) produced a forecast model that outperforms the tuned , but uncorrected model in predicting the cfd simulated thermosyphon . in each experiment ,superior performance is evident in terms of anomaly correlation and average forecast error , measured against 3dvar analysis , and is verified against observed mass flow - rate of the cfd simulation in the toy climate experiment .we do not present these results as evidence that empirical correction should replace parameter tuning , nor even that the former is better than the latter in any well - defined sense .however , the results do suggest that empirical correction could be a viable complement to the tuning of model parameters . particularly as degrees of freedom become large , for example in some currently operational numerical weather models , the computational cost of parameter tuning is very large in comparison to that of empirical correction , when appropriately modified for such models ( see ) .an example of such a modification is to compute the first principal components ( pcs ) of by singular value decomposition ( svd ) , where is determined so that a certain percentage of the state covariance is explained by these first pcs .a combined tuning / correction approach could reduce the number of model integrations necessary in the parameter tuning process without sacrificing model accuracy .however , the dynamical ramifications of this strategy must be considered , as we explain in the next section . the reduction in average forecast error provided by empirical correction belies fundamental dynamical disturbances born out of the correction procedure .stabilization of equilibrium solutions , which results in large regions of initial states for which the corrected model forecasts behavior in opposition to that of the true system , follows from the dynamical modification imposed by empirical correction .in addition , the symmetry of the model system is broken by empirical correction .though these costs can be mitigated somewhat by hard - wiring system - specific dynamical cues into the correction procedure , they can not be eradicated without more fundamental alterations of the technique , e.g. , forcing the bias term and leith operator to preserve system symmetry . in operational practice, empirical correction is known to introduce imbalances , e.g. violating geostrophy , necessitating some mechanism for smoothing the flow into a physically viable region of state space .in fact , it may be impossible to avoid all dynamical inaccuracies resulting from empirical correction , and even if theoretically possible , it would likely be impractical to do so in any operational setting . in considering the application of the technique in operational settings , then ,it must be determined if the effects of misrepresented dynamics can be reduced to a tolerable level on a case - by - case basis .
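one hedged sketch of the svd - based modification mentioned above : build the leith operator only in the subspace of the leading principal components of the true - state anomalies , keeping enough components to explain a chosen fraction of the covariance ( the threshold and names are illustrative ) .

```python
import numpy as np

def truncated_leith(correction_anoms, state_anoms, var_fraction=0.95):
    """correction_anoms, state_anoms : (n, d) anomaly time series (rows are times).
    returns a reduced operator and the pc basis it acts in."""
    u, s, _ = np.linalg.svd(state_anoms.T, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    keep = int(np.searchsorted(explained, var_fraction)) + 1
    basis = u[:, :keep]                       # leading principal components
    xp = state_anoms @ basis                  # anomalies projected onto the pcs
    c_cross = correction_anoms.T @ xp / len(xp)
    c_state = xp.T @ xp / len(xp)
    l_reduced = c_cross @ np.linalg.inv(c_state)
    return l_reduced, basis                   # apply as l_reduced @ (basis.T @ x_anom)
```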
in the ideal situation , regions of state - space that would be dynamically misrepresented under empirical correction could be reduced , by minor modifications to the correction procedure , to encompass only unrealistic or unlikely physical states . in any case , the technique presented in this study should not be applied without such considerations . state - independent error correction by itself produces almost no improvement in any of the forecast models in this study ( not shown ) .this is in contrast to what has been observed in operational weather and climate model studies , where state - independent bias correction typically outperforms state - dependent correction in reduction of forecast errors .the inaccuracies of ad - hoc forcings included in such models to compensate for external and/or irresolvable phenomena ( e.g. solar and cloud forcings , respectively ) are likely responsible for a large component of the bias . in light of the lack , or at least minimal nature of such external and sub - gridscale influences in the toy models considered here ,the ineffectiveness of bias correction is logically consistent with this explanation .the state - dependent leith operator is entirely responsible for the success of the corrected models in this study . in the perfect model scenario this makes sense because the difference between the forecast models and the `` truth '' model is inherently multiplicative , i.e. the parameters are coefficients weighting the interaction between state - variable values and thus resulting errors _ must _ depend on state . for the em model of the cfd system , it seems that errors resulting from the low dimensionality of the forecast model may also be multiplicative in nature .if this is the case , state - dependent correction may reduce error patterns in operational models that result from reduced dimensionality , e.g. coarse resolution .the correction will not likely compensate for processes that are irresolvable due to coarse resolution , but rather may reduce the propagation of error resulting from the omission of such phenomena .this hypothesis is consistent with demonstrated improvement of local behavior in state - dependent corrected atmospheric models with degrees of freedom . in previous studies of state - dependent correction in models that are much more realistic than those considered here , resulting error reduction has been minuscule in comparison to what is achieved by bias correction .however , this is not cause to reject the usefulness of parameterizing state - dependent error . though globally averaged error reduction may not be significant , improvement in the local behavior of models can have a large impact on forecast uncertainty , particularly in an ensemble strategy where state - dependent correction can increase the spread in previously unsampled state - space directions .simulations and experiments were performed on the vermont advanced computing center ( vacc ) 1400-processor cluster , an ibm e1350 high performance computing system .the authors would like to thank ross lieb - lappen for assistance in preparing this manuscript .this study was supported by nsf grant # dms0940271 , the mathematics and climate research network , and a national aeronautics and space administration ( nasa ) epscor grant . | improving the accuracy of forecast models for physical systems such as the atmosphere is a crucial ongoing effort .
errors in state estimation for these often highly nonlinear systems have been the primary focus of recent research , but as that error has been successfully diminished , the role of model error in forecast uncertainty has duly increased . the present study is an investigation of a particular empirical correction procedure that is of special interest because it considers the model a `` black box '' , and therefore can be applied widely with little modification . the procedure involves the comparison of short model forecasts with a reference `` truth '' system during a training period in order to calculate systematic ( 1 ) state - independent model bias and ( 2 ) state - dependent error patterns . an estimate of the likelihood of the latter error component is computed from the current state at every timestep of model integration . the effectiveness of this technique is explored in two experiments : ( 1 ) a perfect model scenario , in which models have the same structure and dynamics as the true system , differing only in parameter values ; and ( 2 ) a more realistic scenario , in which models are structurally different ( in dynamics , dimension , and parameterization ) from the target system . in each case , the results suggest that the correction procedure is more effective for reducing error and prolonging forecast usefulness than parameter tuning . however , the cost of this increase in average forecast accuracy is the creation of substantial qualitative differences between the dynamics of the corrected model and the true system . a method to mitigate the structural damage caused by empirical correction and further increase forecast accuracy is presented . |
over the past few years there has been a growing interest in the investigation of possible high - energy quantum - gravity - induced deviations from lorentz invariance , that would induce modifications of the energy - momentum dispersion ( on - shell ) relation .this hypothesis finds support in preliminary results obtained in some popular approaches to the study of the quantum - gravity problem , most notably approaches based on spacetime noncommutativity " or inspired by loop quantum gravity " .it is expected that the scale governing such deviations from lorentz symmetry is the scale where the same frameworks predict a breakdown of the familiar description of spacetime geometry , the quantum - gravity energy scale " , here denoted by , which should be roughly of the order of the planck scale ( ev ) . while the study of particles with energies close to the planck scale is far beyond our reach in particle - physics laboratories and even in astrophysical observatories , it is possible to look for the minute effects that planck - scale deviations from lorentz symmetry produce for particles with energies much lower than the planck scale . from this perspective of the search of small leading - order corrections, some astrophysical phenomena could provide meaningful insight , by either producing signals of the modified dispersion , or alternatively , providing constraints on the relevant models .indeed , astronomical observations have already set valuable limits on these leading - order corrections and as such on some of the alternative scenarios for quantum gravity " .most insightful are the studies based on the high - energy modifications of particle speeds and on the modifications to high - energy reaction thresholds ( as relevant for the ultra - high - energy cosmic - rays or very - high - energy -rays ) .the rapid development of astronomical instrumentation over the last years has improved significantly our ability to test these scenarios .in particular the recent results of the fermi telescope collaboration , using a time - of - flight analysis of a short gamma - ray burst grb090510 , achieved a sensitivity to planck - scale effects and set for the first time a limit of ( for the case of effects introduced linearly in the quantum - gravity scale , defined below as ) . 
the conceptual perspective that guides these studies is consistent with the history of other symmetries in physics , which were once thought to be fundamental but eventually turned out to be violated .there is no essential reason to believe that lorentz symmetry will be spared from this fate .if the symmetry is broken at a scale , we should expect leading - order corrections to arise even at much lower energies , and hence it is just natural to explore their possible implications .however , in view of the importance of lorentz symmetry in the logical consistency of our present formulation of the laws of physics , one should ask which modified dispersion relations can be placed into a viable and consistent theory ?the analysis presented here intends to contribute in this direction .we investigate this question from a rather general perspective , aware of the fact that it is quantum - gravity research that provides the key motivation for these studies , but also in principle open to the possibility that the conjectured modifications of the dispersion relation might have different origin .we consider a generic low - energy " , , leading - order modification of the dispersion relation of the form : is here introduced only for the consistency of in the description of speeds with the analogous parameter most commonly used in the relevant literature . ] the power of this leading correction , , and the parameter are model - dependent and should be determined experimentally .the conventional speed of light constant , , remains here the low - energy limit of massless particles speed , and we put hereafter ( with the exception of only a few formulas where we reinstate it for clarity ) .notice that the fact that we work in leading order in the planck - scale corrections allows us to exchange the ( modulus of ) momentum of a photon with its energy in all planck - scale suppressed terms .the modification of the dispersion relation produces a difference between energy and momentum of a photon , but this is itself a first - order correction and taking it into account in terms that are already suppressed by the smallness of the planck scale would amount to including subleading terms .the approach based on ( [ lidform ] ) , which is adopted here , has been considered by many authors as the natural entry point to the phenomenology of lorentz invariance violation .we are mainly concerned with the conceptual implications of such modifications of the dispersion relation for the way in which the same phenomenon is observed in different reference frames , and for our exploratory purposes it is sufficient to focus on ( [ lidform ] ) .our results apply to all cases in which ( [ lidform ] ) is satisfied to leading order .the findings should apply also to scenarios with birefringence , which could be induced by planck - scale effects ( see , _ e.g. _ , ref .our results may provide a first level of intuition even for the possibility of planck - scale induced fuzziness " , which is the case of scenarios in which there is no systematic modification of the dispersion relation but modifications roughly of the form ( [ lidform ] ) occur randomly , affecting different particles in different ways depending on the quantum fluctuations of spacetime that they experience ( for details see , _e.g. _ , refs . ) . 
assuming , as commonly done in the related literature , that the standard relation holds , the dispersion relation ( [ lidform ] ) leads to the following energy dependence of the speed of photons : this is an important phenomenological prediction of the modified - dispersion scenario .this effect is the basis for the most generic tests of the lorentz - violation theories .we will consider the case of subluminal motion , corresponding to the ` - ' sign in eq .( [ lawforspeed ] ) .a similar analysis can be performed for the case with superluminal speeds .we here contemplate an experiment where two relatively boosted observers detect two photons , which are emitted simultaneously at the source but arrive separated by a delay due to their different energies according to ( [ lawforspeed ] ) .the basic idea of our study is to compare , under different theoretical scenarios , the time delay measured in the different reference frames .this primarily serves as a gedanken experiment that establishes features which modified - dispersion models must include in order to have a consistent scenario .we show that if in all reference frames photons have a dispersion relation as in ( [ lidform ] ) and a speed law as in ( [ lawforspeed ] ) ( with fixed frame - independent parameters ) , then assuming the transformations between frames are governed by the standard lorentz laws would lead to a contradiction between the measurements of the different observers .this is expected , but we use our explicit derivation to deduce other requirements the models must fulfill for a solution .we consider two classes of modified - dispersion models .the first class consists of theories where the lorentz symmetry is broken ( lsb ) with the existence of a preferred inertial frame ( sometimes attributed to the cosmic microwave background frame ) .the small - scale structure and hence high - energy behavior is defined in this reference frame .therefore , the dispersion relation is allowed to take different forms in other frames . in the followingwe will consider only the most common lsb theories in which the lorentz transformations are still applicable when going from one frame to another .the second class of models describes a scenario where the lorentz symmetry is merely deformed , such that the equivalence of all inertial observers is maintained .this means that the lorentz symmetry makes way for a more complex symmetry , but the theory is still relativistic ( there is no preferred reference frame ) .these are called doubly special relativity " ( dsr ) models . 
within this scenariothe laws of transformation are necessarily modified from the standard lorentz transformations .this is done in such a way that all observers agree on the physical laws , including the energy - momentum dispersion relation .not all models of dsr predict energy - dependent velocities of photons , but we deal with the analysis of the common dsr framework where the velocities behave as in ( [ lawforspeed ] ) .this is to say that we will have two complementary scenarios - lsb with standard lorentz transformations but allowing for departures between the dispersion relations in different frames , and dsr where the theory is frame - independent but the transformations between reference frames differ from the lorentz ones .we observe that the large effort recently devoted to the phenomenology of modifications to lorentz invariance has focused on analyses performed in a single frame .we believe that deeper insight on the fate of the lorentz symmetry can be gained by also investigating how the same phenomenon is viewed by two different observers .we argue that our study sheds light on several conceptual issues , which in turn may well provide guidance even for the ongoing effort utilizing the standard laboratory frame " tools .we obtain definitive results for our lsb scenario , and for the dsr case we find that some non - classical features of spacetime are required ( while the phenomenological dsr results are severely conditioned by our restrictive assumption of a classical - spacetime implementation ) . in principle( and perhaps one day in practice , after a lorentz - violating effect is observed in one frame ) our description of the effects of the lsb scenario for observations of the same phenomenon in two reference frames with a relative boost could be even exploited experimentally . in spite of the limitations associated with the assumption of a classical - spacetime implementation for the dsr case , our findings forthe comparison between the lsb scenario and the dsr scenario provide some encouragement for the possibility that our scheme could also be exploited to discriminate between these alternative spacetime - propagation models .we turn now to the comparison between different observers which is the key point in our analysis .let us start by stressing that we shall focus for simplicity on flat ( minkowskian ) spacetime .while in actual studies of astronomical signals the cosmological curvature can play a significant quantitative role in the actual time delay , we are here mainly concerned with a conceptual analysis and curvature will not change our qualitative results . in a flat spacetime it follows immediately from ( [ lawforspeed ] ) that two photons with energies and , emitted simultaneously by a distant source , will not reach the observer simultaneously . for a source at distance from the observer, one should find a difference in arrival times at the detector given by : clearly then , if the time of arrival of a low - energy photon is , the time of arrival of a photon with high energy will be .we are interested in the comparison between the measurements of the same photons , emitted from a distant source , by two observers , and , in relative motion .we consider a photon of low energy and a photon of high energy .in addition to the mentioned assumption of flat spacetime , we assume that the two photons are emitted simultaneously from a single spacetime point . 
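An order-of-magnitude sketch of the flat-spacetime arrival-time difference discussed above. It uses Delta_t ~ (n+1)/2 * (E2^n - E1^n)/E_QG^n * L/c, the form implied by the assumed speed law of the previous sketch; the source distance and the photon energies below are illustrative choices, not data from any observation.

C, MPC = 2.998e8, 3.086e22        # m/s, m
E_QG   = 1.22e19                  # GeV, Planck-scale reference value (assumption)

def arrival_delay(E1_GeV, E2_GeV, L_m, n=1, e_qg=E_QG):
    return 0.5*(n + 1)*((E2_GeV**n - E1_GeV**n)/e_qg**n)*(L_m/C)

L = 1.0e3*MPC                     # a gamma-ray-burst-like distance (illustrative)
print(arrival_delay(0.1, 10.0, L))   # ~0.08 s for n = 1: within reach of time-of-flight studies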
for simplicitywe consider a source , , at rest with respect to the observer .we denote by the spatial distance from the source as measured by ( ) .we denote by and the position coordinates of the source and the detector respectively , and we denote by the relative speed between observers and .the two observers are at the same spacetime point , where the faster photon reaches both of them , so they are synchronized there , _i.e. _ at .we also implicitly assume that the synchronization of clocks between the two observers is essentially standard : they would perform clock synchronization using low - energy photons , which are practically unaffected by the modification of the dispersion relation . in fig .[ figure1 ] one can see the space - time diagrams in the two reference frames , the first at rest with respect to and the second at rest with respect to .only the case in which is moving away from is depicted , but in the formulas a `` '' in front of will clarify , where needed , that the formulas are valid also in the case in which is moving towards .each observer measures the delay in the time of arrival of the photon with respect to the photon in its own reference system .this means that measures , while measures .the measurement of appears dilated in the reference frame of , where it is . is the time needed by the photon moving at speed to catch up with the observer , that moves at speed and has an advantage of ( this last relation can be immediately verified by looking at fig .[ figure2 ] ) . , with highlighted.,scaledwidth=50.0% ] the kinematics in the reference frame yields the relation : subtracting both sides by and recalling that and , we get ( the `` '' sign refers to the case of moving away from ) : this relation is central to our analysis , since it was derived in the reference frame ( making no assumptions regarding the behavior of other reference frames ) . the right - hand - side of ( [ conscond ] ) can be expressed using exclusively the law of energy dependence of the speed of photons in the first rest frame ( of and ) .the formula can be used to relate the results of time - delay measurements performed by the two observers and , respectively and .this will impose limits on the free parameters , the speed law in the reference frame and the laws of transformation between observers .regarding the boost transformation that acting on gives , we can write : = \delta t_{i } \left ( \frac{v(e_2)}{v(e_2 ) \mp \beta } \right ) , \ ] ] where we denoted formally by the action of the ( inverse ) boost that takes from to . reduces to the lorentz transformation in the case of lorentz invariance , and if the photons speed is independent of energy this condition is trivially satisfied . is also the usual lorentz transformation in our analysis of lsb .however , it takes a more general form in the case of dsr . the relation ( [ conscondboost ] ) should be viewed as a requirement of logical consistency : if all the assumptions we have made hold and there exist modified dispersion laws and a transformation action , then inevitably they must be connected through ( [ conscondboost ] ) . 
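A small numerical check of the catch-up kinematics behind eq. ([conscond]), done entirely in the rest frame of observer I with c = 1 and illustrative numbers. The setup assumed here: the observers coincide when the fast photon reaches them, the slow photon is then a distance v(E2)*Delta_t_I behind, and observer II recedes at speed beta.

v, beta, dt_I = 0.9999995, 1.0e-4, 1.0      # v(E2)/c, relative speed, delay seen by O_I (s)

step = 1.0e-7
t, x_photon, x_obs = dt_I, 0.0, beta*dt_I   # positions measured from O_I, in light-seconds
while x_photon < x_obs:                     # step the slow photon until it reaches O_II
    t        += step
    x_photon += v*step
    x_obs    += beta*step

# numerical catch-up time vs the closed form dt_I * v/(v - beta); they agree to within the step size
print(t, dt_I*v/(v - beta))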
if the speed of photons depends on their energy , but we keep the classical transformation laws between observers unmodified , it is easy to see that the law giving the quantitative dependence of the speed of photons on energy must be observer - dependent .we demonstrate explicitly in this subsection that if is the standard lorentz transformation action and both and obeyed the same frame - independent but energy - dependent speed law , the consistency condition would not be met .this derivation with lorentz transformations will then be used in sec .[ lsbsec ] as the basis for the lsb analysis . if the transformation laws between the reference frame of and that of are the special - relativistic ones , since the delay is a proper time in the reference frame of , it is related to by the usual time - dilation formula : = \gamma \delta t_{ii}'\ ] ] ( where as usual ) .we use this to rewrite eq .( [ conscond ] ) as follows : next we notice that in this equation is of order ( clearly it must be the case that if ) .therefore , for an analysis in leading order of , where we also assume , it is possible to take on the right - hand - side of ( [ jocz ] ) : where we denoted by the relativistic doppler factor . equation ( [ deltat2primobroken ] ) gives the time delay that must observe for consistency with the time delay seen by .we should describe in terms of quantities measured by itself , so that we can investigate the form of the time - delay formula according to on the basis of the form postulated for .first , using the speed law of ( [ lawforspeed ] ) in the reference frame of , as explained , we explicitate in terms of the energies and the distance measured by : putting this in ( [ deltat2primobroken ] ) we have : now we can use the fact that the relativistic doppler factor also connects the energy of a particle as seen by observer and the corresponding energy seen by observer , , so that : \mathcal{d}^{-n } .\ ] ] since is already multiplied by a factor in ( [ jocw ] ) , we can , consistently to leading - order , ignore all effects of on the relationship between and .accordingly , we can handle as the space component of a light - cone vector ( that is the vector connecting the point with space - time coordinates to the point with coordinates , see fig .[ figure1 ] ) , and so it also transforms through the doppler factor : the same conclusion is of course drawn if we observe that according to the distance between and is ( standard lorentz contraction of the proper length ) , but in order to obtain ( the distance traveled by the photon according to - the distance between and at emission ) one must consider also the distance that according to the source travels during the time : in agreement with ( [ jocv ] ) .substituting ( [ jocu ] ) and ( [ jocv ] ) in ( [ jocw ] ) , we can finally describe the time delay seen by observer in terms of other quantities measured by : comparing ( [ delo ] ) with ( [ sec2final ] ) , we see that in such a scenario the speed law is not frame - independent .the modified dispersion relation is inconsistent with keeping the two familiar ingredients of frame independence and lorentz transformation laws .we have to abandon at least one of these appealing notions in order to have a consistent solution .the following sections discuss two different scenarios that can achieve consistency by allowing for these additional departures from special - relativity : ( i ) breaking the symmetry and having a preferred - frame scenario " ( sec .[ lsbsec ] ) or ( ii ) modifying the lorentz 
transformations ( sec .[ dsrsec ] ) .we have found that if we use the standard lorentz transformations , we necessarily reach a lsb preferred - frame scenario " .this means that in principle the theory could be formulated in one chosen frame of reference ( typically one where the laws take an appealingly simple form ) and then all other observers see the lorentz - transformed image " of the fundamental laws written in the preferred frame .this is indeed the conceptual perspective adopted in the most studied lorentz - invariance - violation models , where the scale in ( [ lawforspeed ] ) is expected to take different values for different observers .it is valuable to establish how exactly the laws change form in going from one reference frame to another , since this is the only way to combine the results of experiments performed by different observers .the result of the previous derivation turned out to provide a quantitative characterization of the frame dependence of the lsb scenario .we shall here continue that derivation , aware that it must be viewed in the context of an observer - dependent framework , and obtain further characterizations of the consistent broken - symmetry scenario . according to the analysis of sec .[ clasanalsec ] , if we assume that the lorentz transformations hold and in a certain ( preferred ) frame the speed of photons behaves as in ( [ lawforspeed ] ) , the photon speed law must be quantitatively different for relatively boosted observers .the result ( [ sec2final ] ) , derived on the basis of our logical - consistency requirement , implies that starting from the speed law in a frame , one arrives at the following speed law in a frame : we therefore have an explicit characterization of the expected change of the in - vacuo - dispersion between reference frames .the laws in and in have the same functional form , but they differ quantitatively with the addition of the doppler factor parameter in .this parameter is determined in each frame by its relative speed with respect to the privileged frame - for any observer it is fixed and the addition is just a numerical constant .this allows us to treat the difference between observers as a variation in the scale of symmetry - breaking - marking with the scale as measured by , one then finds that the scale as measured by is : this definitive description of the quantum - gravity - scale transformation is in itself an important characterization .it is especially noteworthy as several authors had expected ( see , _ e.g. _ , ref . and references therein ) that should be treated as a length scale , subject to standard lorentz - fitzgerald contraction . 
equation ( [ jocccc ] ) shows that this is not the case .another potentially valuable characterization is found upon recalling the assumption that the energy dependence of the speed of photons , ( [ lawforspeed ] ) , is describable as the group velocity of photons governed by a deformed dispersion relation : we now deduce from eq .( [ lawforspeedprimebroken ] ) with , that in the reference system of the dispersion relation takes the form : here on the right - hand - side we have highlighted a peculiar consequence of the law of transformation of the scale .its net result is that , for a given photon ( of given energy according to observer ) , the violation of the special - relativistic dispersion relation , , has exactly the same magnitude for all observers .this magnitude is fixed by the formula for the lorentz - violating term in the preferred frame , which is based on the photon energy , measured in that frame : any observer can compute the lorentz - violating term of the dispersion relation by finding the special " energy , according to its boost relative to , and using the same formula .we have evidently found , in the context of our frame - dependent scenario , an invariant term , which has the form of a mass term in special - relativity .we stress that obviously this invariant can not be viewed as an effective mass for photons in the lsb scenario " , as already implied by the fact that the speed law ( [ lawforspeedprimebroken ] ) is not obtained by treating as a mass in the ultrarelativistic limit : is found by observing that is an invariant under passive lorentz transformations but not an invariant under active lorentz transformations , something that is not at all surprising for cases where lorentz symmetry is broken . under passive boosts that establish how the properties of the same particle are measured by different observers, we have indeed found that is an invariant .of course the observation reported in ( [ jocjoc ] ) simply reflects the evident fact that is not invariant under active boosts , which for a single observer map a particle of energy into a different particle of energy .another example of corrections that are invariant under passive lorentz transformations but not invariant under active lorentz transformations is the case of correction terms of the form , with some external " lorentz - symmetry - breaking vector and the four - momentum of the particles : under a passive boost , of course , both and would transform in a way just such is left invariant , but under an active boost only is transformed and the term changes its value . ] of course , a generic correction to the dispersion relation will affect inertia in the way prescribed for mass only if , unlike our , it is independent of energy - momentum ( so that it would provide inertia in standard fashion in the group - velocity calculation , obtained by ) .one similarity between our case and the analogy of a massive - particle dispersion is that in transforming to a reference frame where the energy of the particle is smaller , the effect of the correction to becomes larger .this result for the case of lsb comes in contrast to the intuition , by which the phenomenological effect of lorentz invariance violation should become more significant when a particle has a larger energy which is closer to the quantum - gravity scale . 
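The invariance property stated above can be checked numerically. In the sketch below, the exponent used for the transformation of the quantum-gravity scale is my own inference for the receding-observer case (obtained by combining the Doppler shift of the photon energy, the Doppler scaling of the propagation distance, and the Doppler rescaling of the delay); the document's eq. ([jocccc]) gives the authoritative form, so the specific power should be read as an assumption of the example.

import math

def doppler(beta):
    # relativistic Doppler factor for the two relatively boosted observers
    return math.sqrt((1.0 + beta)/(1.0 - beta))

n, beta = 1, 0.3
E_qg = 1.0e19       # GeV, scale as defined in the preferred frame (illustrative)
E    = 50.0         # GeV, photon energy in the preferred frame (illustrative)

D = doppler(beta)
E_boosted    = E/D                        # receding observer sees a redshifted photon
E_qg_boosted = E_qg*D**(-(n + 2)/n)       # assumed transformation of the scale (see caveat above)

lv_preferred = E**(n + 2)/E_qg**n         # magnitude of the Lorentz-violating term, preferred frame
lv_boosted   = E_boosted**(n + 2)/E_qg_boosted**n
print(lv_preferred, lv_boosted)           # identical: the violation has the same magnitude for both observers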
in conclusion , regarding the much - studied quantum - gravity scenario of a high - energy breakdown of lorentz symmetry with the emergence of a preferred frame, we have formulated explicitly the only allowed form of the modified dispersion relation , which is consistent with the lorentz transformations . to have a quantitative appreciation of the phenomenological effect of the lorentz - violating speed law in different reference frames , we observe again the result , ( [ deltat2primobroken ] ) , obtained for the time delay that would be measured in a general frame . for observations made from different reference frames with a relative speed , the time delays between the photons will differ by a factor of the corresponding . since the difference between the reference frames is not suppressed by another power of ( but only by the smallness of the relative speed ) , it may in principle be an observed phenomenon .as an example , we may consider a detector moving at a speed of km / s ( the speed of nasa s helios 2 satellite , which was launched in the 1970s ) : ( when the detector is moving away from the photons ) .this means that if , for instance , a detector at rest " measures a lorentz - violation - induced time delay of s ( which the fermi telescope can measure if indeed a effect exists ) , the moving detector will measure a time delay greater by ms .this is small but not unrealistic to expect a possible test in the future .note that our result is that in the lsb context the time delay in the moving frame is larger even though the photons measured energies are lower .this is in contrast to the intuitive expectation of a smaller time delay in a frame where the photon energy is smaller ( see sec .[ closingsec ] for further discussion ) .a different scenario , which has recently generated interest - the dsr scenario , was essentially introduced to explore the possibility that the quantum - gravity scale may affect the laws of transformation between observers while preserving the relativistic nature of the theory ( no preferred frame ) . 
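Before turning to the DSR case, a quick arithmetic check of the satellite example quoted above. All inputs are assumed or illustrative: a detector speed of 70 km/s (the Helios-2 figure mentioned in the text), a 1 s delay measured by the detector at rest, the receding case, and a simple leading-order rescaling of the delay by the Doppler factor.

beta = 70.0e3/2.998e8                       # ~2.3e-4
D    = ((1.0 + beta)/(1.0 - beta))**0.5
dt_rest   = 1.0                             # s, delay measured by a detector at rest (illustrative)
dt_moving = D*dt_rest                       # assumed leading-order LSB rescaling of the delay
print((dt_moving - dt_rest)*1.0e3)          # ~0.23 ms of extra delay for the receding detector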
within this dsr scenario the departures from ordinary lorentz invariancewould take the shape of a deformation " ( rather than breakdown ) of symmetries .this can be thought of in a close analogy with the transition from galilean invariance to lorentz invariance .galilean invariance ensures the equivalence of inertial observers but makes no room for observer - independent relativistically - non - trivial scales .attempts to accommodate a maximum - speed law first relied on scenarios that would break galilean symmetry ( ether ) , but eventually found proper formulation in the shape of a deformation of galilean invariance .this formulation is lorentz invariance , which is still without a preferred frame , but the laws of transformation between inertial observers are deformed ( with respect to the galilean case ) in such a way that a velocity scale , setting the speed - of - light maximum value , is observer - independent .the dsr framework makes another step with a deformation of lorentz invariance that introduces an additional observer - independent scale .the aim here is exploring the possibility that the existence of a short - distance ( high - energy ) relativistically - invariant scale might eventually become manifest as we probe high - energy regimes more sensitively , just like the presence of a relativistic velocity scale becomes manifest in accurate observations of high - velocity particles .we are interested in considering the case of dsr models in which the new observer - independent law is a law of in - vacuo dispersion , based on a relativistic energy scale .this will offer us a framework for investigation which is complementary to our lsb case . in the lsb scenarioone assumes the laws of transformation to remain unchanged , allowing for an observer - dependent scale of in - vacuo dispersion , while in a dsr scenario with in - vacuo dispersion one must insist on observer independence of the scale of in - vacuo dispersion , allowing the laws of transformation between observers to take a new form .our analysis is somewhat limited by the fact that these dsr scenarios are still work in progress " . among the residual grey areas for a dsr description of in - vacuo dispersionparticularly significant for our purposes are the ones concerning the description of spacetime . in the following we assume a classical - spacetime geometry , within which we adopt , as done by most authors in the field , concepts such as spacetime points , translational symmetries , energy , momentum and group velocity in the familiar manner .we find , below , that in - vacuo dispersion of the form ( [ lidform ] ) is inconsistent with dsr in such a classical spacetime .this provides new and particularly strong elements in favor of a non - classical spacetime picture .these findings are consistent with several independent arguments giving indirect evidence of the necessity to adopt a novel , quantum " , picture of spacetime ( see , _ e.g. _ , refs .in such a quantum spacetime the sharp absolute concept of spacetime would be lost , and in particular one might have that two events that are simultaneous at the same spatial point for one observer are not simultaneous for another observer . 
however , a satisfactory picture of such a quantum spacetime has not yet been found .the dsr side of our analysis also intriguingly suggests that quantitatively important differences can be found between the lsb and dsr cases when a single phenomenon is observed from two different reference frames .while clearly conditioned by our restrictive assumptions , and therefore subject to further scrutiny in future dsr studies , the magnitude of the effect is large enough that it would be surprising if more refined analyses removed it completely .we shall argue that this might open the way to a future test for the discrimination between alternative in - vacuo dispersion scenarios ( provided that in - vacuo dispersion is detected ) .we proceed , inspecting the possibility of the dsr scenario with in - vacuo dispersion , using the logical - consistency criterion derived earlier .we here consider the difference in time delays seen by two observers as a result of in - vacuo dispersion in the dsr case .of course , as long as we consider a single observer there is no distinction between lsb - case and dsr - case in - vacuo dispersion , and also in the dsr scenario we have for observer : the differences between the lsb and dsr scenarios are instead very significant for what concerns the relationship between this formula for observer and the corresponding formula for an observer boosted with respect to .we now start from the axiom of our dsr scenario , that is an observer - independent scale and the speed law of photons is identical in all reference frames .this is in contrast to the property of the lsb case that enabled a consistent scenario . here ,we seek a consistent description through a new form of transformation laws between observers . )the transformations should converge to the standard lorentz transformations , and in leading order there would be some correction term with .] exploiting the observer independence of the speed law ( [ lawforspeed ] ) , we directly arrive at the time delay measured by observer : the measurements in the different reference frames , ( [ deltat1joc ] ) and ( [ deltat2primopreservedshort ] ) , have to be related through the criterion ( [ conscondboost ] ) for consistency .this clearly requires a departure from the standard lorentz transformations . for the purpose of comparing and we should re - express the formula ( [ deltat2primopreservedshort ] ) in terms of energies and distances measured by observer .we should , generally , write the relations between and and between and with a correction to the lorentz transformations .however , already contains a factor , so within our leading - order analysis the zeroth order expressions will suffice , _i.e. _ we can use + \mathcal{o}\left(e_{qg}^{-n}\right ) , \ ] ] which yields : , with the suggestion of ref . that was based on a qualitative argument . 
] utilizing now the consistency criterion , we arrive at a formulation of the connection between the time interval and its transformation to the frame of observer : = \mathcal{d}^{n+1 } ( 1 \mp \beta ) \delta t_{ii } + \mathcal{o}\left(e_{qg}^{-2n}\right ) .\ ] ] this last result poses an immediate challenge for the consistency of this scenario .the corresponding formula for the lorentz transformation of the proper time is : it is obvious that the expression ( [ dsrtranslawdeltat2 ] ) can not be a leading - order correction of ( [ srref ] ) .the crucial observation is that the difference between the transformation laws of ( [ dsrtranslawdeltat2 ] ) and ( [ srref ] ) is of zeroth order , _i.e. _ it is not proportional to any powers of . in our analysiswe tried to meet the consistency requirement by allowing for some deformations of the transformation laws , such that at zeroth order the dsr transformations are the same as the special - relativistic ones .however , higher - order corrections of the transformations were not relevant in our derivation , which consisted only of quantities which are already suppressed by .since the only essential assumption that was used to derive this paradoxical result concerned the classicality of spacetime geometry , we conclude that there can not be any consistent classical - spacetime formulation of dsr with modified dispersion. the comparison of ( [ dsrtranslawdeltat2 ] ) and ( [ srref ] ) allowed us to conclude , consistently with indications that emerged in previous dsr studies , but in our opinion more forcefully than in any previous related investigation , that a classical - spacetime formulation of dsr models with energy - dependent photon velocity can be excluded .we must either adopt a dsr framework with no energy - dependent photon dispersion or alternatively consider non - classical features of the dsr spacetime , such that there is no classical - algebraic - description of the transformations of spacetime coordinates between reference frames .instead the transition to dsr should be one that involves a change to the nature of spacetime .it is , indeed , not very surprising to find a contradiction between the implications of the dsr framework and the properties of classical spacetime , since the quantum - gravity motivation for dsr research is strongly related to the concept of quantum spacetime " .a quantum description of spacetime entails dramatic modifications to common features of spacetime geometry that are intuitive in our classical description .the sharp observer - independent identification of an event ( a spacetime point ) is not available in a quantum spacetime .instead we have some fuzziness " .the new geometry will have a fundamentally different structure .for example , a popular quantum - gravity - inspired description of these spacetimes relies on the formalism of spacetime noncommutativity " , where the coordinates of an event are described in terms of a set of noncommuting observables , governed for example by noncommutativity of the type = i { \cal c}^\alpha_{\mu \nu } \frac{x_\alpha}{e_{qg}}$ ] , with a model - dependent dimensionless matrix . 
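Stepping back to the quantitative side of the argument above, the zeroth-order clash can be illustrated numerically: the delay that the frame-I kinematics forces the boosted observer to measure, and the delay that the boosted observer would compute from the same frame-independent speed law, differ by an O(1) factor that does not shrink as E_QG grows. The Doppler scaling of the energies and of the propagation distance, and the receding-observer case, are assumptions of the sketch, so the exponent found below should not be read as the document's eq. ([dsrtranslawdeltat2]) itself.

import math

def doppler(beta):
    return math.sqrt((1.0 + beta)/(1.0 - beta))

def delay(E1, E2, L, e_qg, n=1):
    # the same (frame-independent, by the DSR axiom) delay formula, evaluated with a
    # given observer's own energies and propagation distance (c = 1, L in seconds)
    return 0.5*(n + 1)*((E2**n - E1**n)/e_qg**n)*L

n, beta         = 1, 1.0e-3
E1, E2, L, e_qg = 0.1, 10.0, 1.0e17, 1.22e19      # GeV, GeV, s, GeV (illustrative)

D            = doppler(beta)
dt_I         = delay(E1, E2, L, e_qg, n)          # measured by observer I
dt_II_needed = D*dt_I                             # what II must find, from the frame-I kinematics (leading order)
dt_II_law    = delay(E1/D, E2/D, L/D, e_qg, n)    # what II computes from the frame-independent law

print(dt_II_needed/dt_II_law)   # ~D**(n+2): an O(1) mismatch, independent of how large E_QG is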
among other approaches that resist the temptation of characterizing spacetime nonclassicality in terms of coordinates ,one of the most popular is the rainbow metric " picture proposed in , which essentially characterizes the nonclassicality of spacetime in terms of a metric tensor which is perceived differently by particles of different energy .this formalization attempts to also find support in some results obtained through direct analysis of dsr scenarios with in - vacuo dispersion .in particular , it is known that observer - independent modifications of the dispersion relation for massive particles are inconsistent with a classical - spacetime picture .this is seen considering the case of two particles of masses and with the same velocity according to some observer .the observer could see two particles with different masses and different energies moving at the same speed and following the same trajectory ( for particles and are near " at all times ) , but , taking into account the dsr - deformed laws of transformation between observers , the same two particles would have different velocities according to a second observer ( so according to they could be near " only for a limited amount of time ) .this would clearly be a manifestation of spacetime nonclassicality , since it amounts to stating that a single spacetime point for is mapped into two ( possibly sizeably distant ) points for .the results so far obtained in this section provide a novel characterization of the virulence " of the modification of spacetime structure required by dsr scenarios with in - vacuo dispersion .our findings confirm previous indications in support of the idea that these scenarios require a _ soft deformation _ of energy - momentum space but a rather drastic change in the description of spacetime .with respect to previous studies that provided support for similar characterizations , our analysis contributes a perspective centered on time measurements , whereas earlier works had focused mainly on spatial - position measurements .accordingly , our findings , particularly the form of eq .( [ dsrtranslawdeltat2 ] ) , could provide motivation for further investigation of the definition of time in dsr scenarios .already in some of the first dsr studies it was realized that several subtle issues may affect such a definition of time , but these challenges have not been pursued further .regarding the peculiar properties of quantum spacetime , of particular relevance for our analysis is the concept of simultaneity .special relativity removes the abstraction of an absolute time " , but still affords us an objective , observer - independent , concept of simultaneity for events occurring at the same spatial point . 
in a quantum - spacetime setting, however , events which are simultaneous and at the same spatial point for a certain observer are not necessarily simultaneous for another observer .this is easily seen by noticing that most models of spacetime quantization are inspired by the quantization of ( position - velocity ) phase space in ordinary quantum mechanics , where the heisenberg principle obstructs the possibility of two particles objectively having the same position " in phase space ( they can not have sharp values of both position and velocity , so they can not even be attributed sharply to a point of phase space in a classical sense ) .indeed , the related concept of a generalized uncertainty principle " , introducing an absolute limitation to combined sharp determinations of both position and time of an event , has been introduced in most approaches to the spacetime quantization .the fact that simultaneity at the same spatial point might no longer be observer - independent exposes a possibly important limitation of the applicability of our classical - spacetime analysis to a non - classical spacetime : it was in fact crucial for our line of reasoning that we assumed that for both observer and observer the emission of the two photons by the ( point - like ) source was simultaneous .while this does not weaken our conclusion that only quantum - spacetime formulations of dsr are admissible , it might weaken the insight on properties of these quantum spacetimes that our analysis also provides .naively one might think that the needed quantum spacetimes should necessarily match our result ( [ dsrtranslawdeltat2 ] ) , but spacetime classicality was assumed in deriving ( [ dsrtranslawdeltat2 ] ) .therefore it is too restrictive to insist precisely on ( [ dsrtranslawdeltat2 ] ) , if the analysis we here reported is ever generalized to the case of ( some example of ) quantum spacetime .actually we feel that this is the direction where our analysis provides the most serious challenge ( and perhaps even an opportunity ) for dsr research : while it is true that the paradox we highlighted is not necessarily applicable to quantum - spacetime scenarios , it is clearly at this point necessary to address this issue within some specific quantum - spacetime picture and in explicit physical fashion .some explicit candidate quantum - spacetime pictures for dsr with energy - dependent speed of massless particles have been proposed , but their analysis has not gone much further than the merely abstract / mathematical level , whereas we here showed that a serious challenge ( possibly excluding some or even all of these candidates ) will be met only upon a fully physical formulation of these concepts of spacetime quantization .we have analyzed in this work two scenarios of departures from lorentz invariance .demanding the consistency of observations in different reference frames , we arrived at a characterization of the requirements from each scenario . both scenarios require a costly departure from the present formulation of the laws of physics . for lsbwe have the emergence of a privileged rest frame " in which is defined and measured ( and in all other frames we find that is given by a specific non - trivial transformation ) . 
for dsrwe find no classical solution to the consistency problem , as we have shown that the required transformation laws between boosted frames do not yield the lorentz transformations as a limiting case .a potential resolution of the paradox is the departure from a classical spacetime and the possible emergence of apparently paradoxical spacetime nonclassicality .the current literature on quantum - gravity finds advocates for each of these new - physics descriptions .unfortunately , experiments of this nature , that will tell us which if any of these two possibilities should be adopted , can not be imagined at present . however , we recall the recent history of the quantum - gravity research , when , just fifteen years ago , it was not imagined that experiments would at all be able to test lorentz invariance violation at the planck scale. still , even though an experimental test is unlikely at the immediate future , it is noteworthy what one finds by comparing the results ( [ deltat2primobroken ] ) and ( [ deltat2primopreserved ] ) , describing the behavior of the in - vacuo - dispersion - induced time delay measured by observer w.r.t .the delay measured by , as the relative speed between the two observers changes : in this comparison we want to stress the relevant fact that in the two cases behaves in the opposite way : when in the lsb case it shrinks w.r.t . , in the dsr case it grows , and vice - versa .if we consider again the conservative quantitative example of sec .[ lsbsec ] , we notice that even with present - day technology a possible measurement by two detectors would give a difference at a level of between the two scenarios in ( [ phenomcomparison ] ) . while the result of the dsr analysis was derived in a classical - spacetime environment , which was shown to be an unfit description , it is probably safe to assume that the correct result , which may be derived when we have the full quantum - spacetime description , will admit our result as at least a rough approximation .though we do not claim here that our dsr - case formula should be trusted in detail , it appears likely that our preliminary analysis has uncovered a qualitative feature that could be robust - it might well be a valid assumption that for observables such as the dependence on the boost factor of the dsr case would differ significantly from the corresponding feature of the lsb case . andwhile we have concentrated here on a conceptual analysis , it is not unreasonable to imagine that if evidence of in - vacuo dispersion is ever found , a two - telescope experiment could be conducted to investigate further the departure from lorentz invariance , following the strategy of analysis we advocated here . since the contrast between the expected results of the different modified - dispersion scenarios appears to be a zeroth order effect , when the sensitivity of time - of - flight tests reaches scales above the quantum - gravity scale , such an experiment can in principlebe performed .this would help establish in detail the fate of lorentz symmetry .u. j. thanks the university of rome la sapienza " for hospitality while some of this work was done .g. a .- c .is supported in part by grant rfp2 - 08 - 02 from the foundational questions institute ( fqxi.org ) .t. p. and u. j. are supported in part by an erc advanced research grant and by the center of excellence in high energy astrophysics of the israel science foundation .u. j. 
is also supported by the Lev-Zion fellowship of the Israel Council for Higher Education. We are grateful to Lee Smolin for encouraging feedback on a draft of this manuscript and for forwarding to us a draft of the manuscript in .

G. Amelino-Camelia, J. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar, astro-ph/9712103, Nature 393, 763 (1998).
R. Gambini and J. Pullin, gr-qc/9809038, Phys. Rev. D 59, 124021 (1999).
J. Alfaro, H. A. Morales-Tecotl and L. F. Urrutia, gr-qc/9909079, Phys. Rev. Lett. 84, 2318 (2000).
R. Aloisio, P. Blasi, P. L. Ghia and A. F. Grillo, astro-ph/0001258, Phys. Rev. D 62, 053010 (2000).
J. Magueijo and L. Smolin, gr-qc/0305055, Class. Quantum Grav. 21, 1725 (2004).
R. Aloisio, A. Galante, A. Grillo, E. Luzio and F. Mendez, gr-qc/0501079, Phys. Lett. B 610, 101 (2005).
G. Amelino-Camelia, gr-qc/0210063, Int. J. Mod. Phys. D 11, 1643 (2002).

| We investigate the implications of energy-dependence of the speed of photons, one of the candidate effects of quantum-gravity theories that has been most studied recently, from the perspective of observations in different reference frames. We examine how a simultaneous burst of photons would be measured by two observers with a relative velocity, establishing some associated conditions for the consistency of theories. For scenarios where the Lorentz transformations remain valid these consistency conditions allow us to characterize the violations of Lorentz symmetry through an explicit description of the modification of the quantum-gravity scale in boosted frames with respect to its definition in a preferred frame. When applied to relativistic scenarios with a deformation of Lorentz invariance that preserves the equivalence of inertial observers, we find an insightful characterization of the necessity to adopt in such frameworks non-classical features of spacetime geometry, e.g. events that are at the same spacetime point for one observer cannot be considered at the same spacetime point for other observers. Our findings also suggest that, at least in principle (and perhaps one day even in practice), measurements of the dispersion of photons in relatively boosted frames can be particularly valuable for the purpose of testing these scenarios. |
for more than 2 decades , one of the most important goals in high - energy nuclear physics has been the study of the quark - gluon plasma ( qgp for short ) .the existence of this deconfined phase of quarks and gluons has been unequivocally shown in many lattice qcd ( quantum chromodynamics ) studies ( for a recent study see ref . ) . in those studies ,the phase transition ( cross - over ) temperature between the hadronic phase and the qgp phase is shown to be roughly about 200 mev when 3 light species ( ) of dynamic quarks are taken into account .this value of transition temperature is actually somewhat of a conundrum . on the one hand , this temperature , although exceeding two trillion kelvin , is nonetheless low enough to be accessible in accelerator experiments . on the other hand ,the energy scale corresponding to this temperature is too low for qcd to be perturbative , making analytic calculations difficult . nevertheless ,combining the results from lattice qcd calculations , insights from perturbative thermal qcd calculations and also thermodynamics , we do know much about qualitative features of this new phase of the matter .accordingly , many researchers have proposed many different signals of the formation of the qgp in heavy ion collisions .many of these proposed signals utilize the fact that the energy density of qgp is extremely high . at ,the energy density easily exceeds .systems created at the relativistic heavy ion collider ( rhic ) can reach maximum temperature of about . at this temperature , the energy density of qgp ( composed of gluons and quarks ) can be as high as . not surprisingly , the three most prominent qgp signals that emerged from rhic experiments the strong elliptic flow , the quenching of the high - energy ( jet- ) particles and the emergence of the medium - generated photons are all measures of the high energy density and corresponding high pressure . among them, the jet - quenching phenomenon is perhaps the most direct observation of the high energy density .intuitively , it is easy to see that it will take an extremely dense matter to stop a particle with an extremely high energy . in this proceeding , a short summary of the various theoretical concepts that go into calculating the jet quenching ( equivalently , parton energy loss ) is presented .it is , of course , impossible to do justice to the vast amount of work performed by many different researchers in this short proceeding .what i will do mainly is to use the mcgill - amy approach that i am most familiar with as an illustrative example and highlight the differences between this and other approaches where possible .interested readers are directed to a more comprehensive review already in print and let me apologize here to people whose interesting works i can not fully cover here for the lack of space .\(a ) ( b ) the idea behind the jet - quenching phenomenon is rather intuitive . as depicted in figure [ fig : schematics ] , partons with high ( jets ) are produced when two hard partons from colliding hadrons undergo a hard scattering . in hadron - hadron collisions ( figure [ fig : schematics]-a ) , the jets then propagate and evolve in vacuum until they produce showers of particles in the detector. on the other hand , in heavy ion collisions jets produced by relatively rare hard collisions must propagate and evolve within the hot and dense medium ( qgp ) that is created by the rest of the system ( figure [ fig : schematics]-b ) . 
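As a back-of-the-envelope check of the energy-density scale quoted earlier in this section, the sketch below evaluates the ideal-gas (Stefan-Boltzmann) limit for gluons plus three light quark flavors; the true lattice-QCD result lies below this limit, especially near the transition temperature, so these numbers are only upper estimates with illustrative temperatures.

import math

HBARC = 0.197                          # GeV*fm
def sb_energy_density(T_GeV, nf=3):
    g = 2*8 + 7.0/8.0*2*2*3*nf         # gluon + quark/antiquark degrees of freedom
    return g*math.pi**2/30.0*T_GeV**4/HBARC**3   # GeV/fm^3

for T in (0.2, 0.3, 0.4):
    print(f"T = {T:.1f} GeV : epsilon ~ {sb_energy_density(T):.1f} GeV/fm^3")
# roughly 3, 17 and 50 GeV/fm^3: many times the energy density inside a nucleon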
by measuring the difference between the outcome of these two experiments, one can then learn about the properties of the medium .the simplest consequence of having a dense medium is the energy - loss of the propagating parton , that is , `` jet - quenching '' .the usual way of showing the effect of energy - loss is to show the ratio of the momentum spectrum from scatterings and suitably normalized momentum spectrum from scatterings .this ratio is usually referred to as the `` '' and defined as where denotes the observed hadron species and is the average number of hard binary collisions at the specified impact parameter .( as far as the present author can find out , the first use of appeared in 1990 in publications by m. gyulassy and m. plmer and m. gyulassy and x .-wang . )experimental data from phenix and star collaborations spectacularly confirm that jet quenching is real .there is a sharp reduction of high energy particles in the away - side of the trigger particle and has a surprisingly small value of about for all measured range of the high in central collisions .theoretically , the calculations of the spectrum and the spectrum proceed as follows . for the case ,the jet cross - section is given by the following schematic perturbative - qcd formula where is the parton distribution function , is the parton level cross - section and is the fragmentation function . for collisions , we have where is the parton distribution function of the nucleus and the conditional probability contains the effect of the medium that changes the parton from to . here and denote the temperature and the flow velocity profiles throughout the evolution .we also need to integrate over the nuclear and collision geometry .since the geometry is more or less fixed once the impact parameter range is known , the main task of a theoretician is then to calculate given the profiles of the temperature and the flow velocity .there are , however , some caveats that go with eq.([eq : sigmaaa ] ) . the formula ( [ eq : sigmapp ] ) is firmly based on the factorization theorem in qcd .in contrast , the factorization theorem that would put eq.([eq : sigmaaa ] ) on a firm setting is not yet fully proven .( gelis , lappi and venugopalan have taken promising initial steps in this direction . )another caveat is that the medium is finite and it is evolving all the time .the lifetime of the qgp created in a heavy ion collision is about 5fm / c .the size of the system is about 10fm .the hydrodynamic time scale is about .the mean free path of a particle in a qgp is also of order 1fm . none of these are very large or very small compared to the others .full accounting of these similar yet different length and time scales is therefore not an easy task .inevitably , one needs to make some assumptions and approximations . with these caveats , nearly every theoretical approach tothe energy - loss assumes that the above formula ( [ eq : sigmaaa ] ) is at least a good approximation . and that s what will be assumed in this paper as well .as mentioned above , the main task of a theoretician working on parton energy loss is to calculate the in - medium modification function .collisions with the thermal particles cause the changes in the propagating parton .hence , the fundamental quantity to calculate is the scattering cross - sections and the associated collision rates .if perturbation theory is valid , this would be a relatively straightforward calculation at least at the leading order . 
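A toy construction of the nuclear modification factor defined above, entirely with made-up numbers (the yields, the number of binary collisions and the suppression factor below are illustrative stand-ins, not data or a real energy-loss calculation); it only shows how the ratio is assembled from the p+p and A+A spectra.

import numpy as np

pt       = np.linspace(2.0, 20.0, 10)       # GeV/c, transverse-momentum bin centers
dN_pp    = 1.0e3*pt**-6.0                   # schematic p+p yield per event (toy power law)
n_binary = 250.0                            # assumed <N_binary> for the chosen centrality class
quench   = 0.2 + 0.8*np.exp(-pt/3.0)        # ad-hoc suppression standing in for the energy loss
dN_AA    = n_binary*dN_pp*quench            # schematic A+A yield per event

R_AA = dN_AA/(n_binary*dN_pp)               # the ratio defined in the text
print(np.round(R_AA, 2))                    # flattens near ~0.2 at high pT, roughly the size of the measured central-collision suppression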
In a hot and dense matter this is no longer true: the dispersion relation of a particle is no longer that in the vacuum, the scattering potentials are screened, and extra divergences appear due to the frequent soft exchanges. All of these conspire to make the loop expansion invalid. For elastic scatterings, one can still regard the tree diagrams as the leading order. For radiational processes, the above complications make non-trivial resummations necessary even for the leading order. One must include _both_ the elastic and the inelastic scattering processes for phenomenology. Here, for brevity, we only discuss the inelastic radiational process.

The first studies of radiational energy loss were conducted in refs. This method is often referred to as the BDMPS-Z approach. The main thesis of this approach is as follows. Consider a medium where the temperature is high enough that perturbation theory is valid. In this case, soft exchanges between the medium and the propagating parton result in the radiation of a hard collinear gluon. At the same time, the effect of multiple collisions is reduced because, within the coherence length (or formation time) of the emitted gluon, all soft scatterings basically count as a single one. This Landau-Pomeranchuk-Migdal (LPM) effect then necessitates the resummation of all diagrams depicted in figure [fig:the_big_picture2] in calculating the leading-order gluon radiation rate. In the original BDMPS-Z approach some simplifications and approximations were made. Specifically, the medium was assumed to be composed of random static scatterers. Also, the thermal screening effect was only approximately taken into account, using the approach advocated in ref. The analysis was then carried out in the deep LPM regime, and multiple emission was treated with the Poisson ansatz.

Subsequent approaches to the energy-loss calculation use more or less the same starting point. One still needs to calculate and sum the diagrams depicted in figure [fig:the_big_picture2]. The difference between the various approaches may be classified by the way each approach (i) treats the scattering centers, (ii) resums the diagrams, and (iii) deals with the evolving medium. These differences may be summarized as follows. The acronyms in the following list are the initials of the main authors, except HT (higher-twist) and McGill-AMY. Also, the references given are not exhaustive but just a starting point for further reading.

* scattering centers
* * heavy static scattering centers (cold medium): BDMPS, Zakharov, GLV, ASW
* * dynamic scatterers (hot medium): McGill-AMY, DGLV, WHDG
* * general nuclear medium with a short correlation length: HT
* resummation schemes
* * sum over diagrams with all possible soft interactions: BDMPS, McGill-AMY
* * path-integral representation of hard parton propagation: ASW, Zakharov
* * reaction operator method with opacity expansion: GLV, DGLV, WHDG
* evolution schemes
* * Poisson ansatz: BDMPS, ASW, GLV, DGLV, WHDG
* * Fokker-Planck equation: McGill-AMY
* * modified DGLAP equation (momentum-space evolution): HT

In this proceeding, I am going to use the McGill-AMY approach as the illustrative example to show what is involved in calculating each of the above items, simply because that is the one I am most familiar with. When the temperature is high enough, the asymptotic freedom property of QCD makes it possible to treat the QGP within perturbation theory.
the usual loop expansion , however , is no longer valid . the coherence effect ( the lpm effect )makes it necessary to resum an infinite number of generalized ladder diagrams shown in figure [ fig : rate_calc_amy]-a .\(a ) ( b ) technically , this comes about because connecting any of the three hard lines with a soft space - like gluon line ( labeled with ) introduces a pair of pinching poles in the loop - frequency space as illustrated in figure [ fig : rate_calc_amy]-b .the resulting pinching - pole singularity , when regulated by the hard - thermal - loop self - energies cancels the factors of the coupling constants introduced by the additional interaction vertices . within thermal field theory ,arnold , moore and yaffe rigorously proved that resummation of these generalized ladder diagrams is ncessary to get the leading order result ( see and references therein ) .luckily , the resummation of the leading order contribution organizes itself into the schwinger - dyson type equation for the radiation vertex .figure [ fig : linear_integral_equation_gluon ] shows a diagrammatic representation of the sd equation ; the corresponding integral equation is \nonumber\\ & & \qquad + ( c_{\rm a}/2)[{\bf f}({\bf h})-{\bf f}({\bf h}{+}p\,{\bf q}_\perp ) ] + ( c_{\rm a}/2)[{\bf f}({\bf h})-{\bf f}({\bf h}{-}(p{-}k)\ , { \bf q}_\perp ) ] \big\ } \end{aligned}\ ] ] where here is the original hard parton momentum and is the momentum of the radiated hard gluon .the masses appearing in the above equation are the medium induced thermal masses .the 2-d vector is defined to be where is the chosen longitudinal direction and is a measure of the acollinearity .the differential cross - section is the result of the hard - thermal - loop resummation .the open end of the vertex is then closed off with the bare vertex and the appropriate statistical factors to get the gluon radiation rate . in this approach ,the medium consists of fully dynamic thermal quarks and gluons .furthermore , at least within the perturbation theory , the radiation rate so calculated is fully leading order in thermal qcd which sums an infinite number of the generalized ladder diagrams .the original amy calculation of the radiation rates was carried out in order to obtain the leading order transport coefficients in hot qgp within the kinetic theory . in ref. is generalized to the case of propagating hard parton and its kinetic theory equation .the initial momentum distributions of the hard partons now evolve according to the following set of fokker - planck equations which also includes the effect of absorbing thermal energy from the medium . for the full phenomenological study ,this has to be supplemented by the local temperature and the flow information by independent hydrodynamics calculations . since there is nothing intrinsically boost - invariant about our formulation ,there is no restriction on what the underlying soft matter evolution should be .the rates appearing above then become time and space dependent through the local temperature and the flow velocity .finally , the resulting parton distribution at the final time is convoluted with the geometry and the vacuum fragmentation function to yield the medium modified fragmentation function : the resulting for is shown in figure [ fig : raa]-a which includes both the effects of the radiational energy loss and the collisional energy loss . for calculated within the mcgill - amy approach . the panel ( a ) is from ref. 
and panel (b) is a preliminary result from MARTINI, an event generator being developed based on the McGill-AMY energy-loss mechanism. In panel (a), the red solid line is the full calculation; the green broken line includes the effect of radiative energy loss only, while the blue dash-dot line includes the effect of the collisional energy loss only. In panel (b), the green line is the main result with both the collisional and the radiational energy loss. Here we set and the underlying soft evolution is taken from the 3+1d hydrodynamics calculation of Nonaka and Bass. The data points are taken from .

(a) (b)

One thing to notice is that does not validate the perturbative treatment, since this implies . At this point, the McGill-AMY approach becomes a phenomenological model. However, I believe that this is the best one can do with current analytical tools, since it at least includes all dynamic effects such as energy-momentum conservation, broadening, thermal push, flavor change, and elastic and inelastic energy losses, except the interference between the vacuum and the in-medium processes. The last one is currently being worked on by the members of the McGill-AMY team.
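Before moving on, a rough, self-contained illustration of the momentum-space rate ("Fokker-Planck") evolution step described above. The toy below evolves a hard-parton momentum distribution with a made-up emission rate; the real calculation uses the resummed McGill-AMY radiative and elastic rates evaluated on the local temperature and flow from hydrodynamics, none of which is reproduced here.

import numpy as np

p_grid = np.arange(1.0, 41.0)                  # GeV, hard-parton momentum bins
P      = np.exp(-0.5*(p_grid - 30.0)**2)       # toy initial distribution, peaked at 30 GeV
P     /= P.sum()

def dGamma_dk(p, k, T=0.3):
    # hypothetical emission rate for radiating energy k, in 1/(GeV fm);
    # a simple stand-in, NOT the resummed AMY rate
    return 1.0*T/(k*np.sqrt(p))

k_grid, dt = np.arange(1.0, 10.0), 0.1         # GeV radiated-energy bins, fm/c time step
for _ in range(50):                            # evolve for ~5 fm/c, a typical QGP lifetime
    gain, loss = np.zeros_like(P), np.zeros_like(P)
    for i, p in enumerate(p_grid):
        for k in k_grid:
            j = int(p + k) - 1                 # bin index of momentum p + k on the 1..40 GeV grid
            if j < p_grid.size:
                gain[i] += P[j]*dGamma_dk(p + k, k)
            loss[i] += P[i]*dGamma_dk(p, k)
    P = P + dt*(gain - loss)

print((p_grid*P).sum()/P.sum())                # mean momentum drifts down as the "jet" loses energy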
in others ,the medium is treated as dense , but cold to simplify the calculation .the connection to the qgp property is then made through the transport coefficient where is the soft momentum exchange scale and is the mean free path .when soft exchange dominates the dynamics of the propagating hard parton , this should not be a bad approximation .however , this is still an indirect connection since and are independent free parameters instead of functions of and .combined with the differing treatments of the multiple emissions , it makes the determination of the temperature within such models rather fragile especially since the value of seems to be only weakly dependent on the value of .recently , a collective effort to resolve this and other issues has been initiated by the tec - hqm collaboration .right now the collaboration is performing standardized tests to see where exactly the differences between models lie .since the shape of is so featureless , i believe that the true test of models will come later when other observables such as the hard photon spectrum are calculated coherently within each approaches ( for instance the spectrum ) .it is therefore quite encouraging that monte - carlo event generators based on the current works on the energy - loss mechanisms start to appear on the scene . in these proceedings ,some results from yajem , jewel , and q - pythia are reported . in the remaining space ,let me introduce the event generator being worked on by mcgill - amy team martini ( modular algorithm for relativistic treatment of heavy ion interactions ) .the first , very preliminary result from martini is shown in figure [ fig : raa]-b .martini is a modular modification of pythia 8.1 to take into account the energy loss of hard partons before they can hadronize ( fragment ) into showers .hopefully , many more tests of the mcgill - amy approach can be performed on many different hadronic and electro - magnetic probes with this new development .first of all , i would like to express my gratitude to the members of mcgill - amy team c. gale , g. moore , s. turbide , g. qin , j. ruppert , b. schenke .much help from u. heinz , e. frodermann , c. nonaka , s. bass , m. mustafa , and d. srivastava is also greatly appreciated .s. turbide , c. gale , s. jeon and g. d. moore , phys . rev .c * 72 * ( 2005 ) 014906 g. y. qin , j. ruppert , c. gale , s. jeon , g. d. moore and m. g. mustafa , phys .* 100 * ( 2008 ) 072301 c. gale , s. turbide , e. frodermann and u. heinz , j. phys .g * 35 * , 104119 ( 2008 ) s. turbide , c. gale , e. frodermann and u. heinz , phys .c * 77 * , 024909 ( 2008 ) g. y. qin , j. ruppert , c. gale , s. jeon and g. d. moore , arxiv:0906.3280 [ hep - ph ] .b. schenke , c. gale and g. y. qin , arxiv:0901.3498 [ hep - ph ] .s. a. bass , c. gale , a. majumder , c. nonaka , g. y. qin , t. renk and j. ruppert , phys .c * 79 * ( 2009 ) 024901 m. gyulassy and m. plumer , phys .b * 243 * , 432 ( 1990 ) .x. n. wang and m. gyulassy , in bnl rhic workshop 1990:0079 - 102 j. adams _ et al . _ [ star collaboration ] , phys .* 91 * ( 2003 ) 072304 a. adare _ et al . _[ phenix collaboration ] , phys .* 101 * ( 2008 ) 232301 f. gelis , t. lappi and r. venugopalan , phys .d * 78 * ( 2008 ) 054020 f. gelis , t. lappi and r. venugopalan , phys .d * 78 * ( 2008 ) 054019 s. mrowczynski , phys .b * 269 * ( 1991 ) 383 .m. g. mustafa and m. h. thoma , acta phys . hung . a * 22 * ( 2005 ) 93 [ arxiv : hep - ph/0311168 ] .r. baier , y. l. dokshitzer , a. h. mueller , s. peigne and d. 
schiff , nucl .b * 483 * ( 1997 ) 291 b. g. zakharov , jetp lett .* 65 * ( 1997 ) 615 r. baier , y. l. dokshitzer , a. h. mueller and d. schiff , nucl .b * 531 * ( 1998 ) 403 m. gyulassy and x. n. wang , nucl . phys .b * 420 * ( 1994 ) 583 r. baier , y. l. dokshitzer , a. h. mueller and d. schiff , jhep * 0109 * ( 2001 ) 033 [ arxiv : hep - ph/0106347 ] . m. gyulassy , i. vitev , x. n. wang and b. w. zhang , arxiv : nucl - th/0302077 .m. gyulassy , p. levai and i. vitev , nucl .b * 571 * ( 2000 ) 197 n. armesto , c. a. salgado and u. a. wiedemann , phys .d * 69 * , 114003 ( 2004 ) m. djordjevic and m. gyulassy , nucl .a * 733 * ( 2004 ) 265 m. djordjevic and u. heinz , phys .c * 77 * ( 2008 ) 024905 s. wicks , w. horowitz , m. djordjevic and m. gyulassy , nucl .a * 784 * ( 2007 ) 426 a. majumder and x. n. wang , arxiv:0806.2653 [ nucl - th ] .a. majumder , phys .c * 75 * ( 2007 ) 021901 c. nonaka and s. a. bass , phys .c * 75 * , ( 2007 ) 014902 k. j. eskola , h. honkanen , c. a. salgado and u. a. wiedemann , nucl .a * 747 * , ( 2005 ) 511 the tec - hqm collaboration wiki , https://wiki.bnl.gov / techqm/. i. vitev and b. w. zhang , phys .b * 669 * ( 2008 ) 337 t. renk , phys . rev .c * 78 * ( 2008 ) 034908 k. zapp , g. ingelman , j. rathsman , j. stachel and u. a. wiedemann , eur .j. c * 60 * ( 2009 ) 617 n. armesto , l. cunqueiro and c. a. salgado , arxiv:0907.1014 [ hep - ph ] . | a short summary of different approaches to the parton energy loss problem is given . a particular attention is paid to the differences between various models . a possible solution to the problem of distinguishing competing approaches is discussed . |
opportunistic networks are very dynamical systems , characterized by intermittent contacts among mobile nodes and frequent partitions .the model of interaction differs from classic communication paradigms , usually based on a prolonged end - to - end connectivity . at a given instant , a complete path from a source to a destinationmight not exist .rather , nodes communicate as soon as they have the opportunity to do it .these occasional contacts are employed to share and disseminate information , route messages towards some destination , etc .delay tolerance is thus a main characteristics of these networks , and in general the temporary contacts among nodes must be exploited to disseminate contents , due to the uncertainty of future communications .an open research topic is concerned with the topology of an opportunistic network . due to its evolving nature, several works simply define the topology of opportunistic networks as unpredictable , others approximate them as small worlds , others argue that temporal connection models are better suited than spatial mobility models . as a matter of fact , at a given instant an opportunistic network appears as a classic mobile ad - hoc network ( manet ) , composed of a set of links which are constrained by the geographical location of mobile users and the limitations imposed by the signal strength of the employed wireless technology .however , while manets consider links as connections active at a given instant , opportunistic networks have a coarse grained time model .hence , the concept of link in an opportunistic network should reflect the fact that temporal constraints are relaxed .this suggests to model opportunistic networks as evolving graphs where links characterize the interactions of a node during a ( non - instantaneous ) time interval .thus , the set of links of a node at time is not simply composed of the contacts active at the instant , but rather , it represents the aggregate of the contacts arose during ] .we assume that only messages received in a time window can be relayed to a neighbour though an active link .this constraint reflects the fact that there is a limit on the temporal validity of messages and also to consider the limited computational capabilities of mobile nodes which can not hold messages forever .the implementation of a link , as thought in this context , may be realized at the bundle and application layers , rather than as an open communication flow between two nodes ( as usually thought in manets or p2p architectures ) . in practice ,a link from a node to another node means that stores on its memory the information related to , such as s i d and its profile , together with the contents is trying to disseminate , or the type of contents is trying to retrieve ( depending on the service running on top of the opportunistic network ) .this approach of modeling a link reflects the fact that some content disseminated from to may be delivered afterwards to another node , if has the opportunity to interact with it in a time interval . due to constraints of mobile devices ,each node would maintain a limited number of nodes as neighbours , and a node might decide to replace a neighbour with another .hence , it may happen that as the network evolves , a node has an entry for in its `` neighbour table '' , while does not have any entry for , i.e. links are directed .we define the desired topology of an opportunistic network by specifying the probability distribution of the degree ( i.e. 
number of links ) that a node may have . through this choice , it is possible to give a statistical characterization of the network , which permits to automatically estimate important metrics , such as the net diameter ( i.e. the max of the shortest paths among nodes in a net ) , the average number of neighbours at a given distance from a specific node , and so on .the algorithm for the link management is very simple and it is as follows .at the beginning of its interactions , each node randomly selects a desired degree , locally computed through the degree probability distribution associated to the desired topology .note that in order to randomly selecting a degree , the node might need an estimation of the network size .schemes exist that do it .moreover , when the network size is very large , just an approximate value is sufficient ; it is required that , in case of a significant variation , nodes can detect it , and in this case they might change their desired degree . during the evolution of the network , based on the active contactsa node has , the node stores entries related to the nodes it encounters ( i.e. it creates links ) , till reaching its desired degree . in general , we assume that links can be established only when the number of active contacts ( within a time interval ) surpasses a predefined threshold . this parameter can be tuned depending on the type of service to be executed on top of the opportunistic network .for instance , a single contact ( ) may be employed for simple dissemination services where contents must be broadcast through the net .higher values might be set for more sophisticated services , e.g. queries to be distributed that need answers . in this case , the communication would require multiple contacts .the link holds for a limited time , and it is removed after a time of no contacts . when a novel contact arises with a non - neighbour node ( say ) , andif the node has already reached its desired degree , then the novel node may become a neighbour with a certain probability . in this case , a random entry ( e.g. one among those with the lowest number of active contacts during the time window ) is replaced with . in the next sectionwe evaluate whether an opportunistic network , modeled as described above , can assume a scale - free topology .a scale - free network possesses the distinctive feature of having nodes with a degree distribution that can be well approximated by a power law function .hence , the majority of nodes have a relatively low number of neighbors , while a non - negligible percentage of nodes ( `` hubs '' ) exists with higher degrees .the peculiarity of these networks is that they possess a very small diameter , thus allowing to propagate information in a low number of hops .they are quite robust to node faults ( departures ) , something usual in a wireless network . 
on the other hand, the presence of hubs might represent a drawback in the context of opportunistic networking , since it corresponds to an unbalanced load distribution .coupling scale - free and ( opportunistic ) mobile networks is unusual , due to the issue mentioned above and to the fact that in a manet nodes connect to those which are directly reachable through the networking technology in use .therefore , the instantaneous topology strongly depends on the geographical distribution of the nodes .however , in this work links are considered as the aggregate of active contacts arising during a time interval .hence , the role of hubs might be played by those nodes that have in time a number of contacts higher than others .an example of a possible hub , in a real opportunistic network , might be a ticket inspector in some public transportation system ( equipped with a mobile terminal ) , a postman , or even a dedicated totem ( i.e. `` information sprinkler '' ) placed to relay contents to mobile nodes in a square or in a mall .the idea of having dedicated nodes in points of interest , which act as hubs able to relay contents , would solve also the issue of the unfair load at hubs . in any case, while we evaluate whether a scale - free overlay can be built over an opportunistic network , the same machinery can be employed also for other topologies .it suffices to change the distribution to compute the desired degree .a discrete event simulator was built to mimic the algorithm executed at mobile nodes .nodes movements and contacts were obtained from real data traces . in what follows, we describe these traces and explain the metrics of interest , together with the methodology to collect the results , which are provided in the next section .we employed real data traces available from .these are the patterns among 22341 students , inferred from their behavior during class schedules for the spring semester of 2006 in national university of singapore .the national university of singapore is composed of colleges and departments . as reported in the description of the traces , all lessons were conducted on the main campus , spanning an area of 146 hectares .in essence , these traces represent a wide , real scenario of typical students everyday life .it is hence an ideal use case to test algorithms on urban opportunistic networks .user contacts were traced in time intervals .for instance , students meeting during a lecture where considered to be in contact during a single time interval , regardless of its duration ( which is abundantly longer than time usually needed to exchange information between two mobile nodes ) .hence , the time interval exploited to manage links in the network is expressed as a multiple of these intervals .the desired topology was generated through a probability distribution for nodes desired degree , following a power law function , with $ ] .we considered a minimum and a maximum value of the desired degrees that a node might want to have , , , respectively .we varied these parameters .figure [ fig : dd ] shows an example of the distribution of the desired degree of network nodes , generated during our tests .the log - log chart shows a linear curve , hence confirming that such distribution follows a power law distribution . 
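As a rough sketch of this step, desired degrees following a truncated power law can be drawn by inverse-transform sampling; the MATLAB fragment below illustrates the idea. The exponent `gamma` and the bounds `kmin` and `kmax` are placeholder values, not the ones used in our tests, and rounding to integer degrees is just one possible choice.

% draw desired degrees k from a truncated power law p(k) ~ k^(-gamma)
% gamma, kmin, kmax and nnodes are illustrative assumptions
nnodes = 22341;                       % e.g. the number of students in the trace
gamma  = 2.5;                         % power-law exponent ( assumed )
kmin   = 3;  kmax = 300;              % minimum and maximum desired degree ( assumed )
u = rand( nnodes, 1 );                % uniform random numbers in ( 0, 1 )
a = kmin^( 1 - gamma );  b = kmax^( 1 - gamma );
% inverse transform of the truncated power-law cumulative distribution
k = round( ( a + u .* ( b - a ) ).^( 1 / ( 1 - gamma ) ) );
% empirical check : the degree histogram should be roughly linear on a log-log plot
[ n, c ] = hist( k, min( k ) : max( k ) );
loglog( c, n, 'o' );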
during our testswe varied the time interval size .we varied also the minimum number of active contacts required so that to two nodes may become neighbours , and the value of that controls the probability that a node replaces a link with another , once its degree is equal to its desired degree .for the sake of conciseness , we present only results related to a setting with , , , while varying the values , , .similar outcomes were obtained for other values of these parameters ( not shown in this paper ) .it goes without saying that , depending on the service to be executed on top of the opportunistic network , and most of all , on the contents to be disseminated , a degree value such as that of might be too high for a mobile node .other scenarios are probably more realistic .it is however worth mentioning that : i ) the number of such hubs would be very low and in certain application scenarios such a role might be played by those `` information sprinklers '' ; ii ) this setting is a test to see whether such a threshold might be reached using a real trace such as that employed in these tests .figure [ fig : res_min_5 ] reports the results when , , varying the duration of , ( i.e. trace intervals ) .each chart refers to a particular value of , and reports the distribution of nodes degree obtained when the proposed algorithm is executed .the probability that a node has a certain degree is reported in a log - log scale . in this case , a linear trend may be appreciated , meaning that the distribution follows a power law function .this confirms that the approach is able to configure the topology as a scale - free .moreover , an higher ( rightmost chart ) guarantees a higher probability that nodes reach their desired degree .only some few non - zero probabilities are obtained for some degree values lower than .this means that there are certain nodes that have a few number of contacts with other nodes , whatever the duration of .a similar result is reported in figure [ fig : res_min_10 ] , where and trace intervals .figure [ fig : res_d_max_variato ] reports the degree probability obtained when trace intervals , , while varying .again , it is confirmed that a scale free topology inside an opportunistic network is obtained in all cases , since there is a linear distribution of degree probabilities ( in log - log scale ) .when , the probability that a node has a high degree is very low .this is however due to the fact that only few nodes might have the opportunity to meet a number of nodes during a time window . moreover ,such values are inappropriate as number of links a mobile node might have in a real opportunistic network ; such simulation setting serves only to demonstrate that the framework scales up to those numbers .the reader may also notice the presence of non - zero degree probabilities lower than , which remain , despite the settings of the algorithm executed ( i.e. value of ) .the presence of these values comes from the fact that the data traces exploited to do the simulations were the same in all the considered scenarios .in this work we have presented a discussion on how opportunistic networks might be modeled . due to the evolving nature of the network and its delay tolerance , links among nodesshould not be considered as contacts which are active simultaneously .rather , the aggregate of contacts arising during a time interval should be preferred . 
to create a link, the duration of the contact might be sufficient for a data exchange .such amount of time might depend on the application to be run on top of the network .in particular , a link in the evolving graph represents the fact that a node maintains on its `` neighbour table '' some application - related contents on the behalf of its neighbour , so that such contents can be relayed as soon as there is the opportunity to do it .we have presented a simple algorithm that allows each node to set and manage its own degree , in order to shape the net based on a desired topology .this influences the way contents can be disseminated through the network .the scheme has been employed on real data traces , and outcomes confirm that a desired topology ( in this case a scale - free ) can be obtained . to the best of our knowledge ,this is the first attempt to model opportunistic networks taking into consideration contacts active in a given time interval , instead of instantaneous contacts . in future works, the scheme will be employed on other data traces and with different desired topologies . | the coupling of scale - free networks with mobile unstructured networks is certainly unusual . in mobile networks , connections active at a given instant are constrained by the geographical distribution of mobile nodes , and by the limited signal strength of the wireless technology employed to build the ad - hoc overlay . this is in contrast with the presence of hubs , typical of scale - free nets . however , opportunistic ( mobile ) networks possess the distinctive feature to be delay tolerant ; mobile nodes implement a store , carry and forward strategy that permits to disseminate data based on a multi - hop route , which is built in time , when nodes encounter other ones while moving . in this paper , we consider opportunistic networks as evolving graphs where links represent contacts among nodes arising during a ( non - instantaneous ) time interval . we discuss a strategy to control the way nodes manage contacts and build `` opportunistic overlays '' . based on such an approach , interesting overlays can be obtained , shaped following given desired topologies , such as scale - free ones . opportunistic networks , scale - free networks , self - organization |
_ program title : _ ` mnpbem ` toolbox supplemented by a collection of demo files + _ programming language : _ matlab 7.11.0 ( r2010b ) + _ computer : _ any which supports matlab 7.11.0 ( r2010b ) + _ operating system : _ any which supports matlab 7.11.0 ( r2010b ) + _ ram required to execute with typical data : _ + _ has the code been vectorised or parallelized ? : _yes + _ keywords : _ plasmonics , electron energy loss spectroscopy , boundary element method + _ cpc library classification : _ optics + _ external routines / libraries used : _ ` mesh2d ` available at ` www.mathworks.com ` + _ nature of problem : _ simulation of electron energy loss spectroscopy ( eels ) for plasmonic nanoparticles + _ solution method : _ boundary element method using electromagnetic potentials + _ running time : _ depending on surface discretization between seconds and hours +plasmonics has emerged as an ideal tool for light confinement at the nanoscale .this is achieved through light excitation of coherent charge oscillations at the interface between metallic nanoparticles and a surrounding medium , the so - called _ surface plasmons _ , which come together with strongly localized , evanescent fields .while the driving force behind plasmonics is downscaling of optics to the nanoscale , conventional optics can not be used for mapping of plasmonic fields because to the abbe diffraction limit of light . to overcome this limit , various experimental techniques , such as scanning near field microscopy or scanning tunneling spectroscopy ,have been employed . in recent years , electron energy loss spectroscopy ( eels )has become an extremely powerful experimental device for the minute spatial and spectral investigation of plasmonic fields . in eels ,electrons with a high kinetic energy pass by or penetrate through a metallic nanoparticle , excite particle plasmons , and lose part of their kinetic energy . by monitoring this energy loss as a function of electron beam position , one obtains a detailed map about the localized plasmonic fields .this technique has been extensively used in recent years to map out the plasmon modes of nanotriangles , nanorods , nanodisks , nanocubes , nanoholes , and coupled nanoparticles ( see also refs . for the interpretation of eels maps ) .simulation of eel spectra and maps has primarily been performed with the discrete - dipole approximation and the boundary element method ( bem ) approach . within the latter scheme ,the boundary of the metallic nanoparticle becomes discretized by boundary elements , and maxwell s equations are solved by attaching artificial surface charges and currents to these elements which are chosen such that the proper boundary conditions are fulfilled .the methodology for eels simulations within the bem approach has been developed in refs . 
, and has been successfully employed in comparison with experimental eels data .the purpose of the eels software described in this paper is to allow for a simple and efficient computation of electron energy loss spectroscopy of plasmonic nanoparticles and other nanophotonic structures .the software consists of two classes ` eelsret ` and ` eelsstat ` devoted to the simulation of eel spectroscopy and mapping of plasmonic nanoparticles , see fig .[ fig : flowchart ] , which can be used in combination with the ` mnpbem ` toolbox that provides a generic simulation platform for the solution of maxwell s equations .our implementation for eels simulation of plasmonic nanoparticles relies on a bem approach that has been successfully employed in various studies .a typical simulation scenario consists of the following steps . 1 .first , one sets up the particle boundaries and the dielectric environment within which the nanoparticle is embedded .this step has been described in detail in our previous ` mnpbem ` paper .2 . we next initialize an ` eelsret ` or ` eelsstat ` object which defines the electron beam .this object stores the beam positions and the electron velocity . for a given electron loss energy, it then returns the external scalar and vector potentials and which can be used for the solution of the bem equations .3 . for given and , we solve the full maxwell equations or its quasistatic limit , using the classes ` bemret ` or ` bemstat ` of the ` mnpbem ` toolbox .the solutions are provided by the surface charge and current distributions and , which allow to compute the potentials and fields at the particle boundary and everywhere else ( using the green function of the helmholtz equation ) .finally , we use and to compute the electron energy loss probabilities , which can be directly compared with experimental eel data . rather than providing the additional classes separately ,we have embedded them in a new version of the ` mnpbem ` toolbox which supersedes the previous version .the main reason for this policy is that also the mie classes ` mieret ` and ` miestat ` had to be modified , which allow a comparison with analytic mie results and can be used for testing .the new version of the toolbox also corrects a few minor bugs and inconsistencies .however , we expect that all simulation programs that performed with the old version should also work with the new version .we have organized this paper as follows . in sec .[ sec : start ] we discuss how to install the toolbox and give a few examples demonstrating the performance of eels simulations .the methodology underlying our approach as well as a few implementation details are presented in sec .[ sec : theory ] . finally , in sec .[ sec : results ] we present results of our eels simulations and provide a detailed toolbox description .to install the toolbox , one must simply add the path of the main directory ` mnpbemdir ` of the ` mnpbem ` toolbox as well as the paths of all subdirectories to the matlab search path .this can be done , for instance , through addpath(genpath(mnpbemdir ) ) ; to set up the help pages , one must once change to the main directory of the ` mnpbem ` toolbox and run the program ` makemnpbemhelp ` > > cd mnpbemdir ; > > makemnpbemhelp ; once this is done , the help pages , which provide detailed information about the toolbox , are available in the matlab help browser .note that one may have to call ` start > desktop tools > view start button configuration > refresh ` to make the help accessible . 
Under MATLAB 2012 the help pages can be found on the start page of the help browser under _supplemental software_. The toolbox is almost identical to our previously published version. The only major difference concerns the inclusion of EELS simulations, which will be described in more detail in this paper. The `mnpbem` toolbox comes together with a directory `demoeels` containing several demo files. To get a first impression, we recommend working through these demo files. By changing to the demo directory and typing

>> demotrianglespectrum

at the MATLAB prompt, a simulation is performed in which the EEL spectra are computed for a triangular silver nanoparticle. The run time is reported in table [table:examples], and the simulation results are shown in fig. [fig:trianglespectrum]. One observes a number of peaks associated with the different plasmon modes of the nanoparticle. Note that through `plot(p,'edgecolor','b')` one can plot the nanoparticle boundary. By next running `demotrianglemap.m` we obtain the spatial EELS maps at the plasmon resonance energies indicated by dashed lines in fig. [fig:trianglespectrum]. Figure [fig:trianglemap](a) reports the map for the degenerate dipolar modes, whereas panels (b-d) show the EELS maps for higher excited plasmon modes (see ref. for experimental results).

Table [table:examples]: demo programs, typical run times, and descriptions.

  `demomie.m`               26.3 sec    comparison of BEM simulations with analytic Mie results
  `demomiestat.m`           14.7 sec    same as `demomie.m` but for the quasistatic limit
  `demodiskspectrum.m`      29.4 sec    EEL spectra for a nanodisk at selected beam positions
  `demodiskspectrumstat.m`  14.6 sec    same as `demodiskspectrum.m` but for the quasistatic limit
  `demodiskmap.m`           105.6 sec   spatial EELS maps for a nanodisk at selected loss energies
  `demotrianglespectrum.m`  173.5 sec   EEL spectra for a nanotriangle at selected beam positions
  `demotrianglemap.m`       104.6 sec   spatial EELS maps for a nanotriangle at selected loss energies

[ Figure [fig:trianglemap] caption: ... at the plasmon resonances indicated by dashed lines. The maps have been computed with the demo program `demotrianglemap.m`, with color axes scaled to the maxima of the respective maps (see fig. [fig:diskmap] for color bar). ]

In the following we briefly discuss the demo file `demotrianglespectrum.m` (see sec. [sec:results] for a detailed description of the software). We first set up a `comparticle` object `p` for the nanotriangle, which stores the particle boundary and the dielectric materials, as well as a BEM solver `bem` for the solution of Maxwell's equations. These steps have been described at some length in a previous paper. Additional information is also provided by the help pages. To set up the EELS simulation, we need the impact parameters of the electron beam, a broadening parameter for the triangle integration (see sec. [sec:penetrate] for details), and the electron velocity in units of the speed of light. Initialization is done through

b = [ -45, 0; 0, 0; 26, 0 ];
vel = 0.7;
width = 0.2;
exc = eelsret( p, b, width, vel );

The `exc` object returns through `exc(enei)` the external potentials for a given loss energy, which allow for the solution of the BEM equations by means of artificial surface charges and currents `sig`.
From `sig` we can obtain the surface and bulk loss probabilities for the electron.

sig = bem \ exc( enei );
[ psurf, pbulk ] = exc.loss( sig );

Finally, the EEL spectrum can be computed by performing a loop over loss energies, and EEL maps can be obtained by providing a rectangular grid of impact parameters. A more detailed description of the EELS classes will be given in sec. [sec:results].

For the sake of completeness, we start by briefly summarizing the main concepts of the BEM approach for the solution of Maxwell's equations (see refs. for a more detailed discussion). We consider dielectric nanoparticles, described through local and isotropic dielectric functions, which are separated by sharp boundaries. Throughout, we set the magnetic permeability and consider Maxwell's equations in frequency space. In accordance with refs. , we adopt a Gaussian unit system. The basic ingredients of the BEM approach are the scalar and vector potentials and , which are related to the electromagnetic fields via . Here and are the wavenumber and the speed of light in vacuum, respectively. The potentials are connected through the Lorentz gauge condition . Within each medium, we introduce the Green function for the Helmholtz equation, defined through , with being the wavenumber in the medium. For an inhomogeneous dielectric environment, we then write down the solutions of Maxwell's equations in the _ad-hoc_ form [eq:adhoc] , where and are the scalar and vector potentials characterizing the external perturbation. Owing to eq. , these expressions fulfill the Helmholtz equations everywhere except at the particle boundaries. and are surface charge and current distributions, which are chosen such that the boundary conditions of Maxwell's equations at the interfaces between regions of different permittivities hold. This leads to a number of integral equations. Upon discretization of the particle boundaries into boundary elements, one obtains a set of linear equations that can be inverted, thus providing the solutions of Maxwell's equations in terms of the surface charge and current distributions and . Through eqs. ([eq:adhoc]a,b) one can compute the potentials everywhere else. For further details about the working equations of the BEM approach the reader is referred to refs. .

In the following we consider the situation where an electron passes by or penetrates through a metallic nanoparticle, and loses energy by exciting particle plasmons. We assume that the electron kinetic energy is much higher than the plasmon energies (for typical electron microscopes operating with electron energies of several hundred keV this assumption is certainly fulfilled). We can thus discard in the electron trajectory the small change of velocity due to plasmon excitation, and describe the loss process in lowest-order perturbation theory. We emphasize that our approach is correspondingly not suited for low electron energies or thick samples. For an electron trajectory , with , the electron charge distribution reads . Here and are the charge and velocity of the electron, respectively, is the impact parameter in the -plane, and is a wavenumber. The potentials associated with the charge distribution of eq. can be computed in infinite space analytically (Liénard-Wiechert potentials), and we obtain , where is the modified Bessel function of order zero, and . Within our BEM approach, we can directly insert the expressions of eq. for the unbounded medium into eq.
since the calculated surface charge and current distributions and will automatically guarantee that the proper boundary conditions at the interfaces are fulfilled .we next turn to the calculation of the electron energy loss . ignoring the small change of the electron velocity caused by the interaction with the plasmonic nanoparticle, the energy loss can be computed from the work performed by the electron against the induced field \,dt=\int_0^\infty \hbar\omega \gamma_{\rm eels}(\bm r,\omega)\,d\omega\,,\ ] ] with the loss probability , given per unit of transferred energy , \bigr\}\,dt+ \gamma_{\rm bulk}(\omega)\,.\ ] ] note that eq .is a classical expression , where has been introduced only to relate energy and frequency . is the induced electric field , which can be computed from the potentials originating from the surface charge and current distributions and alone . is the bulk loss probability for electron propagation inside a lossy medium , see eq .( 18 ) of ref . . within the quasistatic approximtion ,it is proportional to the loss function $ ] and the propagation length inside the medium .expressions similar to eq . but derived within a full quantum approach , based on the born approximation , can be found in ref . .quite generally , can be computed by calculating the induced electric field along the electron trajectory and evaluating the expression given in eq . .in what follows , we describe a computationally more efficient scheme .insertion of the induced potentials of eq . into the energy loss expression of eq .yields \,.\ ] ] here and are the entrance and exit points of the electron beam in a given medium , and parameterizes the electron trajectory .we next introduce a potential - like term , associated with the electron propagation inside a given medium . performing integration by parts, we can rewrite the second term in parentheses of eq . as the first term on the right - hand side gives , upon insertion into eq ., .the integral expression precisely corresponds to the scalar potential at the crossing points of the trajectory with the particle boundary . as the potential is continuous across the boundaries , the contributions of all crossing points sum up to zero .thus , we arrive at our final result \,.\ ] ] in comparison to eq . , this expression has the advantage that the integration is only performed over the particle boundary , where the surface charge and current distributions and are readily available , and we do nt have to compute the induced electric field along the electron trajectory . in the calculation of the external potential , eq . , and the eels probability of eq .the points where the electron trajectory crosses the boundary have to be treated with care .for small distances , the potential scales with .when integrating this expression within our bem approach over a small area , we find in polar coordinates that remains finite .the same is true for the surface derivative of the potential . in a computational approachit is somewhat tedious to perform such integration properly , in particular for crossing points that are located close to the edges or corners of boundary elements .for this reason , we suggest a slightly different approach . the main idea is to replace the delta - like transversal trajectory profile by a smoothened distribution .the potential at the transverse position then reads \,.\ ] ] here [ see eq . ] and the term in brackets is our smoothing function , with being a parameter that determines the transversal extension . 
for small arguments, we can expand , where is the euler constant , and perform all integrations analytically to obtain .this suggests replacing the potential of eq . by the smoothened function for large arguments this expression coincides with the linard - wiechert potential , but remains finite for small arguments .a corresponding smoothening is also performed in the potential - like function of eq . .in the quasistaic approximation one assumes that the size of the nanostructure is significantly smaller than the wavelength of light , such that .this allows us to keep in the simulations only the scalar potential and to set in the green function .we are thus left with the solution of the laplace or poisson equation , rather than the helmholtz equation , but we keep the full frequency - dependence of the permittivities . the calculation of eels probabilities with the bem approach has been described in some detail in ref . . in the following we briefly describe the basic ingredients .first , we compute the external potential from the solution of poisson s equation with being the charge distribution of eq . .we next compute the surface charge distribution from the solution of the boundary integral equation , which , for a nanoparticle described by a single dielectric function embedded in a background of dielectric constant , reads here is the static green function and denotes the surface derivative , where is the outer surface normal of the boundary . for materials consisting of more than one material , eq .has to be replaced by a more general expression .finally , we compute the electron energy loss probability from ( see also eq . ( 18 ) of ref . ) where is the bulk loss probability ( see eq . ( 19 ) of ref .we first discuss the demo file ` demomie.m ` that simulates the energy loss probability for an electron passing by a silver nanosphere .figure [ fig : mie ] shows results of our bem simulations which are in good agreement with analytic mie results .let us briefly work through the demo program . in the first lines we define the dielectric materials and the nanosphere ( for a more detailed discussion of the ` mnpbem ` toolbox see ref . ) .epsm = epstable ( silver.dat ) ; epstab = epsconst ( 1 ) , epsm ; diameter = 80 ; p = comparticle ( epstab , trisphere ( 256 , diameter ) , [ 2 , 1 ] , 1 ) ; we next define the excitation of the electron beam . for the solution of the full maxwell equations , the excitation and the calculation of the eels probabilityis performed by the ` eelsret ` class , which is initialized through exc = eelsret ( p , impact , width , vel , propertyname , propertyvalue , ... ) here ` p ` is the previously computed ` comparticle ` object , which stores the particle boundaries and the dielectric functions at both sides of the boundary . `impact ` is a vector ` [ x , y ] ` for the impact parameter of the electron beam defined in eq . .if simulations for various impact parameters are requested , as is usually the case for the simulation of eels maps , ` impact ` can also be an array ` [ x1,y1;x2,y2 ; ... ] ` . `width ` is the broadening parameter of the electron beam , see eq . , which will be discussed in more detail below , and ` vel `is the electron velocity to be given in units of the speed of light in vacuum .the optional pairs of property names ( ` ' cutoff ' ` , ` ' rule ' ` , or ` ' refine ' ` ) and values allow to control the performance of the toolbox , as detailed in sec .[ sec : penetrate ] . 
In the `demomie.m` program we next set up the EELS excitation with

b = 10;
vel = eelsbase.ene2vel( 200e3 );
[ width, cutoff ] = deal( 0.5, 8 );
exc = eelsret( p, [ diameter / 2 + b, 0 ], width, vel, 'cutoff', cutoff );

Note that `eelsbase.ene2vel` converts a kinetic electron energy in eV to the electron velocity in units of the speed of light in vacuum. In the above example, a kinetic energy of 200 keV corresponds to a velocity of approximately . We next set up the solver `bemret` for the solution of the BEM equations and compute the loss probabilities of eq. for various loss energies.

bem = bemret( p );
ene = linspace( 2.5, 4.5, 80 );
psurf = zeros( size( ene ) );
for ien = 1 : length( ene )
  sig = bem \ exc( eV2nm / ene( ien ) );
  psurf( ien ) = exc.loss( sig );
end

Here `sig` is a `compstruct` object that stores the surface charge and current distributions and , inside and outside the particle boundaries, as computed for the EELS excitation of eq. . With `exc.loss(sig)` we finally compute the loss probabilities according to eq. . Note that `eV2nm`, defined in `units.m`, converts between energies given in electronvolts and wavelengths given in nanometers, the latter being the units used by the `mnpbem` toolbox.

[ Figure [fig:beam] caption: (a) ... and of eq. are integrated over the boundary element. The boundary element integration is controlled by the `refine` and `rule` properties, as described in more detail in the text. In the figure we set `refine=2`. (b) EEL spectra for a silver nanodisk with a diameter of 60 nm and a height of 10 nm. The impact parameters of the electron beams for the different spectra are reported in the inset, and the beam propagation direction is the -direction perpendicular to the shaded disk. We investigate different `width` parameters of 0.1 nm (solid lines), 0.2 nm (dashed lines), and 0.5 nm (dash-dotted lines), finding practically no differences in the results. The `cutoff` parameter is set to 10 nm. ]

To summarize, in the following we list the most important properties of the `eelsret` class

exc = eelsret( p, impact, width, vel );
pot = exc( enei );
[ psurf, pbulk ] = exc.loss( sig );

We emphasize that the functionality of the `eelsret` class is very similar to that of the `planewaveret` and `dipoleret` classes, which account for plane wave and dipole excitations. The only major difference is that `eelsret` requires the particle boundaries `p` of the `comparticle` object already in the initialization. This is because upon initialization `eelsret` computes the crossing points between the particle boundaries and the electron trajectories (if the electron passes by the nanoparticle, no crossing points are found), and these crossing points are used in subsequent calculations of the potentials and the loss probabilities to speed up the simulation.

We next investigate the situation where the electron beam passes through the nanoparticle. The working principle is almost identical to the previous case where the electron passes by the nanoparticle, but the `width` parameter and the different optional properties have to be set with more care. In the following, we briefly discuss these quantities in more detail; a minimal initialization using these parameters is sketched after the list.

* `width`. We have discussed in sec. [sec:refine] that the integration of the external potentials over the boundary elements can be facilitated if we use a smoothening parameter in the calculation of the external potentials, see eq. . It is important to stress that the integration could also be performed for , and that the finite value only facilitates the computation. In general, `width` should be chosen smaller than the average size of the boundary elements, as also shown in fig. [fig:beam](a).

* `cutoff`. The `cutoff` parameter determines those boundary elements over which the external potentials are integrated. In more detail, we select all boundary elements fully or partially located within a circle of radius `cutoff` centered around the impact parameter, as shown in fig. [fig:beam](a) by the red circle. `cutoff` should be set such that at least all direct neighbours of the boundary element crossed by the electron beam are included.

* `refine` and `rule`. The integration over the boundary elements is controlled by `refine` and `rule`. `rule` gives the number of integration points within a triangle; quadrilateral faces are divided into two triangles. By default `rule=18` is used (see `doc triangle_unit_set` for details), and we recommend using this value throughout. With `refine` one can split the triangles into subtriangles. Usually the default value `refine=1` gives sufficiently accurate results.
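As a point of orientation, a minimal initialization consistent with the above guidelines could look as follows. The numerical values are illustrative assumptions for a nanometer-sized particle rather than prescriptions, and `p`, `impact`, and the loss-energy loop are assumed to be set up as in the previous examples; only the `'cutoff'` and `'refine'` property names are taken from the `eelsret` interface described above.

width  = 0.2;                        % smaller than the typical boundary element size ( assumed value )
cutoff = 10;                         % integration radius in nm around the impact parameter ( assumed value )
vel    = eelsbase.ene2vel( 200e3 );  % 200 keV electrons
exc    = eelsret( p, impact, width, vel, 'cutoff', cutoff, 'refine', 2 );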
in fig .[ fig : diskmap ] we show a density plot of identical loss spectra for a whole range of impact parameters .one observes a number of peaks , attributed to the dipolar and quadrupolar modes at 2.6 ev and 3.1 ev , respectively , a breathing mode at 3.5 ev , and the bulk losses at 3.8 ev . .for ` width=0 ` in panel ( a ) the computed maps show spikes at certain points , when the impact parameter is too close to a collocation point . for the moderate ` width ` parameters of panels ( b , c )the results are sufficiently smooth and almost independent on the chosen value , whereas for too large parameters the eels map become smeared out , see panel ( d ) . ]how should one chose the ` width ` , ` cutoff ` , and ` refine ` parameters ?quite generally , the results depend rather unsensitively on the chosen parameters . in fig . [fig : beam ] we show results for various ` width ` parameters listed in the figure caption , which are almost indistinguishable . in fig .[ fig : diskbeam ] we depict eels maps for the dipolar disk mode at 2.6 ev and for various ` width ` parameters . for ` width=0 ` in panel ( a ) one observes for certain impact parameters spikes ( some of them indicated with arrows ) , where the loss probabilities becomes significantly enhanced or reduced in comparison to neighbour points .this indicates that the impact parameter is located too closely to the collocation point of the boundary element , and the numerical integration fails . for ` width ` parameters of 0.1 or 0.2 nm , panels ( b , c ) , these spikes are absent and the results are almost indistinguishable .finally , in panel ( d ) we report results for a too large smoothening parameter with a significant smearing of the features visible in panels ( a c ) . thus , ` width ` should be chosen significantly smaller than the size of the boundary elements but large enough to avoid spikes in the computed eels maps .let us finally briefly discuss the simulation of eels maps for a nanotriangle , as computed with the demo program ` demotrianglemap.m ` .results are shown in fig .[ fig : trianglemap ] .first we set up an array of impact parameters and initialize the ` eelsret ` object .[ x , y ] = meshgrid ( linspace ( - 70 , 50 , 50 ) , linspace ( 0 , 50 , 35 ) ) ; impact = [ x ( : ) , y ( : ) ] ; vel = eelsbase.ene2vel ( 200e3 ) ; [ width , cutoff ] = deal ( 0.2 , 10 ) ; exc = eelsret ( p , impact , width , vel , cutoff , cutoff ) ; note that in the initialization of ` exc ` we pass a matrix ` [ x(:),y ( : ) ] ` of impact parameters .as for the bem solver , we recommend to use for the boundary element integration the same or larger ` ' cutoff ' ` and ` ' refine ' ` values as for the ` eelsret ` object ( see ref . and toolbox help pages for further details ) .op = green.options ( cutoff , 20 , refine , 2 ) ; bem = bemret ( p , [ ] , op ) ; finally , once the loss probabilities are computed one should reshape ` psurf ` and ` pbulk ` to the size of the impact parameter mesh .p = reshape ( psurf + pbulk , size ( x ) ) ; -direction for dipolar disk mode .the simulation and disk parameters are identical to those of fig .[ fig : beam ] , the fields are computed with ` demodiskfield.m ` . 
In some cases it is useful to plot the electromagnetic fields induced by the electron beam. We briefly discuss how this can be done. A demo program is provided by `demodiskfield.m`, and the simulation results are shown in fig. [fig:diskfield]. We first set up the `eelsret` object and the BEM solver, following the prescription given above, and compute the surface charge and current distributions.

exc = eelsret( p, [ b, 0 ], width, vel, 'cutoff', cutoff );
bem = bemret( p, [ ], op );
sig = bem \ exc( enei );

Next, we define the points where the electric field should be computed, using the `compoint` class of the `mnpbem` toolbox, and define a Green function object `compgreen`.

z = linspace( -80, 80, 1001 ).';
pt = compoint( p, [ b + 0 * z, 0 * z, z ], 'mindist', 0.1 );
g = compgreen( pt, p, op );
field = g.field( sig );
e = pt( field.e );

In the last two lines we compute the electromagnetic fields and extract the electric field. The command `e = pt(field.e)` brings `e` to the same form as `z`, setting fields at points too close to the boundary (which we have discarded in our `compoint` initialization with the parameter `mindist`) to `nan`. Figure [fig:diskfield] shows simulation results. For the electron beam passing through the nanodisk, the field amplitude increases strongly in the vicinity of the nanoparticle, which we attribute to evanescent plasmonic fields, and is very small inside the nanodisk because of the efficient free-carrier screening inside conductors.

With the `mnpbem` toolbox it is also possible to compute the light emitted from the nanoparticles, the so-called _cathodoluminescence_. To this end, we first set up a `spectrumret` object for the calculation of scattering spectra, determine the electromagnetic fields at infinity, and finally compute the scattering spectra.

spec = spectrumret;
field = farfield( spec, sig );
sca = scattering( spec, field );

In the initialization of `spec` one could also use a sphere segment rather than the default unit sphere, e.g., to account for the finite angle coverage of photodetectors.

Efficient parallelization can be achieved for typical energy loops of the form:

for ien = 1 : length( enei )
  sig = bem \ exc( enei( ien ) );
  [ psurf( :, ien ), pbulk( :, ien ) ] = exc.loss( sig );
end

We can replace this loop with:

matlabpool open;
parfor ien = 1 : length( enei )
  sig = bem \ exc( enei( ien ) );
  [ psurf( :, ien ), pbulk( :, ien ) ] = exc.loss( sig );
end

The important point is that all computations inside the loop can be performed independently, as is the case for the BEM simulation as well as for the calculation of the external potentials and loss probabilities.

[ Figure [fig:diskspectrumstat] caption: ... , but computed within the quasistatic limit using the demo program `demodiskspectrumstat.m`. The solid lines report simulation results for the full Maxwell equations (same as fig. [fig:beam]); the dashed lines report results for the quasistatic approximation. For the dipolar mode at lowest energy, the peak position and width differ somewhat due to retardation effects and radiation damping. ]

The implementation of the quasistatic limit within the `eelsstat` class closely follows the retarded case. The demo program `demodiskspectrumstat.m` is very similar to `demodiskspectrum.m` discussed in sec. [sec:penetrate]. We first set up a disk-like nanoparticle and specify the electron beam parameters. Next, we initialize an `eelsstat` object by calling

exc = eelsstat( p, b, width, vel, 'cutoff', cutoff, 'refine', 2 );

The definition of the various parameters is identical to the retarded case. Finally, we set up the quasistatic `bemstat` or `bemstateig` BEM solver, and compute the surface charge distribution and the energy loss probability using the equations presented in sec. [sec:quasistatic]

bem = bemstat( p, [ ], op );
for ien = 1 : length( enei )
  sig = bem \ exc( enei( ien ) );
  [ psurf( :, ien ), pbulk( :, ien ) ] = exc.loss( sig );
end

Simulation results are shown in fig. [fig:diskspectrumstat]. We observe that the results of the full and quasistatic simulations are very similar; only for the dipolar mode at lowest energy do the peak position and width differ somewhat due to retardation effects and radiation damping.

I am grateful to Andi Trügler for most helpful discussions, and thank him as well as Toni Hörl, Jürgen Waxenegger, and Harald Ditlbacher for numerous feedback on the simulation program. This work has been supported by the Austrian Science Fund FWF under project P24511-N26 and the SFB NextLite. | Within the `mnpbem` toolbox, we show how to simulate electron energy loss spectroscopy (EELS) of plasmonic nanoparticles using a boundary element method approach. The methodology underlying our approach closely follows the concepts developed by García de Abajo and coworkers [for a review see Rev. Mod. Phys. 82, 209 (2010)]. We introduce two classes `eelsret` and `eelsstat` that allow, in combination with our recently developed `mnpbem` toolbox, for a simple, robust, and efficient computation of EEL spectra and maps. The classes are accompanied by a number of demo programs for EELS simulation of metallic nanospheres, nanodisks, and nanotriangles, and for electron trajectories passing by or penetrating through the metallic nanoparticles. We also discuss how to compute electric fields induced by the electron beam and cathodoluminescence.
intermittency is well - known to be an ubiquitous phenomenon in nonlinear science .its arousal and main statistical properties have been studied and characterized already since long time ago , and different types of intermittency have been classified as types i iii intermittencies , on off intermittency , eyelet intermittency and ring intermittency . despite of some similarity ( the presence of two different regimes alternating suddenly with each other in the time series ) ,every type of intermittency is governed by its own certain mechanisms and characteristics of the intermittent behavior ( such as the dependence of the mean length of the laminar phases on the control parameter , the distribution of the laminar phase lengths , etc . ) of different intermittency types are distinct .there are no doubts that different types of intermittent behavior may take place in a wide spectrum of systems , including cases of practical interest for applications in radio engineering , medical , physiological , and other applied sciences .this article is devoted to the comparison of characteristics of type - i intermittency in the presence of noise and eyelet intermittency taking place in the vicinity of the phase synchronization boundary .although these types of intermittency are known to be characterized by different theoretical laws , we show here for the first time that these two types of the intermittent behavior considered hitherto as different phenomena are , in fact , the same type of the system dynamics .the structure of the paper is the following . in sec .[ sct : relation ] we give the brief theoretical data concerning both the type - i intermittency with noise and eyelet intermittency observed in the vicinity of the phase synchronization boundary as well as arguments confirming the equivalence of these types of the dynamics .the next sections [ sct : numericalverification][sct : vdpunderrsslr ] aim to verify the statement given in the sec .[ sct : relation ] by means of numerical simulations of the dynamics of several model systems such as a quadratic map , rssler oscillators , etc . eventually , in sec .[ sct : boundary ] we discuss the problem of the upper boundary of the intermittent behavior .the final conclusions are given in sec .[ sct : conclusions ] .first , we consider briefly both eyelet intermittency in the vicinity of the phase synchronization boundary and type - i intermittency in the presence of noise following conceptions accepted generally . the main arguments confirming equivalence of these types of the intermittent behavior are given afterwards . the intermittent behavior of type - i is known to be observed below the saddle - node bifurcation point , with the mean length of laminar phases being inversely proportional to the square root of the criticality parameter , i.e. where is the control parameter and is its bifurcation value corresponding to the bifurcation point .the influence of noise on the system results in the transformation of characteristics of intermittency , with the intermittent behavior being observed in this case both below and above the saddle - node bifurcation point . in the supercritical region of the control parameter values ( i.e. , above the point of bifurcation , ) the mean length of the laminar phases is given by where , is the intensity of a delta - correlated white noise [ , , with equation ( [ eq : typeilawmeanlength ] ) being applicable in the region of the control parameter plane . 
in this regionthe criticality parameter is large enough and , therefore , the approximate equation ( where ) is used typically ( see for detail ) instead of ( [ eq : typeilawmeanlength ] ) . in turn, the distribution of the laminar phase lengths is governed by the exponential law for the chaotic systems in the vicinity of the phase synchronization boundary ( if the natural frequencies of oscillator and external signal are detuned slightly ) two types of the intermittent behavior and , correspondingly , two critical values are reported to exist .below the boundary of the phase synchronization regime the dynamics of the phase difference features time intervals of the phase synchronized motion ( laminar phases ) persistently and intermittently interrupted by sudden phase slips ( turbulent phases ) during which the value of jumps up by . for two coupled chaotic systems there are two values of the coupling strength being the characteristic points which are considered to separate the different types of the dynamics . below the coupling strength value the type - i intermittency is observed , with the power law taking place for the mean length of the laminar phases , whereas above the critical point the phase synchronization regime is revealed . for the coupling strength the super - long laminar behavior ( the so called _ `` eyelet intermittency '' _ ) should be detected .for eyelet intermittency ( see , e.g. ) the dependence of the mean length of the laminar phases on the criticality parameter is expected to follow the law or ( , and are the constants ) given for the first time in for the transient statistics near the unstable - unstable pair bifurcation point . the analytical form of the distribution of the laminar phase lengths has not been reported anywhere hitherto for eyelet intermittency .the theoretical explanation of the eyelet intermittency phenomenon is based on the boundary crisis of the synchronous attractor caused by the unstable - unstable bifurcation when the saddle periodic orbit and repeller periodic orbit join and disappear .this type of the intermittent behavior has been observed both in the numerical calculations and experimental studies for the different nonlinear systems , including rssler oscillators .although type - i intermittency with noise and eyelet intermittency taking place in the vicinity of the chaotic phase synchronization onset seem to be different phenomena , they are really the same type of the dynamics observed under different conditions .the difference between these types of the intermittent behavior is only in the character of the external signal . in case of the type - i intermittency with noise the stochastic signal influences on the system , while in the case of eyelet intermittency the signal of chaotic dynamical system is used to drive the response chaotic oscillator . at the same time , the core mechanism governed the system behavior ( the motion in the vicinity of the bifurcation point disturbed by the stochastic or deterministic perturbations ) is the same in both cases . to emphasize the weak difference in the character of the driving signal we shall further use the terms `` type - i intermittency with noise '' and `` eyelet intermittency '' despite of the fact of the equivalence of these types of the intermittent behavior . 
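For orientation, the two scalings discussed above are usually quoted in the following generic form; these are standard textbook expressions with system-dependent constants, and they are not necessarily identical to the exact expressions referred to by the equation labels above:

$$\langle l \rangle \propto (\varepsilon_c-\varepsilon)^{-1/2}, \qquad \varepsilon<\varepsilon_c \quad \text{(type-I intermittency without noise)},$$

$$\langle l \rangle \propto \exp\!\left[k\,(\varepsilon_c-\varepsilon)^{-1/2}\right] \quad \text{(eyelet intermittency below the phase synchronization boundary)},$$

where $\varepsilon$ is the control (coupling) parameter, $\varepsilon_c$ the corresponding critical value, and $k$ a positive constant.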
indeed , the phenomena observed near the synchronization boundary for periodic systems whose motion is perturbed by noise ( in other words , the behavior in the vicinity of the saddle - node bifurcation perturbed by noise ) have been shown recently to be the same as for chaotic oscillators in the vicinity of the phase synchronization boundary .thus , both for two coupled chaotic rssler systems and driven van der pol oscillator the same scenarios of the synchronous regime destruction have been revealed .moreover , for two coupled rssler systems the behavior of the conditional lyapunov exponent in the vicinity of the onset of the phase synchronization regime is governed by the same laws as in the case of the driven van der pol oscillator in the presence of noise .additionally , when the turbulent phase begins the phase trajectory demonstrates motion being close to periodic both for the eyelet intermittency observed in the vicinity of the phase synchronization boundary ( see ) and for type - i intermittency with noise .finally , the repeller and saddle periodic orbits of the same period in the vicinity of the parameter region corresponding to the intermittent behavior tend to coalesce with each other ( see , e.g. ) for both these types of the intermittent behavior .obviously , if the phenomena observed near the saddle - node bifurcation point for the systems whose motion is perturbed by noise are the same as for chaotic oscillators in the vicinity of the phase synchronization onset , one can expect that the intermittent behavior of two coupled chaotic oscillators near the phase synchronization boundary ( eyelet intermittency ) is also exactly the same as intermittency of type - i in the presence of noise in the supercritical region .so , if type - i intermittency with noise and eyelet intermittency taking place in the vicinity of the chaotic phase synchronization onset are the same type of the system dynamics , the theoretical equations ( [ eq : typeilawmeanlengthapprox ] ) and ( [ eq : eyeletlawmeanlength ] ) obtained for these types of the intermittent behavior are the approximate expressions being the different forms of eq .( [ eq : typeilawmeanlength ] ) describing the dependence of the mean length of the laminar phases on the criticality parameter .therefore , eq . ( [ eq : eyeletlawmeanlength ] ) can be deduced from eq .( [ eq : typeilawmeanlengthapprox ] ) and vice versa . as a consequence , the coefficients , and , in ( [ eq : typeilawmeanlengthapprox ] ) and ( [ eq : eyeletlawmeanlength ] )are related with each other .obviously , the mean length of the laminar phases must obey eq .( [ eq : typeilawmeanlengthapprox ] ) and eq .( [ eq : eyeletlawmeanlength ] ) simultaneously , independently whether the system behavior is classified as eyelet intermittency or type - i intermittency with noise . additionally , the laminar phase length distribution for the considered type of behavior must satisfy the exponential law ( [ eq : lampahselengthdistribution ] ) . the intermittent behavior under study is considered in the coupling strength range . in the case of the system driven by external noise ( type - i intermittency with noise ) the lower boundary value corresponds to the saddle - node bifurcation point when external noise is switched off .for the dynamical systems demonstrating the chaotic behavior ( eyelet intermittency ) the lower boundary may be found , e.g. 
, in the way described in .as far as the choice of the upper boundary value is concerned , this subject is discussed in detail in sec .[ sct : boundary ] of this paper both for chaotic and stochastic external signals . to find the relationship between coefficients in ( [ eq : typeilawmeanlengthapprox ] ) and ( [ eq : eyeletlawmeanlength ] ) we introduce the auxiliary variable and expand determined by eq .( [ eq : typeilawmeanlengthapprox ] ) ( type - i intermittency with noise ) into taylor series in the vicinity of the point , where , i.e. having neglected the term in ( [ eq : teylorseries ] ) one can write eq .( [ eq : teylorseries ] ) in the form having required we obtain , that and , therefore , equation ( [ eq : typeimodified ] ) describing the dependence of the mean length of the laminar phases on the criticality parameter for type - i intermittency with noise coincides exactly with eq .( [ eq : eyeletlawmeanlength ] ) corresponding to eyelet intermittency .correspondingly , in terms of eq .( [ eq : c2coefficientvalue ] ) the relationship between coefficients , and , in ( [ eq : typeilawmeanlengthapprox ] ) and ( [ eq : eyeletlawmeanlength ] ) is the following simulating the theoretical law ( [ eq : typeilawmeanlengthapprox ] ) and its approximation by the curve corresponding to law ( [ eq : eyeletlawmeanlength ] ) for eyelet intermittency .the parameter values are , , , , .the points corresponding to are shown by symbols .the theoretical law ( [ eq : eyeletlawmeanlength ] ) for eyelet intermittency is shown by the solid line ] fig .[ fgr : twolaws ] illustrates the relationship of two theoretical laws ( [ eq : typeilawmeanlengthapprox ] ) and ( [ eq : eyeletlawmeanlength ] ) in the region . herefunction simulates the theoretical law ( [ eq : typeilawmeanlengthapprox ] ) ( the critical point is supposed to be ) , whereas the curve corresponds to law ( [ eq : eyeletlawmeanlength ] ) for eyelet intermittency .the value of coefficients , , and have been selected according to eq .( [ eq : coefficients ] ) .one can see that in the region of the study both curves coincide with each other .it means that the mean length of the laminar phases obeys eq .( [ eq : typeilawmeanlengthapprox ] ) and eq .( [ eq : eyeletlawmeanlength ] ) simultaneously , independently whether the system behavior is classified as eyelet intermittency or type - i intermittency with noise .to confirm the concept of the equivalence of intermittencies being the subject of this study we consider several examples of the intermittent behavior classified both as eyelet intermittency taking place in the vicinity of the phase synchronization onset ( two coupled rssler systems ) and type - i intermittency with noise ( quadratic map and driven van der pol oscillator ) . as we have mentioned above , the intermittent behavior of two coupled chaotic oscillators in the vicinity of the phase synchronization boundary is classified traditionally as _ eyelet intermittency_ .nevertheless , the behavior of two coupled rssler oscillators close to the phase synchronization onset was considered from the point of view of type - i intermittency with noise for the first time in , whereas the same dynamics from the position of eyelet intermittency was studied in . according to different works the mean length of laminar phases happens to satisfy both eq .( [ eq : typeilawmeanlengthapprox ] ) ( ref . 
) and eq .( [ eq : eyeletlawmeanlength ] ) ( ref .recently the distribution of the laminar phase lengths has been found to obey the exponential law ( [ eq : lampahselengthdistribution ] ) corresponding to type - i intermittency with noise . to give the complete picture we replicate the consideration of two coupled rssler systems near the onset of the phase synchronization regime for the different type of coupling between oscillators and another set of the control parameter values andshow that the observed intermittent behavior may be classified both as eyelet intermittency and type - i intermittency with noise .the system under study is represented by a pair of unidirectionally coupled rssler systems , whose equations read as where [ are the cartesian coordinates of the drive [ the response ] oscillator , dots stand for temporal derivatives , and is a parameter ruling the coupling strength. the other control parameters of eq .( [ eq : roesslers ] ) have been set to , , , in analogy with our previous studies .the ( representing the natural frequency of the response system ) has been selected to be ; the analogous parameter for the drive system has been fixed to .for such a choice of the control parameter values , both chaotic attractors of the drive and response systems are phase coherent. the instantaneous phase of the chaotic signals can be therefore introduced in the traditional way as the rotation angle on the projection plane of each system . )are shown by symbols `` '' .the theoretical laws ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) are shown by the solid lines .( _ a _ ) _ eyelet intermittency : _ the dependence of on the parameter ; , , .( _ b _ ) _ type - i intermittency with noise : _ the dependence of the mean laminar phase length on the parameter , with the ordinate axis being shown in the logarithmic scale ; , , in fig. [ fgr : rsslrsgraph ] one and the same result of the numerical simulation of two coupled rssler systems ( [ eq : roesslers ] ) is shown in different ways to compare obtained data with the analytical predictions ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) for eyelet intermittency taking place near the phase synchronization boundary ( fig .[ fgr : rsslrsgraph],_a _ ) and type - i intermittency with noise ( fig .[ fgr : rsslrsgraph],_b _ ) , respectively .the dependence of on is shown in the whole range of the coupling parameter strength values ( fig .[ fgr : rsslrsgraph],_a _ ) to make evident the deviation of numerically obtained data from law ( [ eq : eyeletlawmeanlength ] ) far away from the onset of the phase synchronization .the coupling strength plays the role of the control parameter .the critical point relates to the onset of the phase synchronization regime in two coupled rssler systems .the point used in ( [ eq : typeilawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) corresponds to the saddle - node bifurcation point if the chaotic dynamics being the analog of noise could be switched off .the value of this point has been found from the dependence of the zero conditional lyapunov exponent on the coupling strength ( see for detail ) .one can see , that the intermittent behavior of two coupled rssler systems may be treated both as eyelet and noised type - i intermittency with the excellent agreement between numerical data and theoretical curve in both cases . 
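the kind of simulation underlying fig . [ fgr : rsslrsgraph ] is easy to reproduce . the following sketch is only indicative : the displayed equations and parameter values of the coupled rössler systems were lost in extraction , so the coupling term ( added here to the y - equation of the response oscillator ) and all numbers below are assumptions chosen in the usual way , not the values used in the paper . the phase of each oscillator is taken as the rotation angle on the ( x , y ) projection plane , and a phase slip is detected as a 2*pi jump of the unwrapped phase difference .

```python
# Sketch: two unidirectionally coupled Rössler oscillators, phase-slip statistics.
# Coupling form and all parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

a, p, c = 0.15, 0.2, 10.0        # assumed Rössler parameters
w_d, w_r = 0.93, 0.95            # assumed natural frequencies (drive, response)
eps = 0.04                       # coupling strength, the control parameter

def rhs(t, s):
    x1, y1, z1, x2, y2, z2 = s
    return [-w_d * y1 - z1,
            w_d * x1 + a * y1,
            p + z1 * (x1 - c),
            -w_r * y2 - z2,
            w_r * x2 + a * y2 + eps * (y1 - y2),   # unidirectional coupling
            p + z2 * (x2 - c)]

dt, t_end = 0.05, 1.0e4
t = np.arange(0.0, t_end, dt)
sol = solve_ivp(rhs, (0.0, t_end), [1, 0, 0, 0.9, 0.1, 0],
                t_eval=t, rtol=1e-8, atol=1e-8)

skip = int(200.0 / dt)           # discard the transient
phi1 = np.unwrap(np.arctan2(sol.y[1, skip:], sol.y[0, skip:]))
phi2 = np.unwrap(np.arctan2(sol.y[4, skip:], sol.y[3, skip:]))
dphi = phi1 - phi2

# a phase slip is a 2*pi jump of the phase difference; laminar phases are the
# intervals between consecutive slips
winding = np.round(dphi / (2 * np.pi))
slip_idx = np.where(np.diff(winding) != 0)[0]
if slip_idx.size > 1:
    laminar = np.diff(t[skip:][slip_idx])
    print("slips:", slip_idx.size, " mean laminar phase length:", laminar.mean())
else:
    print("no slips in this run: increase t_end or decrease eps")
```

in a careful analysis consecutive detections belonging to one and the same slip should be merged , and much longer integration times are needed to estimate the mean laminar length reliably .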
moreover , the coefficients , and , of the theoretical equations ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) agree very well with each other according to eq .( [ eq : coefficients ] ) .it allows us to state that both these effects are the same type of the system dynamics . nevertheless , to be totaly convinced of the correctness of our decision we have to consider other examples of the intermittent behavior classified traditionally ( contrary to the previous case of two coupled rssler systems ) as type - i intermittency with noise .the second sample dynamical system to be considered is van der pol oscillator driven by the external harmonic signal with the amplitude and frequency with the added stochastic term .the values of the control parameters have been selected as , .for the selected values of the control parameters and the dynamics of the driven van der pol oscillator becomes synchronized when that corresponds to the saddle - node bifurcation .the probability density of the random variable is where . to integrate eq .( [ eq : drivenvdposcillatorandnoise ] ) the one - step euler method has been used with time step , the value of the noise intensity has been fixed as . )are shown by symbols `` '' .the theoretical laws ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) are shown by the solid lines .( _ a _ ) _ eyelet intermittency : _ the dependence of on the parameter ; , , .( _ b _ ) _ type - i intermittency with noise : _ the dependence of the mean laminar phase length on the parameter ; , , on the one hand , as it has been discussed above , the intermittent behavior in this case have to be classified as type - i intermittency with noise .the corresponding dependence of the mean length of laminar phases on the criticality parameter is shown in fig .[ fgr : vdpgraph],_b_. if the amplitude of the external signal exceeds the critical value the exponential law is expected to be observed . to make this law evident the abscissa in fig .[ fgr : vdpgraph],_b _ has been selected in the -scale and the ordinate axis is shown in the logarithmic scale .one can see again the excellent agreement between the numerically calculated data and theoretical prediction ( [ eq : typeilawmeanlengthapprox ] ) .the distribution of the lengths of the laminar phases obtained for also confirms the theoretical curve ( [ eq : lampahselengthdistribution ] ) , see fig . 7 in .on the other hand , trying to choose the corresponding values of for the driven van der pol oscillator ( [ eq : drivenvdposcillatorandnoise ] ) one can find out that the intermittent behavior of this system also may be identified as eyelet intermittency . 
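a rough sketch of the corresponding van der pol run is given below . it follows the recipe stated in the text ( one - step euler integration with additive gaussian white noise ) , but the written form of the equation , the parameter values and the slip criterion are assumptions made here for illustration ; they have to be tuned close to the synchronization boundary for the intermittent behavior to show up .

```python
# Sketch: harmonically driven van der Pol oscillator with additive white noise,
# one-step Euler integration, on-the-fly phase unwrapping and slip counting.
# Equation form, parameters and slip criterion are illustrative assumptions.
import numpy as np

lam, w0 = 0.1, 1.0               # assumed oscillator parameters
A, w_e = 0.025, 0.98             # assumed drive amplitude and frequency
D = 1e-4                         # assumed noise intensity
dt, n_steps = 5e-3, 2_000_000

rng = np.random.default_rng(1)
x, v = 1.0, 0.0
prev_angle = np.arctan2(-v / w0, x)
theta = prev_angle               # unwrapped oscillator phase
prev_k, slip_times = 0, []

for n in range(n_steps):
    t = n * dt
    xi = rng.standard_normal() * np.sqrt(2.0 * D / dt)   # delta-correlated noise
    acc = lam * (1.0 - x * x) * v - w0 * w0 * x + A * np.sin(w_e * t) + xi
    x, v = x + v * dt, v + acc * dt                       # one-step Euler
    angle = np.arctan2(-v / w0, x)
    theta += (angle - prev_angle + np.pi) % (2 * np.pi) - np.pi  # unwrap increment
    prev_angle = angle
    k = int(np.floor((theta - w_e * t) / (2 * np.pi)))
    if k != prev_k:              # the phase difference slipped by 2*pi
        slip_times.append(t)
        prev_k = k

gaps = np.diff(slip_times)
print("slips:", len(slip_times),
      " mean laminar length:", gaps.mean() if gaps.size else "n/a")
```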
indeed , in fig .[ fgr : vdpgraph],_a _ one can see a very good agreement between the numerically obtained mean length of the laminar phases for the different values of the coupling parameter and theoretical law ( [ eq : eyeletlawmeanlength ] ) corresponding to the eyelet intermittency .note also , that for the well chosen values of the dependence in the axes behaves in the same way as the corresponding function in the axes for two coupled rssler systems ( [ eq : roesslers ] ) .again , as well as for two coupled rssler oscillators , the coefficients , and , of the theoretical equations ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) agree very well with each other according to eq .( [ eq : coefficients ] ) .the next example is the quadratic map where the operation of `` '' is used to provide the return of the system in the vicinity of the point , and the probability density of the stochastic variable is distributed uniformly throughout the interval }$ ] . if the intensity of noise is equal to zero the saddle - node bifurcation is observed for .the intermittent behavior of type - i is observed for , whereas the stable fixed point takes place for .having added the stochastic force ( ) in ( [ eq : quadraticmap ] ) we suppose that the intermittent behavior must be also observed in the area of the positive values of the criticality parameter , with the mean length of the laminar phases obeying law ( [ eq : typeilawmeanlengthapprox ] ) . )are shown by symbols `` '' .the theoretical laws ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) are shown by the solid lines .( _ a _ ) _ eyelet intermittency : _ the dependence of on the parameter ; , , .( _ b _ ) _ type - i intermittency with noise : _ the dependence of the mean laminar phase length on the parameter ; , , , although in this case we deal with type - i intermittency with noise , the numerically obtained points corresponding to the mean length of laminar phases are approximated successfully both by eq .( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) ( see fig .[ fgr : qmapgraph ] ) , with the coefficients , and , of the theoretical equations ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) agreeing with each other according to eq .( [ eq : coefficients ] ) .these findings confirm our statement about identity of the considered types of the intermittent behavior .so , having studied the intermittent behavior of different systems which ( based on the prior knowledge ) should be classified either eyelet intermittency in the vicinity of the phase synchronization boundary or type - i intermittency with the presence of noise , we can conclude that the obtained characteristics are exactly the same in all cases described above .two next sections are devoted to the consideration of another systems to give the additional proofs of the correctness of the introduced concept .in this section we consider van der pol oscillator driven by the chaotic signal of rssler system where , , , , are the control parameters .the auxiliary parameters and alter the characteristics ( the amplitude and main frequency ) of the chaotic signal influencing on van der pol oscillator . 
from the formal point of view the behavior of system ( [ eq : vdpunderrsslr ] ) can be classified neither eyelet intermittency nor type - i intermittency with noise .indeed , since the response oscillator is periodic there are no unstable periodic orbits embedded into its attractor to be synchronized , therefore , the system dynamics can not be considered as eyelet intermittency . alternatively , due to the presence of chaotic perturbations there is no pure saddle - node bifurcation in this system to say about type - i intermittency .nevertheless , it is intuitively clear that this example is nearly related to all cases considered above and one can expect to observe here the same type of intermittency as before . )are shown by symbols `` '' .the theoretical laws ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) are shown by the solid lines .( _ a _ ) _ eyelet intermittency : _ the dependence of on the parameter ; , , .( _ b _ ) _ type - i intermittency with noise : _ the dependence of the mean laminar phase length on the parameter ; , , fig .[ fgr : vdpunderrsslrgraph ] makes this statement evident .indeed , the numerically obtained data obey both laws ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) , with the coefficients , and , of the theoretical equations ( [ eq : eyeletlawmeanlength ] ) and ( [ eq : typeilawmeanlengthapprox ] ) agreeing with each other in accordance with eq .( [ eq : coefficients ] ) . additionally , the distribution of the lengths of the laminar phases follows the exponential law , that allows us to say that we deal here with the same type of the dynamics as in the cases of quadratic map ( [ eq : quadraticmap ] ) , driven van der pol oscillator ( [ eq : drivenvdposcillatorandnoise ] ) and two coupled rssler systems ( [ eq : roesslers ] ) considered above .all arguments given above may be considered as the evidence of the proposed statement on the equivalence of both types of the intermittent behavior . at the same time , one great difference between type - i intermittency with noise and eyelet intermittency taking place near the onset of the phase synchronization seems to exist .this difference is connected with the upper boundary of the intermittent behavior and this point could refute the main statement of this manuscript . indeed , for type - i intermittency with noise in the supercritical region there is no an upper threshold ( see eq .( [ eq : type - iintermittencypowerlaw ] ) ) and the intermittent behavior may be ( theoretically ) observed for arbitrary values of the criticality parameter , although the length of the laminar phases may be extremely long in this case , depending on the ratio between the criticality parameter value and the noise intensity .alternatively , the existence of the boundary of the phase synchronization regime being the upper border of the eyelet intermittency is believed to be undeniable , since there is a great amount of works where the boundary of the phase synchronization had been observed and determined .so , this circumstance along with the arguments given above in sections [ sct : relation][sct : numericalverification ] involve a seeming contradiction .to resolve this disagreement we consider the probability to observe the turbulent phase in the time realization of the system demonstrating type - i intermittency with noise in the supercritical region during the observation interval with the length . 
the probability to detect the turbulent phase depends on the length of the observation interval and the length of the laminar phase being realized at the beginning of the system behavior examination .obviously , if the turbulent phase is detected with the probability and , in turn , the probability for the laminar phase with length to be realized is where is given by eq .( [ eq : lampahselengthdistribution ] ) . otherwise , when the probability to detect the turbulent phase is , whereas the laminar phase with the length takes place with the probability .correspondingly , the probability to observe the turbulent phase is where is the incomplete gamma function , is the mean length of laminar phases depending on the criticality parameter and given by eq .( [ eq : typeilawmeanlength ] ) . and corresponding to it the level curves with the step .the level curves demarcating the regions with the high and low probabilities to detect the turbulent phase are shown by solid lines .( _ a _ ) the theoretical expression ( [ eq : probabilitysurface ] ) , for the simplicity the values of the control parameters in eq .( [ eq : typeilawmeanlength ] ) are taken , , .( _ b _ ) the probability surface for the circle map with noise ( [ eq : circlemapseries ] ) .( _ c _ ) the probability surface for two coupled rssler systems ( [ eq : roesslers ] ) ] the surface determined by the analytical expression ( [ eq : probabilitysurface ] ) and level curves corresponding to it are shown in fig .[ fgr : probabilitysurfaces],_a_. it is clear that the probability to detect the turbulent phase for type - i intermittency with noise during one observation grows with the increase of the examination length but decreases when the criticality parameter is enlarged . obviously , if one examines ( experimentally or numerically ) the system behavior in the time interval with the length varying the control parameter , one observes the alternation of the laminar and turbulent phases for the relatively small values of the -parameter , where is close to one , and only the laminar behavior for the relatively large ones , where is close to zero .having no information about the kind of intermittency ( e.g. , when the experimental study of some system is carried out ) one can suppose the presence of the boundary separating two different types ( intermittent and steady ) of dynamics and , moreover , find a value corresponding to the `` onset '' of the laminar behavior . evidently , this `` boundary point '' would be correspond to the low probability , say , e.g. .in addition , one can perform `` more careful '' measurements with the increased length of the observation to determine the value of the boundary point more precisely . in this case a new value would be obtained ( ) , with it being slightly larger than the previous one .the schematic location of the `` boundary '' curve on the plane is shown in fig .[ fgr : probabilitysurfaces],_a _ by the solid line .it is clearly seen , that for the -level the length grows extremely rapidly with the increase of the -value . 
in other words , the major extensions of the observation interval result in the minor corrections of the `` boundary '' point .since the resources of the both experimental and numerical studies are always limited , some final value with the maximal possible accuracy will be eventually found .so , despite the fact , that for the type - i intermittency with noise in the supercritical region the turbulent phases can always be observed theoretically , from the practical point of view ( in the experimental studies or numerical calculations ) the boundary point exists , above which only the laminar behavior is observed .moreover , with the further development of the experimental and computational resources the additional studies would result only in the insufficient increase of the boundary value . to illustrate the drawn conclusion we consider the circle map in the interval , where is the control parameter , , is supposed to be a delta - correlated gaussian white noise [ , .if the intensity of noise is equal to zero , the saddle - node bifurcation is observed in ( [ eq : circlemap ] ) for , when the stable and unstable fixed points annihilate at .obviously , for the selected value of the control parameter the evolution of system ( [ eq : circlemap ] ) in the vicinity of the bifurcation point may be reduced to the quadratic map allowing an easy comparison with the results given in the previous sections .the intermittent behavior of type - i is observed for , whereas the stable fixed point takes place for . for the added stochastic force ( ) in circle map ( [ eq : circlemap ] ) the intermittent behavioris also observed in the supercritical region of the criticality parameter , with the mean length of the laminar phases and the distribution of the laminar phase lengths obeying laws ( [ eq : typeilawmeanlengthapprox ] ) and ( [ eq : lampahselengthdistribution ] ) , respectively . the surface of the probability to observe the turbulent phase for the circle map ( [ eq : circlemap ] ) as well as the corresponding level curvesare shown in fig .[ fgr : probabilitysurfaces],_b_. to obtain this surface we have made observations for every point taken with the steps , on the parameter plane . the probability was calculated as , where is the number of observations for which the turbulent phase has been detected .one can see the excellent agreement between the results of numerical calculations and theoretical predictions ( compare fig .[ fgr : probabilitysurfaces],_a _ and _ b _ ) . similarly , the analogous probability surface and the level curves shown in fig .[ fgr : probabilitysurfaces],_c _ have been calculated for two coupled rssler systems ( [ eq : roesslers ] ) in the vicinity of the phase synchronization boundary , where eyelet intermittency is observed . in this case observations have been made for every point to be examined , with these points being taken with the steps and on the plane .it is easy to see that for the eyelet intermittency the probability surface as well as the level curves are exactly the same as for type - i intermittency with noise in the supercritical region . 
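the probability surface described in this section can be estimated exactly as in the methods part : for every point of the ( criticality , observation length ) plane one makes a number of observations and counts the fraction in which a turbulent phase is detected . the sketch below does this for the noisy circle map ; the map parameter , the noise intensity , the grids and the escape criterion ( each observation is started at the stable fixed point , and a turbulent phase is registered once the lifted variable has moved well past the unstable fixed point ) are illustrative assumptions , since the corresponding numbers were lost in extraction .

```python
# Sketch of the probability surface P(eps, T): the fraction of observation
# windows of length T in which at least one turbulent phase (escape from the
# locked state of the noisy circle map) is detected.  All numbers are
# illustrative and must be tuned so that escapes are actually observable.
import numpy as np

K, D, M = 1.0, 1e-3, 500                        # assumed map parameter, noise, trials
omega_c = K / (2.0 * np.pi)                     # saddle-node point of the noise-free map
eps_grid = np.array([0.0005, 0.001, 0.002, 0.003, 0.005])
T_grid = np.array([100, 300, 1000, 3000, 10000, 30000])
rng = np.random.default_rng(3)

P = np.zeros((len(eps_grid), len(T_grid)))
for i, eps in enumerate(eps_grid):
    omega = omega_c - eps                        # supercritical: stable fixed point exists
    x_star = np.arcsin(2 * np.pi * omega / K) / (2 * np.pi)   # stable fixed point
    y = np.full(M, x_star)                       # M independent observations (lift, no mod)
    escape = np.full(M, T_grid[-1] + 1)          # first time a turbulent phase is seen
    for n in range(1, T_grid[-1] + 1):
        y = y + omega - K / (2 * np.pi) * np.sin(2 * np.pi * y) \
              + np.sqrt(D) * rng.standard_normal(M)
        new = (escape > T_grid[-1]) & (y - x_star > 0.6)   # passed the unstable point
        escape[new] = n
    P[i] = [(escape <= T).mean() for T in T_grid]   # P = m / M, as in the methods
print(np.round(P, 2))
```

as in the text , the estimated probability grows with the observation length and decays as the criticality parameter is increased , and a fixed level curve of the surface traces the practical `` boundary '' discussed above .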
as a consequence , we can draw the conclusion that the eyelet intermittency taking place in the vicinity of the phase synchronization boundary and type - i intermittency with noise in the supercritical region are the same type of dynamics observed under different conditions . another consequence of this consideration is the fact that the phase synchronization boundary point cannot be found absolutely exactly , since it separates the regions with high and low probabilities to observe the phase slips in the coupled chaotic systems with the help of the experimental and computational resources existing at the moment of study . if someone , using more powerful tools , tried to refine , say , the value of the coupling strength corresponding to the phase synchronization boundary reported in an earlier paper , one would obtain a new value close to the previous one , but larger . exactly the same situation may be found , e.g. , in the work , where two mutually coupled rössler systems have been considered . in this work the refined boundary value is reported with reference to the earlier work , where the value was given . having considered two types of the intermittent behavior , namely eyelet intermittency taking place in the vicinity of the phase synchronization boundary and type - i intermittency with noise , supposed hitherto to be different , we have shown that these effects are the same type of dynamics observed under different conditions . the analytical relation between the coefficients of the theoretical equations corresponding to both types of the intermittent behavior has been obtained . the difference between these types of the intermittent behavior is only in the character of the external signal . in the case of type - i intermittency the stochastic signal influences the system , while in the case of eyelet intermittency the signal of a chaotic dynamical system is used to drive the response chaotic oscillator . at the same time , the core mechanism governing the system behavior as well as the characteristics of the system dynamics are the same in both cases . we thank the referees for useful comments and remarks . this work has been supported by the federal special - purpose programme `` scientific and educational personnel of innovation russia ( 2009 - 2013 ) '' and the president program ( nsh-3407.2010.2 ) . a. s. pikovsky , g. v. osipov , m. g. rosenblum , m. zaks , j. kurths , attractor - repeller collision and eyelet intermittency at the transition to phase synchronization , phys . rev . lett . 79 ( 1 ) ( 1997 ) 47 - 50 . s. boccaletti , e. allaria , r. meucci , f. t. arecchi , experimental characterization of the transition to phase synchronization of chaotic laser systems , phys . rev . lett . 89 ( 19 ) ( 2002 ) 194101 . a. e. hramov , a. a. koronovskii , m. k. kurovskaya , a. ovchinnikov , s. boccaletti , length distribution of laminar phases for type - i intermittency in the presence of noise , phys . rev . e 76 ( 2 ) ( 2007 ) 026206 . | in this article we compare the characteristics of two types of the intermittent behavior ( type - i intermittency in the presence of noise and eyelet intermittency taking place in the vicinity of the chaotic phase synchronization boundary ) supposed hitherto to be different phenomena . we show that these effects are the same type of dynamics observed under different conditions . the correctness of our conclusion is confirmed by the consideration of different sample systems , such as the quadratic map , the van der pol oscillator and the rössler system .
consideration of the problem concerning the upper boundary of the intermittent behavior also confirms the validity of the statement on the equivalence of type - i intermittency in the presence of noise and eyelet intermittency observed at the onset of phase synchronization . keywords : fluctuation phenomena , random processes , noise , synchronization , chaotic oscillators , dynamical systems , intermittency . pacs : 05.45.xt , 05.45.tp , 05.40.-a
more often than not the agents of a social system prefer to combine their efforts in order to achieve results that would be otherwise unattainable by a single agent alone .a relevant role in the organisation of such systems is therefore played by the emerging patterns of collaboration within a group of individuals , which have been widely and thoroughly investigated in the last few decades . in a collaboration network ,two individuals are considered to be linked if they are bound by some form of partnership .for instance , in the case of scientific collaborations , the nodes of the networks correspond to scientists and the relationship between two authors is testified by the fact that they have co - authored one or more papers .another well - known example of collaboration network is that of co - starring graphs , where the nodes represent actors and there is a link between two actors if they have appeared in the same movie .the study of large collaboration systems has revealed the presence of a surprisingly high number of triangles in the corresponding networks .this indicates that two nodes with a common neighbour have a higher probability to be linked than two randomly chosen nodes .this effect , known as _ transitivity _ , can be easily explained in terms of a basic mechanism commonly referred to as _triadic closure _ , according to which two individuals of a collaboration network have a high probability to connect after having been introduced to each other by a mutual acquaintance .some other works have pointed out that triadic closure can also explain other empirical features of real - world collaboration networks , including fat - tailed degree distributions and correlations between the degrees of neighbouring nodes .another remarkable feature often observed in social and collaboration networks is the presence of meso - scale structures in the form of _ communities _ , i.e. groups of tightly connected nodes which are loosely linked to each other .interestingly , structural communities quite often correspond to functional groups .an important observation is that not all the links of a collaboration network are equal , since collaborations can often be classified into a number of different categories .for instance , scientific co - authorship can be classified according to the research field , while actors often appear in movies of different genres . in these cases , a collaboration network is better described in terms of a _ multi - layer _ or _multiplex network _ where links representing collaborations of a specific kind are embedded on a separate layer of the network , and each layer can have in general a different topology .great attention has been recently devoted to the characterisation of the structure and dynamics of multi - layer networks .in particular , various models to grow multiplex networks have appeared in the literature , focusing on linear or non - linear preferential attachment , or on weighted networks .less attention has been devoted to define and extract communities in multiplex networks , for instance by mean of stochastic block models . 
in this workwe investigate the multiplex nature of communities in collaboration networks and we propose a simple model to explain the appearance , coexistence and co - evolution of communities at the different layers of a multiplex .our hypothesis is that the formation of communities in collaboration networks is an intrinsically multiplex process , which is the result of the interplay between intra - layer and inter - layer triadic closure .for instance , in the case of scientific collaborations , multiplex communities naturally arise from the fact that scientists may collaborate with other researchers in their principal field of investigation and with colleagues coming from other scientific disciplines .analogously , actors can prefer either to specialise in a specific genre or instead to explore different ( sometimes dissonant ) genres , and these two opposite behaviours undoubtedly have an impact on the kind of meso - scale structures observed on each of the layers of of the system .the generative model we propose here mimics two of the most basic processes that drive the evolution of collaborations in the real world , namely intra- and inter - layer triadic closure , and is able to explain the appearance of overlapping modular organisations in multi - layer systems .we will show that the model is able to reproduce the salient micro- , meso- and macro - scale structure of different real - world collaboration networks , including the multi - layer network of co - authorship in journals of the american physical society ( aps ) and the multiplex co - starring graph obtained from the internet movie database ( imdb ) .we start by analysing the structure of two multiplex collaboration networks from the real world .the first multiplex is constructed from the aps co - authorship data set , and consists of four layers representing four sub - fields of physics ( respectively , nuclear physics , particle physics , condensed matter i , and interdisciplinary physics ) . in particular , we considered only scientists with at least one publication in each of the four sub - fields , and we connected two scientists at a certain layer if they had co - authored at least a paper in the corresponding sub - field .the second multiplex is constructed from the internet movie database ( imdb ) and consist of four layers respectively representing the co - starring networks of actors with at least one participation in four different genres , namely action , crime , romance , and thriller movies .the basic structural properties of each layer of the two multiplexes are summarised in table [ tab : exapsimdbsingle ] ( see methods for more information about the data sets ) ..[tab : exapsimdbsingle]*basic properties of real - world multiplex collaboration networks . *we report the number of nodes , the average degree and the clustering coefficient for each layer of a subset of the aps and imdb data sets . in particular , we focus on the multiplex collaboration network of all scientists active in nuclear , particle , condensed matter i and interdisciplinary physics , and the multiplex collaboration network of all actors starring in action , crime , romance and thriller movies. all the layers of aps have a clustering coefficient in the range ] . [ cols="<,<,<,<",options="header " , ]human collaboration patterns are inherently multifaceted and often consist of different interaction layers .scientific collaboration is probably the most emblematic example . 
as a ph.d .student you usually join the scientific collaboration network by publishing the first paper with your supervisor in a specific field .afterwards , you start being introduced by your supervisor to other researchers in the same field , e.g. to some of his / her past collaborators , and you might end up working with them , creating new triangles in the collaboration network of your field ( what we called intra - layer triadic closure ) .but it is also quite probable that some of your past collaborators will in turn introduce you to researchers working in another -possibly related- area ( what we called an inter - layer triadic closure ) , so that you will easily find yourself participating in more than just one field , and the collaboration network around you will become multi - dimensional .such multi - level collaboration patterns appear not to be specific of scientific production only , but are instead found in many aspects of human activity .the multi - layer network framework provides a natural way of modelling and characterising multidimensional collaboration patterns in a comprehensive manner .in particular , we have argued that one of the classical mechanisms responsible for the creation of triangles of acquaintances , i.e. triadic closure , is indeed general enough to give also account for another interesting aspect of multi - level collaboration networks , namely the formation of cohesive communities spanning more than a single layer of interaction .it is quite intriguing that the simple model we proposed in this work , based just on the interplay between intra- and inter - layer triadic closure , is actually able to explain much of the complexity observed in the micro- meso- and macroscopic structure of multidimensional collaboration networks of different fields ( science and movies ) , including not just transitivity but also intra- and inter - layer degree correlation patterns and the correspondence between the community partitions at difference layers .we also remark that such levels of accuracy in reproducing the features of real - world systems have been obtained without the introduction of ad - hoc ingredients .the results reported in this paper suggest that , despite the apparent differences in the overall dynamics driving scientific cooperation and movie co - starring , triadic closure is a quite generic mechanism and might indeed be one of the fundamental processes shaping the structure of multi - layer collaboration systems .these findings fill a gap in the literature about modelling growing multidimensional networks , and pave the way to the exploration of other simple models which can help underpinning the driving mechanisms responsible for the emergence of complex multi - dimensional structures ._ data sets . 
_ we considered data from the aps and the imdb collaboration networks .the aps collaboration data set is available from the aps website ` http://journals.aps.org/datasets ` in the form of xml files containing detailed information about all the papers published by all aps journals .the download is free of charge and restricted to research purposes , and aps does not grant to the recipients the permission to redistribute the data to third parties .we parsed the original xml files to retrieve , for each paper , the list of authors and the list of pacs codes .the pacs scheme provides a taxonomy of subjects in physics , and is widely used by several journals to identify the sub - field , category and subject of papers .we used the highest level of pacs codes to identify the ten main sub - fields of physics , and we considered only the papers published in nuclear physics , particle physics , condensed matter i and interdisciplinary physics , respectively associated to high - level pacs codes starting with 1 ( particle physics ) , 2 ( nuclear physics ) , 6 ( condensed matter i ) and 8 ( interdisciplinary physics ) .we focused only on the authors who had at least one publication in each of the four sub - fields .the co - authorship network of each of those four sub - fields constitutes one of the four layers of the aps multiplex .in particular , two authors are connected on a certain layer only if they have co - authored at least one paper in the corresponding sub - field . in the construction of the collaboration network of each sub - fieldwe purposely left out papers with more than ten authors , which represent big collaborations whose driving dynamics might be more complex than just triadic closure .the imdb data set is made available at the website ` ftp://ftp.fu-berlin.de/pub/misc/movies/database/ ` for personal use and research purposes .the data set comes in the form of several compressed text files , and we used those containing information about actors , actresses , movies and genres .we focused only on the co - starring networks of four movie genres , namely action , crime , romance , and thriller , obtained by merging information about participation of actors and actresses to each movie .in particular , two actors are connected by a link on a given layer ( genre ) only if they have co - starred in at least one movie of that genre .we considered only the actors who had acted in at least one movie of each of the four genres .we chose to restrict our analysis to just four layers for both the aps and the imdb data set , which allowed us to consider the simplest formulation of our model , in which all the layers have the same clustering coefficient .the use of the aps and the imdb data sets does not require any ethical approval ._ transitivity and community structure . _ we measured the transitivity of each level by mean of the clustering coefficient , where : the similarity of two community partitions can be measured through the normalised mutual information ( nmi ) . in particular , given the two partitions and respectively associated to layer and layer , we denote the normalised mutual information ( nmi ) between them as where is the number of nodes in common between module of partition and module of partition , while and are respectively the number nodes in module and in module .the partition in communities on each layer has been obtained through the algorithm infomap ._ synthetic multiplex networks . 
_ we created synthetic networks according to our multi - layer network model by starting , on each layer , from a seed graph consisting of a triangle of nodes and simulating the intra- and inter - layer triadic closure mechanism for nodes , for different values of the parameters and . for each pair of values we computed the mean clustering coefficient on each single layer and the normalised mutual information nmi of the community partitions of the two layers over 30 different realisations .as observed from simulations , once the parameters are fixed , the values of nmi and do not vary substantially as the order of the network increases .notice that since the most simple formulation of the model we have set an identical value of on both layers , the two layers will end up having the same clustering coefficient ( up to small finite - size fluctuation ) ._ degree correlations . _ we study the assortativity of real multiplex collaboration networks in terms of intra - layer , inter - layer and mixed degree correlations .the trend for intra - layer correlations is analysed by mean of the function }_{nn}(k^{[\alpha ] } ) \rangle ] on that layer . in particular , }_{nn } \rangle ] over all nodes with the same degree } ] , where }_{ij} ] is a constant , while }_{nn}(k^{[\alpha ] } ) \rangle ] if assortative ( resp ., disassortative ) degree correlations are present . to quantify inter - layer degree correlations we considered the quantity }(k^{[\alpha ] } ) \rangle ] on layer .again , }(k^{[\alpha ] } ) \rangle ] if nodes tend to have similar degrees on both layers ( assortative inter - layer correlations ) , while }(k^{[\alpha ] } ) \rangle ] if a hub on one layer will preferentially have small degree on the other layer , and vice - versa . finally , we measured the presence of mixed correlations through the function }_{nn}(k^{[\alpha ] } ) \rangle ] on layer . in analogy with the case of intra - layer correlations , the node term is }_{nn , i}=\frac{\sum_{j \neq i } a^{[\beta]}_{ij}k_j^{[\beta]}}{k_i^{[\alpha]}} ] and are not shown in the text .in general , correlation functions might be affected by the degree sequence at each layer of the multiplex . in the simple scenario considered at first , however, we do not fit the parameter from the data , to reduce as much as possible the complexity of the model . instead , in order to still perform an accurate comparison between the synthetic multiplex networks constructed by our model and the real ones , in a second step we divided all the correlation functions by their ( constant ) value expected in the corresponding configuration model network .the correct normalisation for the intra - layer correlation function is }})^2 \rangle}}{{\langle k{^{[\alpha ] } } \rangle}} ] by } } \rangle} ]f.b . , v.n . and v.l .acknowledge support by the project lasagne , contract no.318132 ( strep ) , funded by the european commission .de domenico m , lancichinetti a , arenas a , rosvall m. identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems .physical review x. 2015;5(1):011027 .iacovacci j , wu z , bianconi g. mesoscopic structures reveal the network between the layers of multiplex data sets .phys rev e. 2015 oct;92:042806 .available from : http://link.aps.org/doi/10.1103/physreve.92.042806 . | community structures in collaboration networks reflect the natural tendency of individuals to organize their work in groups in order to better achieve common goals . 
in most cases , individuals exploit their connections to introduce themselves to new areas of interest , giving rise to multifaceted collaborations which span different fields . in this paper , we analyse collaborations in science and among movie actors as multiplex networks , where the layers represent respectively research topics and movie genres , and we show that communities indeed coexist and overlap at the different layers of such systems . we then propose a model to grow multiplex networks based on two mechanisms of intra- and inter - layer triadic closure which mimic the real processes by which collaborations evolve . we show that our model is able to explain the multiplex community structure observed empirically , and we infer the strength of the two underlying social mechanisms from real - world systems . since it is also able to correctly reproduce the values of intra - layer and inter - layer assortativity correlations , the model contributes to a better understanding of the principles driving the evolution of social networks .
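the normalised mutual information used above to compare the community partitions of two layers can be computed directly from the two label assignments . the sketch below assumes the standard ( danon - type ) form of the nmi , which is what the stripped formula in the methods section appears to be ; partitions are passed as dictionaries mapping nodes to community labels .

```python
# Sketch: normalised mutual information between two community partitions,
# in the standard form based on the confusion matrix of community overlaps.
import math
from collections import Counter

def nmi(part_a, part_b):
    nodes = sorted(set(part_a) & set(part_b))
    n = len(nodes)
    na = Counter(part_a[v] for v in nodes)                  # community sizes in A
    nb = Counter(part_b[v] for v in nodes)                  # community sizes in B
    nab = Counter((part_a[v], part_b[v]) for v in nodes)    # confusion matrix
    num = -2.0 * sum(c * math.log(c * n / (na[i] * nb[j]))
                     for (i, j), c in nab.items())
    den = (sum(c * math.log(c / n) for c in na.values())
           + sum(c * math.log(c / n) for c in nb.values()))
    return num / den if den != 0 else 1.0   # both trivial partitions -> identical

# toy usage: two partitions of six nodes
pa = {1: "x", 2: "x", 3: "x", 4: "y", 5: "y", 6: "y"}
pb = {1: "u", 2: "u", 3: "v", 4: "v", 5: "v", 6: "v"}
print(round(nmi(pa, pa), 3))   # identical partitions -> 1.0
print(round(nmi(pa, pb), 3))   # partially matching -> between 0 and 1
```

the same routine can be applied to the partitions returned by infomap on each layer of the real or synthetic multiplexes .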
the classical secretary problem is a well known optimal stopping problem from probability theory .it is usually described by different real life examples , notably the process of hiring a secretary .imagine a company manager in need of a secretary .our manager wants to hire only the best secretary from a given set of candidates , where is known .no candidate is equally as qualified as another .the manager decides to interview the candidates one by one in a random fashion .every time he has interviewed a candidate he has to decide immediately whether to hire her or to reject her and interview the next one . during the interview process he can only judge the qualities of those candidates he has already interviewed .this means that for every candidate he has observed , there might be an even better qualified one within the set of candidates yet to be observed .of course the idea is that by the time only a small number of candidates remain unobserved , a recently interviewed candidate that is relatively best will probably also be the overall best candidate .there is abundant research literature on this classical secretary problem , for which we refer to ferguson for an historical note and an extensive bibliography .the exact optimal policy is known , and may be derived by various methods , see for instance dynkin and yushkevich , and gilbert and mosteller .also , many variations and generalizations of the original problem have been introduced and analysed .one of these generalizations is the focus of our paper , namely the problem to select one of the best , where is some preassigned number ( notice that is the classical secretary problem ) .originally , this problem was introduced by gusein - zade , who derived the structure of the optimal policy : there is a sequence of position thresholds such that when candidate is presented , and judged to have relative rank among the first candidates , with rank 1 being the best , rank 2 being second best , etc . 
], then the optimal decision says furthermore , gave an algorithm to compute these thresholds , and derived asymptotic expressions ( as ) for the case .also frank and samuels proposed an algorithm , and gave the limiting ( as ) probabilities and limiting proportional thresholds .the algorithms of are based on dynamic programming , which means that the optimal thresholds , and the optimal winning probability are determined numerically .the next interest was to find analytic expressions .to our best knowledge , this has been resolved only for by gilbert and mosteller , and for by quine and law .although the latter claim that their approach is applicable to produce exact results for any , it is clear that the expressions become rather untractable for larger .this has inspired us to develop approximate results for larger .we consider two approximate policies for the general case : single - level policies , and double - level policies .a single - level policy is given by a single position threshold in conjuction with a rank level , such that when candidate is presented , and judged to have relative rank among the first candidates , then the policy says a double - level policy is given by two position thresholds in conjuction with two rank levels , such that when candidate is presented , and judged to have relative rank among the first candidates , then the policy says we shall derive the exact winning probability for these two approximate policies , when the threshold and level parameters are given .these expressions can then used easily to compute the optimal single - level and the optimal double - level policies , i.e. , we optimize the winning probabilities ( under these level policies ) with respect to their threshold and level parameters .the most important result is that the winning probabilities of the optimal double - level policies are extremely close to the winning probabilities of the optimal policies ( with the thresholds ) , specifically for larger . in other words, we have found explicit formulas that approximate closely the winning probabilities for this generalized secretary problem . as an example , we present in table [ t : resdlp ] the relative errors in percentages for a few combinations ( a more extended table can be found in section [ s : num ] ) ..__relative errors ( % ) of the optimal double - level policies ._ _ [ cols="<,>,>,>",options="header " , ]for the considered generalized secretary problem of selecting one of the best out of a group of we have obtained closed expressions for the probability of success for all possible single- and double - level policies . forany given finite values of and these expressions can be used to obtain the optimal single - level policy respectively optimal double - level policy in a straightforward manner .moreover , asymptotically for we have also obtained closed expressions for the winning probability for relevant families of single - level and double - level policies .optimizing this expression for the family of single - level policies an asymptotic optimal rank level and corresponding optimal position threshold fraction and asymptotic winning probability are easily obtained .similarly we have done such asymptotic analysis and optimization for the relevant family of double - level policies . 
both for the single - level and the double - level policies we confirmed numerically that the winning probabilities of the optimal single - level and double - level policies for finite values of converge , as increases , to the ( respectively single - level and double - level ) optimal asymptotic winning probabilities . finally , we computed , for varying and , the optimal single - level and double - level policies and the corresponding winning probabilities , and compared the results to the overall optimal policy , which is determined by position thresholds . we found that the single - level policies and especially the double - level policies perform nearly as well as the overall optimal policy . in particular , for a generalized secretary problem with a larger value of , applying the optimal single - level or double - level policy could be considered , because implementation of the overall optimal policy using different thresholds is unattractive compared to using only one or two thresholds . besides , for large the gain in performance of the overall optimal policy over the optimal double - level policy is very small . dynkin , e.b . and yushkevich , a. 1969 . _ markov processes : theorems and problems _ , plenum press , new york . ferguson , t.s . 1989 . who solved the secretary problem ? _ statistical science _ 4 , 282 - 296 . frank , a.q . and samuels , s.m . 1980 . on an optimal stopping problem of gusein - zade . _ stochastic processes and their applications _ 10 , 299 - 311 . gilbert , j.p . and mosteller , f. 1966 . recognizing the maximum of a sequence . _ journal of the american statistical association _ 61 , 35 - 73 . gusein - zade , s.m . 1966 . the problem of choice and the optimal stopping rule for a sequence of independent trials . _ theory of probability and its applications _ 11 , 472 - 476 . quine , m.p . and law , j.s . 1996 . exact results for a secretary problem . _ journal of applied probability _ 33 , 630 - 639 . the dynamic programming method might be applied to find numerically the optimal single - , double - and multiple - level ( ` full ' ) policies . here , we summarize the algorithm for the single - level policy ; it is straightforward to generalize the algorithm to the double - level and the multiple - level cases . define the single - level policy with threshold and level , denoted , by its actions where 1 means to continue , and 0 means to stop and select this candidate . we restrict to . denote by the probability of winning when is applied . given level we determine the optimal threshold , defined by we use dynamic programming to find it . define for the value to be the maximal probability of winning when is observed , and to be the maximal probability of winning when . the optimality equations are : for and for ( since then surely and ) : one can show that the result of this dp recursion is indeed a slp by setting . moreover , note that the probabilities occurring in the optimality equations can easily be obtained , for example by applying lemma [ l : hypprob ] and bayes rule . | a version of the classical secretary problem is studied , in which one is interested in selecting one of the best out of a group of differently ranked persons who are presented one by one in a random order . it is assumed that is a preassigned number . it has been known for a long time that for the optimal policy one needs to compute position thresholds , for instance via backwards induction .
in this paper we study approximate policies that use just a single or a double position threshold , albeit in conjunction with a level rank . we give exact and asymptotic ( as the number of candidates tends to infinity ) results , which show that the double - level policy is an extremely accurate approximation . * keywords : * secretary problem ; dynamic programming ; approximate policies
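the winning probabilities of the level policies defined in this paper are also easy to check by plain monte carlo , which is a useful sanity test for the closed expressions and for the dynamic programming recursion of the appendix . the sketch below encodes the natural reading of the single - level policy ( reject the candidates before the position threshold , then accept the first candidate whose relative rank does not exceed the level ) ; the exact decision rules and symbols were lost in extraction , so the variable names n , k , r and l are placeholders introduced here . a run in which no candidate is accepted is counted as a loss , which is a convention choice .

```python
# Monte Carlo estimate of the winning probability of a single-level policy
# (r, l) for the problem of selecting one of the k best out of n candidates.
# Names n, k, r, l are placeholders; absolute rank 1 is the best candidate.
import bisect
import random

def run_single(n, k, r, l, trials=50_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        order = list(range(1, n + 1))        # absolute ranks in arrival order
        rng.shuffle(order)
        seen = []                            # already observed ranks, kept sorted
        chosen = None
        for i, x in enumerate(order, start=1):
            pos = bisect.bisect_left(seen, x)
            seen.insert(pos, x)
            rel = pos + 1                    # relative rank among the first i
            if i >= r and rel <= l:          # single-level acceptance rule
                chosen = x
                break
        if chosen is not None and chosen <= k:   # among the k best overall
            wins += 1
    return wins / trials

# for a double-level policy (r1, l1, r2, l2), replace the acceptance test by:
#   (i >= r1 and rel <= l1) or (i >= r2 and rel <= l2)

# e.g. the classical problem (k = 1): level l = 1 with a threshold near n/e
# should give a winning probability close to 1/e ~ 0.368
print(run_single(n=100, k=1, r=38, l=1))
```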
this article is devoted to the study of the nonlinear integro - differential problem for .here , denotes an open and bounded set , , and .the _ range _kernel is given as a rescaling of a kernel satisfying the usual properties of nonnegativity and smoothness .we shall give the precise assumptions in section [ sec.results ] .we shall refer to problem ( [ eq.orig])-([eq.orig2 ] ) as to problem p .the main results contained in this article are : * theorem [ th.existenceu ] .the well - posedness of problem p , the stability property of its solutions with respect to the initial datum , and the time invariance of the level set structure of its solutions . * theorem [ th.equiv ] .the equivalence between solutions of problem p and the one - dimensional problem p , where , and is the decreasing rearrangement of , see section [ sec.dec_re ] for definitions .* theorem [ th.pde ] .the asymptotic behavior of the solution of problem p with respect to the window size parameter , , as a shock filter . problem is related to some problems arising in image analysis , population dynamics and other disciplines .the general formulation in ( [ eq.orig ] ) includes , for example , a time - continuous version of the neighborhood filter ( nf ) operator : where is a positive constant , and is a normalization factor . in terms of the notation introduced for problem the nf is recovered setting and .this well known denoising filter is usually employed in the image community through an iterative scheme , with .it is the simplest particular case of other related filters involving nonlocal terms , notably the yaroslavsky filter , the bilateral filter , and the nonlocal means filter .these methods have been introduced in the last decades as efficient alternatives to local methods such as those expressed in terms of nonlinear diffusion partial differential equations ( pde s ) , among which the pioneering nonlinear anti - diffusive model of perona and malik , the theoretical approach of lvarez et al . and the celebrated rof model of rudin et al .we refer the reader to for a review comparing these local and non - local methods .another image processing task encapsulated by problem is the _ histogram prescription _ , used for image contrast enhancement : given an initial image , find a companion image such that and share the same level sets structure , and the histogram distribution of is given by a prescribed function .a widely used choice is , implying that has a uniform histogram distribution . in this case , and is related to the image size and its dynamic range , see sapiro and caselles for the formulation and analysis of the problem .nonlinear integro - differential of the form and other nonlinear variations of it have also been recently used ( andreu et al . ) to model diffusion processes in population dynamics and other areas .more precisely , if is thought of as a density at the point at time and is thought of as the probability distribution of jumping from location to location , then is the rate at which individuals are arriving at position from all other places and is the rate at which they are leaving location . 
in the absence of external or internal sourcesthis consideration leads immediately to the fact that the density satisfies the equation ( [ mazon ] ) .these kind of equations are called nonlocal diffusion equations since in them the diffusion of the density at a point and time depends not only on but also on the values of in a set determined ( and weighted ) by the space kernel .a thoroughfull study of this problem may be found in the monograph by andreu et al .observe that in problem , the space kernel is taken as , meaning that the influence of nonlocal diffusion is spread to the whole domain . as noticed by sapiro and caselles for the histogram prescription problem , and later by kindermann et al . for the iterative neighborhood filter ( [ def.gfv ] ) , or by andreu et al . for continuous time problems like ( [ mazon ] ) , these formulations may be deduced from variational considerations .for instance , in , the authors consider , for , the functional with an appropriate spatial kernel , and a differentiable filter function .then , the authors formally deduce the equation for the critical points of .these critical points coincide with the fixed points of the nonlocal filters they study .for instance , if and , the critical points satisfy which can be solved through a fixed point iteration mimicking the iterative neighborhood filter scheme ( [ def.gfv ] ) .on the other hand , choosing ( or some suitable nonlinear variant ) and considering a gradient descent method to approximate the stationary solution , equation ( [ mazon ] ) is deduced .similarly , and leads to the histogram prescription problem . although the functional ( [ def.functional ] ) is not convex in general , kindermann et al .prove that when is the gaussian kernel then the addition to of a convex fidelity term , e.g. gives , for large enough , a convex functional , see ( * ? ? ?* theorem 3.1 ) .thus , the functional may be seen as the starting point for the deduction of problem , representing the continuous gradient descent formulation of the minimization problem modeling gaussian image denoising .notice that although the convexity of is only ensured for large enough , the results obtained in this article are independent of such value , and only the usual non - negativity condition on is assumed . the outline of the article is as follows . in section [ sec.dec_re ] , we introduce some basic notation and the definition of _ decreasing rearrangement _ of a function .this is later used to show the equivalence between the general problem and the reformulation p in terms of a problem with a identical structure but defined in a one - dimensional space domain .this technique was already used in for dealing with the time - discrete version of problem , in the form of the iterative scheme ( [ def.gfv ] ) .see also for the problem with non - uniform spatial kernel .in section [ sec.results ] , we state our main results .then , in section [ sec.numerics ] , we introduce a discretization scheme for the efficient approximation of solutions of problem , and demonstrate its performance with some examples . in section [ sec.proofs ] , we provide the proofs of our results , and finally , in section [ sec.conclusions ] , we give our conclusions .given an open and bounded ( measurable ) set , let us denote by its lebesgue measure and set . 
for a lebesgue measurable function , the function is called the _ distribution function _ corresponding to .function is , by definition , non - increasing and therefore admits a unique generalized inverse , called its _decreasing rearrangement_. this inverse takes the usual pointwise meaning when the function has not flat regions , i.e. when for any . in general , the decreasing rearrangement is given by : notice that since is non - increasing in , it is continuous but at most a countable subset of . in particular , it is right - continuous for all ] . for the extension of the decreasing rearrangement to families of functions depending on a parameter ,e.g. ] of problem . in addition , if and ;{{\cal x}}) ] .the existence and stability results of theorem [ th.existenceu ] may be extended to more general zero - order terms in the equation ( [ eq.orig ] ) of problem . for instance , we can consider a function \times\o\times{\mathbb{r}}\to{\mathbb{r}} ] be the solution of problem p , for some nonincreasing . then 1 . a.e . in , for all .2 . for , and .3 . if is odd then , for .if and , for and , then ;w^{m , p}(\o_*)) ] is a solution of if and only if ;{{\cal x}}_*) ] . when image processing applications are considered , by property 1 of corollary [ th.existencev ] , the solution to may be understood as a _contrast change _ of the initial image , .indeed , this property also implies that if , initially , has no flat regions , and therefore is decreasing , then the solution of p verifies this property for all .then , theorem [ th.existencev ] implies that the solution of has no flat regions for all .the last theorem is an extension of a result given in for the discrete - time formulation with . in it , we deduce the asymptotic behavior of the solution of problem p ( and thus of of problem ) in terms of the window size parameter , .although we state it for the gaussian kernel , more general choices are possible , see ( * ? ? ?* remark 2 ) .[ th.pde ] assume ( h ) with and having no flat regions .suppose , in addition , that . then , for all \times \o_* ] of p satisfies with and with , and .two interesting effects captured by ( [ app.nfstar ] ) are the following : 1 . the border effect ( range shrinking ) .function is _ active _ only when is close to the boundaries , and . for , contributes to the decrease of the largest values of while for we have , increasing the smallest values of .therefore , this term tends to flatten . in image processing terms , a loss of contrast is induced .the term is anti - diffusive , inducing large gradients on in a neighborhood of inflexion points . in this sense ,the scheme ( [ app.nfstar ] ) is related to the shock filter introduced by lvarez and mazorra where is a smoothing kernel and function satisfies for any . indeed , neglecting the fidelity , the border and the lower order terms , and defining , we render ( [ app.nfstar ] ) to the form ( [ alvarez ] ) .+ this property can be exploited to produce a partition of the image so the model can be interpreted as a tool for fast segmentation and classification .an example is proposed in the numerical experiments where a time - continuous version of the nf is implemented .for the discretization of problem , for , we take advantage of the equivalence result stated in theorem [ th.equiv ] .thus , we first calculate a numerical approximation , , to the decreasing rearrangement and consider the problem p . 
then , we discretize this one - dimensional problem and compute a numerical approximation , \times\o_*\to{\mathbb{r}} ] is a solution to problem . then , we finally recover an approximation , , to by defining inspired by the image processing application of problem , we consider a piecewise constant approximation to its solutions .let be , for simplicity , a rectangle domain and consider a uniform mesh on enclosing square elements ( pixels ) , , of unit area , with barycenters denoted by , for and .given , we consider its piecewise constant interpolator if .the interpolator has a finite number , , of quantized levels that we denote by , with .that is where are the level sets of , since is piecewise constant , the decreasing rearrangement of is piecewise constant too , and given by with for , and , , ,, .let be a candidate to solve problem p .due to the time - invariance of the level sets structure of the solution to this problem , see theorem [ th.existenceu ] , we may express as with , for ] satisfying ( [ eq.dis ] ) and follows . for the time discretization, we take a uniform mesh of the interval ] , the piecewise constant and piecewise linear interpolators using the uniform estimates of and , we deduce the corresponding uniform estimates for , and , implying the existence of and such that , at least in a subsequence ( not relabeled ) , as , in particular , by compactness \times \bar \o).\ ] ] since , for ] , we may rewrite ( [ eq.discrete ] ) as and due to the convergence properties ( [ conv.1 ] ) and ( [ conv.2 ] ) , we may pass to the limit in ( [ eq.final ] ) to deduce that is a solution of ( [ eq.aux ] ) ._ continuation of the solution to an arbitrary time ._ given the solution , , of problem ( [ eq.aux ] ) in , we may consider the same problem for the initial datum .since and the constant only depends on , and , see ( [ est.w ] ) and ( [ def.t0 ] ) , we obtain a new solution .clearly , this procedure may be extended to an arbitrarily fixed .once this is done , a boot - strap argument allows us to deduce , implying that ;w^{1,\infty}(\o)) ] such that strongly in , for all , and a.e . in .similarly to the smooth case , this convergence allows to pass to the limit in ( [ eq.orig ] ) ( with replaced by ) and identify the limit as a solution of . again , the property and a boot - strap argument leads to ._ stability and uniqueness ._ let and ;{{\cal x}}) ] be the corresponding solution to problem .assume , and set , .then , from equation ( [ eq.orig ] ) we get then , the lipschitz continuity of and gronwall s lemma allow us to deduce the result . * proof of corollary [ th.existencev ] . * to prove point 1 , notice that from ( [ dsv ] ) ( in dimension ) we deduce a property that also holds in the limit .point 2 of the theorem follows from evaluating equation ( [ eq.orig ] ) in and , using that is decreasing for all , and gronwall s inequality .point 3 is a consequence of the assumption on the symmetry of , under which the integral term in ( [ eq.orig ] ) vanishes when it is integrated in .point 4 is easily deduced by successive derivation of ( [ dsv ] ) ( which also holds for , under regularity assumptions ) . point 5 is again deduced from ( [ dsv ] ) and the decreasing character of and . since , and , using point 2 , for all , we have that the integral term in ( [ eta ] ) is evaluated inside a closed interval . therefore ,using the assumptions of point 5 , we get uniformly in .finally , we obtain the result from ( [ dsv ] ) in the limit and . 
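as a side note before the remaining proofs : the rearrangement bookkeeping used by the discretization described above reduces , for a quantized image , to sorting the gray levels and recording each pixel's level index ; the evolved one - dimensional level values are then mapped back to the image through the time - invariant level - set structure . the python sketch below illustrates exactly this step ( and only this step ; it does not reproduce the time - stepping scheme ) , with illustrative names .

```python
import numpy as np

def decreasing_rearrangement(u0):
    """return the quantized levels q (in decreasing order), their pixel
    counts m, and, for every pixel, the index of its level.  the decreasing
    rearrangement u0* is the step function equal to q[i] on an interval of
    length m[i] (pixels are taken to have unit area)."""
    levels, inverse, counts = np.unique(u0.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    order = np.argsort(levels)[::-1]           # decreasing gray levels
    q, m = levels[order], counts[order]
    rank_of_level = np.empty(len(order), dtype=int)
    rank_of_level[order] = np.arange(len(order))
    pixel_level_index = rank_of_level[inverse].reshape(u0.shape)
    return q, m, pixel_level_index

def rebuild_image(w, pixel_level_index):
    """given evolved level values w (one per original level, level-set
    structure unchanged), rebuild the two-dimensional image."""
    return w[pixel_level_index]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u0 = rng.integers(0, 8, size=(6, 6)).astype(float)
    q, m, idx = decreasing_rearrangement(u0)
    print("levels (decreasing):", q)
    print("measures:", m)
    # sanity check: rebuilding with the original levels recovers u0 exactly
    assert np.array_equal(rebuild_image(q, idx), u0)
```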
* proof of theorem [ th.equiv ] .* we split the proof in two steps .* step 1 . *first we treat the case in which has no flat regions , that is when for any . by the invariance of the level sets structure proven in theorem [ th.existenceu ] we deduce that neither the solution of has flat regions .then and are strictly decreasing , implying for any . according to ( * ? ? ? * theorem 9.2.1 ) , we have where and we used the notation . integrating ( [ eq.orig ] ) in get due to the and level sets equi - measure , it is immediate that the equi - measurability property ( [ prop.1 ] ) implies from where we deduce to deal with the term we observe that due to the invariance of the level set structure , as stated in theorem [ th.existenceu ] , we have that , for all ] .we define and for all ] is a solution of ( without flat regions ) if and only if is a solution of .now we perform the limit .let ;{{\cal x}}) ] , respectively .let us consider the inverse of , the distribution function of , . using the change of variable and writing , we obtain from ( [ eq.i ] ) using the explicit form of and integrating by parts , we obtain with given by ( [ def.ktilde ] ) . by assumption ,function is bounded in $ ] and by point 4 of corollary [ th.existencev ] it is continuously differentiable in . consider the interval . by well known properties of the gaussian kernel , we have and in particular , from ( [ gauss.2 ] ) we get taylor s formula implies therefore , from ( [ th2.i1 ] ) , ( [ th2.4 ] ) and ( [ gauss.1 ] ) we deduce , using , then , the result follows from ( [ th2.3 ] ) substituting by . this paper we studied a general class of nonlinear integro - differential operators with important imaging applications , such as the denoising - segmentation neighborhood filtering .although the corresponding pde problem is multi - dimensional , we showed that it can be reformulated as a one - dimensional problem by means of the notion and properties of the decreasing rearrangement function .we proved the well - posedness of the problem and some stability properties of the solution , as well as the equivalence between the multi - dimensional and the one - dimensional solutions to the problem .some other interesting properties were deduced for the rearranged one - dimensional version of the problem , such as the time invariance of the level sets of the solution ( inherited by the multi - dimensional equivalent solution ) , and the asymptotic behavior of the solution as a shock - type filter .future work will point to the use of rearranging techniques for the generalization of the model to include nonlocal effects induced by non - homogeneous spatial kernels , like in equation ( [ mazon ] ) . as already showed for the discrete time problem , this situation is much more involved suggesting the consideration of the _ relative rearrangement _ functional .g. galiano , j. velasco , some nonlocal filters formulation using functional rearrangements , scale space and variational methods in computer vision , lecture notes in computer science 9087 , ( 2015 ) 166177 .d. m. ushizima , a. g. bianchi , c. m. carneiro , segmentation of subcellular compartments combining superpixel representation with voronoi diagrams , http://cs.adelaide.edu.au//isbi14_challenge /results_release.html , 2014 .l. zhi , g. carneiro , a. p. bradley , automated nucleus and cytoplasm segmentation of overlapping cervical cells http://cs.adelaide.edu.au//isbi14_challenge /results_release.html , 2014 . 
| we study the existence and uniqueness of solutions of a nonlinear integro - differential problem , which we reformulate by introducing the notion of the decreasing rearrangement of the solution . a dimensional reduction of the problem is obtained and a detailed analysis of the properties of the solutions of the model is provided . finally , a fast numerical method is devised and implemented to show the performance of the model when typical image processing tasks such as filtering and segmentation are performed . _ keywords : _ integro - differential equation , existence , uniqueness , neighborhood filters , decreasing rearrangement , denoising , segmentation . |
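a minimal discrete illustration of the iterative neighborhood filter ( [ def.gfv ] ) that problem p generalizes : every sample is replaced by an average of all samples weighted by a gaussian kernel on the range distance , with a constant spatial weight ( the choice made in problem p ) . the parameter values , the one - dimensional test signal and the number of iterations in the sketch are illustrative ; the edge - sharpening behaviour that it exhibits is consistent with the shock - filter asymptotics discussed above .

```python
import numpy as np

def neighborhood_filter(u, h):
    """one pass of the (global) neighborhood filter: each sample is replaced
    by an average of all samples, weighted by a gaussian kernel acting on
    the range distance |u(y) - u(x)|; the spatial kernel is constant over
    the whole domain."""
    diff = u[None, :] - u[:, None]              # u(y) - u(x)
    w = np.exp(-(diff ** 2) / (h ** 2))         # range kernel
    return (w * u[None, :]).sum(axis=1) / w.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 200)
    clean = (x > 0.5).astype(float)             # a step edge
    noisy = clean + 0.1 * rng.standard_normal(x.size)
    u = noisy.copy()
    for _ in range(10):                         # iterative scheme u_{k+1} = nf(u_k)
        u = neighborhood_filter(u, h=0.3)
    print("mean deviation from the clean edge:", float(np.abs(u - clean).mean()))
```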
recent years have seen a surge of interest in wireless networks . unlike the traditional point - to - point communication ,elementary modes of cooperation such as relaying are needed to improve both the throughput and reliability in a wireless network .although capacity of a relay channel is still unknown in general , considerable progress has been made on several aspects , including some achievable capacity results and capacity scaling laws of large networks . in parallel , research on the cooperative diversity , where the relays help the source exploit the spatial diversity of a slow fading channel in a distributed fashion , has attracted significant attention . in small relay networks where the source signal can reach the destination terminal via a direct link ,many results have been known in both the channel capacity and the cooperative diversity .the capacity results are mostly based on the decode - and - forward ( df ) and the compress - and - forward ( cf ) strategies .the amplify - and - forward ( af ) scheme , however , is rarely considered in this scenario due to the noise accumulation at the relays . on the other hand ,the af scheme is widely used for cooperative diversity .it has been shown in that the af scheme is as good as the df scheme at high snr as far as the diversity is concerned .furthermore , it is pointed out in that not needing to decode the source signal makes the relays more capable of protecting the source signal in some cases . the cf scheme , which works with perfect global channel state information ( csi ) , is usually excluded in the cooperative diversity scenario for practical considerations . in larger relay networks , where direct source - destination links are generally absent, substantial results on the capacity scaling laws have been obtained in the large network size regime . however ,much less is known about the cooperative diversity than in the case of small networks .this paper analyzes the cooperative diversity in relay networks with a single multi - antenna source - destination terminal pair .the source signal arrives at the destination via a sequence of hops through layers of relays .similar channel setting with a single layer has been studied in in different contexts . using large random matrix theory ,the ergodic capacity results of some particular relaying schemes have been established for large networks .recently , the study has been extended to the case with multiple layers of relays and the case with multiple source - destination pairs .cooperative diversity in this setting was first studied in for the single - antenna case then in for the multi - antenna case , with distributed space - time coding .all the mentioned works assume linear processing at the relays and the df scheme is not considered . actually , one can figure out immediately that the df scheme is not suitable for the multi - antenna setting due to the suboptimality in terms of degrees of freedom .requiring the relays to decode the source signal restricts the achievable degrees of freedom .this is one of the fundamental differences between the large networks and small networks : the degrees of freedom of the latter are determined by the source - destination link and not by the relaying strategy . in this work ,we suppose that the network size is arbitrary ( but fixed ) and the signal - to - noise ratio ( snr ) is large . 
the multihop channel is investigated in terms of the diversity - multiplexing tradeoff ( dmt ) .the dmt was introduced in for the point - to - point multi - antenna ( mimo ) channels to capture the fundamental tradeoff between the throughput and reliability in a slow fading channel at high snr .it was then extensively used in multiuser channels such as the multiple access channels and the relay channels as performance measure and design criterion of different schemes .our main contributions are summarized in the following paragraphs .first , we use the information theoretic cut - set bound to derive an upper bound on the dmt of any relaying strategy . in the clustered case where the relays in the same layer can fully cooperate ,this bound is shown to be tight .an optimal scheme is the cooperative df scheme , where the clustered relays perform joint decoding and joint re - encoding . while the clustered channel is equivalent to a series - channel and does not feature the distributed nature of wireless networks , the non - clustered case is studied as the main focus of the paper .since no within - layer cooperation is considered , linear processing at the relays is assumed .we start by the af strategy , which seems to be the natural first choice as a linear relaying scheme .we show that the af scheme is , in the dmt sense , equivalent to the rayleigh product ( rp ) channel , a point - to - point channel whose channel matrix is defined by a product of gaussian matrices .that being said , we examine the rp channel in great detail .it turns out that the dmt of a rp channel has a nice recursive structure and lends some intuitive insights into the typical outage events in such channels .the study of the rp channel leads directly to an exact dmt characterization for the af scheme in multihop channels of arbitrary size .the closed - form dmt provides simple guidelines on how to efficiently use the available relays with the af scheme .one such example is how to reduce the number of relays while keeping the same diversity .while the maximum multiplexing gain is achieved , the achievable diversity gain of the af scheme can be far from maximum diversity gain suggested by the cut - set bound .specifically , the dmt of the af scheme is limited by a virtual `` bottleneck '' channel .the following question is then raised : is the dmt cut - set bound tight in the non - clustered case ?the question is partially answered in this work : there exists a scheme that achieves both extremes of the cut - set bound , that is , the maximum diversity extreme and maximum multiplexing extreme . in order to achieve the maximum diversity gain ,the key is space - time relay processing . noting that the af scheme is space - only, we incorporate the temporal processing into the af scheme . the first scheme that we proposeparallel af _ scheme . by partitioning the multihop channel into `` af paths '' , we create a set of parallel sub - channels in the time domain .a packet that goes through the parallel channel attains an improved diversity if the partition is properly designed .it is shown that there is at least one partition such that the maximum diversity is achieved .however , the parallel af scheme does not have the maximum multiplexing gain in general , since the achievable degrees of freedom by the scheme are restricted by those of the individual af paths .in most cases , the af paths are not as `` wide '' as the original channel in terms of the degrees of freedom . 
in order to overcome the loss of degrees of freedom , we linearly transform the set of parallel af channels into another set in which each sub - channel has the same degrees of freedom as the multihop channel . in the new parallel channel ,each relay only need to flip the received signal in a pre - assigned mode , hence the name _ flip - and - forward _ ( ff ) .it is shown that the ff scheme achieves both the maximum diversity and multiplexing gains .furthermore , the dmt of the ff scheme is lower - bounded by that of the af scheme . using the results obtained in the non - clustered case , we revisit the clustered case by pointing out that the cooperative df operation might not be needed in all clusters to get the maximum diversity .we also indicate that cross - antenna linear processing in each cluster helps to improve the dmt only when both transmitter csi and receiver csi are known to the relays .finally , coding schemes are proposed for all the studied relaying strategies . in the clustered case ,a series of perfect space - time block codes ( stbcs ) with appropriate rates and dimensions are used at the source and each relay cluster that performs the cooperative df operation . in the non - clustered case ,construction of perfect stbcs for general parallel mimo channels is first provided .the constructed codes can be applied directly to the parallel af scheme and the ff scheme . allsuggested coding schemes achieve the dmt despite of the fading statistics and are thus approximately universal . regarding the notations , we use boldface lower case letters to denote vectors , boldface capital letters to denote matrices . represents a complex gaussian random variable with mean and variance . ] respectively denote the matrix transposition and conjugated transposition operations . is the vector norm . is the frobenius matrix norm .we define for any matrices s .the square root of a positive semi - definite matrix is defined as a positive semi - definite matrix such that . and denote respectively the maximum and minimum eigenvalues of a semi - definite matrix . means . ( respectively , ) is the closest integer that is not smaller ( respectively , not larger ) than . means . stands for the base- logarithm . for any quantity , and similarly for and .the tilde notation is used to denote the ( increasingly ) ordered version of .let and be two vectors of respective length and , then means , . means that is a sub - vector of some permutated version of .the rest of this paper is organized as follows .section [ sec : system - model ] describes the system model and some basic assumptions in our work . the dmt cut - set bound and the clustered case with the df scheme are presented . in section [ sec : af ] , we study the non - clustered case with the af scheme . the parallel af and the ff schemesare proposed in section [ sec : para - partition ] . in section[ sec : clustered ] , the clustered case is revisited .the approximately universal coding schemes are proposed in section [ sec : stc ] .section [ sec : nr ] provides some selected numerical examples .finally , a brief conclusion is drawn in section [ sec : conclusion ] .most detailed proofs are deferred to the appendices .the considered -hop relay channel model is composed of one source ( layer ) , one destination ( layer ) , and layers of relays ( layer to layer ) . each terminal is equipped with multiple antennas . the total number of antennas in layer is denoted by . 
for convenience , we define , , and .we assume that the source signal arrives at the destination via a sequence of hops through the layers and that terminals in layer can only receive the signal from layer . the fading sub - channel between layer and layer is denoted by the matrix .sub - channels are assumed to be mutually independent , flat rayleigh - fading and quasi - static .that is , the channel coefficients are independent and identically distributed ( i.i.d . )complex circular symmetric gaussian with unit variance .and they remain constant during a coherence interval of length and change independently from one coherence interval to another .furthermore , the transmission is supposed to be perfectly synchronized . under these assumptions , the signal model within a coherence intervalcan be written as = { { { \uppercase{{\pmb{h}}}}}}_{i}{\,}{{{\lowercase{{\pmb{x}}}}}}_{i-1}[l ] + { { { \lowercase{{\pmb{z}}}}}}_{i}[l],\quad l=1,\ldots l,\]]where ,{{{\lowercase{{\pmb{y}}}}}}_i[l]\in{{\mathbb{c}}}^{n_i\times1} ] is the additive white gaussian noise ( awgn ) at layer with i.i.d . entries .since we consider the non - ergodic case where the coherence time interval is large enough , we drop the time index hereafter .it is assumed that all relays work in full - duplex mode and the transmission is subject to the short - term power constraint being the average transmitted snr per layer .all terminals are supposed to have perfect channel state information at the receiver and no csi at the transmitter . from now on, we denote the channel as a multihop channel .slow fading channels are outage - limited , i.e. , there is an _ outage probability _ that the channel can not support a target data rate of bits per channel use at signal - to - noise ratio . in the high snr regime, this fundamental interplay between throughput and reliability is characterized by the diversity - multiplexing tradeoff .the _ multiplexing gain _ and _ diversity gain _ of a fading channel are defined by a more compact form is note that in the definition we use the outage probability instead of the error probability , since it is shown in that the error probability of any particular coding scheme with maximum likelihood ( ml ) decoding is dominated by the outage probability at high snr and that the thus defined dmt is the best that one can achieve with any coding scheme . in the rayleigh mimo channel , the dmt has the following closed form .[ lemma : dmt - rayleigh ] the dmt of a rayleigh channel is a piecewise - linear function connecting the points , , where in the following , we will use the dmt as our performance measure . for convenience of presentation , we provide the following definition .two channels are said to be _ dmt - equivalent _ or _ equivalent _ if they have the same dmt . before studying any specific relaying strategy, we establish an upper bound on the dmt of the multihop system as a benchmark .[ prop : dmt - ub ] for any relaying strategy , we have with where is the dmt of the point - to - point channel between layer and layer . in particular , by defining the maximum diversity gain and multiplexing gain as and , respectively , we have from the information theoretic cut - set bound , the mutual information between the source and the destination satisfies for any relaying strategy .thus , the outage probability using a relaying scheme is where is the outage probability of the sub - channel . from ( [ eq : dmt - compact ] ) and ( [ eq : tmp999 ] ) , we prove ( [ eq : dmt - ub ] ) . 
finally , ( [ eq : dmax ] ) and ( [ eq : rmax ] ) are from the direct application of lemma [ lemma : dmt - rayleigh ] .if we assume that the relays within the same layer are clustered , i.e. , they can perform joint decoding and joint re - coding operations , then each layer can act as a virtual multi - antenna terminal .this could happen either when the relays are controlled by a central unit via wired links or when they are close enough to each other to exchange information perfectly . in this case, the relay channel model is equivalent to a serial concatenation of independent mimo channels. let us consider the following cooperative decode - and - forward scheme .each layer tries to cooperatively decode the received signal .when a successful decoding is assumed , the embedded message is re - encoded and then forwarded to the next layer .we can show that this simple scheme is dmt optimal .[ prop : df ] when the relays are clustered , the cooperative df scheme achieves the dmt cut - set bound defined in ( [ eq : dmt - ub ] ) . to show the achievability , note that the cooperative df scheme being in outage implies the outage of at least one of the sub - channels . by the unionbound , at high snr , the probability is dominated by the largest term in the sum of the right - hand side ( rhs ) . from ( [ eq : dmt - compact ] ) , we get in the high snr regime , the union bound defined by the sum operation coincides in the snr exponent with the cut - set bound defined by the minimum operation .hence , the dmt cut - set bound is tight in the clustered case .however , relays in wireless networks are not clustered in general . in fact , one of the important and interesting attributes of wireless networks is the distributed nature . in the following two sections ,we will concentrate on the non - clustered case and analyze the achievable dmt .in this section , we consider the non - clustered case , where the relays work in a distributed manner and no within - layer communication is allowed . in this case , applying the df scheme at each individual relay might incur loss of degrees of freedom . to see this ,take the single - layer channel as an example . in the best case where all the relays succeed in decoding ,they transmit the message using a pre - assigned codebook .this scheme transforms the relays - destination channel into a virtual mimo channel . before thiscould possibly happen , however , the success decoding at the relays must be guaranteed with high probability .this constraint imposes that the degrees of freedom in this scheme must not be larger than with being the number of antennas at the relay .while this scheme achieves the maximum multiplexing gain in the single - antenna case , it could fail in the multi - antenna case . since we do not know how to cooperate efficiently in this case , we start by the most obvious and naivest relaying scheme : the amplify - and - forward scheme .this scheme in the considered setting has been studied in for the capacity scaling laws , and in for the dmt .it is worth noting that , in , a lower bound on the dmt of the af scheme in a symmetric network ( , ) was obtained , while our work derives the exact dmt for a network of arbitrary dimension with a different approach . 
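before turning to the af scheme , it is convenient to have the benchmark of proposition [ prop : dmt - ub ] in computable form . the helper below tabulates the rayleigh dmt of lemma [ lemma : dmt - rayleigh ] ( the piecewise - linear curve through the points ( k , ( m - k ) ( n - k ) ) ) and the multihop cut - set bound as the pointwise minimum over the hops ; the function names and the example antenna profile are ours .

```python
def rayleigh_dmt(m, n, r):
    """diversity gain d(r) of an m x n rayleigh mimo channel: the piecewise
    linear curve connecting the points (k, (m - k) * (n - k)) for
    k = 0, ..., min(m, n), with linear interpolation in between."""
    if not 0 <= r <= min(m, n):
        raise ValueError("multiplexing gain r must lie in [0, min(m, n)]")
    k = int(r)                                  # lower corner point
    if k == min(m, n):
        return 0.0
    d_lo = (m - k) * (n - k)
    d_hi = (m - k - 1) * (n - k - 1)
    return d_lo + (r - k) * (d_hi - d_lo)

def multihop_cutset_dmt(antennas, r):
    """cut-set upper bound for a multihop chain with antenna profile
    antennas = [n_0, n_1, ..., n_N]: the pointwise minimum of the per-hop
    rayleigh dmts.  returns None if r exceeds the maximum multiplexing
    gain of the chain."""
    hops = list(zip(antennas[:-1], antennas[1:]))
    if r > min(min(a, b) for a, b in hops):
        return None
    return min(rayleigh_dmt(a, b, r) for a, b in hops)

if __name__ == "__main__":
    # a two-hop 3 -> 2 -> 3 example: maximum diversity 6, maximum multiplexing 2
    for r in (0, 0.5, 1, 1.5, 2):
        print(r, multihop_cutset_dmt([3, 2, 3], r))
```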
in the considered af scheme, each antenna node normalizes the received signal to the same power level and then retransmits it .this linear operation can be expressed as , by the power constraint ( [ eq : power - constraint ] ) , the scaling matrix is diagonal due to the antenna - wise nature of the relaying scheme , with the normalization factors in ( [ eq : md ] ) by s . ] , the signal model of the end - to - end channel is , for the sake of simplicity , we defined and .the whitened form of this channel is is the whitened noise and is the whitening matrix with being the covariance matrix of the noise in ( [ eq : multihop_af1 ] ) .since it can be shown that , can be neglected in the dmt analysis and the af channel is equivalent to the mimo channel defined by the following matrix rest of the section is devoted to the dmt analysis of this channel .[ def : rp ] let , , be independent complex gaussian matrices with i.i.d . entries . a _ rayleigh product _( rp ) channel is a mimo channel defined by where is the channel matrix ; is the transmitted signal with normalized power , i.e. , ; and is the received signal ; is the awgn ; is the snr per receive antenna . is called the _ dimension _ of the channel and is called the _ length _ of the channel .while this channel model has been studied in terms of the asymptotic eigenvalue distribution in the large dimension regime , we are particularly interested in the fixed dimension case in the high snr regime . in this regime , we can define a more general rp channel as [ prop : pdf - general - rp ] the general rp channel is equivalent to * a rp channel , if all the matrices s are square and their singular values satisfy , ; * a rp channel , with being the rank of the matrix , if the matrices s are constant .see appendix [ sec : proof - pdf - general - rp ] .hence , we can consider the rp channel from definition [ def : rp ] without loss of generality . recall that is the ordered version of with and .[ thm : dmt - rp ] the dmt of a rp channel is a piecewise - linear function connecting the points , , where with the dmt depends on the `` near zero '' probability of the singular values of channel matrix . while this probability for the given product matrix is intractable , we can characterize it by induction on the length .the main idea is that , conditioned on a given product matrix , is gaussian whose singular distribution is tractable .see appendix [ sec : proof - thm - dmt - rp ] for details .the following corollaries are given without proofs .[ coro : per - inv ] the dmt of a rp channel depends only on the ordered dimension .[ coro : increase ] the dmt is monotonic in the following senses : * if , then * if , then [ coro : sym ] when , we have where and .corollary [ coro : per - inv ] implies that rp channels with the same ordered dimension belong to the same dmt equivalent class . in the following ,a precise characterization of the dmt class is obtained . before that, we need the following definitions . a rp channelis said to be a _ reduction _ of a rp channel if 1 ) they are equivalent , 2 ) , and 3 ) .in particular , if , then it is called a _ vertical reduction_. similarly , if ] such that then , the decoding set defines the partition of minimum size that achieves a given diversity ( ) . 
from ( [ eq : dmt - mixed ] ) , it is easy to show that the proposed partition achieves diversity .now , we would like to show that the size of the proposed partition is minimized .to this end , it is enough to show that for any set of decoding points that achieves diversity , we have , .this is obviously true for , since the diversity of the af channel degrades with the number of hops . by induction on , it is shown that because otherwise and the corresponding diversity of the af scheme can not be larger than according to the monotonicity of the dmt ( corollary [ coro : increase ] ) .the proposition matches the intuition that we should only decode when we have to , in the diversity sense .in other words , we allow for the degradation of diversity introduced by the af operation , as long as the resulting diversity is larger than the target . another option is to linear process the received signal at each cluster without decoding it . unlike the af scheme in the non - clustered case , where trivial antenna - wise normalization is performed, we can run inter - antenna processing based on the available csi at the cluster .with receiver csi at the relays , let us consider the following project - and - forward ( pf ) scheme . at layer ,the received signal is first projected to the signal subspace spanned by the columns of the channel matrix .the dimension of the subspace is , the rank of .after the component - wise normalization , the projected signal is transmitted using ( out of ) antennas .it is now clear that is actually composed of the columns of the previously defined , with .more precisely , the is an orthogonal basis with .we can rewrite with . for simplicity , we let be obtained by the qr decomposition of if and be identity matrix if .the spirit of the pf scheme is not to use more antennas than necessary to forward the signal .since the useful signal lies only in the -dimensional signal subspace , the projection of the received signal provides sufficient statistics and reduces the noise power by a factor . in this case , only antennas are needed to forward the projected signal .let us define . then , as in the af case , the pf multihop channel is equivalent to the channel defined by the following proposition states that receiver csi and inter - antenna processing do not improve the dmt of the af scheme .[ thm : pf ] the pf scheme is equivalent to the af scheme . see appendix [ sec : proof - thm - pf ] .while the pf and af have the same dmt , the pf outperforms the af in power gain for two reasons .one reason is , as stated before , that the projection reduces the average noise power .the other reason is that the accumulated noise in the af case is more substantial than that in the pf case .this is because in the pf case , less relay antennas are used than in the af case .since the power of independent noises from different transmit antennas add up at the receiver side , the accumulated noise in the af case `` enjoys ''a larger `` transmit diversity order '' than in the pf case . 
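a quick numerical sanity check of the af analysis is possible because , as argued above , the noise whitening can be neglected in the dmt analysis and the end - to - end af channel is equivalent to the rayleigh product channel . the monte carlo sketch below estimates the outage probability of the product channel against the target rate r * log2 ( snr ) ; the amplification constants and the noise colouring are dropped , which affects only snr offsets and not the snr exponent . the function names and the example antenna profile are illustrative .

```python
import numpy as np

def af_outage_prob(antennas, snr_db, r, trials=20000, seed=0):
    """monte carlo outage probability of the end-to-end product channel
    h = h_N ... h_1 with antenna profile antennas = [n_0, ..., n_N] and
    i.i.d. cn(0, 1) entries.  outage is declared when
    log2 det(i + (snr / n_0) * h h^H) falls below r * log2(snr)."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    target = r * np.log2(snr)
    n0 = antennas[0]
    outages = 0
    for _ in range(trials):
        h = np.eye(n0, dtype=complex)
        for n_in, n_out in zip(antennas[:-1], antennas[1:]):
            g = (rng.standard_normal((n_out, n_in))
                 + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2.0)
            h = g @ h
        gram = np.eye(h.shape[0]) + (snr / n0) * (h @ h.conj().T)
        if np.log2(np.linalg.det(gram).real) < target:
            outages += 1
    return outages / trials

if __name__ == "__main__":
    # the slope of log10(p_out) versus snr_db / 10 approximates the diversity
    for snr_db in (10, 15, 20):
        print(snr_db, af_outage_prob([2, 2, 2], snr_db, r=0.5))
```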
on the other hand , if we could have receiver _ and_ transmitter csi at the clusters , the dmt could be improved as shown by the following example .[ ex : mf ] for a clustered multihop channel , the dmt cut - set bound can be achieved by linear processing within clusters if both transmitter and receiver csi are available at each cluster .the optimum linear relaying scheme is defined by the processing matrices s with where we assume that is the singular value decomposition of .the diagonal elements in the singular value matrix are in increasing order .this scheme matches the adjacent hops by aligning the singular values in the same order .it is then equivalent to the channel defined by , whose dmt can be shown to be as the rayleigh channel .now , we need codes that actually attain the dmt promised by the studied relaying strategies . to this end , the construction of perfect stbcs for mimo channels is extended to the multihop relay channels .the constructed codes are approximately universal . the relay clusters that perform the cooperative df operation partition the multihop channel into a series of mimo channels , say , with .an obvious coding scheme that achieves the dmt is described as follows .let be the target multiplexing gain .first , the source terminal encodes the message of bits with a perfect stbc .then , in a successive manner , layer tries to decode the message .when a success decoding is assumed , the bits are encoded with a perfect stbc and forwarded .we can show that as long as with the series of perfect stbcs can be found .with the union bound , the end - to - end error probability is upper - bounded is the error probability of in the mimo sub - channel . since is dmt - achieving for any fading statistics ,we have ( [ eq:170 ] ) and ( [ eq:176 ] ) , the dmt ( [ eq : dmt - mixed ] ) is achieved with coding delay . since the perfect stbcs are approximately universal , so is this coding scheme .note that this scheme can be used for the af and pf schemes with . in the non - clustered case , the parallel af and the ff schemes are used .note that both schemes share the common parallel mimo channel structure and is the number of the parallel sub - channels .let be a code for the parallel channel .a codeword is defined by a set of matrices with .we define a parallel stbc with non - vanishing determinant ( nvd ) as follows .let be an alphabet that is scalably dense , i.e. , for , , a parallel stbc is called a _parallel nvd code _ if it 1 .is -linear is -linear means that each entry of any codeword in is a linear combination of symbols from . ] ; 2 . has full symbol rate , i.e., it transmits on average symbols per channel use from the signal constellation ; 3 . has the nvd property , i.e. , for any pair of different codewords , with a constant independent of the snr .we have the following result .[ thm : dmt - para ] the parallel nvd codes are approximately universal over the parallel channel defined by ( [ eq : parallel ] ) .see appendix [ sec : proof - thm - dmt - para ] . thus , to achieve the dmt of the parallel af and the ff schemes ,it is enough to construct a parallel nvd codes .several remarks are made before proceeding to the code construction .the actual data rate of the nvd codes is controlled by the size of the alphabet and the symbol rate .efficient decoding schemes ( e.g. 
, sphere decoding ) may not be implementable when the channel is under - determined or , alternatively speaking , rank - deficient in the sense that .practical schemes include reducing the symbol rate while increasing the size of the alphabet .this , however , does not guarantee the dmt - achievability . explicit parallel nvd codes for asymmetric parallel channel ( i.e. , for some ) being hard to construct algebraically , we focus on the symmetric case .note that in the ff scheme , the equivalent parallel channel is always symmetric . in the parallel af scheme, the numbers of transmit antennas of different sub - channels may be different .however , the problem can be overcome by using the same number of antennas ( i.e. , ) .the resulting parallel channel has at least the same dmt as the original channel .nevertheless , an alternative code construction that is suitable for both symmetric and asymmetric parallel channels is provided in appendix [ sec : altercod ] for completeness . from a given parallel partition with size ,the number of the parallel sub - channels is in the parallel af scheme , generally larger than in the ff scheme .since the minimum coding delay is that grows linearly with , it grows at least linearly with .moreover , the complexity of decoding can grow up to exponentially with if ml decoding is used .that is why it is important to find partitions of small size .a systematic way to construct nvd codes is the construction from cyclic division algebra ( cda ) . for more details on the concept, the readers can refer to . in the following , we aim to construct the perfect symmetric parallel nvd codes with quadrature amplitude modulation ( qam ) constellations .the generalization to hexagonal constellations is straightforward .we start by the construction of nvd codes for mimo channels ( ) .let be a cyclic extension of degree on the base field .we denote the generator of the galois group .let be such that are non - norm elements in .then , we can construct a cda of degree .each element in has the following matrix representation , .since is a cda , we can show that ] .finally , we construct codewords in the form of with qam symbols and we can show that the difference matrix of a pair of different codewords is also in the form of with symbols in ] .and the chosen ideal is principle , i.e. , with .the matrix is given by ,\label{eq : golden-2}\ ] ] where ] and in the non - correlated case with , the joint pdf is now , let us define the _ eigen - exponents _ [ lemma : det ] where and please refer to for details .[ lemma : detxi ] where and first , we have then , let us denote the determinant in the rhs of ( [ eq : lemmas : tmp1 ] ) as and we rewrite it as and the product term in ( [ eq : lemmas : tmp2 ] ) is obtained since for all .let us denote the determinant in ( [ eq : lemmas : tmp2 ] ) as .then , by multiplying the first column in with and noting that , the first column of becomes all .now , by eliminating the first `` ' 's of the first column by subtracting all rows by the last row as in ( [ eq : lemmas : tmp3 ] ) and ( [ eq : lemmas : tmp2 ] ) , we have . by continuing reducing the dimension, we get {i , j=1}^n \prod_{i=1}^{n+1}\mu_i^{m - n-1}\prod_{i = n+2}^m \mu_i^{m - i}\\ & \quad \cdot\prod_{i=1}^n\prod_{j = n+1}^m \left(1-e^{-\lambda_i/\mu_j}\right ) \end{split}\ ] ] from which we prove the lemma , by applying ( [ eq : detexp ] ) . 
with the preceding lemmas , we have the following lemma that provides the asymptotical pdf of conditioned on in the high snr regime .[ lemma : pdfrayleighcond ] where and let us replace and in ( [ eq : wishart : n > m ] ) and ( [ eq : wishart : n < m ] ) using the results of lemma [ lemma : det ] and lemma [ lemma : detxi ] .then , by applying variable changes as done in , ( [ eq : expcond ] ) can be obtained after some elementary manipulations . when , i.e. , , the joint pdf of is found in as shown in the following lemma .[ lemma : pdfrayleigh ] with .this lemma can be justified either by using ( [ eq : wishart : id ] ) or by setting in ( [ eq : expcond ] ) .[ lemma : invariance - asymp ] let be any random matrix and be any non - singular matrix whose singular values satisfy .define and .let and be the ordered singular values of and , then , we have [ prop : asympt - pdf ] let us denote the non - zero ordered eigenvalues of by with .then , the joint pdf of the eigen - exponents satisfies where with s defined by ( [ eq : ci ] ) . from lemma [ lemma : cal - dmt ], we can derive the dmt with the following optimization problem with being the outage region .note that is decreasing and is increasing with respect to .then , the proof of theorem [ thm : dmt - rp ] is immediate .now , what remains is the proof of proposition [ prop : asympt - pdf ] .the following lemma will be needed in the proof .[ lemma : ci ] let ] for some .either case , -related terms are gone and what remain are the s `` freed '' by from .same reasoning applies to for , except that the initial region is set to .therefore , the optimization problem can be solved by counting the total number of freed s . as shown in fig .[ fig : find - ci1 ] , when is small , the initial coefficient of is large and thus can free out .we have , which corresponds to the first stopping condition . for large ,the initial coefficient of is not large enough and only is freed , which corresponds to the second stopping condition . with the above reasoning, we can get from ( [ eq : tmp48 ] ) and ( [ eq : ci ] ) , we get and , can be obtained .a more careful analysis reveals that it is always satisfied with the described procedure .] from fig .[ fig : find - ci1 ] ( [ eq : tmp89 ] ) is from ( [ eq : gi_inv1 ] ) and the fact that , , ; ( [ eq : tmp54 ] ) can be derived from lemma [ lemma : ci ] , since and therefore the term in ( [ eq : tmp54 ] ) is dominated by for and by for , corresponding to the two terms in ( [ eq : tmp89 ] ) , respectively . in this case, we have and . from ( [ eq : jointexp ] ) , since , , the minimization of with respect to is in exactly the same manner as in the previous case .therefore , can be obtained from fig .[ fig : find - ci2 ] with in the same form as ( [ eq : gi ] ) as in the previous case , we have and the same as defined in ( [ eq : jointexp2 ] ) . without loss of generality, we assume that for some ] and proof is complete .let denote the vector of the eigen - exponents of a matrix as previously defined .to prove the first case , we use induction on .suppose that it is true for , which means that the joint pdf of is the same as that of .furthermore , we know by lemma [ lemma : invariance - asymp ] that .same steps as ( [ eq : tmp31])([eq : tmp32 ] ) complete the proof . to prove the second statement , we perform a singular value decomposition on the matrices s and then apply the first statement .let what we should prove is that and only if ( [ eq : reduction - cond ] ) is true . 
to this end , it is enough to show that and only if , that is , , and then apply the result successively to show the theorem . note that we need lemma [ lemma : ci ] to eliminate the minimization in ( [ eq:010 ] ) .the detailed proof is omitted here .the direct part of the theorem is trivial . to show the converse ,let and be the two concerned minimal forms .in addition , we assume , without loss of generality , that with and . now , let us define with defined in ( [ eq : ci3 ] ) .it can be shown that intervals are non - trivial with , .the values of s are in the following form same arguments also apply to with and , etc .it is then not difficult to see that to have exactly the same s ( thus , same s ) , we must have and is , the same minimal form .to prove the theorem , we will first show the following equivalence relations : the direct parts of , , and are immediate since the rhs are particular cases of the left hand side ( lhs ) . to show the reverse part of ( a ), we rewrite where is used twice in ( [ eq : tmp01 ] ) and ( [ eq : tmp03 ] ) ; is used in ( [ eq : tmp02 ] ) . as for ( b ) , if holds , then proves . by continuing the process , we can show that is true for all , provided holds . through and , one can verify that the lhs of is equivalent to the rhs of of which the rhs of is a particular case .hence , the direct part of is shown . the reverse part of ( c )can be proved by induction on . for , be shown explicitly using the direct characterization ( [ eq : dk ] ) .now , assuming that for non - ordered , we would like to show that holds .let us write the permutation invariance property is used in ( [ eq : tmp001 ] ) ; is used in ( [ eq : tmp002 ] ) since we assume that is trues ; and can be permuted according to .finally , we should prove the reverse part of ( d ) , i.e. , that holds for minimal .if is not minimal , then showing ( c ) is equivalent to showing is the order of with .therefore , we should show that the minimum is achieved with . according the direct characterization ( [ eq : dk ] ) , this is true only when .let us rewrite as is always true according to the reduction theorem , we have . the rest of this section is devoted to proving that ( [ eq : tmp004 ] ) holds for minimal .now , we restrict ourselves in the case of minimal and ordered , i.e. , we would like to prove , the optimal is in the interval ] .then , ( [ eq : tmp008 - 1 ] ) becomes also implies that from which the form of suggests that can be decomposed as ( [ eq : tmp009 - 1 ] ) and ( [ eq : tmp009 - 2 ] ) , we have and thus . with the form of optimal and some basic manipulations, we have finally ends the proof .first , we have from which being some strictly positive constant. then , we also have . from ( [ eq : tmp666 ] ) and ( [ eq : tmp667 ] ) , we have another strictly positive constant .hence , the lemma is proved since .let us consider a parallel channel , each sub - channel of rank and with eigen - exponents .since each sub - channel is an af path , the joint pdf of the eigen - exponents in the high snr regime is from lemma [ lemma : cal - dmt ] , the dmt is with being the outage region .first , we can deduce that then , if all af paths have the same dmt , they have the same set , i.e. , . we can verify that setting , is without loss of optimality , since 1 ) the objective function is linear and symmetric on different , and 2 ) the constraints are convex and symmetric on different . 
finally , the optimization problem becomes with is the outage region of each single af path .the lemma can be proved immediately from here .without loss of generality , we assume that . then , the bottleneck of the channel is the channel .since the partition achieves the maximum diversity , by theorem [ thm : cond_nece ] , the partition size is with the ( respectively , ) antennas being partitioned into ( respectively , ) supernodes .moreover , for any af path in the partition , we have adding all the inequalities up gives the sum in the lhs of ( [ eq : tmp432 ] ) can be upper - bounded by , since each supernode in the transmitter can not be connected to more than nodes .hence , we have the following inequality after some simple manipulations from which we have the lower bound on the partition size which is obviously increasing with .therefore , the minimum lower bound is obtained by setting and it coincides with ( [ eq:2 ] ) .it can be shown that this lower bound is achieved by partitioning the intermediate layer into supernodes with defined by ( [ eq:2 ] ) without partitioning either of the source and the destination antennas .let us define the selection matrices s as diagonal matrices with first , we would like to prove that the maximum diversity gain is achieved .this can be done in two steps .the first step is to prove that the parallel channel with achieves the maximum diversity . to this end , note that by partitioning the rows ( respectively , columns ) of ( respectively , ) according to the indices in ( respectively , ) , the matrix can be partitioned into blocks , each one being an af path from the source to the destination .therefore , comprises af paths , i.e. , all possible paths .obviously , these paths include the independent paths in the independent partition .therefore , the maximum diversity is achieved since the key of the second step is to show that the set of matrices defined in ( [ eq : tmp1111 ] ) is actually an invertible constant linear transformation of , i.e. , in this case , we have and the diversity is lower - bounded by the maximum diversity , according to lemma [ lemma : div - para ] .hence , the ff scheme also achieves the maximum diversity .the key point is shown in the following .first , let us divide the set of indices into groups , each one comprising exactly integers such that and varies from to .then , we partition the set according to the partition of the indices described above .hence , the matrices in the same group can be rewritten as with being some matrix .we have where is composed of blocks of matrices with the -th block being if and otherwise .we can verify that is invertible and with the transformation , the matrices s are replaced by s with the same indices . 
in the same manner , we can successively replace the matrices with by similar invertible transformations as .finally , we obtain and the total transformation is invertible , constant and linear .note that the parallel channel of the ff scheme is in outage for a target rate implies that at least one of the sub - channels is in outage for a target rate .therefore , one can show that , from which .finally , by showing that is piece - wise linear with sections , we prove the theorem .let and denote the vector of the ordered eigenvalues and the corresponding eigen - exponents of a matrix .the theorem can be proved by showing a stronger result : the asymptotical pdf of in the high snr regime is identical to that of .we show it by induction on .for , since , the result is direct .suppose that the theorem holds for .let us show that it also holds for .note that from which we have for a given .similarly , . at high snr , we can show that where the first equality comes from lemma [ lemma : invariance - asymp ] and the second one holds because finally , since we suppose that the joint pdf of is the same as that of , we can draw the same conclusion for and .let us consider an equivalent block - diagonal channel of the parallel channel ( [ eq : parallel ] ) in the following form where }^{{\scriptscriptstyle \mathsf{t } \!}}}$ ] , and are defined in the same manner .now , from the parallel nvd code , we can build a block - diagonal code with codewords defined by .we can verify that is actually a rate- nvd code defined in with . from ( * ? ? ?* th . 3 ) , we have is the dmt of the parallel channel ( and thus the block - diagonal channel ) . finally , it is obvious that since using will have the same error performance as using except that the transmission rate is times higher .we have thus .it is shown in that the achievability holds for any fading statistics .thus , the code is approximately universal .a simple alternative construction that is approximately universal is described as follows .let be a full rate nvd code with and .then , achieves the dmt of the channel ( [ eq : bd ] ) .it means that by partitioning every codeword matrix into blocks in such a way that the block is of size and sending the block in the sub - channel , the dmt of the original parallel channel is achieved .although this construction is simple and suitable for both symmetric and asymmetric channels , the main drawback is that the coding delay is roughly times larger than the parallel nvd code constructed in section [ sec : paranvd ] .decoding complexity of such codes is sometimes prohibitive .assume that is a norm in , which means consider now the extensions described in fig .[ fig : fields - extension ] with the proper fields .from ( [ eq : x - existence ] ) and the left extension of fig .[ fig : fields - extension ] , we deduce that , since the minimal polynomial of is .meanwhile , from the right extension of fig .[ fig : fields - extension ] , we have denote . then the number has an algebraic norm equal to , and belongs to which is in contradiction with the result obtained in .so , is a non - norm element .j. n. laneman and g. w. wornell , `` distributed space - time - coded protocols for exploiting cooperative diversity in wireless networks , '' _ ieee trans .inf . theory _ , vol .49 , no .24152425 , oct .j. n. laneman , d. n. c. tse , and g. w. wornell , `` cooperative diversity in wireless networks : efficient protocols and outage behavior , '' _ ieee trans .inf . theory _50 , no . 12 , pp .30623080 , dec . 2004 .k. 
azarian , h. el gamal , and p. schniter , `` on the achievable diversity - multiplexing tradeoff in half - duplex cooperative channels , '' _ ieee trans .inf . theory _ ,51 , no . 12 , pp .41524172 , dec . 2005 .s. yang and j .- c .belfiore , `` towards the optimal amplify - and - forward cooperative diversity scheme , '' _ ieee trans .inf . theory _ ,53 , no . 9 ,2007 , to appear .[ online ] .available : http://arxiv.org/pdf/cs.it/0603123 m. yuksel and e. erkip , `` multi - antenna cooperative wireless systems : a diversity multiplexing tradeoff perspective , '' _ ieee trans .inf . theory _2007 , special issue on models , theory , and codes for relaying and cooperation in communication networks , to appear .elia , k. r. kumar , s. a. pawar , p. v. kumar , and h. lu , `` explicit , minimum - delay space - time codes achieving the diversity - multiplexing gain tradeoff , '' _ ieee trans .inf . theory _ ,52 , no .9 , pp . 38693884 , sep .2006 .s. borade , l. zheng , and r. gallager , `` amplify and forward in wireless relay networks : rate , diversity and network size , '' _ ieee trans .inf . theory _ ,2007 , special issue on relaying and cooperation in communication networks , to appear .p. wolniansky , g. foschini , g. golden , and r. valenzuela , `` v - blast : an architecture for realizing very high data rates over the rich - scattering wireless channel , '' in _ proc .of the ursi international symposium on signal , systems , and electronics conference _ , new york , 1998 , pp .295300 .p. elia and p. v. kumar , `` approximately universal optimality over several dynamic and non - dynamic cooperative diversity schemes for wireless networks . ''[ online ] .available : http://fr.arxiv.org/pdf/cs.it/0512028 f. oggier and e. viterbo , `` algebraic number theory and code design for rayleigh fading channels , '' in _ foundations and trends in communications and information theory _ , 2004 , vol . 1 , no . 3 , pp .333415 . e.bayer - fluckiger , f. oggier , and e. viterbo , `` new algebraic constructions of rotated -lattice constellations for the rayleigh fading channel , '' _ ieee trans .inf . theory _50 , no . 4 ,pp . 702714 , apr .2004 .s. h. simon , a. l. moustakas , and l. marinelli , `` capacity and character expansions : moment generating function and other exact results for mimo correlated channels , '' _ ieee trans .inf . theory _ ,52 , no . 12 , pp .53365351 , dec . 2006 .s. yang and j .- c .belfiore , `` diversity - multiplexing tradeoff of double scattering mimo channels , '' _ ieee trans .inf . theory _ , mar .2006 , submitted for publication .[ online ] .available : http://arxiv.org/pdf/cs.it/0603124 j .- c .belfiore , g. rekaya , and e. viterbo , `` the golden code : a full - rate space - time code with non - vanishing determinants , '' _ ieee trans .inf . theory _ ,51 , no . 4 , pp . 14321436 , apr . 2005 . | we consider _ slow _ fading relay channels with a single multi - antenna source - destination terminal pair . the source signal arrives at the destination via hops through layers of relays . we analyze the diversity of such channels with _ fixed _ network size at _ high snr_. in the clustered case where the relays within the same layer can have full cooperation , the cooperative decode - and - forward ( df ) scheme is shown to be optimal in terms of the diversity - multiplexing tradeoff ( dmt ) . the upper bound on the dmt , the cut - set bound , is attained . 
in the non - clustered case , we show that the naive amplify - and - forward ( af ) scheme has the maximum multiplexing gain of the channel but is suboptimal in diversity , as compared to the cut - set bound . to improve the diversity , space - time relay processing is introduced through the parallel partition of the multihop channel . the idea is to let the source signal go through different `` af paths '' in the multihop channel . this _ parallel af scheme _ creates a parallel channel in the time domain and has the maximum diversity if the partition is properly designed . since this scheme does not achieve the maximum multiplexing gain in general , we propose a _ flip - and - forward _ ( ff ) scheme that is built from the parallel af scheme . it is shown that the ff scheme achieves both the maximum diversity and multiplexing gains in a distributed multihop channel of arbitrary size . in order to realize the dmt promised by the relaying strategies , approximately universal coding schemes are also proposed . relay channel , multiple - input multiple - output ( mimo ) , multihop , diversity - multiplexing tradeoff ( dmt ) , amplify - and - forward ( af ) . |
what is all the fuss about noise ? in this review we endeavor to convey the excitement and promise of studies of the cosmic microwave background ( cmb ) radiation to scientists not engaged in these studies , particularly to particle and nuclear physicists .although the techniques for both detection and data processing are quite far apart from those familiar to our intended audience , the science goals are aligned .we do not emphasize mathematical rigor , but rather attempt to provide insight into ( a ) the processes that allow extraction of fundamental physics from the observed radiation patterns and ( b ) some of the most fruitful methods of detection . in section 1 ,we begin with a broad outline of the most relevant physics that can be addressed with the cmb and its polarization .we then treat the early history of the field , how the cmb and its polarization are described , the physics behind the acoustic peaks , and the cosmological physics that comes from cmb studies .section 2 presents the important foreground problem : primarily galactic sources of microwave radiation .the third section treats detection techniques used to study these extremely faint signals .the promise ( and challenges ) of future studies is presented in the last two sections . in keeping with our purposes, we do not cite an exhaustive list of the ever expanding literature on the subject , but rather indicate several particularly pedagogical works . herewe briely review the now standard framework in which cosmologists work and for which there is abundant evidence .we recommend readers to the excellent book modern cosmology by dodelson ( 2 ) . early in its history( picoseconds after the big bang ) , the energy density of the universe was divided among matter , radiation , and dark energy . the matter sectorconsisted of all known elementary particles and included a dominant component of dark matter , stable particles with negligible electromagnetic interactions .photons and neutrinos ( together with the kinetic energies of particles ) comprised the radiation energy density , and the dark energy component some sort of fluid with a negative pressure appears to have had no importance in the early universe , although it is responsible for its acceleration today .matter and radiation were in thermal equilibrium , and their combined energy density drove the expansion of space , as described by general relativity . as the universe expanded , wavelengths were stretched so that particle energies ( and hence the temperature of the universe ) decreased : t( ) = t(0)(1 + ) , where is the redshift and t(0 ) is the temperature at = 0 , or today .there were slight overdensities in the initial conditions that , throughout the expansion , grew through gravitational instability , eventually forming the structure we observe in todays universe : myriad stars , galaxies , and clusters of galaxies .the universe was initially radiation dominated .most of its energy density was in photons , neutrinos , and kinetic motion . 
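as a minimal numerical reading of the temperature scaling law quoted above , the short python sketch below evaluates it at two representative redshifts ; the present - day temperature of 2.725 k and the redshift values are standard numbers assumed here rather than values taken from the text .

```python
# a quick numerical reading of the scaling t(z) = t(0) * (1 + z) quoted above;
# t(0) = 2.725 k and the two redshifts are standard reference values assumed here.
T0 = 2.725          # present-day cmb temperature in kelvin
k_B_eV = 8.617e-5   # boltzmann constant in ev per kelvin

def cmb_temperature(z):
    """blackbody temperature of the cmb at redshift z."""
    return T0 * (1.0 + z)

for z in (1100, 3400):   # roughly decoupling and matter-radiation equality
    T = cmb_temperature(z)
    print(f"z = {z:5d}:  T ~ {T:6.0f} K  (~{T * k_B_eV:.2f} eV)")
```

the z ~ 1100 entry reproduces the roughly 3000 k decoupling temperature mentioned below .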
after the universe cooled to the point at which the energy in rest mass equaled that in kinetic motion ( matter - radiation equality ) , the expansion rate slowed and the universe became matter dominated , with most of its energy tied up in the masses of slowly moving , relatively heavy stable particles : the proton and deuteron from the baryon sector and the dark matter particle(s ) .the next important era is termed either decoupling , recombination , or last scattering .when the temperature reached roughly 1 ev , atoms ( mostly h ) formed and the radiation cooled too much to ionize .the universe became transparent , and it was during this era that the cmb we see today was emitted , when physical separations were 1000 times smaller than today ( ) .at this point , less than one million years into the expansion , when electromagnetic radiation ceased playing an important dynamical role , baryonic matter began to collapse and cool , eventually forming the first stars and galaxies .later , the first generation of stars and possibly supernova explosions seem to have provided enough radiation to completely reionize the universe ( at ) . throughout these stages ,the expansion was decelerated by the gravitational force on the expanding matter .now we are in an era of cosmic acceleration ( ) , where we find that approximately 70% of the energy density is in the fluid that causes the acceleration , 25% is in dark matter , and just 5% is in baryons , with a negligible amount in radiation .the cmb is a record of the state of the universe at a fraction of a million years after the big bang , after a quite turbulent beginning , so it is not immediately obvious that any important information survives . certainly the fundamental information available in the collisions of elementary particles is best unraveled by observations within nanoseconds of the collision . yet even in this remnant radiation lies the imprint of fundamental features of the universe at its earliest moments .one of the most important features of the cmb is its planck spectrum .it follows the blackbody curve to extremely high precision , over a factor of approximately 1000 in frequency ( see figure 1 ) .this implies that the universe was in thermal equilibrium when the radiation was released ( actually long before , as we see below ) , which was at a temperature of approximately 3000 k. today it is near 3 k. an even more important feature is that , to better than a part in 10 , this temperature is the same over the entire sky .this is surprising because it strongly implies that everything in the observable universe was in thermal equilibrium at one time in its evolution . yet at any time and place in the expansion history of the universe , there is a causal horizon defined by the distance light ( or gravity ) has traveled since the big bang ; at the decoupling era , this horizon corresponded to an angular scale of approximately 1 , as observed today .the uniformity of the cmb on scales well above 1 is termed the horizon problem .the most important feature is that there are differences in the cmb temperature from place to place , at the level of 10 , and that these fluctuations have coherence beyond the horizon at the time of last scattering .the most viable notion put forth to address these observations is the inflationary paradigm , which postulates a very early period of extremely rapid expansion of the universe .its scale factor increased by approximately 21 orders of magnitude in only approximately 10 s. 
before inflation , the small patch that evolves into our observable universe was likely no larger across than the planck length , its contents in causal contact and local thermodynamic equilibrium . the process of superluminal inflation disconnects regions formally in causal contact .when the expansion slowed , these regions came back into the horizon and their initial coherence became manifest .the expansion turns quantum fluctuations into ( nearly ) scale - invariant cmb inhomogeneities , meaning that the fluctuation power is nearly the same for all threedimensional fourier modes .so far , observations agree with the paradigm , and scientists in the field use it to organize all the measurements .nevertheless , we are far from understanding the microphysics driving inflation .the number of models and their associated parameter spaces greatly exceed the number of relevant observables .new observations , particularly of the cmb polarization , promise a more direct look at inflationary physics , moving our understanding from essentially kinematical to dynamical . for particle physicists , probing microphysics at energy scales beyond accelerators using cosmological observationsis attractive .the physics of inflation may be associated with the grand unifcation scale , and if so , there could be an observable signature in the cmb : gravity waves .metric perturbations , or gravity waves ( also termed tensor modes ) , would have been created during inflation , in addition to the density perturbations ( scalar modes ) that give rise to the structure in the universe today . in the simplest of inflationary models, there is a direct relation between the energy scale of inflation and the strength of these gravity waves .the notion is that the universe initially had all its energy in a scalar field displaced from the minimum of its potential . is suitably constructed so that slowly rolls down its potential , beginning the inflationary era of the universe , which terminates only when approaches its minimum .inflation does not predict the level of the tensor ( or even scalar ) modes .the parameter is the tensor - to - scalar ratio for fluctuation power ; it depends on the energy scale at which inflation began . specifically , the initial height of the potential depends on , as .a value of , perhaps the smallest detectable level , corresponds to .the tensor modes leave distinct patterns on the polarization of the cmb , which may be detectable .this is now the most important target for future experiments .they also have effects on the temperature anisotropies , which currently limit to less than approximately 0.3 .it is a remarkable fact that even a slight neutrino mass affects the expansion of the universe .when the dominant dark matter clusters , it provides the environment for baryonic matter to collapse , cool , and form galaxies . as described above , the growth of these structures becomes more rapid in the matter - dominated era . 
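stepping back briefly to the tensor - to - scalar ratio introduced above , a commonly quoted slow - roll relation ( assumed here , since it is not written out in the text ) ties it to the energy scale of inflation :

```latex
% relation between the tensor-to-scalar ratio r and the height of the inflaton
% potential in single-field slow-roll inflation (standard form, assumed here):
\begin{equation}
  V^{1/4} \;\simeq\; 1.06 \times 10^{16}\,\mathrm{GeV}\,
  \left(\frac{r}{0.01}\right)^{1/4} ,
\end{equation}
% so r ~ 0.01 corresponds to an energy scale near the grand-unification scale,
% and the rough current upper limit r < 0.3 quoted in the text to a scale only
% a factor of about 2.3 higher.
```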
if a significant fraction of the dark matter were in the form of neutrinos with electron - volt - scale masses ( nonrelativistic today ) , these would have been relativistic late enough in the expansion history that they could have moved away from overdense regions and suppressed structure growth .such suppression alters the cmb patterns and provides some sensitivity to the sum of the neutrino masses .note also that gravitational effects on the cmb in its passage from the epoch ( or surface ) of last scattering to the present leave signatures of that structure and give an additional ( and potentially more sensitive ) handle on the neutrino masses ( see section 1.9.2 ) .we know from the cmb that the geometry of the universe is consistent with being flat .that is , its density is consistent with the critical density .however , the overall density of matter and radiation discerned today ( the latter from the cmb directly ) falls short of accounting for the critical density by approximately a factor of three , with little uncertainty .thus , the cmb provides indirect evidence for dark energy , corroborating supernova studies that indicate a new era of acceleration . because the presence and possible evolution of a dark energy component alter the expansion history of the universe, there is the promise of learning more about this mysterious component . in 1965 ,penzias and wilson ( 3 ) , in trying to understand a nasty noise source in their experiment to study galactic radio emission , discovered the cmb arguably the most important discovery in all the physical sciences in the twentieth century .shortly thereafter , scientists showed that the radiation was not from radio galaxies or reemission of starlight as thermal radiation .this first measurement was made at a central wavelength of 7.35 cm , far from the blackbody peak .the reported temperature was .however , for a blackbody , the absolute flux at any known frequency determines its temperature .figure 1 shows the spectrum of detected radiation for different temperatures . there is a linear increase in the peak position and in the flux at low frequencies ( the rayleigh - jeans part of the spectrum ) as temperature increases . ]multiple efforts were soon mounted to confirm the blackbody nature of the cmb and to search for its anisotropies .partridge ( 4 ) gives a very valuable account of the early history of the field .however , there were false observations , which was not surprising given the low ratio of signal to noise . measurements of the absolute cmb temperature are at milli - kelvin levels , whereas relative measurements between two places on the sky are at micro - kelvin levels . 
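a small sketch of the blackbody spectrum just described may be useful ; it evaluates the planck brightness and its rayleigh - jeans limit at the penzias - wilson frequency and near the peak of the spectrum , using standard physical constants and an assumed present - day temperature of 2.725 k .

```python
import numpy as np

# planck brightness of the cmb and its rayleigh-jeans limit; the constants are
# standard si values and t = 2.725 k is assumed for today's temperature.
h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def planck_brightness(nu_ghz, T=2.725):
    """specific intensity b_nu in w m^-2 hz^-1 sr^-1."""
    nu = nu_ghz * 1e9
    return 2.0 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def rayleigh_jeans(nu_ghz, T=2.725):
    """low-frequency limit, 2 nu^2 k_b t / c^2."""
    nu = nu_ghz * 1e9
    return 2.0 * nu**2 * k_B * T / c**2

# at the penzias-wilson wavelength of 7.35 cm (about 4.1 ghz) the two agree to a
# few percent; near the peak of the spectrum (~160 ghz) they differ strongly.
for nu in (4.1, 160.0):
    print(nu, planck_brightness(nu), rayleigh_jeans(nu))
```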
by 1967 , partridge and wilkinson had shown , over large regions of the sky , that , leading to the conclusion that the universe was in thermal equilibrium at the time of decoupling ( 4 ) .however , nonthermal injections of energy even at much earlier times , for example , from the decays of long - lived relic particles , would distort the spectrum .it is remarkable that current precise measurements of the blackbody spectrum can push back the time of significant injections of energy to when the universe was barely a month old ( 5 ) .thus , recent models that attribute the dark matter to gravitinos as decay products of long - lived supersymmetric weakly interacting massive particles ( susy wimps ) ( 6 ) can only tolerate lifetimes of less than approximately one month .the solar system moves with velocity , causing a dipole anisotropy of a few milli - kelvins , first detected in the 1980s .( note that the direction of our motion was not the one initially hypothesized from motions of our local group of galaxies . )the first detection of primordial anisotropy came from the cobe satellite ( 7 ) in 1992 , at the level of 10 ( 30 ) , on scales of approximately 10 and larger .the impact of this detection matched that of the initial discovery .it supported the idea that structure in the universe came from gravitational instability to overdensities .the observed anisotropies are a combination of the original ones at the time of decoupling and the subsequent gravitational red- or blueshifting as photons leave over- or underdense regions . herewe describe the usual techniques for characterizing the temperature field .first , we define the normalized temperature in direction on the celestial sphere by the deviation from the average : .next , we consider the multipole decomposition of this temperature field in terms of spherical harmonics : where the integral is over the entire sphere . if the sky temperature field arises from gaussian random fluctuations , then the field is fully characterized by its power spectrum .the order describes the angular orientation of a fluctuation mode , but the degree ( or multipole ) describes its characteristic angular size .thus , in a universe with no preferred direction , we expect the power spectrum to be independent of .finally , we define the angular power spectrum by . herethe brackets denote an ensemble average over skies with the same cosmology .the best estimate of is then from the average over m. because there are only the ( ) modes with which to detect the power at multipole , there is a fundamental limit in determining the power .this is known as the cosmic variance ( just the variance on the variance from a finite number of samples ) : the full uncertainty in the power in a given multipole degrades from instrumental noise , finite beam resolution , and observing over a finite fraction of the full sky , as shown below in equation 9 . 
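the cosmic - variance limit described above can be made concrete with a few numbers ; the sqrt( 2 / ( 2l + 1 ) ) form used below is the standard expression for gaussian fields and simply reflects the ( 2l + 1 ) available modes per multipole .

```python
import numpy as np

# cosmic-variance floor: with only (2l + 1) independent a_lm per multipole, even
# a perfect, noiseless, full-sky experiment measures c_l with a fractional
# uncertainty of sqrt(2 / (2l + 1)) (standard expression for gaussian fields).
def cosmic_variance_fraction(ell):
    return np.sqrt(2.0 / (2 * ell + 1))

for ell in (2, 10, 100, 1000):
    print(f"l = {ell:4d}:  delta c_l / c_l = {cosmic_variance_fraction(ell):.3f}")
```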
for historical reasons , the quantity that is usually plotted , sometimes termed the tt ( temperature - temperature correlation ) spectrum , is where is the blackbody temperature of the cmb . this is the variance ( or power ) per logarithmic interval in and is expected to be ( nearly ) uniform in inflationary models ( scale invariant ) over much of the spectrum . this normalization is useful in calculating the contributions to the fluctuations in the temperature in a given pixel from a range of values . [ figure 2 caption : data from other experiments are shown , in addition to the best - fit cosmological model to the wmap data alone ; note the multipole scale on the bottom and the angular scale on the top . figure courtesy of the wmap science team . ] figure 2 shows the current understanding of the temperature power spectrum ( from here on we redefine the spectrum to carry temperature units ) . the region below indicates the initial conditions . these modes correspond to fourier modes at the time of decoupling , with wavelengths longer than the horizon scale . note that were the sky describable by random white noise , the spectrum would be flat and the tt power spectrum , defined by equation 3 , would have risen in this region like the square of the multipole . the ( pleasant ) surprise was the observation of finite power at these superhorizon scales . at high values , there are acoustic oscillations , which are damped at even higher values . the positions and heights of the acoustic - oscillation peaks reveal fundamental properties about the geometry and composition of the universe , as we discuss below . the cmb data reveal that the initial inhomogeneities in the universe were small , with overdensities and underdensities in the dark matter , protons , electrons , neutrinos , and photons , each having the distribution that would arise from a small adiabatic compression or expansion of their admixture . an overdense region grows by attracting more mass , but only after the entire region is in causal contact . we noted that the horizon at decoupling corresponds today to approximately 1 degree on the sky . only regions smaller than this had time to compress before decoupling . for sufficiently small regions , enough time elapses that compression continues until the photon pressure is sufficient to halt the infall ; the photons are coupled to the electrons via thomson scattering , and the protons follow the electrons to keep a charge balance . inflation provides the initial conditions : the perturbations start with zero velocity . decoupling preserves a snapshot of the state of the photon fluid at that time . excellent pedagogical descriptions of the oscillations can be found at _ http://background.uchicago.edu//_. other useful pages are _ http://wmap.gsfc.nasa.gov/,http://space.mit.edu/home/tegmark/index.html _ and _ http://www.astro.ucla.edu/ / intro.html_.
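for reference , the standard forms of the quantities described at the start of this passage are collected below , reconstructed under the usual conventions ; the exact normalizations are assumptions , since the original equations did not survive .

```latex
\begin{equation}
  \mathcal{D}_\ell \;\equiv\; \frac{\ell(\ell+1)}{2\pi}\, C_\ell \, T_{\rm cmb}^2 ,
  \qquad
  \Big\langle \Big(\frac{\Delta T}{T}\Big)^{2} \Big\rangle
  \;=\; \sum_{\ell} \frac{2\ell+1}{4\pi}\, C_\ell
  \;\approx\; \frac{1}{T_{\rm cmb}^2}\int \mathcal{D}_\ell \; d\ln\ell ,
\end{equation}
% the second relation is what makes d_l the "power per logarithmic interval in l"
% referred to in the text; it is flat for a scale-invariant spectrum.
```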
perturbations of particular sizes may have undergone ( a ) one compression , ( b ) one compression and one rarefaction , ( c ) one compression , one rarefaction , and one compression , and so on .extrema in the density field result in maxima in the power spectrum .consider a standing wave permeating space with frequency and wave number , where these are related by the velocity of displacements ( the sound speed , ) in the plasma : .the wave displacement for this single mode can then be written as .the displacement is maximal at time of decoupling for we add the tt subscript to label these wave numbers associated with maximal autocorrelation in the temperature .note that even in this tightly coupled regime , the universe at decoupling was quite dilute , with a physical density of less than . because the photons diffuse their mean free path is not infinitely short this pattern does not go on without bound .the overtones are damped , and in practice only five or six such peaks will be observed , as seen in figure 2 . to help explain these ideas , we reproduce a few frames from an animation by w. hu .figure 3 shows a density fluctuation on the sky from a single mode and how it appears to an observer at different times .the figure shows the particle horizon just after decoupling .this represents the farthest distance one could in principle see approximately the speed of light times the age of the universe .an observer at the center of the gure could not by any means have knowledge of anything outside this region .of course , just after decoupling , the observer could see a far shorter distance .only then could light propagate freely .the subsequent frames show how the particle horizon grows to encompass more corrugations of the original density fluctuation . at firstthe observer sees a dipole , later a quadrupole , then an octopole , and so on , until the present time when that single mode in density inhomogeneities creates very high multipoles in the temperature anisotropy .it is instructive to think of how the temperature observed today at a spot on the sky arises from the local moments in the temperature field at the time of last scattering .it is only the lowest three moments that contribute to determining the anisotropies .the monopole terms are the ones transformed into the rich angular spectra .the dipole terms also have their contribution : the motion in the fluid oscillations results in doppler shifts in the observed temperatures .polarization , we see below , comes from local quadrupoles .these frames show one superhorizon temperature mode just after decoupling with representative photons last scattering and heading toward the observer at the center .left to right : just after decoupling ; the observer s particle horizon when only the temperature monopole can be detected ; som e time later when the quadrupole is detected ; later still when the 12-pole is detected ; and today , a very high , well aligned multipole , from just this single mode in -space , is detected .figure courtesy of w. hu [ hu-1 ] inflation is a mechanism whereby fluctuations are created without violating causality .there does not seem to be a better explanation for the observed regularities .nevertheless , wolfgang pauli s famous statement about the neutrino comes to mind : i have done a terrible thing : i have postulated a particle that can not be detected ! 
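before moving on , a rough numerical sketch of where the compression extrema described above land in multipole space ; the sound horizon and distance to last scattering used below are representative assumed values .

```python
import numpy as np

# rough location of the acoustic peaks: the tt extrema occur for wavenumbers with
# k_n * s_* = n * pi (s_* the comoving sound horizon at decoupling), and a mode k
# projects to multipole l ~ k * d_a.  the numbers s_* ~ 145 mpc and d_a ~ 14000 mpc
# are representative assumptions, not values quoted in the text.
s_star, d_A = 145.0, 14000.0   # mpc

for n_peak in (1, 2, 3):
    k_n = n_peak * np.pi / s_star
    print(f"peak {n_peak}:  l ~ {k_n * d_A:.0f}")
# this naive projection puts the first peak near l ~ 300; the observed position
# (l ~ 220) is shifted by projection and potential-driving effects that this
# one-line estimate ignores.
```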
sometimes it seems that inflation is an idea that can not be tested , or tested incisively . of course pauli's neutrino hypothesis did test positive , and similarly there is hope that the idea of inflation can reach the same footing . still , we have not ( yet ) seen any scalar field in nature . we discuss what has been claimed as the smoking gun test of inflation : the eventual detection of gravity waves in the cmb . however , will we ever know with certainty that the universe grew in volume by a factor of 10 in something like ? experiments have now shown that the cmb is polarized , as expected . researchers now think that the most fruitful avenue to fundamental physics from the cmb will be in precise studies of the patterns of the polarization . this section treats the mechanisms responsible for the generation of the polarization and how this polarization is described . if there is a quadrupole anisotropy in the temperature field around a scattering center , even if that radiation is unpolarized , the scattered radiation will be as shown in figure 4 : a linear polarization will be generated . [ figure 4 caption : left : the incoming quadrupole pattern produces linear polarization along the x - direction ; in terms of the stokes parameters , this is q , the power difference detected along the x - and y - directions . linear polarization needs one other parameter , corresponding to the power difference between 45 and 135 degrees from the x - axis ; this parameter is easily shown to be stokes u . right : e and b polarization patterns . the length of the lines represents the degree of polarization , while their orientation gives the direction of maximum electric field . frames courtesy of w. hu . ] the quadrupole is generated during decoupling , as shown in figure 3 . because the polarization arises from scattering but said scattering dilutes the quadrupole , the polarization anisotropy is much weaker than that in the temperature field . indeed with each scatter on the way to equilibrium , the polarization is reduced . any remaining polarization is a direct result of the cessation of scattering . for this reason , the polarization peaks at higher values than does the temperature anisotropy . the local quadrupole on scales that are large in comparison to the mean free path is diluted from multiple scattering . the polarization field is both more complicated and richer than the temperature field .
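the stokes parameters referred to in the figure 4 caption can be written out explicitly ; the expressions below follow the usual conventions and are a reconstruction , since the symbols themselves were lost from the text .

```latex
\begin{equation}
  Q \;=\; \langle |E_x|^2 \rangle - \langle |E_y|^2 \rangle , \qquad
  U \;=\; \langle |E_a|^2 \rangle - \langle |E_b|^2 \rangle ,
\end{equation}
% where (a, b) are axes rotated by 45 degrees from (x, y).  under a rotation of
% the coordinate frame by an angle psi, (Q, U) transform as a spin-2 quantity,
% Q' + i U' = e^{-2 i psi} (Q + i U), which is why the frame-independent
% description is the decomposition into e and b modes introduced next.
```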
at each point in the sky, one must specify both the degree of polarization and the preferred direction of the electric field .this is a tensor field that can be decomposed into two types , termed e and b , which are , respectively , scalar and pseudoscalar fields , with associated power spectra .examples of these polarization fields are depicted schematically in figure 4 .the e and b fields are more fundamental than the polarization field on the sky , whose description is coordinate - system dependent .in addition , e modes arise from the density perturbations ( which do not produce b modes ) that we describe , whereas the b modes come from the tensor distortions to the space - time metric ( which do have a handedness ) .we mention here that the e and b fields are nonlocal .their extraction from measurements of polarization over a set of pixels , often in a finite patch of sky , is a well - developed but subtle procedure ( see section 3.3 ) .the peaks in the ee ( e - polarization correlated with itself ) spectrum should be 180 out of phase with those for temperature : polarization results from scattering and thus is maximal when the fluid velocity is maximal . calculating the fluid velocity for the mode in section 1.6, we find , defining modes with maximal ee power .the te ( e - polarization correlated with the temperature field ) spectrum how modes in temperature correlate with those with e polarization is also of cosmological interest , with its own peak structure .here we are looking at modes that have a maximum at decoupling in the product of their temperature and e - mode polarization ( or velocity ) .similarly , the appropriate maxima ( which in this case can be positive or negative ) are obtained when thus , between every peak in the tt power spectrum there should be one in the ee , and between every tt and ee pair of peaks there should be one in the te .figure 5 shows the ee results in addition to the expected power spectra in the standard cosmological model .measurements of the te cross correlation are also shown .the pattern of peaks in both power spectra is consistent with what was expected .what was unexpected was the enhancement at the lowest values in the ee power spectrum .this is discussed in the next section .the experiments reported in figure 5 , with 20 or fewer detectors , use a variety of techniques and operate in different frequency ranges .this is important in dealing with astrophysical foregrounds ( see section 2 ) that have a different frequency dependence from that of the cmb . and that to display features in the very low range , we plot .,title="fig : " ] and that to display features in the very low range , we plot .,title="fig : " ] limits from current experiments on the b - mode power are now at the level of 1 - 10 , far from the expected signal levels shown in figure 6 .the peak in the power spectrum ( for the gravity waves ) is at , the horizon scale at decoupling .the reader may wonder why the b modes fall off steeply above this scale and show no acoustic oscillations .the reason is simple : a tensor mode will give , for example , a compression in the x - direction followed by a rarefaction in the y - direction , but will not produce a net overdensity that would subsequently contract . 
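the peak conditions sketched earlier in this passage can be summarized compactly ; the expressions below are reconstructed in the notation of section 1.6 and should be read as the standard forms rather than quotations of the original equations .

```latex
% tt extrema where the density contrast is maximal, ee extrema where the fluid
% velocity is maximal, and te extrema in between:
\begin{equation}
  k_n^{TT} s_* = n\pi , \qquad
  k_n^{EE} s_* = \big(n - \tfrac{1}{2}\big)\pi , \qquad
  k_n^{TE} s_* = \big(2n - 1\big)\tfrac{\pi}{4} ,
\end{equation}
% so between every pair of tt peaks lies an ee peak, and between every tt / ee
% pair lies a te extremum (which may be positive or negative), as stated above.
```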
in the final sectionwe discuss experiments with far greater numbers of detectors aimed specifically at b - mode science .note that such gravity waves have frequencies today of order hz .however , if their spectrum approximates one of scale invariance , they would in principle be detectable at frequencies nearer 1 hz , such as in the lisa experiment .this is discussed more fully in reference 10 . in this sectionwe briefly discuss three important processes after decoupling : rescattering of the cmb in the reionized plasma of the universe , lensing of the cmb through gravitational interactions with matter , and scattering of the cmb from hot gas in galaxy clusters .although these can be considered foregrounds perturbing the primordial information , each can potentially provide fundamental information . and . the predicted b - mode signal power spectrum due to the distortion of e modes by weak gravitational lensing is also shown .estimated statistical sensitivities for a new space mission ( pink line ) and two sample ground - based experiments , as considered in reference 9 , each with 1000 detectors operating for one year with 100% duty cycle ( dark and light blue lines ) , are shown .experiment i observes 4% of the sky , with a 6-arcmin resolution ; experiment ii observes 0.4% of the sky , with a 1-arcmin resolution . ] the enhancement in the ee power spectrum at the very lowest values in figure 5 is the signature that the universe was reionized after decoupling .this is a subject rich in astrophysics , but for our purposes it is important in that it provides another source for scattering and hence detection of polarization . from the wilkinson microwave anisotropy probe ( wmap ) polarization data ( 11 ) , one can infer an optical depth of order 10% , the fraction of photons scattering in the reionized plasma somewhere in the region of .this new scattering source can be used to detect the primordial gravity waves .the signature will show up at very low values , corresponding to the horizon scale at reionization .figure 6 shows that the region should have substantial effects from gravity waves .most likely , the only means of detecting such a signal is from space , and even from there it will be very difficult .the polarization anisotropies for this very low region are comparable to what is expected from the surface of last scattering ( ) .there are disadvantages to each signature . at the lowest values , galactic foregrounds are more severe , there are fewer modes in which to make a detection , and systematic errors are likely greater . at the higher values ,there is a foreground that arises from e modes turning into b modes through gravitational lensing ( the topic of the next section ) .clearly , it will be important to detect the two signatures with the right relative strengths at these two very different scales. both the temperature and polarization fields will be slightly distorted ( lensed ) when passing collapsing structures in the late universe .the bending of light means that one is not looking ( on the last scattering surface ) where one thinks .although lensing will affect both the polarization and t fields , its largest effect is on the b field , where it shifts power from e to b. 
gravitational distortions , although preserving brightness , do not preserve the e and b nature of the polarization patterns .figure 6 also shows the expected power spectrum of these lensed b modes .because this power is sourced by the e modes , it roughly follows their shape , but with suppressed by a factor of 20 .the peak structure in the e modes is smoothed , as the structures doing the lensing are degree scale themselves .owing to the coherence of the lensing potential for these modes , there is more information than just the power spectrum , and work is ongoing to characterize the expected cross correlation between different multipole bands .this signal should be detectable in next - generation polarization experiments .for our purposes , the most interesting aspect of this lensing is the handle it can potentially give on the masses of the neutrinos , as more massive neutrinos limit the collapse of matter along the cmb trajectories .all other parameters held fixed , there is roughly a factor - of - two change in the magnitude of the b signal for a 1-ev change in the mean neutrino mass . at very small angular scales values of a few thousand , way beyond where the acoustic oscillations are damped there are additional effects on the power spectra that result from the scattering of cmb photons from electrons after the epoch of reionization , including scattering from gas heated from falling deep in the potential wells of galaxy clusters ( the sunyaev- zeldovich , or sz , effect ) .these nonlinear effects are important as they can help in untangling ( a ) when the first structures formed and ( b ) the role of dark energy . in this section ,we show how the power spectrum information is used to determine important aspects of the universe .this is normally known as parameter estimation , where the parameters are those that define our cosmology .the observable power spectrum is a function of at least 11 such basic parameters .as we discuss below , some are better constrained than others .first , there are four parameters that characterize the primordial scalar and tensor fluctuation spectra before the acoustic oscillations , each of which is assumed to follow a power law in wave number .these four are the normalization of the scalar fluctuations ( ) , the ratio of tensor to scalar fluctuations , and the spectral indices for both ( historically denoted with and ) .second , there is one equation - of - state parameter ( ) that is the ratio of the pressure of the dark energy to its energy density , and one parameter that gives the optical depth ( ) from the epoch of reionization . finally , there are five parameters that characterize the present universe : its rate of expansion ( hubble constant , with km s mpc ) , its curvature ( ) , and its composition ( baryon density , matter density , and dark energy density ) .the latter three are described in terms of energy densities with respect to the critical density normalized to the present epoch : , and .just 10 of these are independent as .even though the cmb data set itself consists of hundreds of measurements , they are not sufficiently orthogonal with respect to the 10 independent parameters for each to be determined independently ; there are significant degeneracies .hence , it is necessary to make assumptions that constrain the values of those parameters upon which the data have little leverage . 
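to make the parameter bookkeeping above concrete , a minimal sketch with representative ( assumed ) values follows ; the closure relation written at the end is the constraint that removes one of the eleven parameters .

```python
# representative (assumed) values close to current fits, purely to make the
# bookkeeping concrete; none of these numbers are quoted from the text.
params = {
    "A_s": 2.1e-9,          # scalar fluctuation amplitude
    "n_s": 0.96,            # scalar spectral index
    "r": 0.0,               # tensor-to-scalar ratio
    "n_t": 0.0,             # tensor spectral index
    "w": -1.0,              # dark-energy equation of state
    "tau": 0.09,            # optical depth from reionization
    "h": 0.70,              # hubble constant in units of 100 km/s/mpc
    "omega_b": 0.046,       # baryon density
    "omega_m": 0.27,        # total matter density (baryons included)
    "omega_lambda": 0.73,   # dark-energy density
    "omega_k": 0.0,         # curvature
}

# closure (assumed form): omega_m + omega_lambda + omega_k = 1, so only ten of
# the eleven parameters are independent.
assert abs(params["omega_m"] + params["omega_lambda"] + params["omega_k"] - 1.0) < 1e-9
```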
in some cases , such prior assumptions ( priors ) can have large effects on the other parameters , and there is as yet no standard means of reporting results .several teams have done analyses [ wmap ( 11 , 12 ) , cbi ( 13 ) , boomerang ( 14 ) , see also reference 15 ] . herewe first discuss the leverage that the cmb power spectra have on the cosmological parameters .then we give a flavor for the analyses , together with representative results .we consider analyses , done by the several teams , with just the six most important parameters : , and , where the other five are held fixed .for this discussion we are guided by reference 12 . completely within cmb data, there is a geometrical degeneracy between , a contribution to the energy density from the curvature of space , and .however , taking a very weak prior of , the wmap team , using just their first - year data , determined that , that is , no evidence for curvature .we assume unless otherwise noted .this conclusion has gotten stronger with the three - year wmap data together with other cmb results , and it is a prediction of the inflationary scenario .nevertheless , we emphasize that it is an open experimental issue .the position of the first acoustic peak reveals that the universe is flat or nearly so .as we describe above , the generation of acoustic peaks is governed by the ( comoving ) sound horizon at decoupling , ( i.e. , the greatest distance a density wave in the plasma could traverse , scaled to today s universe ) .the sound horizon depends on , and the radiation density , but not on , or the spectral tilt .the peak positions versus angular multipole are then determined by , where the quantity , the angular diameter distance , is the distance that properly takes into account the expansion history of the universe between decoupling and today so that when is multiplied by an observed angle , the result is the feature size at the time of decoupling . in a nonexpanding universe , this would simply be the physical distance .the expression depends on the ( evolution of the ) content of the universe . for a flat universe, we have in this expression , indicates the ( well - known ) radiation density , and the dilutions of the different components with redshift , between decoupling and the present , enter explicitly .it is easy to see how one in principle determines spectral tilt .if one knew all the other parameters , then the tilt would be found from the slope of the power spectrum after the removal of the other contributions .however , there is clearly a coupling to other parameters .experiments with a very fine angular resolution will determine the power spectrum at very high values , thereby improving the measurement of the tilt .here we discuss the primary dependences of the acoustic peak heights on and . increasing decreases the peak heights . with greater matter density , the era of equalityis pushed to earlier redshifts , allowing the dark matter more time to form deeper potential wells .when the baryons fall into these wells , their mass has less effect on the development of the potential so that the escaping photons are less redshifted than they would be , yielding a smaller temperature contrast . 
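as an aside on the geometric part of this argument , the sketch below evaluates the flat - universe comoving distance integral quoted above and the resulting naive first - peak multipole ; all numerical values , including the sound horizon , are representative assumptions .

```python
import numpy as np
from scipy.integrate import quad

# comoving distance to last scattering for a flat universe, and the naive
# first-peak multipole l ~ pi * d_a / s_*; all numbers are assumed.
h, omega_m, omega_r = 0.70, 0.27, 8.4e-5
omega_lambda = 1.0 - omega_m - omega_r      # flatness
H0, c = 100.0 * h, 2.998e5                  # km/s/mpc and km/s
z_dec, s_star = 1090.0, 145.0               # decoupling redshift; sound horizon in mpc

def E(z):
    """dimensionless expansion rate h(z)/h0 for the flat case."""
    return np.sqrt(omega_r * (1 + z) ** 4 + omega_m * (1 + z) ** 3 + omega_lambda)

d_A, _ = quad(lambda z: c / (H0 * E(z)), 0.0, z_dec)   # comoving distance in mpc
print(f"comoving distance to last scattering ~ {d_A:.0f} mpc")
print(f"naive first-peak multipole  l ~ pi d_a / s_* ~ {np.pi * d_A / s_star:.0f}")
```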
as to , increasing it decreases the second peak but enhances that of the third because the inertia in the photon - baryon fluid is increased , leading to hotter compressions and cooler rarefactions ( 16 ) .the peak - height ratios give the three parameters , and , with a precision just short of that from a full analysis of the power spectrum ( discussed in section 3.4.4 ) .following wmap , we define the ratio of the second to the first peak by , the ratio of the third to the second peak by , and the ratio of the first to the second peak in the polarization - temperature cross - correlation power spectrum by .table 1 shows how the errors in these ratios propagate into parameter errors .we see that all the ratios depend strongly on , and that the ratio of the first two peaks depends strongly on but is also influenced by . for ,the relative dependences on and are reversed .finally , the baryon density has little influence on the ratio of the te peaks .however , increasing deepens potential wells , increasing fluid velocities and the heights of all polarization peaks ..matrix of how errors in the peak ratios ( defined in text ) relate to the parameter errors .[ cols="^,^,^,^",options="header " , ] here we discuss experiments needed to detect the b - mode signals of lensing at large values and of gravity waves at intermediate and small values .table 3 shows the sensitivity required to make 3 detections of several target signals .estimates come from equation 9 , either for the full sky or for a smaller patch where the balance between sample variance and detector noise is optimized .we give expressions for the sensitivity to a feature in the power spectrum centered at and with width and for the fraction of the sky that accomplishes the balance : ( and . for our purposes , both the signal from lensing and from primordial gravity waves have approximately this shape . here is the ( peak ) cosmological signal of interest and is the total sensitivity of the experiment , summing over all the observing time and all the detectors . the lensing detection can be accomplished by observing for a year ( from the ground at a good site such as the atacama desert in chile or the south pole ) a patch of approximately 1.6 square degrees , with detectors having 1.5 times the wmap sensitivity .the table also gives the sensitivities required to detect primordial b modes at various levels of .foregrounds will be a problem for , perhaps more so for the detection of the signal from the reionized plasma than from the surface of last scattering .a satellite experiment can detect the signal from the reionized plasma , where the lensing contamination is nearly negligible . the planck experiment ( 2007 )has an order of magnitude in temperature sensitivity over wmap .polarization sensitivity was not a primary goal .still , much has gone into making sure the residual systematic uncertainties ( and foregrounds ) can be understood sufficiently well to allow the extraction of polarization signals around 50 nk , corresponding to .there is a program of experiments over the coming five to eight years .these will involve , progressively , tens , then hundreds , and nally a thousand or more detectors per experiment and will test polarization modulation schemes , effective scan strategies , foreground - removal methods , and algorithms for separating e and b modes . 
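equation 9 itself is not reproduced here , so the sketch below uses the standard knox form of the power - spectrum error bar , which matches the verbal description of degradation from noise , beam resolution and sky fraction ; the detailed expression should be treated as an assumption .

```python
import numpy as np

# knox-style error bar (assumed form):
#   delta c_l = sqrt(2 / ((2l + 1) * f_sky)) * (c_l + n_l),
#   n_l = w_inv * exp(l (l + 1) theta_b^2 / (8 ln 2)),
# with f_sky the observed sky fraction, w_inv the noise per unit solid angle and
# theta_b the beam fwhm in radians.
def knox_error(ell, C_ell, f_sky, noise_uk_arcmin, beam_fwhm_arcmin):
    theta_b = np.radians(beam_fwhm_arcmin / 60.0)
    w_inv = (noise_uk_arcmin * np.radians(1.0 / 60.0)) ** 2           # uk^2 sr
    N_ell = w_inv * np.exp(ell * (ell + 1) * theta_b ** 2 / (8.0 * np.log(2.0)))
    return np.sqrt(2.0 / ((2 * ell + 1) * f_sky)) * (C_ell + N_ell)

# placeholder numbers, roughly in the spirit of "experiment i" above
print(knox_error(1000, C_ell=6e-3, f_sky=0.04,
                 noise_uk_arcmin=2.0, beam_fwhm_arcmin=6.0))
```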
experiments with tens of detectors are already underway .the sister experiments quad and bicep observe from the south pole , using polarization - sensitive bolometers at 100 and 150 ghz .quad , with a 4-arcmin beam , is optimized for gravitational lensing , whereas bicep , at approximately 40 arcmin , is searching for gravitational waves .mbi and its european analog brain are testing the idea of using bolometers configured as an interferometer , and pappa is a balloon effort using waveguide - coupled bolometers from the goddard space flight center .these latter experiments have beams in the range of 0.5 to 1 .five initiatives at the level of hundreds of detectors have so far been put forth .four use tes bolometers at 90 , 150 , and 220 ghz : clover , the lone european effort , with an 8-arcmin beam ; polarbear , with a 4-arcmin beam ; and ebex and spider , balloon - borne experiments with 8- and 20 70-arcmin beams , respectively ( spider and polarbear will use antenna - coupled devices ) .the fifth uses coherent detectors at 44 and 90 ghz : quiet , initially with a 12-arcmin beam , observing from the atacama desert .all are dedicated ground - up polarization experiments that build their own optical systems .the act and spt groups , supported by the national science foundation , deploy very large telescopes to study both the cosmology of clusters via the sz effect and fine - scale temperature anisotropies , and will likely propose follow - up polarimeters .the reach of the next satellite experiment , termed cmbpol as defined by the three - agency task force in the united states ( 36 ) and termed bpol in europe , is to detect the signal from gravity waves limited only by astrophysical foregrounds . examining figure 6, we see that , and possibly lower values , can be reached.we should know a great deal from the suborbital experiments well before the 2018 target launch date . for studying polarization at large scales , where foregrounds pose their greatest challenge , information from wmap and planckwill be the most valuable .there is considerable promise for new , important discoveries from the cmb , ones that can take us back to when the universe s temperature was between the grand unified theory and planck energy scales .this is particle physics , and while we hope accelerators will provide crucial evidence for , for example , the particle nature of dark matter , exploring these scales seems out of their reach . in some ways, cosmology has followed the path of particle physics : it has its standard model , accounting for all confirmed phenomena . with no compelling theory ,parameter values are not of crucial interest .we can not predict the mass of the top quark , nor can we predict the primordial energy densities .each discipline is checking consistency , as any discrepancies would be a hint of new physics .the cmb field is not as mature as particle physics .it needs considerable detector development , even for current experiments .there is rapid progress , and overall sensitivity continues to increase .foregrounds are certainly not sufficiently known or characterized .there is a great deal of competition in the cmb , like the early days of particle physics before the experiments grew so large that more than one or two teams exploring the same topic worldwide was too costly . 
for the moment, this is good , as each team brings something unique in terms of control of systematics , frequencies , regions of the sky scanned , and detection technology .however , there is a difference in the way results are reported in the two fields . in the cmb field , typically almost nothing is said about an experiment between when it is funded and when it publishes . herepublishing means that results are announced , multiple papers are submitted and circulated , and often there is a full data release , including not only of raw data and intermediate data products but sometimes support for others to repeat or extend the analyses .the positives of this tradition are obvious .however , one negative is that one does not learn the problems an experiment is facing in a timely manner .there is a degree of secrecy among cmb scientists .there are other differences .cmb teams frequently engage theorists to perform the final analysis that yields the cosmological significance of the data .sophisticated analysis techniques are being developed by a set of scientists and their students who do not work with detectors but do generate a growing literature .there are as yet no standardized analysis techniques ; effectively each new experiment invents its own .the days appear to be over where the group of scientists that design , build , and operate an experiment can , by themselves , do the full scientific analysis .another distinction is that there is no one body looking over the field or advising the funding agencies , and private funds sometimes have a major impact .nearly all cmb scientists are working on multiple projects , sometimes as many as four or five , holding that many grants .more time is spent writing proposals and reports and arranging support for junior scientists , for whom there is little funding outside of project funds .this is certainly not the optimum way to fund such an exciting and promising field .the authors have enjoyed many productive conversations with colleagues at their institutions in the course of preparing this review .the authors are particularly grateful for helpful comments from norman jarosik , bernd klein , laura la porta , and kendrick smith .we also wish to acknowledge our collaborators in quiet and capmap for frequent discussions of relevant issues .this work was partially supported by grants from the national science foundation : phy-0355328 , phy- 0551142 , and astr / ast-0506648 .* kamionkowski m , kosowsky a. annu .rev . nucl . part .sci . 49:77( 1999 ) * dodelson s. modern cosmology .acad . press ( 2003 ) * penzias aa , wilson rw .j. 142:419 ( 1965 ) * partridge rb .3k : the cosmic microwave background radiation .cambridge , ny : cambridge univ . press ( 1995 ) * fixsen dj , et al . astrophys .j. 483:586 ( 1996 ) * feng jl , rajaraman a , takayama f. phys .d 68:063504 ( 2003 ) * smootgf , et al .j. 396:1 ( 1992 ) * hinshaw g , et al .j. in press ( 2007 ) * bock j , et al .astro - ph/0604101 ( 2006 ) * smith tl , kamionkowski m , cooray a. phys . rev .d 73(2):023504 ( 2006 ) * spergel dn , et al .j. in press ( 2007 ) * page l , et al .j. suppl . 148:233 ( 2003 ) * readhead acs , et al .science 306:836 ( 2004 ) * mactavish cj , et al .j. 647:799 ( 2006 ) * tegmark m , et al .d 69:103501 ( 2004 ) * hu w , sugiyama n. phys .d 51:2599 ( 1995 ) * eisenstein dj , et al .j. 633:560 ( 2005 ) * bennett cl , et al .j. suppl . 148:97 ( 2003 )* page l , et al .j. in press ( 2007 ) * laporta l , burigana c , reich w , reich p. astron . 
astrophys .455(2):l9 ( 2006 ) * carretti e , bernardi g , cortiglioni s. mnras373:93 ( 2006 ) * hanany s , rosenkranz p. new astron .47(11 12):1159 ( 2003 ) * pietranera l , et al .mnras . in press ( 2007 )* finkbeiner dp , davis m , schlegel dj .j. 524:867 ( 1999 ) * vaillancourt je . eur .publ . ser .23:147 ( 2007 ) * benoit a , et al .astron . astrophys .424:571 ( 2004 ) * ponthieu n , et al .astron . astrophys .444:327 ( 2005 ) * huffenberger km , eriksen hk , hansen fk .j. 651:81 ( 2006 ) * stivoli f , baccigalupi c , maino d , stompor r. mnras 372:615 ( 2006 ) * verde l , peiris hv , jimenez r. j. cosmol .1:19 ( 2006 ) * barkats d , et al . astrophys .159(1):1 ( 2005 ) * ryle m. proc .a 211:351 ( 1952 ) * zmuidzinas j. appl .optics 42:4989 ( 2003 ) * mather jcoptics 21:1125 ( 1982 ) * tegmark m. phys .d 56:4514 ( 1997 ) * smith k. new astron .50(11 12):1025 ( 2006 ) * crawford t. astro - ph/0702608 ( 2007 ) * couchout f , et al .cmb and physics of the early universe .cgi - bin / reader / conf.cgi?confid=27 ( 2006 ) * bond jr , jaffe ah , knox l. astrophys .j. 533:19 ( 2000 ) * hivon e , et al . astrophys .j. 567:2 ( 2002 ) | we intend to show how fundamental science is drawn from the patterns in the temperature and polarization fields of the cosmic microwave background ( cmb ) radiation , and thus to motivate the field of cmb research . we discuss the field s history , potential science and current status , contaminating foregrounds , detection and analysis techniques and future prospects . throughout the review we draw comparisons to particle physics , a field that has many of the same goals and that has gone through many of the same stages . |
the suppression of oscillation and the revival of oscillation are two opposite but interrelated important emergent phenomena in coupled oscillators . in the suppression of oscillation , oscillators arrive at a common homogeneous steady state ( hss ) or different branches of an inhomogeneous steady state ( ihss ) under some proper parametric and coupling conditions . the former is called the amplitude death ( ad ) state , whereas the latter is denoted as the oscillation death ( od ) state . on the other hand ,the revival of oscillation is a process by which the rhythmic behavior ( or rhythmicity ) of individual nodes in a network of coupled oscillators is restored from an ad / od state without changing the intrinsic parameters associated with the individual nodes . in the oscillation quenched ( suppressed ) state the dynamic nature of individual coupled oscillatorsthis has many potential applications : for example , the ad state is important to suppress unwanted oscillations that hinder a certain process , e.g. , in laser systems , chattering in mechanical drilling process , etc .similarly , the od state has a significant role in understanding of many biological processes , e.g. , synthetic genetic oscillator , cardiovascular phenomena , cellular differentiation , etc . on the other hand ,the research in the topic of revival of oscillation is important because many physical , environmental , biological and social processes require _ stable _ and _ robust _ oscillations for their proper functioning : examples include , el nio / southern oscillation in ocean and atmosphere , brain waves in neuroscience , electric power generators , cardiopulmonary sinus rhythm of pacemaker cells , etc . in these systemsthe suppression of oscillation may result in a fatal system breakdown or an irrecoverable physiological malfunction .thus , it has to be ensured that , if these type of systems are trapped in an oscillation cessation state , that state has to be revoked in order to establish rhythmicity .a recent burst of publications reveal many aspects of ad and od : in particular , identifying coupling schemes to induce them , transition from ad to od , their experimental verifications , etc .but only a few techniques have been reported to revoke ad / od and induce rhythmicity in a network of oscillators . in ref. several variants of time delay techniquesare discussed in order to revoke death states , whereas in ref. 
, network connections are chosen properly in order to revive oscillations .however , most of these techniques lack the generality to revive oscillations from a death state .only recently a general technique to revive oscillation from the oscillation suppressed state ( or death state ) has been reported by .the authors proposed a simple but effective way to revoke the death state and induce rhythmicity by introducing a simple feedback factor in the diffusive coupling .they showed that this technique is robust enough to induce rhythmicity in a diffusively coupled network of nonlinear oscillators , such as , the stuart - landau oscillator , brusselators model , chaotic lorenz system , cell membrane model , etc .they further tested the effectiveness of their proposed technique in conjugate and dynamic coupling .however , in the absence of parameter mismatch or coupling time - delay , simple diffusive coupling can not induce ad in coupled oscillators .therefore , for identical oscillators under diffusive coupling ( without coupling delay ) no ad is possible and thus the issue of revoking the ad state does not arise .also , od in diffusively coupled identical oscillators is always accompanied by a limit cycle , thus one needs to choose proper initial conditions to revoke that death state .regarding the conjugate coupling , it is not always a general technique to induce death : for example , in a first - order intrinsic time - delay system no conjugate coupling is possible .further , the dynamic coupling has its own pitfalls ( e.g. , its success strongly depends on the intrinsic properties of the oscillators under consideration ) , which has been discussed in detail in . in this context the mean - field diffusive couplingis a general way to induce ad / od even in networks of _ identical _ coupled oscillators : it works in any network of oscillators including chaotic first - order intrinsic time - delay systems .further , it has been shown in that , unlike diffusive coupling , the od state induced by the mean - field diffusive coupling is not accompanied by a limit cycle . also , the mean - field diffusive coupling is the most natural coupling scheme that occurs in physics , biology , ecology , etc .thus , for those systems that always need a robust limit cycle for their proper functioning , the mean - field diffusive coupling is a much stronger `` trap '' to induce death in comparison with the other coupling schemes .therefore , it is important to study the process of revoking the oscillation suppression state induced by the mean - field diffusive coupling and revive oscillation from the death state .motivated by the above facts , in this paper we introduce a feedback factor in the _ mean - field diffusive coupling _ and examine its effect in a network of coupled oscillators .we show that the interplay of the _ feedback factor _ and the _ density of mean - field _ coupling can restore rhythmicity from a death state even in a network of identical coupled oscillators .thus , unlike ref . , here we have two control parameters that enable us to revoke the death state . 
using rigorous eigenvalue and bifurcation analyses on coupled van der pol and stuart - landau oscillators , separately , we show that the region of the death state shrinks substantially in the parameter space depending upon those two control parameters .we also extend our results to a network consisting of a large number of mean - field coupled oscillators and show that the revival of rhythmicity works in the spatially extended systems , also .further , for the first time , we report an experimental observation of the revival of oscillation from death states induced by the mean - field coupling that qualitatively supports our theoretical results .at first we consider a network of van der pol ( vdp ) oscillators interacting through a modified mean - field diffusive coupling ; the mathematical model of the coupled system is given by [ systemvdp ] here and is the mean - field of the coupled system .the individual vdp oscillators show a near sinusoidal oscillation for smaller , and relaxation oscillation for larger .the coupling strength is given by ; is called the mean - field density parameter that determines the _ density of the mean - field _ ( ) ; it actually provides an additional free parameter that controls the mean - field dynamics : indicates the self - feedback case , whereas represents the maximum mean - field density .the feedback term controls the rate of diffusion ( ) : represents the maximum feedback and the original mean - field diffusive coupling ; represents the absence of a feedback and thus that of diffusion . any values in between this limit can be treated as the intermediate diffusion rate and thus represent a modified mean - field diffusive coupling . the origin of is well discussed in where it is speculated that it may arise in the context of cell cycle , neural network and synchronization engineering .as the limiting case we take two identical vdp oscillators : . from eq .we can see that there are the following fixed points : the origin and two more coupling dependent fixed points : ( i ) ( , , , ) where and .( ii ) ( , , , ) where and .the eigenvalues of the system at the origin are , [ lambdavdp ] from the eigenvalue analysis we derive two pitchfork bifurcation ( pb ) points pb1 and pb2 , which emerge at the following coupling strengths : [ pb ] the ihss , ( , , , ) , emerges at through a symmetry breaking pitchfork bifurcation . the other nontrivial fixed point( , , , ) comes into existence at , which gives rise to an unique _ nontrivial hss_. further , equating the real part of and to zero , we get two hopf bifurcation points at [ hb12 ] from eqs . andwe see that no hopf bifurcation of trivial fixed point occurs for ; in that case , only pitchfork bifurcations exist .the eigenvalues of the system at the nontrivial fixed point ( , , , ) , where and are given by : [ ntlambda ] where , , , , .now , with increasing , moves towards , and at a critical value , say , hb2 collides with pb1 : . for ,the ihss becomes stable at through a subcritical hopf bifurcation , where this is derived from the eigenvalues of the system at the the nontrivial fixed point ( , , , ) . actually determines the direct transition from od to a limit cycle , i.e. 
revival of oscillation .the second nontrivial fixed point , , , that was created at becomes stable through a subcritical pitchfork bifurcation at : this is derived from the eigenvalues corresponding to , , , .this nontrivial ad state can also be pushed back to a very large value of by choosing , and thus this ad state can also be revoked effectively .the above eigenvalue analysis is supported by a numerical bifurcation analysis using xppaut .figure [ f:1](a - c ) show the single parameter bifurcation diagram depending on for different for an exemplary value ( throughout the numerical simulation we consider ) .we observe that the oscillation cessation state ( both ad and od ) moves towards right , i.e. , a stronger coupling strength for a decreasing , and for ( ) [ fig .[ f:1 ] ( c ) ] the death state moves much further from pb1 in the right direction .we also verify that at no death state occurs ( not shown in the figure ) .we further explore the zone of oscillation cessation in a two parameter bifurcation diagram in the space . figure .[ f:1 ] ( d ) shows the bifurcation curves for .the hb2 curve determines the transition from oscillation to ad for ; beyond this limit the hbs curve determines the zone of oscillation and thus that of the death region . figure .[ f:1 ] ( e ) shows that the _ death region shrinks with decreasing _ confirming our theoretical analysis .finally , we plot the phase diagram in parameter space at [ fig .[ f:1 ] ( f ) ] : we find that a _ higher _ value of _ or _ a _ lower _ value of support oscillations .interestingly , even for ( i.e. , a complete mean - field diffusion ) , one can revive rhythmicity by simply increasing the value of ; thus , this coupling scheme offers two control parameters to revive oscillations . finally , we summarize the observations and discuss the following important points : ( i ) hb2 is the inverse hopf bifurcation point where an ad state is revoked and gives rise to a _stable _ limit cycle .this point ( or curve in a two parameter space ) determines the revival of oscillation below a critical value , which is determined by . from eq .we see that by choosing closer to ( ) the death zone shrinks substantially . thus , to revoke a death state one has to choose , and _ ensures _ that there will be no death state even in the stronger coupling strength ( whatever strong it may be ) .( ii ) even if complete diffusion is present , i.e. , , one can still achieve the revival of oscillation by choosing .this is an unique feature of the mean - field diffusive coupling , which is absent in other coupling schemes .( color online ) network of mean - field coupled van der pol oscillators [ eqs . ] with , , : spatiotemporal plots showing ( a ) and ( b ) ad and its revival , respectively , with decreasing at .( c ) and d ) show the od and its revival , respectively , with a decreasing at .the upper rows [ i.e. , ( a ) and ( c ) ] have and the lower rows [ i.e. , ( b ) and ( d ) ] have . the first time is excluded and then for the next are plotted . 
] to show that our analysis of two coupled oscillators are valid for a larger network also , we consider the more general case of mean - field coupled identical van der pol oscillators of eqs .( ) .[ f:2](a ) shows the spatiotemporal plot of the network in the global ad regime at , and ; here all the nodes arrive at the zero fixed point .the global ad state is revoked and rhythmicity is restored in the network by decreasing the value of ; as shown in fig .[ f:2](b ) for an exemplary value .equivalently , for and ( as before ) we get an od state in the network [ fig .[ f:2](c ) ] .it can be seen that the nodes populate the upper and lower branches [ shown with yellow ( light gray ) and brown ( dark gray ) colors , respectively ] of od in a random manner and generate a multi - cluster od state .oscillation in this network is revived by decreasing ; fig .[ f:2](d ) shows this for .note that the values for which ad , od and oscillations are obtained agree with that for the case shown in fig .[ f:1 ] .next , we consider stuart - landau oscillators interacting through a modified mean - field diffusive coupling in their real part ; the mathematical model of the coupled system is given by with ; is the mean - field of the coupled system , .the individual stuart - landau oscillators are of unit amplitude and having eigenfrequency . as the limiting case we take , and rewrite eq . in the cartesiancoordinates : [ system ] ,\\ \label{y1 } \dot{y}_{i } & = \omega_{i}x_{i}+p_{i}y_{i}.\end{aligned}\ ] ] here , .we set the oscillators to be identical , i.e. , . from eq .it is clear that the system has the following fixed points : the trivial fixed point is the origin , and additionally two more coupling dependent nontrivial fixed points : ( i ) ( , , , ) where and .( ii ) ( , , , ) where and .( color online ) stuart - landau oscillators , : bifurcation diagram with ( a ) ( ) , ( b ) ( ) .( c ) two parameter bifurcation diagram in space for different .( d ) spatiotemporal plots of stuart - landau oscillators showing od at ( upper panel ) and the revival of oscillation at ( lower panel ) : and .the time scale is same as in fig .[ f:2 ] . ]the four eigenvalues of the system at the trivial fixed point are , [ lambda ] ,\\ \label{lambda3 } { \lambda}_{3,4 } & = 1- \left[\frac{\epsilon \alpha \pm \sqrt{(\epsilon \alpha)^2 - 4{\omega}^2}}{2}\right].\end{aligned}\ ] ] through an eigenvalue analysis and also a close inspection of the nontrivial fixed points reveal that two pitchfork bifurcations ( pb ) occur at : [ pb ] a symmetry breaking pitchfork bifurcation gives birth to the ihss ( , , , ) at .the second nontrivial fixed point ( , , , ) emerges at pb2 the stabilization of which leads to a _ nontrivial _ ad state that coexists with od .next , we get the hopf bifurcation point by equating the real part of and to zero , [ epsahb ] from eqs . andit is clear that for no hopf bifurcations ( of trivial fixed point ) occur , only pitchfork bifurcations govern the dynamics in that case . from eq .( [ epsahb ] ) it is noticed that for a fixed , is constant , but depends only upon ( but is independent of , where ) . 
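since the coupled-system equations do not survive intact in this copy , the following short python sketch illustrates the kind of simulation used above : identical stuart - landau oscillators whose real parts are coupled through the modified mean - field diffusive term , written in cartesian coordinates as dx_i/dt = (1 - x_i^2 - y_i^2) x_i - omega*y_i + eps*( q*xbar - alpha*x_i ) and dy_i/dt = omega*x_i + (1 - x_i^2 - y_i^2) y_i , with xbar the mean field of the real parts . the functional form of the coupling and all parameter values ( omega , eps , q , alpha , the network size ) are assumptions of mine , chosen only to reproduce the qualitative behavior described in the text , namely that lowering the feedback factor alpha below the mean - field density q revives oscillation from a mean - field - induced death state .

```python
import numpy as np

def sl_rhs(state, omega, eps, q, alpha):
    """right-hand side of n identical stuart-landau oscillators whose real
    parts are coupled through the modified mean-field term eps*(q*xbar - alpha*x_i)."""
    n = state.size // 2
    x, y = state[:n], state[n:]
    p = 1.0 - x**2 - y**2              # amplitude-dependent growth rate
    xbar = x.mean()                    # mean field of the real parts
    dx = p * x - omega * y + eps * (q * xbar - alpha * x)
    dy = omega * x + p * y
    return np.concatenate([dx, dy])

def amplitude(alpha, q=0.3, eps=4.0, omega=2.0, n=20, t_end=200.0, dt=0.01, seed=1):
    """integrate with a fixed-step rk4 scheme and return the peak-to-peak
    amplitude of x_1 over the second half of the run (about 0 signals ad/od)."""
    rng = np.random.default_rng(seed)
    state = rng.uniform(-1.0, 1.0, size=2 * n)
    steps = int(t_end / dt)
    tail = []
    for k in range(steps):
        k1 = sl_rhs(state, omega, eps, q, alpha)
        k2 = sl_rhs(state + 0.5 * dt * k1, omega, eps, q, alpha)
        k3 = sl_rhs(state + 0.5 * dt * k2, omega, eps, q, alpha)
        k4 = sl_rhs(state + dt * k3, omega, eps, q, alpha)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if k > steps // 2:
            tail.append(state[0])
    return max(tail) - min(tail)

if __name__ == "__main__":
    # full feedback (alpha = 1): the mean-field coupling quenches the oscillation
    print("alpha = 1.0 :", amplitude(alpha=1.0))
    # weakening the feedback below q is expected to restore a finite-amplitude rhythm
    print("alpha = 0.2 :", amplitude(alpha=0.2))
```

with these illustrative values the first call settles to the origin ( amplitude close to zero ) while the second returns an order-one peak - to - peak amplitude , which is the qualitative signature of the revival of oscillation discussed above .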
now with increasing , hb2 moves towards pb1 , and at a critical , say , hb2 collides with pb1 : .the eigenvalues of the system at the nontrivial fixed point ( , , , ) , where and are given by : [ ntlambda ] where , , , , .we derive the loci of the hbs curve as bifurcation diagrams of fig .[ f:3](a ) ( ) and [ f:3](b ) ( ) show that with decreasing the death region moves towards a larger coupling strength .figure [ f:3](c ) shows this in the space ; one can see that a decreasing shrinks the region of death and thus broadens the area of the oscillation state in the parameter space .finally , we consider a network of identical stuart - landau oscillators with the same coupling scheme [ eq . ] .we find that the revival of oscillation works here as well .figure [ f:3](d ) demonstrates this for an exemplary value of and : the upper panel shows the od state in the network for , whereas from the lower panel we see that rhytmicity is restored in the network by reducing the value of to .( color online ) experimental circuit diagram of the modified mean - field coupled vdp oscillators . a , a1-a4 , and aqare realized with tl074 op - amps .all the unlabeled resistors have value k .c=10 nf , , v. box denoted by b " are op - amp based buffers ; inverters are realized with the unity gain inverting op - amps . sign indicates squarer using ad633 . ]next , we implement the coupled system of van der pol oscillators given by eq . in an electronic circuit ( figure [ ckt ] ) .we use tl074 ( quad jfet ) op - amps , and ad633 analog multiplier ics .a v power supply is used ; resistors ( capacitors ) have ( ) tolerance .the unlabeled resistors have the value k .the op - amp aq is used to generate the mean - field : , which is subtracted by using op - amps denoted by a. one can see that determines the coupling strength , determines the mean - field density and controls the feedback parameter .the voltage equation of the circuit can be written as : [ ckteqn ] ,\\ cr\frac{d{v}_{yi}}{dt}&=\frac{r}{r_a}\left(v_\delta-\frac{v_{xi}^2}{10}\right)\frac{v_{yi}}{10}-v_{xi}.\end{aligned}\ ] ] here .eqs . is normalized with respect to , and thus now becomes equivalent to eq .for the following normalized parameters : , , , , , , , , and . is the saturation voltage of the op - amp . in the experimentwe take v , and nf ; we choose by taking [ using a precision potentiometer ( pot ) ] .we experimentally observe the revival of oscillation by revoking the oscillation cessation states ( ad and od ) with varying ( i.e. , ) . atfirst we consider the case of the ad state : for that we choose and ( by setting k and k , respectively ) and decrease from to a lower value .figure [ f:5](a ) shows experimental snapshots of the ad state in and at k ( i.e. , ) [ using dso , agilent make , dso - x 2024a , 200 mhz , 2 gs / s ] ; in the same figure we show the revival of oscillation from the ad state at an exemplary value k ( ) .figure [ f:5](b ) gives the numerical time series at the corresponding values ( fourth - order runge - kutta method with step - size of ) .the numerical results are in good agreement with the experimental observations : also , the dynamical behaviors at these parameter values are in accordance with fig .[ f:1](f ) .next , we choose an od state for and ( i.e. 
, k and k , respectively ) : experimental snapshots of the od state at k ( ) and the rhythmicity at k ( ) are shown in fig .[ f:5](c ) .the corresponding numerical result is given in fig .[ f:5](d ) .we see that the experimental and numerical results are in good agreement and in accordance with fig .[ f:1](e ) .significantly , despite the presence of inherent parameter fluctuations and noise , which is inevitable in an experiment , it is important that in both the above cases theory and experiment are in good qualitative agreement , which proves the robustness of the coupling scheme in restoring rhythmicity in coupled oscillators .( color online ) ( a , c ) experimental real time traces of and along with the ( b , d ) numerical time series plots of and . [ ( a ) and ( b ) ] with k ( i.e. ) a decreasing ( ) restores oscillation ( lc ) from ad : ad at k ( ) , lc at k ( ) .[ ( c ) and ( d ) ] with k ( i.e. ) a decreasing ( ) restores oscillation ( lc ) from od : od at k ( ) , lc at k ( ) .others parameters are k ( ) and ( ) . axis : ( a ) 200 mv / div ( c ) 100 mv / div ; axis : 380 / div . ]we have investigated the effect of a feedback parameter , which controls the diffusion rate , on a network of nonlinear oscillators coupled under mean - field diffusion .we have shown that , unlike other coupling schemes , here two control parameters exist , namely the density of mean - field diffusion and the feedback parameter .the interplay of these two parameters can revive rhythmicity from any oscillation cessation state : in fact by controlling the feedback parameter closer to the density of the mean - field one can shrink the region of the oscillation cessation state to a very narrow zone in parameter space .more interestingly , even in the presence of complete diffusion ( i.e. , in the absence of feedback parameter ) , the density of the mean - field alone can induce rhythmicity from a death state .thus , it offers a very robust oscillation revival mechanism .we have extended our study to a network consists of large number of nodes and shown that the oscillation cessation states can be revoked in that case too . finally , we have supported our theoretical results by an experiment with electronic van der pol oscillators and for the first time observed the revival of oscillation from the mean - field - diffusion induced death state . since both the density of the mean - field and the feedback parameter have strong connections with many real biological networks including cell cycle , neural network , synthetic genetic oscillators , etc ., we believe that the present study will broaden our understanding of those systems and subsequently this study will shed light on the control of oscillation suppression mechanism in several biological and engineering systems .t. b. acknowledges the financial support from serb , department of science and technology ( dst ) , india [ project grant no .: sb / ftp / ps-005/2013 ] .d. g. acknowledges dst , india , for providing support through the inspire fellowship .j. k. 
acknowledges the government of the russian federation ( agreement no . 14.z50.31.0033 with the institute of applied physics ras ) . | the revival of oscillation and maintaining rhythmicity in a network of coupled oscillators offer an open challenge to researchers as the cessation of oscillation often leads to a fatal system degradation and an irrecoverable malfunctioning in many physical , biological and physiological systems . recently a general technique of restoration of rhythmicity in diffusively coupled networks of nonlinear oscillators has been proposed in [ zou et al . nature commun . 6:7709 , 2015 ] , where it is shown that a proper feedback parameter that controls the rate of diffusion can effectively revive oscillation from an oscillation suppressed state . in this paper we show that the mean - field diffusive coupling , which can suppress oscillation even in a network of identical oscillators , can be modified in order to revoke the cessation of oscillation induced by it . using a rigorous bifurcation analysis we show that , unlike other diffusive coupling schemes , here one has _ two control parameters _ , namely the _ density of the mean - field _ and the _ feedback parameter _ that can be controlled to revive oscillation from a death state . we demonstrate that an appropriate choice of density of the mean - field is capable of inducing rhythmicity even in the presence of complete diffusion , which is a unique feature of this mean - field coupling that is not available in other coupling schemes . finally , we report the _ first _ experimental observation of revival of oscillation from the mean - field induced oscillation suppression state that supports our theoretical results . |
the parlor game best known as `` twenty questions '' has a long history and a broad appeal .it was used to advance the plot of charles dickens _ _ a christmas carol__ , in which it is called `` yes and no , '' and it was used to explain information theory in alfrd rnyi s _ _ a diary on information theory__ , in which it is called `` bar - kochba . '' the two - person game begins with an answerer thinking up an object and then being asked a series of questions about the object by a questioner .these questions must be answered either `` yes '' or `` no . ''usually the questioner can ask at most twenty questions , and the winner is determined by whether or not the questioner can sufficiently surmise the object from these questions .many variants of the game exist both in name and in rules .a recent popular variant replaces the questioner with an electronic device .the answerer can answer the device s questions with one of four answers `` yes , '' `` no , '' `` sometimes , '' and `` unknown . ''the game also differs from the traditional game in that the device often needs to ask more than twenty questions .if the device needs to ask more than the customary twenty questions , the answerer can view this as a partial victory , since the device has not answered correctly given the initial twenty .however , the device eventually gives up after questions if it can not guess the questioner s object .consider a short example of such a series of questions , with only `` yes , '' `` no , '' and `` sometimes '' as possible answers .the object to guess is one of the seven newtonian colors , which we choose to enumerate as follows : 1 .green ( g ) 2 .yellow ( y ) 3 .red ( r ) 4 .orange ( o ) 5 . indigo ( i ) 6 .violet ( v ) 7 .blue ( b ) . a first question we ask might be , `` is the color seen as a warm color ? '' if the answer is `` sometimes , ''the color is green .if it is `` yes , '' it is one of colors through .if so , we then ask , `` is the color considered primary ? '' `` sometimes '' implies yellow , `` yes '' implies red , and `` no '' implies orange .if the color is not warm , it is one of colors through , and we ask whether the color is considered purple , a different question than the one for colors through .`` sometimes '' implies indigo , `` yes '' implies violet , and `` no '' implies blue .thus we can distinguish the seven colors with an average of questions if is the probability that color in question is green .this series of questions is expressible using code tree notation , e.g. , , in which a tree is formed with each child split from its parent according to the corresponding output symbol , i.e. , the answer of the corresponding question . a code tree corresponding to the above series of questions is shown in fig . [ codetree ] , where a left branch means `` sometimes , '' a middle branch means `` yes , '' and a right branch means `` no . ''the number of answers possible is referred to by the constant and the tree is a -ary tree . in this case , and the code tree is ternary .the number of outputs , , is the number of colors .the analogous problem in prefix coding is as follows : a source ( the answerer ) emits input symbols ( objects ) drawn from the alphabet , where is an integer .symbol has probability , thus defining probability vector .only possible symbols are considered for coding and these are sorted in decreasing order of probability ; thus and for every such that .( since sorting is only time and space , this can be assumed without loss of generality . 
)each input symbol is encoded into a codeword composed of output symbols of the -ary alphabet .( in the example of colors , represents `` sometimes , '' `` yes , '' and `` no . '' ) the codeword corresponding to input symbol has length , thus defining length vector . in fig .[ codetree ] , for example , is the codeword corresponding to blue so length . the overall code should be a prefix code , that is , no codeword should begin with the entirety of another codeword . in the game , equivalently, we should know when to end the questioning , this being the point at which we know the answer . for the variant introduced here , all codewords must have lengths lying in a given interval [ , ] . in the example of the device mentioned above , and .a more practical variant is the problem of designing a data codec which is efficient in terms of both compression ratio and coding speed .moffat and turpin proposed a variety of efficient implementations of prefix encoding and decoding in , each involving table lookups rather than code trees .they noted that the length of the longest codeword should be limited for computational efficiency s sake .computational efficiency is also improved by restricting the overall range of codeword lengths , reducing the size of the tables and the expected time of searches required for decoding .thus , one might wish to have a minimum codeword size of , say , bits and a maximum codeword size of bits ( ) .if expected codeword length for an optimal code found under these restrictions is too long , can be reduced and the algorithm rerun until the proper trade - off between coding speed and compression ratio is found .a similar problem is one of determining opcodes of a microprocessor designed to use variable - length opcodes , each a certain number of bytes ( ) with a lower limit and an upper limit to size , e.g. , a restriction to opcodes being 16 , 24 , or 32 bits long ( , ) .this problem clearly falls within the context considered here , as does the problem of assigning video recorder scheduling codes ; these human - readable decimal codes ( ) have lower and upper bounds on their size , such as and , respectively .other problems of interest have and are thus length limited but have no practical lower bound on length .yet other problems have not fixed bounds but a constraint on the difference between minimum and maximum codeword length , a quantity referred to as fringe . as previously noted , largefringe has a negative effect of the speed of a decoder . in section [ conclusion ] of this paperwe discuss how to find such codes .note that a problem of size is trivial for certain values of and .if , then all codewords can have output symbols , which , by any reasonable objective , forms an optimal code . if , then we can not code all input symbols and the problem , as presented here , has no solution . 
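as a small , self - contained illustration of the setup just described , the following python snippet writes the seven - color game described above as a ternary prefix code over the answer alphabet ( s = `` sometimes '' , y = `` yes '' , n = `` no '' ) , checks the prefix condition and the kraft inequality , and evaluates the expected number of questions . the uniform probabilities are a hypothetical choice of mine , not taken from the text .

```python
from itertools import combinations

# ternary answer alphabet: S = "sometimes", Y = "yes", N = "no"
code = {
    "green": "S", "yellow": "YS", "red": "YY", "orange": "YN",
    "indigo": "NS", "violet": "NY", "blue": "NN",
}
# hypothetical probabilities (uniform over the seven colors)
p = {color: 1.0 / 7.0 for color in code}

def is_prefix_free(codewords):
    """no codeword may begin with the entirety of another codeword."""
    return not any(a.startswith(b) or b.startswith(a)
                   for a, b in combinations(codewords, 2))

def kraft_sum(lengths, D):
    """kraft-mcmillan sum; a prefix code with these lengths exists iff it is <= 1."""
    return sum(D ** (-l) for l in lengths)

lengths = [len(w) for w in code.values()]
print("prefix free :", is_prefix_free(list(code.values())))
print("kraft sum   :", kraft_sum(lengths, D=3))                       # 1/3 + 6/9 = 1
print("mean length :", sum(p[c] * len(w) for c, w in code.items()))   # 2 - p(green)
```

with uniform probabilities the expected number of questions comes out to 2 - 1/7 , matching the observation made for the color example above .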
since only other values are interesting , we can assume that ] .since most instances of twenty questions have fewer possible outcomes , this is usually not an interesting problem after all , as instructive as it is .in fact , the fallibility of the answerer and ambiguity of the questioner mean that a decision tree model is not , strictly speaking , correct .for example , the aforementioned answers to questions about the seven colors are debatable .the other applications of length - bounded prefix coding mentioned previously , however , do fall within this model .if we either do not require a minimum or do not require a maximum , it is easy to find values of or which do not limit the problem .as mentioned , setting results in a trivial minimum , as does .similarly , setting or using the hard upper bound results in a trivial maximum value . in the case of trivial maximum values , one can actually minimize expected codeword length in linear time given sorted inputs .this is possible because , at each stage in the standard huffman coding algorithm , the set of huffman trees is an optimal forest ( set of trees) .we describe the linear - time algorithm in section [ refine ] .if both minimum and maximum values are trivial , huffman coding yields a prefix code minimizing expected codeword length the conditions necessary and sufficient for the existence of a prefix code with length vector are the integer constraint , , and the kraft ( mcmillan ) inequality , finding values for is sufficient to find a corresponding code , as a code tree with the optimal length vector can be built from sorted codeword lengths in time and space .it is not always obvious that we should minimize the expected number of questions ( or , equivalently , the expected number of questions in excess of the first , where is if is positive , otherwise ) .consider the example of video recorder scheduling codes .in such an application , one might instead want to minimize mean square distance from , we generalize and investigate how to minimize the value under the above constraints for any penalty function convex and increasing on .such an additive measurement of cost is called a quasiarithmetic penalty , in this case a convex quasiarithmetic penalty .one such function is , a quadratic value useful in optimizing a communications delay problem .another function , for , can be used to minimize the probability of buffer overflow in a queueing system . mathematically stating the length - bounded problem , note that we need not assume that probabilities sum to ; they could instead be arbitrary positive weights .thus , in this paper , given a finite -symbol input alphabet with an associated probability vector , a -symbol output alphabet with codewords of lengths ] such that and .let .then and , due to convexity , .using , we know that is an integer multiple of .thus , using lemma [ lllemma ] with , , and , there exists an such that . since , is a contradiction to being an optimal solution to the coin collector s problem , and thus any optimal solution of the coin collector s problem corresponds to an optimal length vector . because the coin collector s problem is linear in time and space same - width inputs are presorted by weight , numerical operations and comparisons are constant time the overall algorithm finds an optimal code in time and space .space complexity , however , can be decreased .if , we are guaranteed no particular inequality relation between and since we did not specify a method for breaking ties . 
thus the length vector returned by the algorithm need not have the property that whenever .we would like to have an algorithm that has such a monotonicity property .a monotonic nodeset , , is one with the following properties : in other words , a nodeset is monotonic if and only if it corresponds to a length vector with lengths sorted in increasing order ; this definition is equivalent to that given in . examples of monotonic nodesets include the sets of nodes enclosed by dashed lines in fig .[ nodesetnum ] and fig .[ abcd ] . in the latter case , , , , and , so . as indicated ,if for some and , then an optimal nodeset need not be monotonic .however , if all probabilities are distinct , the optimal nodeset is monotonic .[ dmlemma ] if has no repeated values , then any optimal solution is monotonic .the second monotonic property ( [ validlen ] ) was proved for optimal nodesets in theorem [ cceqll ] .the first property ( [ firstprop ] ) can be shown via a simple exchange argument .consider optimal with so that , and also consider with lengths for inputs and interchanged , as in .then \\\quad \leq 0 \end{array}\ ] ] where the inequality is to due to the optimality of . since and is monotonically increasing, for all and an optimal nodeset without repeated must be monotonic .taking advantage of monotonicity in a package - merge coding implementation to trade off a constant factor of time for drastically reduced space complexity is done in for length - limited binary codes .we extend this to the length - bounded problem , first for without repeated values , then for arbitrary .note that the total width of items that are each less than or equal to width is less than .thus , when we are processing items and packages of width , fewer than packages are kept in memory . the key idea in reducing space complexity is to keep only four attributes of each package in memory instead of the full contents . in this manner, we use space while retaining enough information to reconstruct the optimal nodeset in algorithmic postprocessing .define for each package , we retain only the following attributes : 1 . 2 . 3 . 4 . where and .we also define . with only these parameters , the `` first run '' of the algorithm takes space .the output of this run is the package attributes of the optimal nodeset .thus , at the end of this first run , we know the value for , and we can consider as the disjoint union of four sets , shown in fig .[ abcd ] : 1 . = nodes in with indices in ] , 3 . = nodes in , 4 . = nodes in . due to the monotonicity of ,it is clear that \times [ l_{\min}+1 , l_\mid-1] ] .note then that and .thus we need merely to recompute which nodes are in and in . because is a subset of , and .given their respective widths , is a minimal weight subset of \times [ l_{\min}+1,l_{\mid}-1] ] .these are monotonic if the overall nodeset is monotonic .the nodes at each level of and can thus be found by recursive calls to the algorithm .this approach uses only space while preserving time complexity ; one run of an algorithm on nodes is replaced with a series of runs , first one on nodes , then two on an average of at most nodes each , then four on an average of at most , and so forth .an optimization of the same complexity is made in , where it is proven that this yields time complexity with a linear space requirement .given the hard bounds for and , this is always .the assumption of distinct puts an undesirable restriction on our input that we now relax . 
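the following sketch shows the package - merge ( coin collector ) machinery discussed above in its simplest special case only : a binary output alphabet , the linear penalty , and an upper bound on codeword length with no lower bound . it is not the reduced - memory , -ary , length - bounded algorithm developed in this paper ( none of the four - attribute bookkeeping is reproduced ) , and the function and variable names are mine .

```python
def package_merge_lengths(p, L):
    """binary length-limited code lengths via package-merge (simplest case:
    D = 2, linear penalty, upper bound L only, no lower bound).
    p: positive symbol weights; returns l with l_i <= L minimizing sum p_i*l_i."""
    n = len(p)
    assert 2 ** L >= n, "L is too small to encode n symbols"
    items = sorted(((w, (i,)) for i, w in enumerate(p)), key=lambda t: t[0])
    level = list(items)                        # start at the deepest level l = L
    for _ in range(L - 1):                     # move up one level per pass
        packages = []
        for j in range(0, len(level) - 1, 2):  # pair adjacent entries ("package")
            w = level[j][0] + level[j + 1][0]
            packages.append((w, level[j][1] + level[j + 1][1]))
        level = sorted(items + packages, key=lambda t: t[0])   # "merge"
    # a nodeset of total width n-1 corresponds to the 2(n-1) cheapest level-1 entries
    lengths = [0] * n
    for w, syms in level[: 2 * (n - 1)]:
        for i in syms:
            lengths[i] += 1                    # l_i = number of selected nodes for i
    return lengths

if __name__ == "__main__":
    weights = [1, 1, 2, 4, 8, 16]
    for L in (5, 4, 3):
        l = package_merge_lengths(weights, L)
        cost = sum(w * li for w, li in zip(weights, l))
        print(f"L = {L}: lengths = {l}, cost = {cost}")
```

for L large enough the returned lengths coincide with ordinary huffman lengths , and tightening L trades a larger expected length for a shorter longest codeword , as expected .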
in doing so, we make the algorithm deterministic , resolving ties that make certain minimization steps of the algorithm implementation dependent .this results in what in some sense is the `` best '' optimal code if multiple monotonic optimal codes exist .recall that is a nonincreasing vector .thus items of a given width are sorted for use in the package - merge algorithm ; this order is used to break ties .for example , if we look at the problem in fig .[ nodesetnum ] , , , , with probability vector , then nodes , , and are the first to be grouped , the tie between and broken by order .thus , at any step , all identical - width items in one package have adjacent indices .recall that packages of items will be either in the final nodeset or absent from it as a whole .this scheme then prevents any of the nonmonotonicity that identical might bring about . in order to assure that the algorithm is fully deterministic , the manner in whichpackages and single items are merged must also be taken into account .we choose to combine nonmerged items before merged items in the case of ties , in a similar manner to the two - queue bottom - merge method of huffman coding .thus , in our example , there is a point at which the node is chosen ( to be merged with and ) while the identical - weight package of items , , and is not .this leads to the optimal length vector , rather than or , which are also optimal .the corresponding nodeset is enclosed within the dashed line in fig .[ nodesetnum ] , and the resulting monotonic code tree is the code tree shown in fig .[ codetree ] .this approach also enables us to set , the value for dummy variables , equal to without violating monotonicity . as in bottom - merge huffman coding ,the code with the minimum reverse lexicographical order among optimal codes ( and thus the one with minimum height ) is the one produced ; reverse lexicographical order is the lexicographical order of lengths after their being sorted largest to smallest .an identical result can be obtained by using the position of the `` largest '' node in a package ( in terms of position number ) in order to choose those with lower values , as in . however, our approach , which can be shown to be equivalent via simple induction , eliminates the need for keeping track of the maximum value of for each package .there are changes we can make to the algorithm that , for certain inputs , result in even better performance .for example , if , then , rather than minimizing the weight of nodes of a certain total width , it is easier to maximize weight over a complementary total width and find the complementary set of nodes .similarly , if most input symbols have one of a handful of probability values , one can consider this and simplify calculations . these and other similar optimizations have been done in the past for the special case , , , though we do not address or extend such improvements here .so far we have assumed that is the best upper bound on codeword length we could obtain .however , there are many cases in which we can narrow the range of codeword lengths , thus making the algorithm faster . for example , since , as stated previously , we can assume without loss of generality that , we can eliminate the bottom row of nodes from consideration in fig .[ nodesetnum ] .consider also when .an upper bound on can be derived from a theorem and a definition due to larmore : consider penalty functions and .we say that is flatter than if , for positive integers , . 
.a consequence of the convex hull theorem of is that , given flatter than , for any , there exist -optimal and -optimal such that is greater than in terms of reverse lexicographical order .this explains why the word `` flatter '' is used .penalties flatter than the linear penalty i.e. , convex can therefore yield a useful upper bound , reducing complexity .thus , if , we can use the results of a pre - algorithmic huffman coding of the input symbols to find an upper bound on codeword length in linear time , one that might be better than .alternatively , we can use the least probable input to find a looser upper bound , as in . when , one can still use a modified pre - algorithmic huffman coding to find an upper bound as long as .this is done via a modification of the huffman algorithm allowing an arbitrary minimum and a trivial maximum ( e.g. , or ) : * procedure for length - lower - bounded ( `` truncated huffman '' ) coding * 1 . add dummy items of probability .2 . combine the items with the smallest probabilities into one item with the combined probability .this item has codeword , to be determined later , while these smallest items are assigned concatenations of this yet - to - be - determined codeword and every possible output symbol , that is , .since these have been assigned in terms of , replace the smallest items with in to form .3 . repeat previous step , now with the remaining codewords and corresponding probabilities , until only items are left .4 . assign all possible long codewords to these items , thus defining the overall code based on the fixed - length code assigned to these combined items .this procedure is huffman coding truncated midway through coding , the resulting trees serving as subtrees of nodes of identical depth . excluding the last step , the algorithm is identical to that shown in to result in an optimal huffman forest .the optimality of the algorithm for length - lower - bounded coding is an immediate consequence of the optimality of the forest , as both have the same constraints and the same value to minimize . 
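a minimal python rendering of the length - lower - bounded ( `` truncated huffman '' ) procedure above is sketched below . it follows the four steps literally : pad with zero - probability dummies so the merges come out even , repeatedly combine the smallest remaining items , stop once d**l_min super - items remain , and hang the resulting subtrees under the d**l_min codewords of length l_min . the data structures , the bottom - merge style tie - breaking and the parameter names are my own choices .

```python
import heapq

def truncated_huffman_lengths(p, d, l_min):
    """sketch of length-lower-bounded ("truncated huffman") coding: run d-ary
    huffman merging but stop once d**l_min super-items remain, then assign those
    items the d**l_min codewords of length l_min.  returns codeword lengths
    (all >= l_min); zero-weight dummy symbols are added and dropped at the end."""
    n, target = len(p), d ** l_min
    n_pad = max(n, target)
    while (n_pad - target) % (d - 1) != 0:     # pad so every merge takes d items
        n_pad += 1
    weights = list(p) + [0.0] * (n_pad - n)
    # heap entries: (weight, tie-breaker, [(symbol, depth below the cut), ...]);
    # merged items get larger tie-breakers, so unmerged items win ties ("bottom-merge")
    heap = [(w, i, [(i, 0)]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    count, next_id = n_pad, n_pad
    while count > target:
        members, total = [], 0.0
        for _ in range(d):                     # one d-ary merge
            w, _, group = heapq.heappop(heap)
            total += w
            members += [(s, depth + 1) for s, depth in group]
        count -= d - 1
        heapq.heappush(heap, (total, next_id, members))
        next_id += 1
    lengths = [0] * n
    for _, _, group in heap:                   # the d**l_min remaining super-items
        for s, depth in group:
            if s < n:                          # skip the dummies
                lengths[s] = l_min + depth
    return lengths

if __name__ == "__main__":
    p = [0.35, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05]
    print("d=2, l_min=0:", truncated_huffman_lengths(p, 2, 0))   # plain huffman
    print("d=2, l_min=2:", truncated_huffman_lengths(p, 2, 2))
    print("d=3, l_min=1:", truncated_huffman_lengths(p, 3, 1))
```

running it with l_min = 0 reduces to ordinary huffman coding , which is a convenient sanity check ; with l_min > 0 every returned length is at least l_min , as the procedure guarantees .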
as with the usual huffman algorithm , this can be made linear time given sorted inputs and can be made to find a code with the minimum reverse lexicographical order among optimal codes via the bottom - merge variant .clearly , this algorithm finds the optimal code for the length - bounded problem if the resulting code has no codeword longer than , whether this be because is trivial or because of other specifications of the problem .if this truncated huffman algorithm fails , then we know that , that is , we can not have that for the length - bounded code .this is an intuitive result , but one worth stating and proving , as it is used in the next section : [ hclemma ] if a ( truncated ) huffman code ( ) for has a codeword longer than some , then there exists an optimal length - bounded code for bound ] has a codeword with length , then an optimal code for the bound ] has width .therefore , in the course of the package - merge algorithm , we at one point have packages of width which will eventually comprise optimal nodeset , these packages having weight no larger than the remaining packages of the same width .consider the nodeset formed by making each in into .this nodeset is the solution to the package - merge algorithm for the total width with bounds and .let denote the number of nodes on level .then since at most nodes can have length .the subset of not of depth is thus an optimal solution for bounds and with total width that is , at one point in the algorithm this solution corresponds to the least weighted packages of width . due to the bounds on , this number of packages is less than the number of packages of the same width in the optimal nodeset for bounds and ( with total width ) .thus an optimal nodeset to the shortened problem can contain the ( shifted - by - one ) original nodeset and must have its maximum length achieved for all input symbols for which the original nodeset achieves maximum length. 
thus we can find whether by merely doing pre - algorithmic bottom - merge huffman coding ( which , when , results in reduced computation ) .this is useful in finding a faster algorithm for large and linear .a somewhat different reduction , one analogous to the reduction of , is applicable if .this more specific algorithm has similar space complexity and strictly better time complexity unless .however , we only sketch this approach here roughly compared to our previous explanation of the simpler , more general approach .consider again the code tree representation , that using a -ary tree to represent the code .a codeword is represented by successive splits from the root to a leaf one split for each output symbol so that the length of a codeword is represented by the length of the path to its corresponding leaf .a vertex that is not a leaf is called an internal vertex ; each internal vertex of the tree in fig .[ codetree ] is shown as a black circle .we continue to use dummy variables to ensure that , and thus an optimal tree has ; equivalently , all internal vertices have children .we also continue to assume without loss of generality that the output tree is monotonic .an optimal tree given the constraints of the problem will have no internal vertices at level , internal vertices in the previous levels , and internal vertices with no leaves in the levels above this , if any .the solution to a linear length - bounded problem can be expressed by the number of internal vertices in the unknown levels , that is , by \end{array } \label{alpha}\ ] ] so that we know that if the truncated huffman coding algorithm ( as in the previous section ) fails to find a code with all , then we are assured that there exists an , so that can be assumed to be a sequence of strictly increasing integers .a strictly increasing sequence can be represented by a path on a different type of graph , a directed acyclic graph with vertices numbered to , e.g. , the graph of vertices in fig .the edge of the path begins at and ends at , and each represents the number of internal vertices at and below the corresponding level of the tree according to ( [ alpha ] ) .[ codetree ] shows a code tree with corresponding as a count of internal vertices .the path length is identical to the height of the corresponding tree , and the path weight is for edge weight function , to be determined .larmore and przytycka used such a representation for binary codes ; here we use the generalized representation for -ary codes . , , , ( ) ] in order to make this representation correspond to the above problem , we need a way of making weighted path length correspond to coding penalty and a way of assuring a one - to - one correspondence between valid paths and valid monotonic code trees .first let us define the cumulative probabilities so that there are possible values for , each of which can be accessed in constant time after -time preprocessing .we then use these values to weigh paths such that where we recall that denotes and is necessary for cases in which the numbers of internal vertices are incompatible ; this rules out paths not corresponding to valid trees .thus path length and penalty are equal , that is , this graph weighting has the concave monge property or quadrangle inequality , for all , since this inequality reduces to the already - assumed ( where for ) .[ dag ] shows such a graph .a single - edge path corresponds to while the two - edge path corresponds to . 
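to make the graph reduction concrete , here is a deliberately naive dynamic program for the minimum - weight k - link path problem that the code - tree construction is mapped onto above . it runs in o(k v^2) time and ignores the concave monge structure that the much faster algorithm cited in the text exploits ; the edge - weight function used in the demo is a stand - in placeholder of mine , not the cumulative - probability weighting defined above .

```python
def min_k_link_path(weight, n_vertices, K):
    """naive O(K * V^2) dynamic program for a minimum-weight path with exactly
    K edges from vertex 0 to vertex V-1 in the DAG on vertices 0 < 1 < ... < V-1."""
    V = n_vertices
    INF = float("inf")
    best = [[INF] * V for _ in range(K + 1)]      # best[k][v]: k edges, ending at v
    parent = [[-1] * V for _ in range(K + 1)]
    best[0][0] = 0.0
    for k in range(1, K + 1):
        for v in range(1, V):
            for u in range(v):                    # edges only go "forward"
                cand = best[k - 1][u] + weight(u, v)
                if cand < best[k][v]:
                    best[k][v] = cand
                    parent[k][v] = u
    # recover the path 0 = eta_0 < eta_1 < ... < eta_K = V-1
    path, v = [V - 1], V - 1
    for k in range(K, 0, -1):
        v = parent[k][v]
        path.append(v)
    return best[K][V - 1], path[::-1]

if __name__ == "__main__":
    # placeholder concave edge weight, NOT the paper's weighting
    w = lambda u, v: (v - u) ** 0.5
    cost, path = min_k_link_path(w, n_vertices=9, K=3)
    print("cost:", cost, "path:", path)
```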
in practice, only the latter would be under consideration using the algorithm in question , since the pre - algorithmic huffman coding assured that .thus , if and we wish to find the minimum -link path from to on this weighted graph of vertices . given the concave monge property , an -time -space algorithm for solving this problem is presented in . thus the problem in question can be solved in time and space space if one counts the pre - algorithmic huffman coding and/or necessary reconstruction of the huffman code or codeword lengths an improvement on the package - merge - based approach except for .one might wonder whether the time complexity of the aforementioned algorithms is the minimum achievable .special cases ( e.g. , for , , and ) can be addressed using modifications of the package - merge approach .also , often implies ranges of values , obtainable without coding , for and .this enables one to use values of and that result in a significant improvement , as in for .an important problem that can be solved with the techniques in this paper is that of finding an optimal code given an upper bound on fringe , the difference between minimum and maximum codeword length .one might , for example , wish to find a fringe - limited prefix code in order to have a near - optimal code that can be simply implemented , as in section viii of . such a problem is mentioned in , where it is suggested that if there are codes better than the best code having fringe at most , one can find this -best code with the -time algorithm in , thus solving the fringe - limited problem .however , this presumes we know an upper bound for before running this algorithm .more importantly , if a probability vector is far from uniform , can be very large , since the number of viable code trees is .thus this is a poor approach in general . instead , we can use the aforementioned algorithms for finding the optimal length - bounded code with codeword lengths restricted to $ ] for each , keeping the best of these codes ; this covers all feasible cases of fringe upper bounded by .( here we again assume , without loss of generality , that . )the overall procedure thus has time complexity for the general convex quasiarithmetic case and when applying the algorithm of section [ linpen ] to the most common penalty of expected length ; the latter approach is of lower complexity unless .both algorithms operate with only space complexity .the author wishes to thank zhen zhang for first bringing a related problem to his attention and david morgenthaler for constructive discussions on this topic .m. j. golin and g. rote , `` a dynamic programming algorithm for constructing optimal prefix - free codes for unequal letter costs , '' _ ieee trans .inf . theory _it-44 , no . 5 , pp . 17701781 , sept .1998 . t. c. hu , l. l. larmore , and j. d. morgenthaler , `` optimal integer alphabetic trees in linear time , '' in _ proc .13th annual european symposium on algorithms_.1em plus 0.5em minus 0.4emspringer - verlag , oct .2005 , pp . 226237 . t. c. hu and j. d. morgenthaler , `` optimum alphabetic binary trees , '' in _ combinatorics and computer science _lecture notes in computer science , vol .1120.1em plus 0.5em minus 0.4emspringer - verlag , aug . 1996 , pp . 234243 .a. rnyi , _ a diary on information theory_.1em plus 0.5em minus 0.4emnew york , ny : john wiley & sons inc . , 1987 , original publication : _ napl az informcielmletrl _ , gondolat , budapest , hungary , 1976 . 
| efficient optimal prefix coding has long been accomplished via the huffman algorithm . however , there is still room for improvement and exploration regarding variants of the huffman problem . length - limited huffman coding , useful for many practical applications , is one such variant , for which codes are restricted to the set of codes in which none of the codewords is longer than a given length , . binary length - limited coding can be done in time and space via the widely used package - merge algorithm and with even smaller asymptotic complexity using a lesser - known algorithm . in this paper these algorithms are generalized without increasing complexity in order to introduce a minimum codeword length constraint , to allow for objective functions other than the minimization of expected codeword length , and to be applicable to both binary and nonbinary codes ; nonbinary codes were previously addressed using a slower dynamic programming approach . these extensions have various applications including fast decompression and a modified version of the game `` twenty questions '' and can be used to solve the problem of finding an optimal code with limited fringe , that is , finding the best code among codes with a maximum difference between the longest and shortest codewords . the previously proposed method for solving this problem was nonpolynomial time , whereas solving this using the novel linear - space algorithm requires only time , or even less if is not . |
in the era of big data analytics , cloud computing and internet of things , the growing demand of massive data processing challenges existing network resource allocation approaches .huge volumes of data acquired online with distributed sensors in the presence of operational uncertainties caused by , e.g. , renewable energy sources , call for scalable and adaptive network control schemes. scalability of a desirable approach refers to low complexity and amenability to distributed implementation , while adaptivity implies capability of online adjustment to dynamic environments .allocation of network resources can be traced back to the seminal work of . since then , popular allocation algorithms operating in the dual domain are first - order methods based on dual gradient ascent , either deterministic or stochastic . due to the simplicity in computation and implementation , these approaches have attracted a great deal of recent interest and have been successfully applied to cloud , transportation and power grid networks ; see , e.g. , .however , their major limitation is slow convergence , which results in high network delay . here, the delay can be viewed as workload queuing time in a cloud network , traffic congestion in a transportation network , or energy level of batteries in a power network . to address this issue, recent attempts aim at accelerating first- and second - order optimization algorithms .specifically , momentum - based accelerations over first - order methods were investigated using nesterov , or , heavy - ball iterations .though these approaches work well in static settings , their performance degrades with online scheduling , as evidenced by the increase in accumulated steady - state error . on the other hand ,second - order methods such as decentralized quasi - newton approach and its dynamic variant developed in and , incur high overhead to compute and communicate the decentralized hessian approximations . capturing prices of resources ,lagrange multipliers play a central role in stochastic resource allocation algorithms . given abundant historical data in an online optimization setting , a natural question arises : _ is it possible to learn the optimal prices from past data , so as to improve the quality of online resource allocation strategies ? _the rationale here is that past data contains statistics of network states , and learning from them can aid coping with the stochasticity of future resource allocation .a recent work attempting to address this question is , which considers resource allocation with a _ finite _ number of possible network states and allocation actions .the learning procedure , however , involves constructing a histogram to estimate the underlying distribution of the network states and explicitly solves an empirical dual problem . while constructing a histogram is feasible for a probability distribution with finite support , quantization errors and prohibitively high complexity are inevitable for a continuous distribution with infinite support . in this context, the present paper aims to design a novel online resource allocation algorithm that leverages online learning from historical data for stochastic optimization of the ensuing allocation stage .the resultant approach , which we term `` learn - and - adapt '' stochastic dual gradient ( la - sdg ) method , only doubles computational complexity of the classic stochastic dual gradient ( sdg ) method . 
with this minimal cost, la - sdg mitigates steady - state oscillation , which is common in stochastic first - order acceleration methods , while avoiding computation of the hessian approximations present in the second - order methods .specifically , la - sdg only requires one more past sample to compute an extra stochastic dual gradient , in contrast to constructing costly histograms and solving the resultant large - scale problem .the main contributions of this paper are summarized next .1 . targeting a low - complexity online solution , la - sdg only takes an additional dual gradient step relative to the classic sdg iteration .this additional dual gradient step plays the role of adapting the future resource allocation strategy through learning from historical data .meanwhile , la - sdg is linked with the stochastic heavy - ball method , nicely inheriting its fast convergence in the initial stage , while reducing its steady - state oscillation .2 . we analytically establish that the novel la - sdg approach , parameterized by a positive constant , yields a very attractive cost - delay tradeoff ] of the classic sdg method .numerical tests further corroborate the performance gain of la - sdg over existing resource allocation schemes ._ outline_. the rest of the paper is organized as follows .the stochastic network resource allocation problem is formulated in section ii , along with the standard sdg approach introduced in section iii .the la - sdg method is the subject of section iv .performance analysis of la - sdg is carried out in section v. numerical tests are provided in section vi , followed by concluding remarks in section vii . _notation_. denotes the expectation operator , stands for probability ; stands for vector and matrix transposition , and denotes the -norm of a vector .inequalities for vectors , e.g. , , are defined entry - wise .the positive projection operator is defined as ^+:=\max\{a,0\} ] obeys the recursion ^{+}\!,~\forall t.\ ] ] defining as the aggregate network cost parameterized by the random vector , the local cost per node is , and .the model here is quite general .the duration of time slots can vary from ( micro-)seconds in cloud networks , minutes in road networks , to even hours in power networks ; the nodes can present the distributed front - end mapping nodes and back - end data centers in cloud networks , intersections in traffic networks , or , buses and substations in power networks ; the links can model wireless / wireline channels , traffic lanes , and power transmission lines ; while the resource vector can include the size of data workloads , the number of vehicles , or the amount of energy .vector can represent user requests in data queues , cars waiting at road intersections , and amounts of energy in batteries . concatenating the random parameters into a random state vector ^{\top} ] , ^{\top} ] . 
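the queue recursion and the load - balancing example above can be exercised with a few lines of python . the network sizes , arrival distribution , caps and the deliberately naive even - split policy below are all illustrative assumptions of mine ( the policy is a placeholder , not the optimal allocation studied in this paper ) ; stacking the mapping - node and data - center queues into one vector and the routing / serving amounts into x_t recovers the matrix form q_{t+1} = [ q_t + a x_t + c_t ]^+ .

```python
import numpy as np

rng = np.random.default_rng(1)
j_nodes, k_centers, t_slots = 3, 2, 5        # illustrative sizes only
x_cap, y_cap = 4.0, 6.0                      # routing and service caps (assumed)

q_map = np.zeros(j_nodes)                    # queues at the mapping nodes
q_dc = np.zeros(k_centers)                   # queues at the data centers

for t in range(t_slots):
    c = rng.uniform(0.0, 5.0, size=j_nodes)                  # exogenous arrivals
    # placeholder policy: split each node's backlog evenly over the data centers
    share = np.minimum((q_map + c) / k_centers, x_cap)
    x = np.outer(share, np.ones(k_centers))                  # x[j, k]: node j -> center k
    y = np.minimum(q_dc + x.sum(axis=0), y_cap)              # workload served per center
    # queue recursions: q_{t+1} = [ q_t + inflow - outflow ]^+
    q_map = np.maximum(q_map + c - x.sum(axis=1), 0.0)
    q_dc = np.maximum(q_dc + x.sum(axis=0) - y, 0.0)
    print(f"t={t}: mapping queues {q_map.round(2)}, data-center queues {q_dc.round(2)}")
```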
at each mapping node and data center ,undistributed or unprocessed workloads are buffered in queues obeying ; see fig .[ fig : system ] for a system diagram ., mapping node has an exogenous workload plus that stored in the queue , and schedules workload to data center .data center serves an amount of workload out of the assigned as well as that stored in the queue .the thickness of each edge is proportional to its capacity.,scaledwidth=49.0% ] [ fig : system ] performance is characterized by the aggregate cost of power consumed at the data centers plus the bandwidth costs at the mapping nodes , namely the power cost , parameterized by the random vector , captures the local marginal price and the renewable generation at data center during time period .the bandwidth cost , parameterized by the random vector , characterizes the heterogeneous cost of data transmission due to spatio - temporal differences . with the goal of minimizing the time - average of, geographical load balancing fits the formulation in .in this section , the dynamic problem is reformulated to a tractable form , and classical stochastic dual gradient ( sdg ) approach is revisited , along with a brief discussion of its online performance .recall in section [ subsec.pf ] that the main challenge of solving resides in time - coupling constraints and unknown distribution of the underlying random processes .regarding the first hurdle , combining with , it can be shown that in the long term , workload arrival and departure rates must satisfy the following necessary condition ( * ? ? ? * theorem 2.8 ) \leq \mathbf{0}\ ] ] given that the initial queue length is finite , i.e. , .in other words , on average all buffered delay - tolerant workloads should be served .using , a relaxed version of is ~~ \text{s.t.}~ \eqref{eq.probl}~\text{and}~\eqref{queue - relax}\end{aligned}\ ] ] where is the optimal objective for the relaxed problem .compared to , problem eliminates the time coupling across variables by replacing and with .since is a relaxed version of with the optimal objective , if one solves instead of , it will be prudent to derive an optimality bound on , provided that the sequence of solutions obtained by solving is feasible for the relaxed constraints and . regarding the relaxed problem , using arguments similar to those in ( * ? ? ?* theorem 4.5 ) , it can be shown that if the random state is independent and identically distributed ( i.i.d . ) over time , there exists a _stationary _ control policy , which is a pure ( possibly randomized ) function of the realization of random state ( or the _ observed _ state ) ; i.e. , it satisfies , as well as guarantees that = \tilde{\psi}^{*} ] .as the optimal policy is time invariant , it implies that the _ dynamic _ problem is equivalent to the following time - invariant _ ensemble _ program where , , and ; set is the sample space of , and the constraint holds almost surely .observe that the index in can be dropped , since the expectation is taken over the distribution of random variable , which is time - invariant .leveraging the equivalent form , the remaining task boils down to finding the optimal policy that achieves the minimal objective in and obeys the constraints and ., which can be time - independent ( * ? ? ?* theorem 4.5 ) . ]note that the optimization in is with respect to a stationary policy , which is an infinite dimensional problem in the primal domain .however , there is a finite number of expected constraints [ cf . 
] .thus , the dual problem contains a finite number of variables , hinting to the effect that solving is tractable in the dual domain . with denoting the lagrange multipliers associated with , the lagrangian of is \ ] ] where the instantaneous lagrangian is and constraint remains implicit .notice that the instantaneous objective and the instantaneous constraint are both parameterized by the observed state ^{\top} ] . correspondingly , the lagrange dual function is defined as the minimum of the lagrangian over the all feasible primal variables , given by [ eq.dual-func ] .\end{aligned}\ ] ] due to the linearity of the expectation operator and the separability of the instantaneous lagrangian defined by the realization , we can interchange the minimization and the expectation , and re - write the dual function in the following form \!=\!\mathbb{e}\!\left[\min_{\mathbf{x}_t \in { \cal x}}{\cal l}_t(\mathbf{x}_t,\bm{\lambda})\right]\!.\!\!\end{aligned}\ ] ] likewise , for the instantaneous dual function , the dual problem of is .\end{aligned}\ ] ] in accordance with the ensemble primal problem , we will henceforth refer to as the _ ensemble _ dual problem .if the optimal lagrange multiplier associated with were known , then optimizing and consequently would be equivalent to minimizing the lagrangian or infinitely many instantaneous , over the set ( * ? ? ?3.3.4 ) and .we restate this assertion as follows .[ prop.closedform ] consider the optimization problem in .given a realization , and the optimal lagrange multiplier associated with the constraints , the optimal instantaneous resource allocation decision is where accounts for possibly multiple minimizers of .when the realizations are obtained sequentially , one can generate a sequence of optimal solutions correspondingly for the dynamic problem . to obtain the optimal allocation in however , must be known .this fact motivates our novel `` learn - and - adapt '' stochastic dual gradient ( la - sdg ) method in section [ sec.la-sdg ] . to this end, we will first outline the celebrated stochastic dual gradient iteration ( a.k.a .lyapunov optimization ) . to solve, a standard gradient iteration involves sequentially taking expectations over the distribution of to compute the gradient .this is challenging because the distribution of is usually unknown in practice .even if the joint probability distribution functions were available , finding the expectations is not scalable as the dimensionality of grows .a common remedy to this challenge is stochastic approximation , which corresponds to the following sdg iteration ^{+},\;\forall t\end{aligned}\ ] ] where is a positive ( and typically pre - selected constant ) stepsize .the stochastic gradient is an unbiased estimate of the true gradient ; that is , =\nabla{\cal d}(\bm{\lambda}_t) ] denote the random state at node .it will be shown that the learning and allocation decision per time slot is processed locally per node based on its local state . to this end , rewrite the lagrangian minimization for a general dual variable at time as [ cf . and ] where is the -th entry of vector , and denotes the -th row of the node - incidence matrix .clearly , selects entries of associated with the in- and out - links of node .therefore , the subproblem at node is where is the feasible set of primal variable . 
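since the sdg recursion itself is mangled in the extraction, here is a minimal sketch of the iteration described in the text: the instantaneous lagrangian is minimized at the current multiplier to obtain the allocation x_t, and the multiplier is updated with a constant stepsize along the stochastic dual gradient and projected onto the nonnegative orthant. the per-slot solver passed in is a placeholder for the convex subproblem, not an implementation of it.

```python
import numpy as np

def sdg_step(lam, state, mu, A, solve_instantaneous):
    """one stochastic dual gradient (sdg) step.

    solve_instantaneous(state, lam) is assumed to return
        x_t(lam) = argmin_{x in X} psi_t(x) + lam^T (A x + c_t),
    i.e., the minimizer of the instantaneous lagrangian.
    """
    x = solve_instantaneous(state, lam)
    g = A @ x + state["c"]                      # stochastic (sub)gradient of the dual
    lam_next = np.maximum(lam + mu * g, 0.0)    # [.]^+ projection onto the orthant
    return lam_next, x
```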
in the case of , the feasible set can be written as a cartesian product of sets , so that the projection of to is equivalent to separate projections of onto .note that will be available at node by exchanging information with the neighbors per time .hence , given the effective multipliers ( -th entry of ) from its outgoing neighbors in , node is able to form an allocation decision by solving the convex programs with ; see also .needless to mention , can be locally updated via , that is ^{+}\ ] ] where are the local measurements of arrival ( departure ) workloads from ( to ) its neighbors . likewise , the tentative primal variable can be obtained at each node locally by solving using the current sample again with . by sending to its outgoing neighbors ,node can update the empirical multiplier via ^{+}\ ] ] which , together with the local queue length , also implies that the next can be obtained locally . compared with the classic sdg recursion - , the distributed implementation of la - sdg incurs only a factor of two increase in computational complexity .next , we will further analytically establish that it can improve the delay of sdg by an order of magnitude with the same order of optimality gap .this section presents performance analysis of la - sdg , which will rely on the following four assumptions .[ assp.iid ] the state is bounded and i.i.d . over time .[ assp.primal ] in is -strongly convex , and has -lipschitz continuous gradient . also , is non - decreasing w.r.t .all entries of over a convex set .[ assp.slater ] there exists a stationary policy satisfying for all , and \leq -\bm{\zeta} ] ; see . comparing with the standard tradeoff ] has been derived in under the so - termed local polyhedral assumption .observe though , that the considered setting in is different from the one here . while the network state set and the action set in discrete and countable , la - sdg allows continuous and with possibly infinite elements , and still be amenable to efficient and scalable online operations . ).,scaledwidth=31.0% ] ) .,scaledwidth=31.0% ] ) .,scaledwidth=31.0% ] ) .,scaledwidth=31.0% ]) .,scaledwidth=32.0% ] this section presents numerical tests to confirm the analytical claims and demonstrate the merits of the proposed approach .we consider the geographical load balancing network of section [ subsec.exp ] with data centers , and mapping nodes .performance is tested in terms of the time - averaged instantaneous network cost in , namely where the energy price is uniformly distributed over ] ; and the per - unit bandwidth cost is set to , with bandwidth limits generated from a uniform distribution within ] .the delay - tolerant workloads arrive at each mapping node according to a uniform distribution over ] , which is better than ] with , and accordingly that .+\mathbf{a}^{\top}\bm{\lambda}^*\right)^{\top}(\mathbb{e}[\mathbf{x}_t-\mathbf{x}_t^*])&\geq 0,\;\forall \mathbf{x}_t\in { \cal x}\label{eq.kkt1}\\ ( \bm{\lambda}^*)^{\top}\mathbb{e}[\mathbf{a}\mathbf{x}_t^*+\mathbf{c}_t]&=0\label{eq.kkt2}\\ \mathbb{e}[\mathbf{a}\mathbf{x}_t^*+\mathbf{c}_t]\leq \mathbf{0};\;\bm{\lambda}^*&\geq \mathbf{0}\label{eq.kkt3 } \end{aligned}\ ] ] to establish the claim , let us first assume that there exists entry that the inequality constraint is not active ; i.e. , =-\zeta ] .now we are on track to show that it contradicts with .since >0 ] . 
choose ] and =0 ] .hence , we arrive at ( with denoting -th entry of gradient ) +\mathbf{a}^{\top}\bm{\lambda}^*\right)^{\top}\mathbb{e}[(\mathbf{x}_t-\mathbf{x}_t^*)]\nonumber\\= & -\mathbb{e}\big[\nabla_j\psi_t(\mathbf{x}_t^*)(x_t^j)^*-\sum_{i\in{\cal i } } ( \lambda^i)^*\mathbf{a}_{(i , j)}(x_t^j)^*\big]\nonumber\\ \stackrel{(a)}{=}&\underbrace{-\mathbb{e}[\nabla_j\psi_t(\mathbf{x}_t^*)(x_t^j)^*]}_{<0}-\!\!\sum_{i\in{\cal i}\backslash k } \underbrace{(\lambda^i)^*\mathbf{a}_{(i , j)}\mathbb{e}[(x_t^j)^*]}_{\geq 0}<0\end{aligned}\ ] ] where ( a ) uses ; the first bracket follows from assumption [ assp.primal ] since is monotonically increasing and , thus for >0 ] ; and the second bracket follows that and each column of has at most one and .the proof is then complete since contradicts .* proof of lipschitz continuity : * under assumption [ assp.primal ] , the primal objective is -strongly convex , and the smooth constant of the dual function , or equivalently , the lipschitz constant of gradient directly follows from ( * ? ? ?* lemma ii.2 ) , which equals to , with denoting the maximum eigenvalue of .we omit the derivations of this result , and refer readers to that in .[ lemma.errorbd ] ( * ? ? ?* lemma 2.3 ) consider the dual function in and the feasible set in with only linear constraints .for any satisfying and , we have where the scalar depends on the matrix as well the constants , and introduced in assumption [ assp.primal ] .lemma [ lemma.errorbd ] states a _local _ error bound for the dual function .the error bound is `` local '' since it holds only for close enough to the optimum , i.e. , . following the arguments in however , if the dual iterate is artificially confined to a compact set such that with denoting the radius of ,, one can safely find a large set with radius to project dual iterates during optimization .] then for the case , the ratio , which implies the existence of satisfying for any .lemma [ lemma.errorbd ] is important for establishing linear convergence rate without strong convexity .remarkably , we will show next that this error bound is also critical to characterize the steady - state behavior of our la - sdg scheme .[ lemma.pl ] under assumption [ assp.primal ] , the local error - bound in implies the following pl condition , namely where is the lipschitz constant of the dual gradient and is as in . using the -smoothness of the dual function , we have for any and that choosing , and using proposition [ prop.primal-dual ] such that , we have where inequality ( a ) uses the local error - bound in .* proof of quadratic growth : * the proof follows the main steps of that in . building upon lemma [ lemma.pl ] , we next prove lemma [ lemma.qg ] . define a function of the dual variable as . with the pl condition in , and denoting the set of optimal multipliers for , we have for any that which implies that . which describes the continuous trajectory of starting from along the direction of .by using , it follows that is bounded below ; thus , the differential equation guarantees that we sufficiently reduce the value of function , and will eventually reach . in other words ,there exists a time such that .formally , for , we have since , we have , which implies that there exists a finite time such that . on the other hand ,the path length of trajectory will be longer than the projection distance between and the closest point in denoted as , that is , and thus we have from that where ( b ) follows from . 
choosing such that , we have squaring both sides of , the proof is complete , since is defined as and can be any point outside the set of optimal multipliers .since converges to according to theorem [ emp - dual ] , there exists a finite time such that for , we have . in such case , it follows that , since .therefore , we have ^+-[\tilde{\bm{\theta}}_t/\mu]^+\|^2\\ \stackrel{(a)}{\leq}&\|\mathbf{q}_t+\mathbf{a}\mathbf{x}_t+\mathbf{c}_t-\tilde{\bm{\theta}}_t/\mu\|^2\nonumber\\ \stackrel{(b)}{\leq } & \|\mathbf{q}_t-\tilde{\bm{\theta}}_t/\mu\|^2 + 2(\mathbf{q}_t-\tilde{\bm{\theta}}_t/\mu)^{\top}(\mathbf{a}\mathbf{x}_t+\mathbf{c}_t)+m^2\nonumber \end{aligned}\ ] ] where ( a ) comes from the non - expansive property of the projection operator , and ( b ) is due to the upper bound in assumption [ assp.dualgrad ] .the rhs of can be upper bounded by where ( c ) uses the definitions , and . since is the stochastic subgradient of the concave function at [ cf .] , we have \leq { \cal d}(\bm{\gamma}_t)-{\cal d}(\bm{\lambda}^*).\ ] ] taking expectations on - over the random state conditioned on and using , we arrive at \!\leq\!\|\mathbf{q}_t-\tilde{\bm{\theta}}_t/\mu\|^2\!+\frac{2}{\mu}\left({\cal d}(\bm{\gamma}_t)-{\cal d}(\bm{\lambda}^*)\right)+m^2\ ] ] where we use the fact that ] may happen only when and the maximum value is bounded by in assumption [ assp.dualgrad ] ; and ( h ) uses the bound in by choosing .setting in , there exists a sufficiently small such that .together with and , the latter implies that \right\|\leq \|\mathbf{1}\cdot md_1\mu^2\|={\cal o}(\mu).\ ] ] plugging into , setting in , and using , we arrive at . letting in, it follows from and that \!-\!{\cal d}(\bm{\lambda}^*)\!+{\cal o}(\mu)+\!\frac{\mu m^2}{2}\nonumber\\ & \stackrel{(h)}{\leq } { \cal d}\left(\lim_{t\rightarrow \infty}\frac{1}{t}\!\sum_{t=1}^t\mathbb{e}[\bm{\gamma}_t]\right)\!-\!{\cal d}(\bm{\lambda}^*)\!+{\cal o}(\mu)+\!\frac{\mu m^2}{2}.\end{aligned}\ ] ] where inequality ( h ) uses the concavity of the dual function .defining ] , and ( g ) follows because is bounded .one can follow the derivations in - to show , which is the second term in the rhs of .therefore , we have from that -{\psi}^ { * } \leq \!{\cal o}(\mu)+\frac{\mu m^2}{2}={\cal o}(\mu)\end{aligned}\ ] ] which completes the proof .l. tassiulas and a. ephremides , `` stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks , '' _ ieee trans ._ , vol .37 , no . 12 , pp .19361948 , dec . 1992 .t. chen , x. wang , and g. b. giannakis , `` cooling - aware energy and workload management in data centers via stochastic optimization , '' _ieee j. sel .topics signal process . _ ,10 , no . 2 ,402415 , mar .2016 .j. gregoire , x. qian , e. frazzoli , a. de la fortelle , and t. wongpiromsarn , `` capacity - aware backpressure traffic signal control , '' _ ieee trans .control of network systems _ , vol . 2 , no . 2 ,pp . 164173 , june 2015 .a. beck , a. nedic , a. ozdaglar , and m. teboulle , `` an gradient method for network resource allocation problems , '' _ ieee trans .control of network systems _ , vol . 1 , no . 1 ,6473 , mar .j. liu , a. eryilmaz , n. b. shroff , and e. s. bentley , `` heavy - ball : a new approach to tame delay and convergence in wireless network optimization , '' in _ proc .ieee infocom _, san francisco , ca , apr .2016 .a. g. marques , l. m. lopez - ramos , g. b. giannakis , j. ramos , and a. j. 
caamao , `` optimal cross - layer resource allocation in cellular networks using channel - and queue - state information , '' _ ieee trans . veh ._ , vol .61 , no . 6 , pp . 27892807 , jul. 2012 .n. l. roux , m. schmidt , and f. r. bach , `` a stochastic gradient method with an exponential convergence rate for finite training sets , '' in _ proc .advances in neural information processing systems _ , lake tahoe , nv , dec . 2012 , pp . 26632671. a. defazio , f. bach , and s. lacoste - julien , `` saga : a fast incremental gradient method with support for non - strongly convex composite objectives , '' in _ advances in neural info . process ._ , montral , canada , dec .2014 , pp . 16461654 . | network resource allocation shows revived popularity in the era of data deluge and information explosion . existing stochastic optimization approaches fall short in attaining a desirable cost - delay tradeoff . recognizing the central role of lagrange multipliers in network resource allocation , a novel learn - and - adapt stochastic dual gradient ( la - sdg ) method is developed in this paper to learn the empirical optimal lagrange multiplier from historical data , and adapt to the upcoming resource allocation strategy . remarkably , it only requires one more sample ( gradient ) evaluation than the celebrated stochastic dual gradient ( sdg ) method . la - sdg can be interpreted as a foresighted learning approach with _ an eye on the future _ , or , a modified heavy - ball approach from an optimization viewpoint . it is established - both theoretically and empirically - that la - sdg markedly improves the cost - delay tradeoff over state - of - the - art allocation schemes . first - order method , stochastic approximation , statistical learning , network resource allocation . |
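the la-sdg recursions are too garbled above to reproduce exactly; the sketch below only conveys the idea stated in the text and the abstract — keep the sdg-style queue update, and spend one extra dual gradient evaluation on a stored past sample to learn an empirical multiplier theta. the way theta and the scaled queue are combined into the effective multiplier (here gamma_t = theta_t + mu * q_t) is our assumption, chosen to be consistent with the appendix terms involving q_t and theta_t / mu, and should not be read as the paper's exact formula.

```python
import numpy as np

def la_sdg_step(q, theta, past_state, cur_state, mu, eta, A, solve_instantaneous):
    # effective multiplier combining learned statistics and the instantaneous queue
    # (assumed combination; see lead-in above)
    gamma = theta + mu * q

    # allocation for the current slot using the effective multiplier
    x = solve_instantaneous(cur_state, gamma)
    q_next = np.maximum(q + A @ x + cur_state["c"], 0.0)   # physical queue update

    # extra dual gradient step on a stored past sample ("learning" phase)
    x_past = solve_instantaneous(past_state, theta)
    theta_next = np.maximum(theta + eta * (A @ x_past + past_state["c"]), 0.0)
    return q_next, theta_next, x
```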
the ever-increasing demand for more secure, ecosystem-friendly and broadband underwater communication and exploration necessitates comprehensive studies on underwater wireless optical communication (uwoc). compared to its traditional counterpart, namely acoustic communication, uwoc provides better security, lower time delay and higher bandwidth, and is more compatible with the underwater ecosystem. despite all these advantages, propagation of light under water is affected by three main degrading effects, namely absorption, scattering and turbulence, which limit the viable communication ranges of uwoc systems to typically less than . in recent years, much valuable research has been carried out in the context of uwoc, from channel modeling to system design. in , petzold's experimental data are used to simulate the uwoc channel impulse response through the monte carlo numerical method. while these prior works focused only on the absorption and scattering effects of the channel, many relevant studies have been performed on turbulence characterization. for example, in an accurate power spectrum has been derived for the fluctuations of the turbulent seawater refractive index. the rytov method has been used in to evaluate the scintillation index of optical plane and spherical waves propagating in an underwater turbulent medium. and in , the on-axis scintillation index of a focused gaussian beam is formulated for weak oceanic turbulence, and a log-normal distribution is considered for the intensity fluctuations to evaluate the average bit error rate (ber) in such systems. on the other hand, a cellular topology based on the optical code division multiple access (ocdma) technique has recently been proposed in for underwater wireless optical networks, while the potential challenges and applications of such a network are discussed in . besides, the beneficial applications of multi-hop transmission (serial relaying) on the up- and downlink performance of underwater users in an ocdma network and also in point-to-point uwoc links have been investigated in and , respectively, with respect to all the degrading effects of uwoc channels. furthermore, the authors in have focused on the turbulence-induced fading effects of uwoc channels and proposed multiple-input multiple-output (mimo) transmission to mitigate turbulence effects through the spatial diversity gain of the mimo technique. although the statistical distribution of fading for free-space optical (fso) channels has been thoroughly investigated and characterized, its probability density function (pdf) for uwoc channels is not yet well determined. prior works in the context of uwoc have mainly focused on weak oceanic turbulence and adopted the same statistical distribution as fso channels, i.e., the log-normal distribution, for this regime of underwater turbulence. however, significant differences between the nature of atmospheric and oceanic optical channels may result in a different pdf for the underwater fading statistics. the research in this paper is inspired by the need to more specifically study the intensity fluctuations through uwoc channels. in this paper, we experimentally investigate the effect of various phenomena on the fluctuations of the received optical signal within a laboratory water tank.
since the severity of optical turbulence rapidly increases with the link range, the short link range available with a laboratory water tank may typically result in negligible fluctuations of the received optical signal. therefore, as a first step, we artificially add air bubbles to our water tank to enlarge the channel's temporal fluctuations. then, for each channel condition, we take sufficient samples of the received optical signal, normalize them to the received power mean, and plot the histogram of the normalized samples to obtain the distribution of the intensity fluctuations and the channel coherence time, i.e., the time period in which the channel fading coefficient remains approximately constant. it is worth mentioning that since many underwater vehicles, divers and submarines generate air bubbles, the study of intensity fluctuations due to the presence of air bubbles is of significant importance for precise uwoc channel modeling and system design. the rest of the paper is organized as follows. in section ii the uwoc channel model is presented. the evaluation metrics used and the general statistical distributions for optical turbulence are briefly described in sections iii and iv, respectively. section v provides the system model and experimental setup, section vi presents experimental results for various scenarios, and section vii concludes the paper. propagation of light under water is affected by three major degrading effects, i.e., absorption, scattering and turbulence. in fact, photons of a propagating light wave may collide with water molecules and particles. during this collision, the energy of each photon may be lost thermally; this is specified as the absorption process and is characterized by the absorption coefficient a(\lambda), where \lambda is the wavelength of the propagating light wave. furthermore, in the aforementioned collision process the direction of each photon may be changed; this is defined as the scattering process and is determined by the scattering coefficient b(\lambda). besides, the total energy loss of the non-scattered light is described by the extinction coefficient c(\lambda)=a(\lambda)+b(\lambda). the channel fading-free impulse response (ffir) can be obtained through monte carlo simulations similar to . although this ffir includes both absorption and scattering effects, a comprehensive characterization of the uwoc channel impulse response requires consideration of fading as well. to do so, the channel ffir can be multiplied by a fading coefficient. although the log-normal distribution is mainly used for the pdf of the fading coefficients, inspired by the behaviour of optical turbulence in the atmosphere, an accurate characterization of the statistical distribution of the fading coefficients necessitates more specific studies, which are carried out in this paper. in order to specify the fading strength, it is common in the literature to define the scintillation index of a propagating light wave as \sigma^2_{i}=\frac{\mathbb{e}\left[i^2\left(r,d_0,\lambda\right)\right]-\mathbb{e}^2\left[i\left(r,d_0,\lambda\right)\right]}{\mathbb{e}^2\left[i\left(r,d_0,\lambda\right)\right]}, in which i\left(r,d_0,\lambda\right) is the instantaneous intensity at a point with position vector r, where d_0 is the propagation distance and \mathbb{e}[\cdot] denotes the expected value, which implies that . it can be shown that for log-normal fading the log-amplitude variance \sigma^2_{x} relates to the scintillation index as \sigma^2_{i}=\exp\left(4\sigma^2_{x}\right)-1. _ 2) k distribution: _ this distribution is mainly used for strong atmospheric turbulence in which .
the pdf of k distribution is given by ; where is the gamma function , is a positive parameter which relates to the scintillation index as .and is the order modified bessel function of the second kind ._ 3 ) gamma - gamma distribution : _ such a statistical model , which factorizes the irradiance as the product of two independent random processes each with a gamma pdf , has been introduced in literature to describe both small - scale and large - scale atmospheric fluctuations .the pdf of gamma - gamma distribution is expressed as ; in which and are parameters related to effective atmospheric conditions .the scintillation index for gamma - gamma distribution is given by ._ 4 ) proposed combined exponential and log - normal distribution : _ when the received optical signal has a large dynamic range or equivalently the received signal lies either in large or small levels , general single - lobe distributions can not specify the fading statistical distribution .in such circumstances , a two - lobe statistical distribution is required to interpret the channel intensity fluctuations .therefore , we propose the combination of an exponential and a log - normal distributions with pdf of ; where , determines the proportion between the exponential and the log - normal distributions , is the exponential distribution mean , and and are the constants of the log - normal distribution .fading coefficients normalization implies that =k\gamma+(1-k)\exp\left(\mu+{\sigma^2}/{2}\right)=1 ] , respectively .our setup is implemented in a black colored water tank . in the transmitter side , a nm green laser diode with the maximum output power of mwis driven by a mos transistor to ensure a constant optical irradiance . in the receiver side , an ultraviolate - visible photodetector ( pd )is used ; the pd is covered by a convex outer lens in order to collect the maximum amount of irradiance .additionally , both laser and pd are sealed by placing in transparent boxes .pd s generated current is amplified through ad840 operational amplifier .the magnified signal , which is proportional to the received optical power relates to the received optical power as , where is the pd s quantum efficiency , is the planck s constant and is the light frequency .moreover , the amplified signal is proportional to the pd s current and hence to the received optical signal .therefore , normalized samples of the amplified photo - detected signal are good representatives of the received optical power normalized samples . ] , is sampled and monitored by an hp infiniium oscilloscope . for each test, we have collected samples with the sampling rate of . along with our setup, an alcohol thermometer is attached to the lower side of the tank to ensure that the water temperature remains constant around degrees celsius . finally , to produce air bubbles , a tunable air blower with the maximum blowing capacity of is employed .[ fig : set_up ] shows our experimental setup .[ cols= " < , < , < , < , < , < , < , < , < , < , < , < " , ] in this section , we present our experimental results for the channel coherence time and the fading statistical distribution , under various underwater channel conditions . for each scenariowe evaluate the validity of different distributions using the defined metrics in section iii .the considered channel conditions include free - space link as well as fresh and salty underwater channels with and without bubbles . 
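a small python sketch of the fitting ingredients discussed above: the scintillation index estimated from the normalized intensity samples, the log-amplitude variance implied by a log-normal fit (using the standard relation sigma_i^2 = exp(4 sigma_x^2) - 1), and the proposed two-lobe pdf. the exact functional form of the exponential/log-normal mixture is written here as we read it from the description, so it should be treated as an assumption, while the normalization constraint k*gamma + (1-k)*exp(mu + sigma^2/2) = 1 is quoted from the text.

```python
import numpy as np
from scipy import stats

def scintillation_index(samples):
    # sigma_I^2 = (E[I^2] - E[I]^2) / E[I]^2, estimated from received-power samples
    s = np.asarray(samples, dtype=float)
    return s.var() / s.mean() ** 2

def lognormal_log_amplitude_variance(si):
    # standard relation sigma_I^2 = exp(4 sigma_X^2) - 1, solved for sigma_X^2
    return np.log1p(si) / 4.0

def combined_pdf(h, k, gamma, mu, sigma):
    # proposed two-lobe pdf: k-weighted exponential plus (1-k)-weighted log-normal
    expo = stats.expon(scale=gamma).pdf(h)
    logn = stats.lognorm(s=sigma, scale=np.exp(mu)).pdf(h)
    return k * expo + (1.0 - k) * logn

def satisfies_normalization(k, gamma, mu, sigma, tol=1e-6):
    # E[h] = k*gamma + (1-k)*exp(mu + sigma^2/2) must equal 1 (fading normalization)
    return abs(k * gamma + (1.0 - k) * np.exp(mu + sigma ** 2 / 2.0) - 1.0) < tol
```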
for each scenario , we adjusted the transmitter laser power to ensure that a considerable power reaches the receiver . figs . [ first_fig](a ) and [ first_fig](b ) illustrate the received signal through the free - space and fresh water links , respectively . as it can be seen , for fso and uwoc channels with low link ranges ( like our water tank ) the received signal is approximately constant over a large period of time , i.e. , fading has a negligible effect on the performance of low - range fso and uwoc systems . as a consequence ,the channel impulse response for such scenarios can be thoroughly described by a deterministic ffir . in order to investigate salinity effects on the underwater optical channels , we added salt to our water tank .we observed significant loss on the received signal in comparison to fresh water link .therefore , we increased the transmitter power to observe a notable signal at the receiver , and hence to better investigate the fading fluctuations due to water salinity .[ first_fig](c ) shows the received signal through the salty water .as it is obvious , water salinity does not impose severe fluctuations on the propagating signal .we also used a water pump to flow water within the tank .our extra experiments indicated that as the velocity of moving medium increases , the received signal experiences faster and more severe fluctuations .this is mainly due to the fact that as the water flow increases , particles within the water move more rapidly and this causes more randomness on the transmitting signal path . according to figs .[ first_fig](a)-(c ) , due to the low link range available by the laboratory water tank , the received signal fluctuations are typically negligible . in other words ,optical turbulence manifests its effect at longer link ranges , i.e. , for uwoc and fso channels with typically and link ranges , respectively .therefore , an air compressor is used to produce bubbles and hence to induce a severe fading on the channel. figs . [ first_fig](d ) and [ first_fig](e ) illustrates the received signal through the bubbly fresh water , while fig .[ first_fig](f ) shows the same result for the bubbly and salty underwater link .as it is obvious from these figures , the presence of bubbles within the channel causes the received signal to severely fluctuate .this is mainly due to the random arrangement of bubbles encountering the propagating signal across the channel , that causes propagating photons to randomly scatter in different directions and leave their direct path .table i summarizes the results of test and rmse measure for different statistical distributions and for various channel conditions and is a positive quantity , the evaluation of k distribution is only possible when .moreover , for very small values of the the gamma - gamma distribution parameters , i.e. , and have large values hampering the analytical evaluation of gamma - gamma distribution for such regimes of scintillation index . ] .as it can be seen , for very small values of scintillation index , i.e. , log - normal distribution has an excellent goodness of fit and a negligible rmse .however , both log - normal and gamma - gamma distributions acceptably predict the irradinace fluctuations when , as increases and approaches both of these distributions lose their accuracy. 
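the rmse measure reported in table i can be computed, for instance, by comparing the histogram of the normalized received-power samples with each candidate pdf; the sketch below uses this straightforward reading (the companion goodness-of-fit test named in table i is not recoverable from the garbled text, so it is omitted here).

```python
import numpy as np

def rmse_against_pdf(samples, candidate_pdf, bins=100):
    # histogram of the normalized samples (density=True makes it integrate to one)
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.sqrt(np.mean((hist - candidate_pdf(centers)) ** 2))

# usage: pass any fitted pdf, e.g. the combined_pdf above with its fitted parameters,
# and compare rmse values across candidate distributions for a given channel scenario.
```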
meanwhile , both k and gamma - gamma distributions can aptly fit the experimental data for strong turbulent channels , characterized by , while log - normal distribution fails to describe the intensity fluctuations for such a regime of scintillation index .[ hists ] better illustrates the fitness of different statistical distributions with the acquired data histograms for various channel scenarios .based on the summarized results of table i , neither of the discussed general statistical distributions are capable of predicting the intensity fluctuations when . as the received intensity shapes in figs .[ first_fig](d)-(f ) and the histograms of the acquired data in figs .[ hists](d)-(f ) show the presence of air bubbles causes the received intensity to mainly lie either in large or small values .hence , typical single - lobe distributions can not appropriately fit the experimental data and generally a two - lobe statistical distribution is required .therefore , we proposed a combined exponential and log - normal distribution as discussed in section iv .as table i and fig .[ hists ] illustrate the proposed distribution excellently fits the acquired experimental data for all the regions of scintillation index with an acceptable rmse .[ tcc ] illustrates the temporal covariance coefficient of irradiance for different scenarios .as it can be seen , even in strongest fading condition of bubbly channels the aforementioned coefficient is larger than seconds confirming the flat fading , i.e. , the same fading coefficient over thousands up to millions of consecutive bits .moreover , this figure shows that the channel temporal variation slightly increases with the increase on the scintillation index value , i.e , the higher the value of , the less the value of .in this paper , we experimentally investigated the statistical distribution of intensity fluctuations for underwater wireless optical channels under different channel conditions .our experimental results showed that salt attenuates the received signal while air bubbles mainly introduce severe intensity fluctuations .moreover , we observed that log - normal distribution precisely fits the acquired data pdf for scintillation index values less than , while gamma - gamma and k distributions aptly predict the intensity fluctuations for .meanwhile , neither of these distributions are capable of predicting the received irradiance for .therefore , we proposed a combination of an exponential and a log - normal distributions to perfectly describe the acquired data pdf for such regimes of scintillation index .furthermore , our study on the channel coherence time have shown larger values than seconds for the channel temporal covariance coefficient implying flat fading uwoc channels .c. gabriel , m .- a .khalighi , s. bourennane , p. lon , and v. rigaud , `` monte - carlo - based channel characterization for underwater optical communication systems , '' _ journal of optical communications and networking _ , vol . 5 , no . 1 ,pp . 112 , 2013 .f. akhoundi , j. a. salehi , and a. tashakori , `` cellular underwater wireless optical cdma network : performance analysis and implementation concepts , '' _ ieee trans ._ , vol .63 , no . 3 , pp .882891 , 2015 .f. akhoundi , m. v. jamali , n. banihassan , h. beyranvand , a. minoofar , and j. a. salehi , `` cellular underwater wireless optical cdma network : potentials and challenges , '' _ arxiv preprint arxiv:1602.00377 _ , 2016 . m. v. jamali , f. akhoundi , and j. a. 
salehi , `` performance characterization of relay - assisted wireless optical cdma networks in turbulent underwater channel , '' _ accepted for publication in ieee trans .wireless commun . _ , 2016 .m. v. jamali and j. a. salehi , `` on the ber of multiple - input multiple - output underwater wireless optical communication systems , '' in _ 4th international workshop on optical wireless communications ( iwow)_.1em plus 0.5em minus 0.4emieee , 2015 , pp .2630 .m. al - habash , l. c. andrews , and r. l. phillips , `` mathematical model for the irradiance probability density function of a laser beam propagating through turbulent media , '' _ optical engineering _ , vol .40 , no . 8 , pp . 15541562 , 2001 .m. a. kashani , m. uysal , and m. kavehrad , `` a novel statistical channel model for turbulence - induced fading in free - space optical systems , '' _ j. lightw. technol ._ , vol .33 , no . 11 , pp .23032312 , 2015 . | in this paper , we experimentally investigate the statistical distribution of intensity fluctuations for underwater wireless optical channels under different channel conditions , namely fresh and salty underwater channels with and without air bubbles . to do so , we first measure the received optical signal with a large number of samples . based on the normalized acquired data the channel coherence time and the fluctuations probability density function ( pdf ) are obtained for different channel scenarios . our experimental results show that salt attenuates the received signal while air bubbles mainly introduce severe intensity fluctuations . moreover , we observe that log - normal distribution precisely fits the acquired data pdf for scintillation index ( ) values less than , while gamma - gamma and k distributions aptly predict the intensity fluctuations for . since neither of these distributions are capable of predicting the received irradiance for , we propose a combination of an exponential and a log - normal distributions to perfectly describe the acquired data pdf for such regimes of scintillation index . underwater wireless optical communications , intensity fluctuations , fading statistical distribution , channel coherence time . |
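as a companion to the coherence-time discussion, the temporal covariance coefficient plotted for the different scenarios can be estimated from the sampled intensity as the lag-normalized autocovariance; the 1/e threshold used below to read off a coherence time is our choice for illustration, not a value taken from the paper.

```python
import numpy as np

def temporal_covariance_coefficient(intensity, max_lag):
    # b(tau) = cov(I_t, I_{t+tau}) / var(I_t), estimated from one recorded trace
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()
    var = np.mean(x * x)
    return np.array([np.mean(x[: len(x) - tau] * x[tau:]) / var
                     for tau in range(max_lag + 1)])

def coherence_time(b, sample_period, threshold=np.exp(-1.0)):
    # first lag at which the covariance coefficient drops below the threshold
    below = np.nonzero(b < threshold)[0]
    return below[0] * sample_period if below.size else None
```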
social network analysis is the study of the structure of relationships linking individuals ( or other social units , such as organizations ) and of interdependencies in behavior or attitudes related to configurations of social relations .the observational units in a social network are the relationships between individuals and their attributes . whereas studies in medicine typically involve individuals whose observationscan be thought of as statistically independent , observations made on social networks may be simultaneously dependent on all other observations due to the social ties and pathways linking them .accordingly , different statistical techniques are needed to analyze social network data .the focus of this chapter is _ sociocentric data _ , the case when relational data is available for all pairs of individuals , allowing a fully - fledged review of available methods .two major questions in social network analysis are : 1 ) do behavioral and other mutable traits spread from person - to - person through a process of induction ( also known as _ social influence _ , _ peer effects _ , or _ social contagion _ ) ; 2 ) what exogeneous factors ( e.g. , shared actor traits ) or endogeneous factors ( e.g. , internal configurations of actors such as triads ) are important to the overall structure of relationships among a group of individuals .the first problem has affinity to medical studies in that individuals are the observational units . in medicine ,the health of an individual is paramount and so individual outcomes have historically been used to judge the effectiveness of an intervention .a study of social influence in medicine may involve the same outcome but the treatment or intervention is the same variable evaluated on the peers of the focal individual ( referred to as _ alters _ ) .an important characteristic of studies of social influence is that individuals may partly or fully share treatments and one individual s treatment may depend on the outcome of another .for example , an intervention that encourages person a to exercise in order to lose weight might also influence the weight of a s friends ( b and c ) because they exercise more when around a. hence , a s weight intervention may also affect the weight of b and c. a consequence is that the total effect of a s treatment must also consider it s effect on b and c , the benefit to individuals to whom b and c are connected , and so on . 
such _ interference _ between observations violates the stable - unit treatment value assumption ( sutva ) that one individuals treatment not affect anothers outcome , which presents challenges for identification of causal effects .interference is likely to result in an incongruity between a regression parameter and the causal effect that would be estimated in the absence of interference .the second problem is important in sociology as social networks are thought to reveal the structure of a group , organization , or society as a whole .for example , there has always been great interest in determining whether the triad is an important social unit .if the existence of network ties a - b and a - c makes the presence of network tie b - c more likely then the network exhibits _ transitivity _ , commonly described as `` a friend of a friend is a friend '' .thus , just as an individual may influence or be influenced by multiple others , the relationship status of one dyad ( pair of individuals ) may affect the relationship status of another dyad , even if no individuals are common to multiple dyads .accounting for between dyad dependence is a core component of many social network analyses and has entailed much methodological research .network science is a parallel field to social network analysis in that there is very little overlap between researchers in the respective fields despite the similiarity of the problems . whereas solutions to problems in social networks have tended to be data - oriented in that models and statistical tests are based on the data , those in network science have tended to be phenomenon - oriented with analogies to problems in the physical sciences often providing the backbone for solutions .methods for social network analysis often have causal hypotheses ( e.g. , does one individual have an effect on another , does the presence of a common friend make friendship formation more likely ) motivating them and involving micro - level modeling .in contrast , methods in network science seek models generated from some theoretical basis that reproduce the network at a global or system level and in so - doing reveal features of the data generating process ( e.g. , is the network scale - free , does the degree - distribution follow a power - law ) .one of the goals of this chapter is to address the lack of interaction between the social network and network science fields by providing the first joint review of both . by enlarging the range of methods at the disposal of researchers , advances at the frontier of networks and healthwill hopefully accelerate .the computer age has enabled widespread implementation of methods for social network and network science analysis , particularly statistical models . at the same time , a diverse range of applications of social network analysis have appeared , including in medicine . because many medical and health - related phenomena involve interdependent actors ( e.g. patients , nurses , physicians , and hospitals ) , there is enormous potential for social network analysis to advance health services research .the layout of the remainder of the chapter is as follows .this introductory section concludes with a brief historical account of social networks and network science is given .the major types of networks and methods for representing networks are then discussed ( section 2 ) . in section 3 formal notationis introduced and descriptive measures for networks are reviewed . 
social influence and social selectionare studied in sections 4 and 5 , respectively .our focus switches to methods akin with network science in section 6 , where descriptive methods are discussed .the review of network science methods continues with community detection methods in section 7 and generative models in section 8 .the chapter concludes in section 9 . in the 1930 s ,a field of study involving human interactions and relationships emerged simultaneously from sociology , psychology , and anthropology .moreno is credited for inventing the sociogram , a visual display of social structure .the appeal of the sociogram led to moreno being considered a founder of sociometry , a precursor to the field known as _social networks_. a number of mathematical analyses of network - valued random variables in the form of sociograms followed .other important contributions were to structural balance ; the diffusion of medical innovations ; structural equivalence ; and social influence . refer to for a detailed historical account .early network studies involved small networks with defined boundaries such as students in a classroom , or a few large entities such as countries engaging in international trade . because the typical number of individuals in such studies was small ( e.g. , ) , relationships could be determined for all possible pairs of individuals yielding complete _ sociocentric _ datasets .furthermore , the often enclosed nature of the system ( e.g. , a classroom or commune ) reduced the risk of confounding by external factors ( e.g. , unobserved actors ) . sociological theory developed over time as sociologists provided intuitive reasoning to support various hypotheses involving social networks and society . in the specific area of individual health , at least five principal mediating pathways through which social relationships and thus social networks may influence outcomes have been posited .prominent among these is social support , which has emotional , instrumental , appraisal ( assistance in decision making ) , and informational aspects . beyond social support , networks may also offer access to tangible resources such as financial assistance or transportation .they can also convey social influence by defining norms about such health - related behaviors as smoking or diet , or via social controls promoting ( for example ) adherence to medication regimes .networks are also channels through which certain communicable diseases , notably sexually transmitted ones , spread and certain network structures have been hypothesized to reduce exposure to stressors . a field known as mathematical sociology complemented social theory by attempting to derive results using mathematical rather than intuitive arguments .in particular , statistical and probability methods are used to test for the presence of various structural features in the network .other key areas of mathematics that have been used in network analysis include graph theory and algebraic models .develop tests of dependence within dyads ( pairs of actors ) while ( ) develop tests of triadic dependence .in general , results were descriptive or based on simple models making strong assumptions about the network . with the advent of powerful computers ,mathematical contributions have taken on more importance as so much more can be implemented that in the past .for example , computer simulation has recently been used to test and develop theoretical results . in the mid - late 1990s ,network science emerged as a discipline . 
whereas social networks were the domain of social scientists and a growing number of statisticians ,network scientists typically have backgrounds in physics , computer science or applied mathematics .the use of physical concepts to generate solutions to problems is common as evinced by the large domains of research focusing on the adaptation of ( e.g. ) a particular physical equation to network data .for example , several procedures for partitioning a network into disjoint groups of individuals ( `` communities '' ) rely on the _ modularity _ equation , which was developed in the context of spin - theory to model the interaction of electrons .while much of the initial work focused on the properties of the solution at different values of the parameters there recently been has increased attention to using these methods to provide valuable insight on important practical problems .social networks are comprised of units and the relationships between them .the units are often individuals ( also referred to as _ actors _ ) but can include larger ( e.g. , countries , companies ) and smaller ( e.g. , organisms , genes ) entities . in sociocentric studies ,data is assembled on the ties linking all units or actors within some bounded social collective .for example , the collection of data on the network of all children in a classroom or on all pairs of physician collaborations within a medical practice constitutes a sociocentric study .relationships can be shared or directional , and quantified by binary ( tie exists or not ) , scale ( or valued ) , or multivariate variables . by measuring all relationships , sociocentric dataconstitutes the highest level of information collection and facilitates an extensive range of analyses including accounting for the effects of multiple actors on actor outcomes or the structure of the network itself to be studied . a weaker form of relational data is collected in egocentric studies where individuals ( `` egos '' ) are sampled at random and information is collected on at least a sample of the individuals with direct ties to the egos ( `` alters '' ) . because standard statistical methods such as regression analysis can generally be used to analyze egocentric data , herein egocentric data are not featured .relational data is often binary ( e.g. , friend or non - friend ) .one reason is that other types of relational data ( e.g. , nominal , ordinal , interval - valued ) are often transformed to binary due to the convenience of displaying binary networks .another is the greater range of models available for modeling binary data .many studies involve two distinct types of units , such as patients and physicians , or physicians and hospitals , authors and journal articles or books , etc . in these two - mode networks, the elementary relationships of interest usually refer to affiliations of units in one set with those in the other ; e.g. 
, of patients with the physician(s ) responsible for their care , or of physicians with the hospital(s ) at which they are admitted to practice .two - mode networks are also known as affiliation or __ networks .they can be viewed as a special - case of general sociocentric network data in that the relationship of interest is between heterogeneous types of actors .the advent of high - powered computers has enabled the analysis of large networks , which has benefitted fields such as health services research that regularly encounter large data sets .a challenge facing analyses of large networks is that it may be infeasible for all actors to be exposed to each other actor and thus for a relationship to have formed .therefore , statistical analyses for large networks essentially use relational data representing the joint event of individuals meeting and then forming a tie , not the network of ties that would be observed if all pairs of individuals actually met .accordingly , analyses of large networks may underestimate effect sizes unless information on the likelihood of two individuals meeting is incorporated .let the status of the relationship from to be denoted by , element of the adjacency matrix . in a directed network may differ from while in a non - directed network , implying .a network constructed from friendship nominations is likely to be directed while a network of coworkers is non - directed . in the case of immutable relationships ( e.g. , siblings ) , will only change as actors are added or removed ( e.g. , through birth or death ) , as relationship status is otherwise invariant . in the following ,assume the network is binary unless otherwise stated .matrices and graphs are two common ways of representing the status of a networks at a fixed time . in a matrix representation ,rows and columns correspond to units or actors ; the matrix is square for one - mode and rectangular for two - mode networks. elements of the matrix contain the value of the relationship linking the corresponding units or actors , so that element represents the relationship from actor to actor . with binary ties ( 1 = tie present , 0 = tie absent ) ,the matrix representation is known as an adjacency matrix .irrespective of how the network is valued , the diagonal elements of the matrix representing the network equal 0 as self - ties are not permitted .several network properties can be computed through matrix operations . .note : self - ties are not relevant in studies involving relationships.,width=432 ] in graphical form , units or actors are vertices and non - null relationships are lines .non - directed relationships are known as `` edges '' and directed ones as `` arcs '' ; arrows at the end(s ) of arcs denote their directionality .value - weighted graphs can be constructed by displaying non - null tie values along arcs or edges , or by letting thinner and thicker lines represent line values .such graphical imagery is a hallmark of social network analysis . two - mode ( or bipartite ) networks may be represented in set - theoretic form as hypergraphs consisting of a set of actors of one type , together with a collection of subsets of the actors defined on the basis of a common actor of the second type . this representation highlights the multi - party relationships that may exist among those actors of one type that are linked to a given actor of the other type ; e.g. , the set of all physicians affiliated with a particular clinic or service . 
in matrix form ,element of an affiliation matrix indicates that actor of the first type is linked to actor of the second type .affiliation networks may usefully be represented as bipartite graphs in which nodes are partitioned into two disjoint subsets and all lines link nodes in different sets .an induced one - mode network may be obtained by multiplying an affiliation matrix by its transpose , ; entry of the outer - product gives the number of affiliations shared by a pair of actors of one type ( see figure [ fig : biparproj ] , which emulates a figure in ) .dually , the inner - product yields a one - mode network of shared affiliations among actors of the second type .the diagonals of the outer and inner matrix products give the degree of the actors ( i.e. , the number of ties to actors of the other mode ) . )bipartite adjacency matrix .a one - mode projection of the doctor - patient network is obtained by multiplying the bipartite adjacency matrix by its transpose , , to yield a symmetric one - mode adjacency matrix , whose elements indicate the number of patients the two physicians have in common .the diagonal elements of correspond to the number of patients the given physician `` shares with themselves '' ( i.e. , the number of patients they care for).,width=432 ] in health services applications , an investigator is often interested in a one - mode network that is not directly observed but rather is induced from a two - mode network .such one - mode projection networks are motivated theoretically by a claim that shared actors from the other mode act as surrogates for ties between the actors .for example , physicians with many patients in common might have heightened opportunities for contact through consultations or sharing of information about those patients and thus the number of shared patients is a surrogate for the actual extent of interaction between pairs of physicians .examples of provider ( physician , hospital , health service area ) networks obtained as one - mode projections of bipartite networks in health services research are given in ( ) .an often overlooked feature of bipartite network analysis is the mechanism by which network data is obtained .networks obtained from one - mode projections have different statistical properties from directly - observed one - mode networks .consider a patient - physician bipartite network and suppose a threshold is applied to the physician one - mode projection such that true social ties are assumed to exist or not according to whether one or more patients are shared .then a patient that visits three physicians is seen to induce ties between all three physicians .the same complete set of ties between the three physicians is also induced by three patients that each visit different pairs of the three physicians .however , the projection does not preserve the distinction ( see section [ sec : bipar ] for further comment ) .the number of units or actors ( ) is known as the order of the network .a common network statistic is network density ( ) , defined as the number of ties across the network ( ) divided by the number of possible ties ; for directed networks and for non - directed networks .thus , density equals the mean value of the binary ( 1 , 0 ) ties across the network . the same definition can be used for general relational data , in which case the resulting measure is sometimes referred to as _strength_. 
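the matrix operations described in this and the preceding paragraphs are compact enough to spell out; the sketch below builds a toy directed adjacency matrix, computes density and degrees (assuming the convention that a_ij records the tie from actor i to actor j, so out-degree is the row sum — the garbled text may use the transposed convention), and forms the one-mode projection of a small physician-by-patient affiliation matrix, whose off-diagonal entries count shared patients and whose diagonal gives each physician's own patient count.

```python
import numpy as np

# toy directed network on n = 4 actors; a[i, j] = 1 encodes a tie from i to j
a = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 0, 0]])
n = a.shape[0]
density = a.sum() / (n * (n - 1))   # ties divided by possible ties (directed case)
out_degree = a.sum(axis=1)          # expansiveness (ties sent), under our convention
in_degree = a.sum(axis=0)           # popularity (ties received)

# bipartite physician-by-patient affiliation matrix b and its one-mode projection
b = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]])
p = b @ b.T                          # (i, j): patients shared by physicians i and j
patients_per_physician = np.diag(p)  # diagonal: each physician's own caseload
```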
while results in this chapter are generally presented for binary networks , corresponding measures for weighted networks often exist ( ) .the tendency for relationships to form between people having similar attributes is known as homophily .homophily involves subgroup - specific network density statistics . with high homophily according to some attribute , networks tend toward segregation by that attribute - the extreme case occurs when the network consists of separate components ( i.e. , no ties between actors in different components ) defined by levels of the attribute . in the other direction , one obtains a bipartite network where all ties are between different types of actors ( extreme heterophily ) .the out- and in - degree for an actor are the number of ties from , ( column sum ) , and to , ( row sum ) , actor .these are also referred to as _ expansiveness _ and _ popularity _ , respectively .for example , a positive correlation between out- and in - degree suggests that popular individuals are expansive .the number of ties ( or value of the ties ) in a network is given by , where denotes the mean degree ( or strength ) of an individual , implying the density of the network is given by .this result is not specific to in- or out - degree due to the fact that the total number of inward ties must equal the total number of outward ties , implying mean in - degree equals mean out - degree .the variance of the degree distribution measures the extent to which tie - density ( or connectedness ) varies across the network .often actors having higher degree have prominent roles in the network .a special type of homophily is the phenomenon where individuals form ties with individuals of similar degree , commonly referred to as _assortative mixing_. in directed networks , assortative mixing can be defined with respect to both out - degree and in - degree .the opposite scenario to a network with the same degree for all actors is a -star a network configuration with relationships are incident to the focal actor ( figure [ fig : config ] ) in which there are no ties between the other actors .the length of a path between two actors through the network is defined as the number of ties traversed to get from one actor to the other .the elements of the adjacency matrix multiplied by itself times , denoted , equal the number of paths of length between any two actors with the number of -cycles ( including multiple or repeated loops ) on the diagonal .the shortest path between two actors is referred to as the _ geodesic distance_. certain subnetworks have particular theoretical prominence .the first step - up from the trivial single actor subnetwork , also known as an isolated node , is the network comprising two actors ( a `` dyad '' ) .the presence and magnitude of a tendency toward symmetry or reciprocity in a directed network can be measured by comparing the number of mutual dyads ( ties in both directions ) to the number expected under a null model that does not accommodate reciprocity .if the number of mutual dyads is higher than expected , there is a tendency towards reciprocation .a _ triad _ is formed by a group of three actors .figure [ fig : config ] shows a `` transitive triad '' , so - named as it exhibits the phenomenon that a `` friend of a friend is a friend . 
'' non - parametric tests for the presence of transitivity or other forms of triadic dependence are based on the distribution of the number of closed and non - closed triads conditional on the number of null ( no ties intact ) , directed ( one tie intact ) , and mutual dyads ( both ties intact ) collectively known as the _ dyad census _ ; the degree distribution ; and other lower - order effects ( e.g. , homophily of relevant individual characteristics ) in the observed network . such tests are described in .centrality is the most common metric of an actor s prominence in the network and many distinct measures exist .they are often taken as indicators of an actor s network - based `` structural power . ''such measures are often used as explanatory variables in individual - level regression models .different centrality measures are characterized by the aspects of an actor s position in the network that they reflect .for example , degree - based centrality the degree of an actor in an undirected network and in- or out - degree in a directed network reflects an actor s level of network connectivity or involvement in the network .betweenness centrality computes the frequency with which an actor is found in an intermediary position along the geodesic paths linking pairs of other actors .actors with high betweenness centrality have high capacity to broker or control relationships among other actors . a third major centrality measure , closeness centrality ,is inversely - proportional to the sum of geodesic distances from a given actor to all others .the rationale underlying closeness measures is that actors linked to others via short geodesics have comparatively little need for intermediary units , and hence have relative independence in managing their relationships .closeness measures are defined only for networks in which all actors are mutually related to one another by paths of finite geodesic distance ; i.e. , single component networks .finally , eigenvalue centrality is sensitive to the presence or strength of connections , as well as those of the actors to which an actor is linked .it assumes that connections to central actors indicate greater prominence than do ( similar - strength ) connections to peripheral actors .the key component of the measure is the largest eigenvalue of an adjacency or other matrix representation of the network .network - level centrality indices are network - level statistics that resemble the degree variance whose values grow larger to the extent that a single actor is involved in all relationships ( as in the `` star '' network shown in figure [ fig : config ] ) .the assignment of actors to groups is an important and growing field within social networks .the rationale for grouping actors is that it may reveal salient social distinctions that are not directly observed .the general statistical principle adhered to is that individuals within a group are more alike than individuals in different groups .groups are typically formed on the basis of network ties alone , the rationale being that the similarity of individuals positions in the network is in - part revealed by the pattern of ties involving them .thus , actors in densely connected parts of the network are likely to be grouped together .a related concept to a group is a clique , a maximal subset of actors having density 1.0 ( i.e. 
, ties exist between all pairs of individuals in a binary network ) .the larger the clique the stronger the evidence that the collective individuals are in the same group .grouping algorithms based on maximizing the ratio of within - group to between group ties are unlikely to split large cliques as doing so creates a lot of between group ties .however , a clique need not be its own group . components of a network are defined by the non - existence of any paths between the actors in them . often a network is comprised of one large component and several small components containing few individuals .a more practical way of grouping individuals than by cliques is through -connected components , a maximal subset of actors mutually linked to one another by at least node - independent paths ( i.e. , paths that involve disjoint sets of intermediary actors who also lie within the subgraph ) .such a criterion is related to -coreness , a measure of the extent to which subgraphs with all internal degrees occur in a network .there are several other ways for grouping the actors in a network .model - based methods include mixed - membership stochastic block models and latent - class models in which the group is treated as a categorical individual - level latent variable while non - parametric methods used in network science include _ modularity _ and its variants .these methods are discussed in section [ sec : physcd ] , where the grouping of actors is referred to as _ community detection_. in practice two - mode networks are rarely directly analyzed .if one of the modes instigates ties or is of primary interest , the network involving just those actors is often analyzed as a single - mode network .for example , in a physician - patient referral network , the physicians often instigate ties through patient referrals while patients are chiefly responsible for who they see first . the projection from a two - mode network to a one - mode network links nodes in one mode ( e.g. , physicians )if they share a node of the other mode ( e.g. , patients ) . a weighted network can be formed with the number of shared actors of the other mode ( or function thereof ) as weights . in describing networks obtained from a projection of a two - mode network ,the usual practice is to use unipartite descriptive measures .however , several layers of information are lost , including the number of actors in the other mode underlying a tie and the degree distribution of the actors in the other mode , from treating a one - mode projection as an actual network . even if the two - mode network is completely random , ties in a one - mode projection that arise from a single ( e.g. ) patient with ties to ( e.g. 
) three physicians are not separate events .more generally , a patient who visits -physicians generates a -clique among those physicians and tells us nothing about whether physician sharing of one patient is correlated with physician sharing of another patient the question of primary interest in the study of the diffusion of treatment practices .thus , -cliques for may be excluded from measures of transitivity in two - mode networks .descriptive measures for two - mode networks may be computed that parallel those for one - mode networks .centrality measures based on the bipartite network representation are covered in .review visualization , subgroup detection , and measurement of centrality for two - mode network data .more descriptive measures for two - mode networks have recently been proposed .for example , a two - mode measure of transitivity defined as the ratio of the total number of six cycles ( closed paths of six ties through six nodes ) in the two - mode network divided by the total number of open five - paths through six nodes . in the context of the patient - physician network , _ physician transitivity _ exists if physicians a and b sharing a patient and physicians b and c sharing a patient makes it more likely for physicians a and c to share a patient .it is only if the two pairs of physicians have different patients in common that the physician triad may be transitive and only if the third pair share a different patient from the first two that the event can be attributed to transitivity .the involvement of distinct patients makes the physician - physician ties distinct events and thus informative about clustering of physicians ( and patients ) . in general , the matrix equation in which a bipartite network adjacency matrix is multiplied by its transpose yields a weighted one - mode network ( the elements contain the number of shared actors of the other mode ) .to avoid losing information about the number of actors leading to a tie between primary nodes , weights can be retained or monotonically transformed in the projected network .weighted analogies of descriptive measures of binary networks can be evaluated on the weighted one - mode projection .for example , the calculation of degree is emulated by summing the weights of the edges involving an individual , yielding their _ strength_. degree and strength together distinguish between actors with many weak ties and those with a few strong ties .analogous measures of centrality can also be computed for the weighted one - mode projection . however , whether ties between physicians arise through them all treating the same patient , from each pair of physicians sharing a unique patient , or some in - between scenario can not be determined post - transformation ; thus , the projection transformation expends information .a further strategy is to set weights for the bipartite network prior to forming the projection .for example , in co - authorship networks , the tie connecting an author to a publication might receive a weight of where is the number of authors on paper .( only papers with at least two authors are used to form such networks . 
)the rationale is that the greater the number of authors the lower the expected interaction between any pair ( a similar logic underlies the example weight matrix described in section [ sec : influ ] ) .the sum of the weights across all publications common to two authors is then the basis of their relationship in the author network .if the events defining the bipartite network occur at different times ( e.g. , medical claims data often contain time - stamps for each patient - physician encounter ) a directed one - mode network may be formed . the value of the a - b and b - a ties in the physician - physician network could be the number of patients who visited a before b and b before a , respectively . in the resulting directed network each physician has a flow to and from each other physician .subsequent transformation of the flows to binary values yields dyads with states null , directed , and mutual as in a directed unipartite binary network .because medical claims and surveys are frequent sources of information about one entity s experience ( e.g. , a patient ) with another entity ( e.g. , a health plan or physician ) , bipartite network analysis is an area that promises to have enormous applicability to health services research .hence , new methods for bipartite network analysis are needed .we now consider the use of statistical models in social network analysis .particular emphasis is placed on methods for estimating social influence or peer effects and models for analyzing the network itself , including accounting for social selection through the estimation of effects of homophily .reported claims about peer effects of health outcomes such as bmi , smoking , depression , alcohol use , and happiness have recently tantalized the social sciences . in large part , the discussion and associated controversies have arisen from the statistical methods used to estimate peer effects .let and denote a scalar outcome and a vector of variables , respectively , for individual at time ( includes 1 as its first element to accommodate an intercept ) .in this section , the relationship status of individuals and from the perspective of individual ( denoted ) , is assumed to be time - invariant . for ease of notation no distinction is made between random variables and realizations of them .the vector and the matrices and are the network - wide quantities whose element , row , and element contain the outcome for individual , the vector of covariates for individual , and the relationship between individuals and as perceived by individual , respectively .the representation of an example adjacency matrix , denoted , is depicted in figure [ fig : netrep1 ] .regression models for estimating peer effects are primarily concerned with how the distribution of a dependent variable ( e.g. a behavior , attitude or opinion ) measured on a focal actor is related to one or more explanatory variables . when behaviors , attitudes or opinions are formed in part as the result of interpersonal influence , outcomes for different individuals may be statistically dependent .the outcome for one actor will be related to those for the other actors who influence her or him , leading to a complex correlation structure . in social influence analyses the weight matrix , in figure [ fig : netrep2 ] , apportions the total influence acting on an individual evenly across the individuals with whom they have a network tie .typically : ( 1 ) non - negative weights ; ( 2 ) no self - influence ;
( 3 ) weights give relative influences ( because its row - sums equal 1 , is said to be row - stochastic ) .let denote the influence - weighted average of the outcome across the network after excluding ( i.e. , subtracting ) individual from the set of individuals to be averaged over .similarly , let denote the vector containing the corresponding influence weighted covariates , often referred to as _contextual variables_. the most common choice for is the row - stochastic version of . for illustration , suppose that is binary ( the elements are 1 and 0 ) . then the off - diagonal elements on the row of equal if and otherwise ( figure [ fig : netrep2 ] ) .this framework assumes that an individual s alters are equally influential . in general , influence might only transmit through outgoing ties ( e.g. , those individuals viewed as friends by the focal actor - a scenario consistent with figure [ fig : netrep2 ] ) , or might only transmit through received ties ( e.g. , individuals who view the focal actor as a friend ) , or might act in equal or different magnitude in both directions . [ figure [ fig : netrep2 ] caption ( right panel ) : a directed edge from to means that node ( or individual ) has a relationship to node while element of quantifies the extent that individual is influenced by individual .although the mathematical form of influence depicted here assumes that influence only acts in the direction of the edge , influence may in general act in the absence of a tie ( e.g. , people who consider me as a friend might influence me even if i do not consider them a friend ) . ]
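a minimal sketch of this weighting scheme ( python with numpy ; the adjacency matrix , outcome , and covariate values are hypothetical ) : build the row - stochastic weight matrix from a binary adjacency matrix and form the influence - weighted outcome and contextual variable . actors with no outgoing ties simply receive a row of zero weights in this sketch .

```python
import numpy as np

# hypothetical directed friendship adjacency matrix (row i = ties sent by actor i)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 0]], dtype=float)

np.fill_diagonal(adj, 0.0)                       # no self-influence
row_sums = adj.sum(axis=1, keepdims=True)
W = np.divide(adj, row_sums,
              out=np.zeros_like(adj), where=row_sums > 0)  # row-stochastic weights

y = np.array([24.0, 31.0, 27.0, 29.0])           # hypothetical outcome (e.g., bmi)
x = np.array([0.0, 1.0, 1.0, 0.0])               # hypothetical covariate

peer_avg_outcome = W @ y                          # influence-weighted average outcome
contextual_var = W @ x                            # influence-weighted ("contextual") covariate
print(peer_avg_outcome, contextual_var)
```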
for example , consider the model : where is a scalar parameter quantifying the peer effect ; is a -dimensional vector of parameters of peer effects acting through the covariates in , is a vector of other regression parameters for the within - individual predictors , and is the independent error assumed to have mean 0 and variance .the notation used in equation [ eq : dynam1 ] is adopted throughout this section ; hence , and denote peer effects and within - individual effects , respectively .equation ( [ eq : dynam1 ] ) is known as the `` linear - in - means model '' due to the conduit for peer influence being the trait averaged over the alters of each focal actor .the model has a symmetric appearance in that it contains corresponding peer effects for each of the within - individual predictors .a common alternative model assumes ; in other words , that peer effects only act through the same variable in the alters as the outcome .another set of variants arises in the case when there are multiple types of alters with heterogeneous peer effects .such a situation may be represented in a model by defining distinct influence matrices for each type of peer .let denote the weight matrix formed from the adjacency matrix for the network comprising only alters of type and let for , where is the number of distinct types of alters .then an extension of the linear - in - means model to accommodate heterogeneous peer effects is : in the special case where and , ( [ eq : dynam2 ] ) reduces to ( [ eq : dynam1 ] ) .an alternative to ( [ eq : dynam2 ] ) is to fit separate models for each type of peer , which would yield estimates of the overall ( or marginal ) peer effect for each type of peer as opposed to the independent effect of each type of peer above and beyond that of the other types . [ figure [ fig : netconfound ] caption : , which is an intermediary between and .( because the point made here does not depend on and they are not depicted . ) if ( or ) is conditioned on , the path is unblocked and therefore confounds , whose effect is the peer effect of interest .although the dag looks like a digraph of a network , a dag is a different construction . ] failing to account for all alters may lead to biased results if the alters are interconnected .figure [ fig : netconfound ] presents a simple directed acyclic graph ( dag ) , which is a device for determining whether or not an effect is identifiable , involving three individuals , and .the nodes represent the variables of interest ( a trait measured on each individual such as their bmi ) and the arrows represent causal effects ( the origin of the arrow is the cause and the tip is the effect ) .consider the peer effect of individual at on individual at .a causal effect is identifiable if it is the only unblocked path between two variables .because individual is a cause of both individual and individual , the peer effect of on will be confounded by individual unless the analysis conditions on .the scenario depicted in figure [ fig : netconfound ] does not present any major difficulties as long as effects involving individual are accounted for .however , if individual is not known about or is ignored , then the analysis may be exposed to unmeasured confounding .this point has particular relevance to social network analyses as networks are often defined by specifying boundaries or rules for including individuals as opposed to being finite , closed systems . in situations where such boundaries break true ties , influential peers may be excluded , potentially leading to biased results .
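the sketch below ( python with numpy , simulated data ; the network , coefficients , and sample size are made up for illustration and the peer effect enters only through the lagged outcome ) shows how a lagged linear - in - means specification of this kind can be fit by ordinary least squares .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# hypothetical network: each actor nominates 3 alters at random
adj = np.zeros((n, n))
for i in range(n):
    alters = rng.choice(np.delete(np.arange(n), i), size=3, replace=False)
    adj[i, alters] = 1.0
W = adj / adj.sum(axis=1, keepdims=True)          # row-stochastic influence weights

x = rng.normal(size=n)                            # own covariate
y_prev = 1.0 + 0.5 * x + rng.normal(size=n)       # outcome at t-1
y_now = 1.0 + 0.5 * x + 0.3 * (W @ y_prev) + rng.normal(size=n)  # true lagged peer effect 0.3

# regressors: intercept, own covariate, lagged peer-average outcome,
# and lagged contextual covariate (peer average of x)
design = np.column_stack([np.ones(n), x, W @ y_prev, W @ x])
coef, *_ = np.linalg.lstsq(design, y_now, rcond=None)
print(coef)   # the third entry estimates the lagged peer effect
```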
from a practical standpoint , it may be infeasible to use a model with only lagged predictors such as ( [ eq : dynam1 ] ) .for instance , the time points might be so far apart that statistical power is severely compromised .therefore , it is tempting to use a model with contemporaneous predictors such as : where adjusting for seeks to isolate the peer effect acting since .however , because is correlated with the outcome variables of other observations , ols will be inconsistent .therefore , methods are needed to account for the endogeneity arising from the correlation between and for ; in network science parlance the state of is said to be an internal product or consequence of the system as opposed to an external ( exogenous ) force . in , the most widely cited of the christakis - fowler peer effect papers , the endogeneity problem is resolved using a novel theoretical argument .they purported that it is reasonable to assume in a friendship network that the influence acting on the focal actor ( the ego ) is greatest for mutual friendships , followed by ego - nominated friendships , followed by alter - nominated friendships , and finally is close to 0 in dyads with no friendships .furthermore , they reasoned that because unmeasured common causes should affect each dyad equally . because the estimated peer effects declined from large and positive for mutual friendships to close to 0 for alter and null friendships , consistent with their theory , it was suggested that this constituted strong evidence of a peer effect . despite the compelling argument , revealed that unobserved factors affecting tie - formation ( homophily ) may confound the relationship and thus lead to biased effects .the estimation of peer effects is a topic of ongoing vigorous debate in the academic and the popular press .alternative approaches to the theory - based approach of christakis and fowler are now described .a parametric model - based solution to endogenous feedback is to specify a joint distribution for .then the reduced form of the model satisfies for to yield .the resulting model emulates a spatial autocorrelation model .one way of facilitating estimation is by specifying a probability distribution for .however , relying on the correctness of the assumed distribution for identification may make the estimation procedure sensitive to an erroneous assumed distribution .a semi - parametric solution is to find an instrumental variable ( iv ) , ; a variable that is related to but conditional on and does not cause . if is excluded from ( [ eq : dynamcf ] ) , its elements can potentially be used as ivs .however , iv methods can be problematic if the instrument is weak or if the assumption that the iv does not directly impact ( the exclusion restriction ) is violated , an untestable assumption .thus , in fitting a model with contemporaneous peer effects , one faces a choice between assuming a multivariate distribution holds , relying on the non - existence of unmeasured confounding variables , or relying on the validity of an iv .none of these assumptions can be evaluated unconditionally on the observed data .while joint modeling and iv methods provide theoretical solutions to the estimation of contemporaneous peer effects , the notion of causality is philosophically challenged when the cause is not known to occur prior to effect . therefore , longitudinal data provide an important basis for the identification of causal effects , in particular in negating concerns of reverse causality .
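as a sketch of the iv route mentioned above ( python with numpy , simulated data ; the instrument , the exclusion restriction , and all parameter values are assumptions made for illustration , not part of the original analysis ) , the code below generates outcomes from the reduced form of a contemporaneous peer - effect model and recovers the peer effect with a hand - rolled two - stage least squares , using the peer average of an alter attribute as the instrument .

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# hypothetical network and row-stochastic weight matrix
adj = np.zeros((n, n))
for i in range(n):
    alters = rng.choice(np.delete(np.arange(n), i), size=3, replace=False)
    adj[i, alters] = 1.0
W = adj / adj.sum(axis=1, keepdims=True)

rho = 0.4                                    # true contemporaneous peer effect (assumed)
x = rng.normal(size=n)                       # own covariate
z = rng.normal(size=n)                       # attribute used to build the instrument
eps = rng.normal(size=n)
# reduced form: y = (I - rho*W)^{-1} (b0 + b1*x + b2*z + eps)
y = np.linalg.solve(np.eye(n) - rho * W, 1.0 + 0.5 * x + 0.7 * z + eps)

exog = np.column_stack([np.ones(n), x, z])   # exogenous regressors (own z included)
endog = W @ y                                # endogenous contemporaneous peer average
instrument = W @ z                           # peers' z: excluded from the outcome equation

# first stage: project the endogenous regressor on exogenous regressors + instrument
Z = np.column_stack([exog, instrument])
endog_hat = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]
# second stage: replace the endogenous regressor by its fitted values
X2 = np.column_stack([exog, endog_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]
print(beta[-1])   # estimate of rho; plain ols on W @ y would be inconsistent
```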
if the observation times are far apart the use of lagged alter predictors may , however , substantially reduce the power of an analysis .if the dyads consist of mutually exclusive or isolated pairs of actors there are no inter - dyad ties and influence only acts within dyads .an example of such a situation occurs when individuals can have exactly one relationship and the relationship is reciprocated , as is the case with spousal dyads .the network influence models of section [ sec : influ ] reduce to _ dyadic influence models _ in which the predictors are based on individual alters . for example , the dyadic influence model analogous to ( [ eq : dynamcf ] ) is obtained by replacing the subscript with .that is , the model in ( [ eq : dyad1 ] ) may be estimated using generalized estimating equations ( gee ) , avoiding specifying a distribution for . however , if any relationships are bidirectional , standard software packages will yield inconsistent estimates of the peer effects as they do not account for the statistical dependence introduced by individuals who play the dual role of ego and alter at time .there has recently been a lot of interest and discussion concerning causal peer effects .issues that have been discussed include the use of ordinary least squares ( ols ) for the estimation of contemporaneous peer effects and the identification of peer effects independent of homophily .the discussion has helped elevate social network methodology to the forefront of many disciplines .for example , show that ols still provides a valid test of the null hypothesis that the peer effect is zero when the true peer effect is zero .therefore , ols can be used to test for peer effects despite the fact that ols estimates are inconsistent under the alternative hypothesis .use tie directionality to account for unmeasured confounding variables under the assumption that their effect on relationship status is the same for all types of relationships .the rationale is that the estimated peer effect in dyads where the relationship is not expected to be conducive to peer influence ( `` control relationships '' ) provides a baseline against which to identify the peer effect for other types of relationships .however , this test fails to offer complete protection against unmeasured homophily , reflecting the vulnerability of observational data to unmeasured sources of bias .however , sensitivity analyses that evaluate the effect - size needed to overturn the results may be conducted to help support a conclusion by illustrating that the confounding effect must be implausibly large to reverse the finding .instrumental variable ( iv ) methods have also been used to estimate peer effects .a common source of instruments is alters attributes other than the one for which the peer effect is estimated .potential ivs must predict the attribute of interest in the alter but must not be a cause of the same attribute in other individuals .attributes that are invisible such as an individual s genes appear to be ideal candidates .
for instance , an individual with two risk alleles of an obesity gene is at more risk of increased bmi but conditional on that individual s bmi their obesity genes should not affect the bmi of other individuals .however , if the obesity genes are revealed through another behavior ( a phenomenon known as _ pleiotropy _ ) that is associated with bmi then , unless such factors are conditioned on , genes will not be valid ivs .sociocentric network studies assemble data on the ties representing the relationship linking a set of individuals , such as all physicians within a medical practice .models for such data posit that global network properties are the result of phenomena involving subgroups of ( most commonly ) four or fewer actors .examples of such regularities are actor - level tendencies to produce or attract ties ( homophily and heterophily ) , dyadic tendencies toward reciprocity , and triadic tendencies toward closure or transitivity .a relational model , in essence , specifies a set of micro - level rules governing the local structure of a network . in this section , models for cross - sectional relational dataare consider first followed by longitudinal counterparts of them .the simplest models for sociocentric data assume dyadic independence . under the random model ,all ties have equal probability of occurring and the status of one has no impact on the status of another .more general dyadic models were developed in and later were extended in . because independence is still assumed between dyads , the information from the data about the model parameters accumulates in the form of a product of the probability densities for the status of the dyadin observation over each dyad : where and are vectors of actor - specific parameters representing the actors expansiveness ( propensity to send ties ) and popularity ( propensity to receive ties ) , respectively , and is a vector of covariates relevant to ( this may include covariates specific to either actor and combined traits of both actors ) .it is important to realize that covariates can be directional ; thus , need not equal .although the model may include other parameters , and play an important role in network analysis due to their relationship to the degree distribution of the network and so are explicitly denoted .when relationship status is binary , the distribution of is a four - component multinomial distribution .the probabilities are typically represented in the form of a generalized logistic regression model ( an extension of the logistic regression model to categories ) having the form where and , and are functions of and .the term includes factors associated with the likelihood that but not necessarily the likelihood that . in an non - directed networkthe predictors can be directional and so it is likely that .however , the only covariates included in must be non - directional as they affect the likelihood of ; the sign of indicates whether a mutual tie is more ( if ) or less ( if ) likely to occur than predicted by the density terms and so is a measure of _ reciprocity _ or _mutuality_. null mutuality is implied by . 
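the dyad census ( counts of mutual , asymmetric , and null dyads ) underlies many of these reciprocity comparisons ; a minimal descriptive sketch ( python , hypothetical directed adjacency matrix ) is given below , together with a crude expected count of mutual dyads under a density - only null model .

```python
import numpy as np

# hypothetical directed binary adjacency matrix (row = sender)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 0]])

n = adj.shape[0]
mutual = asymmetric = null = 0
for i in range(n):
    for j in range(i + 1, n):
        ties = adj[i, j] + adj[j, i]
        if ties == 2:
            mutual += 1
        elif ties == 1:
            asymmetric += 1
        else:
            null += 1
print(mutual, asymmetric, null)          # the dyad census (mutual, asymmetric, null)

# crude benchmark: expected mutual dyads if the two ties of a dyad were
# independent, each present with probability equal to the observed density
density = adj.sum() / (n * (n - 1))
print(density ** 2 * n * (n - 1) / 2)    # compare with the observed mutual count
```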
in dyadic models , the terms , and account for the local network about actors and through the inclusion of .furthermore , other effects can be homogeneous across actors or actor - specific .for example , the p model assumes and , implying the covariate - free joint probability density function of the network given by where , , , and .thus , the p model depends on network statistics and associated parameters .if the p model holds within ( ego , alter)-shared values of categorical attributes , a stochastic block model is obtained by allowing block - specific modifications to the density and reciprocity of ties .an extension would allow reciprocity to also vary between blocks .because the stochastic blockmodel extension of the p model is saturated at the actor - level due to the expansiveness and popularity fixed effects , no assumption is made about differences in the degree - distributions of the actors in different blocks .stochastic block models are the basis of mixed - membership and other recent statistical approaches for node - partitioning social network data .individuals in the same block of a stochastic block model are often referred to as being structurally equivalent .a criticism of dyadic independence models is that they fail to account for interdependencies between dyads .the or exponential random graph model ( ergm ) generalizes dyadic independence models .an ergm has the form where denotes a possible state of the network , denotes a network statistic evaluated over ( e.g. , the number of ties , the number of reciprocated ties ) , and is the set of all possible realizations of a directed network . in general , the scale factor that sums over each distinct network does not factor into a product of analogous terms . as a result , it is computationally infeasible to exactly evaluate the likelihood function of dyadic dependent ergms for even moderately - sized ( e.g. , is problematic ) .the key feature of the p model that allows the probability of the network to decompose into the product of dyadic - state probabilities is that it only depends on network statistics that sum individual ties or pairs of ties from the same dyad .if dyads are independent unless they share an actor , the network is a _markov random graph _markov random graphs may include terms for density , reciprocity , transitivity and other triadic structures , and -stars ( equivalent to the degree distribution ) these terms contain sums of the products of no more than three ties .such terms may be multiplied with actor attribute variables to define interaction effects .( an interaction is the effect of the product of two or more variables ; e.g. , if males and females have different tendencies to reciprocate ties then gender is said to interact with reciprocity . )networks that extend markov random graphs by allowing four - cycles but no fifth- or higher - order terms are _ partially conditionally dependent_. in such networks , a sufficient condition for dependence of and is that or .thus , two edges may be dependent despite not having any actors in common .partial conditional dependence is the basis of the new parameterizations of network statistics developed by that have led to better fitting ergms ( see below ) . 
under ergms , the conditional likelihood of each tie given the other ties in the network has the logistic form : where is with excluded , is the vector of changes in network statistics that occur if is 1 rather than 0 .thus , the parameters of an ergm are interpreted as the change in the log of the odds that the tie is present rather than not present conditional on the status of the rest of the network .a large positive parameter suggests that configurations of the type represented in the network statistic appear in the observed network more often than expected by chance , all else equal . due to the factorization of the likelihood function in ( [ eq : likefn ] ) , likelihood - based estimators of dyadic independence models have desirable statistical properties such as consistency and statistical efficiency .however , if the model for the network includes predictors based on three or more actors , no such factorization occurs and markov chain monte carlo ( mcmc ) is required to optimize the likelihood function for ( [ eq : ergm ] ) , which for each observation involves making computations on ( if directed and if non - directed ) distinct networks .ergms have been demonstrated to be estimable on networks with , but computational feasibility depends on the terms in the model and the amount of memory available .the ergm ( `` exponential random graph model '' ) package that is part of the statnet suite in r , developed by the statnet project , estimates ergms .other estimation difficulties include failure of the optimization algorithm to converge and the fitted model producing nonsensical `` degenerate '' predicted networks . _ degeneracy _ arises because for certain specifications the network statistics are highly collinear or there is unaccounted effect heterogeneity across the network . as a result , under the fitted model the local neighborhood of networks around the observed network may have probability close to 0 and those networks with positive probability ( often the empty and complete graphs ) may be radically different from each other and thus from the observed network .although the average network over repeated draws has similar network statistics to the observed network , the individual networks generated under the fitted model do not bear any resemblance to the observed network . because an actor of degree contributes -stars for , -star configurations are nested within one another and thus are highly correlated .therefore , when multiple -stars are predictors , extensive collinearity results .however , the estimated coefficients of successive -star configurations ( e.g.
, 2-star , 3-star , 4-star ) tend to decrease in magnitude and have alternating signs , an observation often seen when multiple highly colinear variables are included in a regression model .this observation led to the development of the _ alternating -star _ , given by where denotes the number of -stars , being used in place of multiple individual -star terms in ( [ eq : ergm ] ) .a positive estimate of the coefficient of as( ) suggests that the degree distribution is skewed towards higher degree nodes while a negative coefficient implies large degrees are unlikely .the value of can be specified or estimated from the data .network statistics for triadic configurations the triangle ( a non - directed closed triad ) in non - directed networks and transitive triads , three - cycles , closed three - out stars , closed three - in stars in directed networks are the most prone to degeneracy .one reason is that heterogeneity in the prevalence of triads across the network , leads to heterogeneity in the density of ties across the network .a model that assumes homogeneous triadic effects across the network is unable to describe networks with regions of high and low density ; the generated networks are either dominated by excessive low density regions or by excessive high density regions .this observation suggests a hierarchical modeling strategy where the first step is to use a community detection algorithm ( see section [ sec : physcd ] ) to partition the network into blocks of nodes .then fit an ergm ( or other model ) to the sub - network corresponding to each community , allowing the network statistics to have different effects within each community . the just - described modeling strategy combines methods of network science and social network analysis .a similar approach has been used to overcome severe computational difficulties that often occur when one or multiple triadic ( triangle - type ) terms are included in the model .a -triangle is a set of triangles resting on a common base .for example , if individuals , , and are one closed triad and individuals , , and are another then the four individuals form a 2-triangle with the edge common to both . let denote the number of -triangles in the network .thus , denotes the total number of closed triads , the total number of 2-triangles , and so on .the _ alternating -triangle statistic _ was developed to perform for triadic structures what as( ) performs for -stars . the presence of makes at( ) nonlinear in the triangle count , giving lower probability to highly clustered structures . 
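for a concrete feel of the alternating -star statistic itself , the sketch below ( python ; the degree sequence is hypothetical and one common sign and weight convention is assumed ) computes as( ) directly from a degree sequence via the -star counts .

```python
from math import comb

def alternating_k_star(degrees, lam=2.0):
    """as(lambda) = sum_{k >= 2} (-1)**k * s_k / lam**(k - 2),
    where s_k = sum_i comb(d_i, k) is the number of k-stars in the network."""
    total = 0.0
    for k in range(2, max(degrees) + 1):
        s_k = sum(comb(d, k) for d in degrees)
        total += ((-1) ** k) * s_k / lam ** (k - 2)
    return total

# hypothetical degree sequence of a small non-directed network
degrees = [1, 2, 2, 3, 4, 4, 6]
print(alternating_k_star(degrees, lam=2.0))
```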
by making the number of actors who share partners the core term , at( )can be re - written as a geometrically weighted edgewise shared partner ( gwesp ) statistic .the as( ) and at( ) statistics do not differentiate between outward and inward ties .recently , directed forms of these statistics have been introduced .the directed versions of the -star are threefold , corresponding to two - paths , shared destination node ( activity ) , shared originator node ( popularity ) .the directed versions of the -triangle represent transitivity , activity closure , popularity closure , and cyclic - closure .an alternative approach to modeling a one - mode projection ( by construction a non - directed network ) from a two - mode network is to directly model the two - mode network .an advantage of direct modeling is that all the information in the data is used .ergms or any other model applied to bipartite data need to account for the fact that ties can only form in dyads including one actor from each mode . in a dyadic independence modelthis is recognized simply by excluding all same mode dyads from the dataset . in general , the denominator in ( [ eq : ergm ] ) only sums over networks in which there are no within mode ties . if the number of actors in the two modes are and , there are distinct non - directed networks .the density and degree distributions may be represented in a bipartite ergm as in a unipartite ergm .however , with two modes it may be that two types of each network statistic and other predictor is needed .representations of homophily in two - mode networks are defined across modes . likewise , because there are no within mode ties , statistics that account for closure must also depend only on inter - mode ties .the smallest closed structure in a bipartite graph is a four - cycle ( closed four - path ) .an example of a four - cycle is the path a1c2a in figure [ fig : biparproj ] ; it includes four distinct actors and four edges are traversed to return to the initial actor .a simple measure of closure contrasts the number of closed four - cycles out of all three paths containing four unique actors with the overall density of ties . 
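before turning to such a model , a minimal sketch ( python with numpy ; the affiliation matrix is hypothetical ) of counting closed four - cycles in a bipartite graph from the shared - patient counts : every pair of physicians sharing patients closes `` shared choose 2 '' four - cycles .

```python
import numpy as np
from math import comb

# hypothetical physician-by-patient affiliation matrix
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])

shared = A @ A.T                 # shared patients for each pair of physicians
n_phys = A.shape[0]

four_cycles = 0
for i in range(n_phys):
    for j in range(i + 1, n_phys):
        # a pair of physicians with s shared patients closes comb(s, 2) four-cycles
        four_cycles += comb(int(shared[i, j]), 2)
print(four_cycles)
```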
a simple model for testing whether clustering ( closure ) is present in a bipartite network includes density , both sets of -stars , three - path , and four - cycle statistics as predictors .a significant positive effect of the four - cycle statistic suggests that two actors of degree two in one mode that have one of the actors in the other mode in common are more likely to also have the second actor in common , relative to two randomly selected actors of degree two from the same mode .for example , in a physician - patient network , clustering implies having one patient in common increases the likelihood of having another patient in common .physicians a and c both have patients 1 and 2 in common , hence they provide evidence for bipartite closure .however , physicians e and f have patient 3 in common ; despite being eligible to exhibit bipartite closure they do not , hence they provide evidence against bipartite closure .analogies of ergms and solutions to problematic issues exist for bipartite networks .for example , to avoid problems of high colinearity between the -star terms , alternating -star statistics can be used in place of them .let denote the number of ties from one mode to the other , and denote the alternating -star statistics for each mode , denotes the number of three - paths , and denote the number of closed four - cycles for a network .the resulting bipartite ergm for has the form : where sums over the possible bipartite graphs .the statistic is the proportion of times that two patients each visit the same two physicians out of all the occurrences where two patients both have one visit to one physician and one patient visits the other physician .the coefficient is the effect associated with this lowest - order form of closure in a two - mode sense ( but should not be thought of as reciprocity because the network is non - directed ) .the development of relational models has primarily focused on cross - sectional data .however , extensions of ergms to longitudinal scenarios have been developed most often involving a markov assumption to describe dependence across time .the first longitudinal ergms treated tie - formation and tie - dissolution as equitable events in the evolution of the network .a more general formulation treats tie - formation ( attractiveness in the context of network science ) and tie - duration ( the complement of tie - duration referred to as fitness in network science ) as separable processes , thereby allowing the same network statistic to impact tie - formation and tie - dissolution differently .like ergms for cross - sectional data , longitudinal ergms are defined by statistics that count the number of occurrences of substructures in the network . however , in addition to the current state of the network , such statistics may also depend on previous states . under markovian dependence , network statistics only depend on the current and the most recent state ; for example , the number of ties that remain intact from the preceding observation .the recently released tergm ( `` temporal exponential random graph model '' ) package in the statnet suite in r estimates ergms for discrete temporal ( i.e. 
, longitudinal ) sociocentric data .an alternative approach for modeling network evolution is the actor - oriented model .this centers on an objective function that actors seek to maximize and which may be sensitive to multiple network properties , including reciprocity , closure , homophily , or contact with high - degree actors .the model assumes that actors control their outgoing ties and change them in order to increase their satisfaction with the network in one or more respects as quantified by the objective function .it resembles a rational choice model in which each agent attempts to maximize their own utility function .estimated parameters indicate whether changes in a given property raise or lower actor satisfaction .an important distinction of actor - oriented models from ergms is that the relevant network statistics in the actor - oriented model are specific to individuals rather than being aggregations across the network .however , like ergms , estimation is computationally intensive .the siena package in stocnet uses a stochastic approximation algorithm but struggles with networks of appreciable size ( e.g. , thousands of individuals ) .because they only resemble ergms in the limiting steady - state case , actor - oriented models may also suffer from degeneracy but the problem is less profound .a virtue of the actor - oriented modeling framework in siena is that an actor s relationships can be modeled jointly with the social - influence effects of an actor s peers on their own traits . if the model is correctly specified , it has the potential to account for unmeasured confounding factors that affect both the evolution of relationship status and the values of individuals attributes , yielding unbiased estimates of the effects of observed variables affecting social influence and the evolution of the network .such a model was developed by steglich and colleagues but to date work in this area is limited . in ergms a huge increase in computational complexity occurs between the dyadic independent and dyadic dependent models .a second concern about ergms is that in general they are not consistent under sampling in the sense that statistical inferences drawn from the network for the sample do not generalize to the full network .the few ergms to exhibit such consistency include the dyadic independent p and stochastic block models .an alternative modeling strategy provides a more graduated transition between independence and dependence scenarios by using random effects to model dyadic dependence and also ensures consistency between the results of analyzing the sample and the population of interest .random effects are used to account for dyadic dependence in the p model introduced below .
the p model is much like the p model except that the expansiveness and popularity parameters are random as opposed to fixed effects .typically , is assumed to be bivariate normal with covariance matrix .therefore , the p model is given by and .thus , and includes a subset of covariates that are symmetric ( ) in reflection of the fact that reciprocity is a symmetric phenomenon .conditional on the model implies that the relationship status of one dyad does not depend on that of another .a positive off - diagonal element of implies that expansive individuals also tend to be popular .the p model can be extended to account for more general forms of dyadic dependence than the latent propensity of an individual to send or receive ties .let each individual have a vector of latent variables , denoted in the case of individual , that together with the same for individual affects the value of the relationship between and .the dependence of tie - status on is generally represented using a simple mathematical function .the major types of models are latent class models , latent distance models , and latent eigenmodels .these models are characterized by the form of the latent variable which is included as an additional predictor in . in ( [ eq : latvble ] ) the form and interpretation of changes from denoting a scalar categorical latent variable in the latent class model ( first row ) , to a position in a continuously - valued multi - dimensional space in the latent distance and latent eigenmodels ( second and third rows , respectively ) .the term can be added to either the or components of the p model to allow higher - order dependence to moderate the effect of density and reciprocity , respectively . in the latent class specificationthe array of values of form a symmetric matrix .a basic specification is if ( nodes in same partition ) and if ( nodes in different partitions ) ( ) .latent class models extend stochastic - block models to allow latent clusters as well as observed clustering variables .this family of models is suited to network data exhibiting structural equivalence ; that is , under the model individuals are hypothesized to belong to latent groups such that members of the same group have similar patterns of relationships . in the latent distance specification the most common values for are 1 and 2 , corresponding to absolute and cartesian distance , respectively .the distance metric accounts for latent homophily the effect of unobserved individual characteristics that induce ties between individuals . in this model, can be interpreted as the position of individual in a social space .this model accounts for triadic dependence ( e.g. , transitivity ) by requiring that latent distances between individuals obey the triangle inequality .latent distance models are available in the latentnet package in r .the latent eigenmodel is the most general specification and accounts for both structural equivalence and latent homophily .furthermore , the parameter space of the latent eigenmodel model of dimension generalizes that of the latent class model of the same dimension and weakly generalizes the latent distance model of dimension .conversely , the latent distance model of dimension does not generalize the one - dimensional latent eigenmodel model . the closeness of the latent factors and quantifies the structural equivalence of actors and positions in the network ; a tie is more likely if and have a similar direction and magnitude , allowing for more clustering than under ( [ eq : p2mod ] ) . 
on the other hand , latent homophily is accounted for by the diagonal elements of , which can be positive or negative ( allowing for heterophily as well as homophily ) .the model constrains the extent to which the quadratic - forms , , and constructed from the latent vectors vary from one another .the greater the magnitude of the greater the extent to which ties are expected to cluster and form cliques .the latent eigenmodel is appropriate if a network exhibits clustering due to both structural equivalence and unmeasured homophily . in and , models are specified at the tie - level with reciprocity ( in directed networks ) represented as the within - dyad correlation between two tie - specific latent variables . modeling reciprocity as a latent process differs from the p model , in which reciprocity is represented as a direct effect .therefore , an alternative family of latent variable models for networks is obtained by augmenting the density term in the p model with ( [ eq : latvble ] ) .an advantage of specifying a joint model at the dyad level is that the resulting ( extended - p ) model involves fewer latent variables , possibly alleviating computational issues such as non - identifiability of parameters or multiple local optima . the challenges of estimating models involving latent variables resemble those of factor analysis or other dimension - reduction methods .first , an appropriate value of may not be able to be specified from existing knowledge of the network and estimating from the data is not straightforward .second , computational challenges in estimating the latent variables can make the method difficult to apply to large networks .however , such issues are more easily overcome than degeneracy in ergms .degeneracy is avoided in these models as the model for a dyad determines the distribution of the network . in other words , the factorization of the likelihood into a product of like terms ensures asymptotically that networks sampled under the model are almost surely in the neighborhood of the observed network , increasingly so as increases .another contrast with ergms is that the model describes a population as opposed to the single observed network .thus , in latent variable models the data generating process is modeled whereas ergms are specific to the observed network and so have more in common with finite population inference .another advantage of conditional independence models over ergms is that the same types of models can be applied to valued relational data .analogous to generalized linear models , the link function and any parametric distributional assumptions that define a conditional independence network model can be tailored to the type of relationship variable ( scale , count , ratio , categorical , multivariate ) .
however, a recent adaptation of ergms has been proposed for modeling count - valued sociocentric data .offsetting the above advantageous features of conditional independence models is that terms such as are limited from the hypothesis testing and interpretational standpoint in that they do not distinguish particular forms of social equivalence or latent homophily .for example , the effect of transitivity is not distinguished from that of cyclicity or higher - order clustering , such as tetradic closure .therefore , the choice of model in practice might depend on the importance of testing specific hypotheses about higher - order effects to obtaining a model whose generative basis allows it to make predictions beyond the data set on which the model was estimated .longitudinal counterparts of conditional independence models are obtained by introducing terms that account for longitudinal dependence ( e.g. , past states of the dyad ) .a simple markov transition model was developed in with tie - formation and tie - dissolution treated as unrelated processes .conditional on the past state of the dyad and the sender and receiver random effects , the value of each tie is assumed to be statistically independent of that of any other tie .a more general formulation extends the p model , allowing dependence between ties within a dyad ( reciprocity ) , heterogeneous effects in the formation and dissolution of ties , and the inclusion of higher - order effects ( e.g. , third - order interactions to account for transitivity ) as lagged predictors .the approach in is notable for attempting to capture the best of both worlds : it allows localized ( actor or dyadic ) versions of the higher - order predictors available in ergms to be included as predictors , but avoids degeneracy by using their lagged values as opposed to their current values as predictors .therefore , conditional on the observed and latent predictors , dyads are cross - sectionally independent but longitudinally dependent on prior states of other dyads ( in addition to their own past states ) in the network .an extension that builds on is to incorporate the latent class , distance or eigenfactor terms in ( [ eq : latvble ] ) in the model .such a model was entertained in but has not yet been developed .we now switch attention to methods that have been derived and used in the field of network science . 
in general ,network science approaches avoid assumptions about distributions in models .for example , to test whether a network exhibits a certain property , the commonly - employed approach is to use a permutation test to develop a null distribution for a statistic that embodies the property in question and then evaluate how extreme the observed value of the statistic is with respect to the null distribution .this technique is the cornerstone of the procedure used to evaluate the degree of separation to which social clustering can be detected in .network science focuses not only on social networks , but also covers information networks , transportation networks , biological networks , and many others .most of the networks studied within network science are non - directed as ties are typically thought of as connections as opposed to measures for which the distinction between instigator and receiver is relevant .thus , the networks in this section are assumed to be non - directed unless stated otherwise .network science has taken a somewhat different approach to modeling networks than the social sciences or statistics .essentially all models developed within network science are _ generative models _ , sometimes also known as forward models , in contrast to probabilistic models such as ergms .these models start from a set of simple hypothesized mechanisms , often functioning at the level of individual nodes and ties , and attempt to describe what types of network structures emerge from a repeated application of the proposed mechanisms .many of the models describe growing networks , where one starts from a small connected seed network consisting of a few connected nodes , and then grows the network by subsequent addition of nodes , usually one at a time .the _ attachment rules _ specify how exactly an incoming node attaches itself to the existing network .generative models are commonly exploratory in nature .if they reproduce the type of structure observed in an empirical network , it is plausible that the proposed mechanisms may underlie network formation in the real world .the main insight to be gained from a generative model is a potential explanation for why a network possesses the type of structure it does .many of the models are simple in nature , which occasionally leads to analytical tractability , but the main reason for simplicity is the potential to expose clearly the main mechanism(s ) driving the phenomenon of interest .it is not uncommon for generative models to possess only two or three parameters , yet occasionally simple generative mechanisms can explain some of the key features surprisingly well .once a model can explain the main features , it can be fine - tuned by adding more specific or nuanced mechanisms .a few examples of generative models are now described .cumulative advantage refers to phenomena where success seems to breed success , such as in the case of accumulation of further wealth to already wealthy individuals . in networks of scientific citations ,where a node represents a scientific paper , each node has some number of edges pointing to nodes that correspond to cited papers . in the present context ,for example , there would be an edge pointing from the node representing this chapter to the node representing the 1965 _ science _ paper of price . 
while the out - degree of nodes is fairly uniform , as the length of bibliographies is fairly constrained , the in - degree distribution was found to be fat - tailed with the functional form of a power - law , .price later proposed a mathematical model for cumulative advantage processes , `` the situation in which success breeds success '' .in this model , nodes are added to the network one at a time , and the average out - degree of each node is fixed .the attachment rule in the model specifies that each new paper will cite existing papers with probability proportional to the number of citations they already have .thus each incoming node will attach itself with some number of directed edges to the existing network , the exact number of ties being drawn from a distribution , and the nodes these new edges are pointing to will be chosen proportional to their in - degree . in this formulation , however , papers with exactly zero citations can never accrue citations . to overcome this problem , one can either consider the original publication as the first citation so that each paper starts with one citation or , alternatively , add a small constant to the number of citations . either way , the outcome is that the target nodes are chosen in proportion to their in - degree plus this small positive constant. a derivation of the resulting in - degree distribution is given by newman . denoting the average out - degree of a node by and using to denote the small positive constant , the in - degree distribution for large values of has the power - law form , where .this simple model ( although the derivation of the result is quite involved ) is able to reproduce the empirical citation ( in - degree ) distribution for scientific papers with surprising accuracy given that the model only contains two parameters .it may seem odd that the model does not incorporate any notion of paper quality , which surely should be an important driver of citations .here it is important to notice that the model does not make any attempt to predict _ which _ paper becomes popular ( although it can be shown , using the model , that papers published at the inception of a field have a much higher probability to become popular ) .instead , the model incorporates the quality of papers implicitly , and indeed the number of citations to a paper is frequently seen as an indicator of its quality .popular papers are also easily discovered , which further feeds their popularity .the idea of using popularity as a proxy for quality may extend to other areas where resources are scarce ; for example , skilled surgeons are in high demand .the cumulative advantage model of price is developed as a modification of the polya urn model , which is used to model a sampling process where each draw from the urn , corresponding to a collection of different types of objects , changes the composition of the urn and thereby changes the probability of drawing an object of any type in the future .the standard polya urn model consists of an urn containing some number of black and white balls , drawing a ball at random and then returning it to the urn along with a new ball of the same color .independently of price , barabasi and albert introduced a similar model in 1999 .they examined the degree distributions of an actor collaboration network ( two actors are connected if they are cast in the same movie ) , world wide web ( two web pages are connected if there is a hyperlink from one page to the other ) , and power grid ( two elements ( generators , 
Independently of Price, Barabási and Albert introduced a similar model in 1999. They examined the degree distributions of an actor collaboration network (two actors are connected if they are cast in the same movie), the World Wide Web (two web pages are connected if there is a hyperlink from one page to the other), and the power grid (two elements (generators, transformers, substations) are connected if there is a high-voltage transmission line between them), finding that they approximately followed power-law distributions. Although the actor collaboration network and the power grid are defined much like a projection from a two-mode to a one-mode network, a subtle difference is that direct interaction between the nodes can be assumed; in other words, the nodes can be thought of as directly linked. Both of the generic network models in existence at the time, the Erdős-Rényi and the Watts-Strogatz models, operated on a fixed set of vertices and assumed that connections were placed or rewired without any regard to the degrees of the nodes to which they were connected. The model of Barabási and Albert changed both of these aspects. First, they introduced the notion of network growth, such that at each time step a new node is added to the network. Second, this new node connects to the existing network with exactly $m$ non-directed edges, and the nodes it attaches to are chosen in proportion to their degree. The probability for the incoming vertex to connect to vertex $i$ depends solely on its degree $k_i$ and is given by
\[
\Pi(k_i) = \frac{k_i}{\sum_j k_j}.
\]
The model was solved by Barabási and Albert using rate equations, which are differential equations for the evolution of node degree over time, where both degree and time, as an approximation, are treated as if they were continuous variables. More general solutions were provided by Krapivsky _et al._, also using rate equations, and by Dorogovtsev _et al._ using master equations which, like rate equations, are differential equations for the evolution of node degree, but they (correctly) treat degree as a discrete variable while still making the continuous-time approximation for time. In the master equation approach, one writes down an equation for the evolution of the number of nodes of a given degree. Let us use $N_k(t)$ to denote the number of nodes of degree $k$ in the network at time $t$, where time is identified with network size, i.e., time $t$ corresponds to the network at the point of its evolution when it consists of $t$ nodes. (The nodes making up the seed network can usually be ignored in the limit as time increases.) The number $N_k(t)$ can change in two ways: it can increase as an incoming node attaches itself to a node of degree $k-1$, turning it into a node of degree $k$, or it can decrease as an incoming node attaches itself to a node of degree $k$, turning it into a node of degree $k+1$. The former situation leads to $N_k \to N_k + 1$, the latter to $N_k \to N_k - 1$. Transitions larger than one, e.g., from $k$ to $k+2$ or from $k$ to $k-2$, are very unlikely and can be ignored. The value of $N_m(t)$ increases by one per time step, as each incoming node has degree $m$, which also means there are no nodes with degree less than $m$, and hence the equations used to model the evolution of quantities like $N_k(t)$ are not valid for $k < m$. The resulting degree distribution has the form
\[
p(k) = \frac{2m(m+1)}{k(k+1)(k+2)},
\]
which asymptotically behaves as $k^{-3}$.
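The growth and linear preferential attachment rules translate directly into a short simulation. The sketch below is illustrative: the parameter values and the stub-list implementation of degree-proportional sampling are choices made here, not taken from the chapter, and for larger studies one would typically use a library implementation such as `networkx.barabasi_albert_graph`.

```python
import random
from collections import Counter

def barabasi_albert(n=20000, m=3, seed=3):
    """Grow an undirected network: each new node attaches m edges to existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # seed network: a small clique of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    stubs = [v for e in edges for v in e]     # node repeated once per edge end
    degree = Counter(stubs)
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))    # degree-proportional sampling
        for t in targets:
            edges.append((new, t))
            stubs.extend((new, t))
            degree[new] += 1
            degree[t] += 1
    return degree

if __name__ == "__main__":
    deg = barabasi_albert()
    hist = Counter(deg.values())
    # p(k) should fall off roughly as k^-3 in the tail (see text)
    for k in (3, 6, 12, 24, 48):
        print(f"p({k}) ~ {hist.get(k, 0) / len(deg):.5f}")
```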
The preferential attachment model of Barabási and Albert has attracted a tremendous amount of scientific interest in recent years, and consequently numerous modifications of the model have been introduced. For example, extensions of the model allow:

* ties to appear and disappear between any pairs of vertices (the original formulation only considers the addition of ties between the incoming vertex and the set of vertices already in existence);
* vertices to be deleted either uniformly at random or based on their connectivity;
* the attachment probability to be super-linear or sub-linear in degree, or to consist of several terms;
* nodal attributes, such as the _attractiveness_ (the propensity with which new ties form with the node) or _fitness_ (the propensity with which established ties remain intact) of a node, to enter the attachment probability in addition to degree;
* edges to assume weights instead of binary values to codify connection strength between any pair of elements.

In the context of physician networks, a preferential attachment model could be used to examine the process of new physicians seeking colleagues to ask for advice upon joining a medical organization, such as a hospital. Under the preferential attachment hypothesis, new physicians would be more likely to form ties with, and thus seek advice from, popular established physicians or physicians in the same cohort (e.g., medical school or residency program). The class of models known as _network evolution models_ can be defined via three properties: (i) the models incorporate a set of stochastic attachment rules which determine the evolution of the network structure explicitly on a time-step by time-step basis; (ii) the network evolution starts from an empty network consisting of nodes only, or from a small seed network possessing arbitrary structure; and (iii) the models incorporate a stopping criterion, which for growing network models is typically the network size reaching a predetermined value, and for dynamical (non-growing) network models the convergence of network statistics to their asymptotic values. Many network evolution models do not reference intrinsic properties or attributes of nodes, and in this sense they are similar to the various implementations of preferential attachment models that do not postulate node-specific fitness or attractiveness. Most network evolution models intended to model social networks employ some variant of focal closure and cyclic closure. _Focal closure_ refers to the formation of ties between individuals based on shared foci, which in a medical context could correspond to a group of doctors who practice in a particular hospital (the focus). The concept of shared foci in network science is analogous to homophily in social network analysis. More broadly, ties could represent any interest or activity that connects otherwise unlinked individuals. In contrast, _cyclic closure_ refers to the idea of forming new ties by navigating and leveraging one's existing social ties, a process that results in a cycle in the underlying network. Because the network is non-directed, the term cycle is used interchangeably with closure; this differs from the directed case, where a cycle is a specific form of closure, with transitivity being another form. _Triadic closure_, the special case of cyclic closure involving just three individuals, refers to the process of getting to know friends of friends, leading to the formation of a closed triad in the non-directed network. Most social networks are expected to (i) have skewed and fat-tailed degree distributions, (ii) be assortatively mixed (high-degree individuals are connected to high-degree individuals), (iii) be highly clustered, (iv) possess the small-world property (average shortest path lengths are short, or more precisely, scale as $\log n$), and (v) exhibit community structure; a small illustration of checking these properties on an example network is sketched below.
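As forecast above, the sketch below computes simple summaries of properties (i) through (v) with `networkx`. The example network and the greedy modularity heuristic used as a rough proxy for community structure are illustrative choices, not prescriptions from the chapter.

```python
import networkx as nx
from networkx.algorithms import community

def describe(g):
    """Summaries of properties (i)-(iv) and a rough proxy for (v)."""
    degrees = [d for _, d in g.degree()]
    print("nodes, edges:", g.number_of_nodes(), g.number_of_edges())
    print("(i)   max / mean degree:", max(degrees), sum(degrees) / len(degrees))
    print("(ii)  degree assortativity:", nx.degree_assortativity_coefficient(g))
    print("(iii) average clustering:", nx.average_clustering(g))
    print("(iv)  mean shortest path:", nx.average_shortest_path_length(g))
    parts = community.greedy_modularity_communities(g)
    print("(v)   communities found / modularity:",
          len(parts), community.modularity(g, parts))

if __name__ == "__main__":
    describe(nx.karate_club_graph())
```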
the models by davidsen __ and marsili _ et al . _ exemplify dynamic ( non - growing ) network evolution models for social networks .both have a mechanisms that starts by selecting a node in the network uniformly at random . in the model of davidsen _ et al ._ , if node has fewer than two connections , it is connected to a randomly chosen node in the network ; otherwise two randomly chosen neighbors of node are connected together . in the model of marsili _ et al ._ , node ( regardless of its degree ) is connected with probability to a randomly chosen node in the network ; then a second - order neighbor of node , i.e. , a friend s friend , is connected with probability to node .the first mechanism in each model , the random connection , emulates focal closure , because there are no nodal attributes signifying shared interests .the point is that the formation of these connections is not driven by the structure of existing connections but , from the point of view of network structure , is purely random .the second mechanism , the notion of triadic closure , is implemented in slightly different ways across the models .if these mechanisms were applied indefinitely , the result would be a fully connected network . to avoid this outcome, the models also delete ties at a constant rate , which makes it possible for network statistics of interest to reach stationary distributions . in the model of davidsen __ , tie deletion is accomplished by choosing a node in the network uniformly at random , and then removing all of its ties with some probability ; marsili _ et al . _accomplish the same phenomenon by selecting a tie uniformly at random , and then deleting it with probability .growing network evolution models , such as those by vzquez and toivonen _ et al . _ , do not usually incorporate link deletion , but instead grow the network to a pre - specified size , which obviates the need for link deletion .marsili _ et al ._ use extensive numerical simulations , as well as a master equation approach applied to a mean - field approximation of the model , to explore the impact of varying the probabilities ( global linking ) , ( neighborhood linking ) , and ( link deletion ) for average degree and average clustering coefficient .consider a situation where the value of ( neighborhood linking ) is increased while keeping the value of ( link deletion ) fixed . at first , for small values of , components with more than two nodes are rare , and the network can be said to be in the sparse phase . upon increasing the value of up to a specific point ,a large connected component emerges , and the value of the average degree suddenly jumps up .this point equals and is known as the critical point it marks the beginning of the dense phase in the phase diagram of the system .as is increased further , the network becomes more densely connected . reversing the process by slowly decreasingthe value of identifies a range of values from where the largest connected component remains densely connected and the average degree remains high .only when the value of is decreased below a point denoted by does the network `` collapse '' and re - enter the sparse phase .this phenomenon , which demonstrates some of the connections between network science and statistical physics , is typical of first - order or discontinuous phase transitions in statistical physics , and it demonstrates how hysteresis , the effect of the system remembering its past state can rise in networked systems . 
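Returning to the update rules themselves, a schematic implementation of this class of dynamic (non-growing) models is sketched below. It follows the ingredients described above, namely random (focal) attachment, triadic closure among a node's neighbors, and occasional node-centred tie deletion, but the parameter names, their values, and the exact update order are illustrative simplifications rather than faithful reproductions of either cited model.

```python
import random
import networkx as nx

def evolve(n=500, p_random=0.05, p_delete=0.01, steps=200000, seed=5):
    """Schematic non-growing network evolution: triadic closure plus occasional
    random (focal) ties, balanced by node-centred tie deletion."""
    rng = random.Random(seed)
    g = nx.empty_graph(n)
    for _ in range(steps):
        i = rng.randrange(n)
        nbrs = list(g[i])
        if len(nbrs) < 2 or rng.random() < p_random:
            j = rng.randrange(n)                 # focal-closure stand-in
            if j != i:
                g.add_edge(i, j)
        else:
            u, v = rng.sample(nbrs, 2)           # triadic closure: connect two
            g.add_edge(u, v)                     # neighbours of node i
        if rng.random() < p_delete:              # deletion keeps the network sparse
            k = rng.randrange(n)
            g.remove_edges_from(list(g.edges(k)))
    return g

if __name__ == "__main__":
    g = evolve()
    print("mean degree:", 2 * g.number_of_edges() / g.number_of_nodes())
    print("average clustering:", nx.average_clustering(g))
```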
although markov dependence is a special case of hysteresis , its use is generally restricted to probabilistic models whereas hysteresis is typically aligned with nonlinear models of physical phenomena having a continuous state - space . from the social network point of viewthis means that the network can remain in a connected phase even if the rate of establishing new connections at the current rate would not be sufficient for getting the network to that phase in the first place . in more practical terms, this finding implies that it is possible to maintain a highly connected network with a relatively low `` effort '' ( the parameter in the model ) once the network has been established , but that same low level of effort would not be sufficient for establishing the dense phase of network evolution in the first place .( the analogy in social network analysis is that the threshold for forming a ( e.g. ) friendship is greater than that needed for it to remain intact . )the model by kumpula _ , which is another dynamical ( non - growing ) network evolution model for social networks , implements cyclic closure and focal closure ( see figure [ fig : kumpula ] ) in a manner similar to the models of davidsen _ et al . _ and marsili _ et al ._ , but introduces a minor modification . unlike the previous models which produce binary networks with , this model produced weighted networks with . the main modification deals with the triadic closure step , which here is implemented as a weighted two - step random walkstarting from a randomly chosen node ; this node chooses one of its neighbors with probability , where is the strength of node , i.e. , the sum of the edge weights connecting it to its neighbors .if node has neighbors other than , such a node will be chosen with probability , where there is a requirement that .the weights and on the edges just traversed will be increased by a value .in addition , if there is a link connecting node and node , the weight on that link is similarly increased by ; otherwise a new link is established between node and with . when , there is no clear community structure present , but as the value of is increased , very clear nucleation of communities takes place .this phenomenon happens because when , a type of positive feedback or memory gets imprinted on the network , which reinforces existing connections , and makes future traversal of those connections more likely .this is not unlike the models of cumulative advantage or preferential attachment discussed above , but now applies to individual links as opposed to nodes .if one inspects the community structure produced by the model , most of the strong links appear to be located within communities , whereas links between communities are typically weak .this type of structural organization is compliant with the so - called weak ties hypothesis , formulated in , which states , in essence , that the stronger the tie connecting two individuals , the higher the fraction of friends they have in common ._ showed that a large - scale social network constructed from the cell phone communication records of millions of people was in remarkable agreement with the hypothesis only the top 5% of ties in terms of their weight deviated noticeably from the prediction .the networks produced by the model of kumpula _ et al . 
_ are clearly reminiscent of observed real-world social networks, and the inclusion of the tuning parameter makes it straightforward to create networks with sparser or denser communities. The downside is that the addition of weights to the model appears to make it analytically intractable.

[Figure: example networks produced by the model, panels (a)-(d); adapted from Kumpula _et al._ (2007).]

Nodal attribute models, in stark contrast to network evolution models, specify nodal attributes for each node, which could be scalar or vector valued. The probability of a tie between any two nodes is typically an increasing function of the similarity of their nodal attributes. This is compatible with the notion of homophily, the tendency for like to attract like. Nodal attribute models can also be interpreted as spatial models, where the idea is that each node has a specific location in a social space. The models by Boguñá _et al._ and Wong _et al._ serve as interesting examples. Nodal attribute models do not specify attachment rules at the level of the network, and in some sense can be seen as latent variable models for social network formation. These types of models have been studied less in the network science literature than network evolution models. Clearly, nodal attribute models have a strong resemblance to models developed and studied in the social network literature that treat dyads as independent conditional on observed attributes of the individuals, other covariates, and various latent variables (individual-specific random effects in the case of the p model, categorical latent variables in the case of latent class models, continuous latent variables under the latent-space and latent eigenmodels in section [sec:condindep]). Unlike network science, work on such models in the social network literature has been more prominent than work on network evolution. A difference in the approach of some nodal attribute models and social network models is that the former may use specific rules for determining whether a tie is expected, such as a threshold function (in a sense emulating formal decision making), whereas the latter rewards values of parameters that make the model most consistent with the observed network(s). Many network characteristics are either microscopic or macroscopic in nature; the value of a microscopic characteristic depends on local network structure only, whereas the value of a macroscopic characteristic depends on the structure of the entire network. Node degree is an example of a microscopic quantity: the degree of a node depends only on the number of its connections. In contrast, network diameter, the longest of all pairwise shortest paths in the network, can change dramatically with the addition (or removal) of even a very small number of links anywhere in the network. For example, an $n$-cycle consists of $n$ nodes connected by $n$ links such that a cycle is formed with each node connected to precisely two other nodes. The diameter of such a network is $\lfloor n/2 \rfloor$, where the floor function maps a real number to the greatest integer less than or equal to its argument, so that for an even $n$ it follows that $\lfloor n/2 \rfloor = n/2$.
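The $\lfloor n/2 \rfloor$ diameter of a ring, and the dramatic effect of a few added links discussed next, are easy to verify numerically. The sketch below uses `networkx`; the ring size and the numbers of added shortcut edges are arbitrary illustrative values.

```python
import random
import networkx as nx

def shortcut_demo(n=1000, extra_edges=(0, 1, 5, 20), seed=7):
    """Diameter of an n-cycle is floor(n/2); a handful of random 'shortcut'
    edges shrinks it dramatically (cf. the small-world effect)."""
    rng = random.Random(seed)
    for k in extra_edges:
        g = nx.cycle_graph(n)
        while g.number_of_edges() < n + k:
            u, v = rng.sample(range(n), 2)
            g.add_edge(u, v)
        print(f"{k:3d} extra edges -> diameter {nx.diameter(g)}")

if __name__ == "__main__":
    shortcut_demo()   # with 0 extra edges the diameter is n // 2 = 500
```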
for large values of , adding just a few links quickly brings down the value of network diameter .there is a third , intermediate scale that lies between the microscopic and macroscopic scales which is often known as the _ mesoscopic _ scale .for example , a -clique could justifiably be called a mesoscopic object ( especially if is large ) .another type of mesoscopic structure is that of a network community , which can be loosely defined as a set of nodes that are densely connected to each other but sparsely connected to other nodes in the network ( but not to the extent of resulting in distinct components ) .there has been considerable interest especially in the physics literature focusing on how to define and detect such communities , and several review papers cover the existing methods .the motivation behind many of these efforts is the idea that communities may correspond to functional units in networks , such as unobserved societal structures .the examples range from metabolic circuits within cells to tightly - knit groups of individuals in social networks .the interested reader can consult the review articles on community detection methods for more details .another application is health care where , for instance , have deduced communities of physicians based on network ties representing their treating the same patients within the same period of time .the clustering of physicians in communities is shown for one particular hospital referral region ( a health care market encompassing at least one major city where both cardiovascular surgical procedures and neurosurgery are performed ) in the united states ( figure [ fig : commex ] ) .one potential application of network science methods for community detection is in the area of health education and disease prevention ( e.g. , screening ) . due to limited resources, it may not be possible to send materials or otherwise directly educate every member of the population .the partition of individuals into groups would facilitate a possibly more efficient approach whereby the communities are first studied to identify key individuals .then a few key individuals in each community are trained and advised on mechanisms for helping the intervention to diffuse across the community . a general characteristic of interventions where such an approach might be useful are those where intensive training is required to be effective and where delegation of resources through passing on knowledge or advice is possible .a number of network community detection methods define communities implicitly via an appropriately chosen quality function .the underlying idea is that a given network can be divided into a large number of partitions , or subsets of nodes , such that each node belongs to one subset , and each such partition has a scalar - valued quality measure associated with it , denoted by . in principleone would like to enumerate all possible partitions and compute the value of for each of them , and the network communities would then be identified as the partition ( or possibly partitions ) with the highest quality . 
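The logic of quality-function-based detection can be illustrated by brute force on a very small graph: enumerate every partition, score each, and keep the best. The sketch below uses a toy within-versus-between density contrast as the quality score, a stand-in for modularity (defined next); the graph, the score, and the exhaustive search are illustrative, and the rapidly growing number of partitions shows why heuristics are needed in practice.

```python
import networkx as nx

def partitions(nodes):
    """Yield all set partitions of `nodes` (their number grows like the Bell numbers)."""
    nodes = list(nodes)
    if not nodes:
        yield []
        return
    first, rest = nodes[0], nodes[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block + [first]] + smaller[i + 1:]
        yield smaller + [[first]]

def quality(g, part):
    """Toy quality: within-block edge fraction minus within-block pair fraction."""
    m, n = g.number_of_edges(), g.number_of_nodes()
    within_e = sum(1 for u, v in g.edges()
                   if any(u in b and v in b for b in part))
    within_p = sum(len(b) * (len(b) - 1) // 2 for b in part)
    return within_e / m - within_p / (n * (n - 1) // 2)

if __name__ == "__main__":
    g = nx.barbell_graph(4, 0)          # two 4-cliques joined by a single edge
    best = max(partitions(list(g)), key=lambda p: quality(g, p))
    n_parts = sum(1 for _ in partitions(list(g)))
    print("partitions examined:", n_parts)          # Bell(8) = 4140
    print("best partition:", [sorted(b) for b in best])
```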
In practice, however, the number of possible partitions is exceedingly large even for relatively small networks, and therefore heuristics are needed to optimize the value of $Q$. Community detection methods based on quality-function optimization therefore have two distinct components: the functional form of the quality function, and the heuristic used for navigating the subset of partitions over which $Q$ is maximized. The most commonly used optimization-based approach to community detection is modularity maximization, where modularity is one possible choice for the quality function; in statistical terminology, modularity maximization would be regarded as a non-parametric procedure because no distributional or functional-form assumptions are relied upon. There are many variants of modularity, but here the focus is on the original formulation by Newman and Girvan. Modularity can be seen as a measure that characterizes the extent of homophily or assortative mixing by class membership, and one way to derive it is by considering the observed and expected numbers of connections between vertices of given classes, where the class of vertex $i$ is denoted $c_i$. The following derivation follows closely that of Newman, although other derivations, based for example on dynamic processes, are also available. We start with the observed number of edges between vertices of the same class, which is given by $\frac{1}{2}\sum_{ij} A_{ij}\,\delta(c_i, c_j)$, where $A_{ij}$ is the adjacency matrix, $\delta$ is the Kronecker delta, and the factor $1/2$ prevents double-counting vertex pairs. To obtain the expected number of edges between vertices of the same class, cut every edge in half, resulting in two stubs per edge, and then connect these stubs at random. For a network with $m$ edges, there are a total of $2m$ such stubs. Consider one of the stubs connected to vertex $i$. This particular stub will be connected at random to a stub of vertex $j$, which has degree $k_j$, with probability $k_j/2m$, and since vertex $i$ has $k_i$ such stubs, the expected number of edges between vertices $i$ and $j$ is $k_i k_j / 2m$. The expected number of edges falling between vertices of the same class is then $\frac{1}{2}\sum_{ij} \frac{k_i k_j}{2m}\,\delta(c_i, c_j)$. The difference between the observed and expected numbers of within-class ties is therefore $\frac{1}{2}\sum_{ij} \left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta(c_i, c_j)$. Given that the number of edges varies from one network to the next, it is convenient to deal with the fraction of edges rather than their number, which is obtained by dividing this expression by $m$, resulting in
\[
Q = \frac{1}{2m}\sum_{ij}\left[A_{ij} - \frac{k_i k_j}{2m}\right]\delta(c_i, c_j).
\]
The assignment of nodes into classes that maximizes modularity is taken as the optimal partition and identifies the assignment of nodes into network communities. Note that modularity is easily generalized from binary networks to weighted networks, in which case $k_i$ stands for the strength (sum of all adjacent edge weights) of node $i$, and $m$ is the total weight of the edges in the network. The expression for modularity has an interesting connection to spin models in statistical physics. In a so-called infinite-range $q$-state Potts model, each of the particles can be in one of $q$ states called spins, and the interaction energy between particles $i$ and $j$ is $-J_{ij}$ if they are in the same state and zero if they are in different states. The energy function of the system, known as its Hamiltonian, is given by the sum over all of the pairwise interaction energies in the system,
\[
H(\{s\}) = -\sum_{ij} J_{ij}\,\delta(s_i, s_j),
\]
where $s_i$ indicates the spin of particle $i$ and $\{s\}$ denotes the configuration of all spins.
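Before returning to the spin-model mapping, a direct, unoptimized implementation of the modularity expression just derived is given below. The example network and the use of its observed two-faction split as the class assignment are illustrative choices; `networkx` also provides its own `modularity` function for production use.

```python
import networkx as nx

def modularity(g, labels):
    """Direct implementation of Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)]
    * delta(c_i, c_j), following the derivation in the text."""
    m2 = 2 * g.number_of_edges()                 # 2m = number of edge stubs
    deg = dict(g.degree())
    q = 0.0
    for i in g:
        for j in g:
            if labels[i] == labels[j]:
                a_ij = 1.0 if g.has_edge(i, j) else 0.0
                q += a_ij - deg[i] * deg[j] / m2
    return q / m2

if __name__ == "__main__":
    g = nx.karate_club_graph()
    # use the observed club split as the class assignment
    labels = {v: g.nodes[v]["club"] for v in g}
    print("Q =", round(modularity(g, labels), 4))   # ~0.36 for this split
```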
Finding the minimum-energy state (the ground state) of the system corresponds to finding the spin configuration $\{s\}$ for which $H(\{s\})$ is minimized. The states of the particles (spins) correspond to community assignments of nodes in the network problem, and minimizing $H$ is mathematically identical to maximizing modularity. In the physical system, depending on the interaction energies, the spins seek to align with other spins (interact ferromagnetically) or they seek to have different orientations (interact antiferromagnetically). In the community detection problem, two nodes seek to be in the same community if they are connected by an edge that is stronger than expected; otherwise they seek to be in different communities. This correspondence between the two problems has enabled computational techniques developed for the study of spin systems and other physical systems to be applied to modularity optimization and, more broadly, to the optimization of other quality functions. Simulated annealing, greedy algorithms, and spectral methods serve as examples of these methods. More details and references are available in community detection review articles.

[Figure: schematic of a multislice network; in the notation of the quality function below, subscripts $i$ and $j$ index the nodes and subscript $s$ indexes the slices. Each node is coupled to itself in the other slices, and the structure of this coupling, encoded by the tensor $C_{jsr}$, depends on whether the slices correspond to snapshots taken at different times (time-dependent network), to communities detected at different resolution levels (multiscale network), or to a network consisting of multiple types of interactions (multiplex network). For time-dependent and multiscale networks, the slice-to-slice coupling extends, for each node, a tie to itself across neighboring slices only, as exemplified for the node in the upper right corner of the slices; for multiplex networks, the slice-to-slice coupling extends a tie from each node to itself in all the slices, as exemplified for the node in the lower left corner. Whatever the form of this coupling, it is applied the same way to each node, although for visual clarity the slice-to-slice couplings are shown for just two nodes. Adapted from Mucha _et al._ (2010).]

Although there are several extensions of modularity maximization, only one such generalization is described here. Mucha _et al._ developed a generalized framework of network quality functions that allows the study of community structure in arbitrary multislice networks (see fig. [fig:ms]), which are combinations of individual networks coupled through links that connect each node in one slice to the same node in other slices. This framework allows studies of community structure in time-dependent, multiscale, and multiplex networks. Much of the work in the area of community detection is motivated by the observation that the behavior of dynamical processes on networks is driven or constrained by their community structure. The approach of Mucha _et al._
is based on a reversal of this logic: it introduces a dynamical process on the network, and the behavior of that process is used to identify the (structural) communities. The outcome is a quality function
\[
Q_{\text{multislice}} = \frac{1}{2\mu}\sum_{ijsr}\left[\left(A_{ijs} - \gamma_s \frac{k_{is}k_{js}}{2m_s}\right)\delta_{sr} + \delta_{ij}\,C_{jsr}\right]\delta(c_{is}, c_{jr}),
\]
where $A_{ijs}$ encodes the node-to-node couplings within slices and $C_{jsr}$ encodes the node-to-node couplings across slices, which are usually set to a uniform value; $m_s$ is the number (or weight) of ties within slice $s$ and $\mu$ is the weight of all ties in the network, both those located within slices and those placed across slices; $\gamma_s$ is a resolution parameter that controls the scale of community detection separately for each slice; and $k_{is}$ is the degree (or strength) of node $i$ in slice $s$. The standard modularity quality function uses $c_i$ to denote the community assignment of node $i$, but in the multislice context two indices are needed, giving rise to the terms $c_{is}$, where the subscript $i$ indexes the node in question and the subscript $s$ the slice. The outcome of maximizing $Q_{\text{multislice}}$, which can be done with the same heuristics as maximization of the standard modularity, is a matrix consisting of the community assignment of each node in every slice. The multislice framework can handle any combination of time-dependent, multiscale, and multiplex networks. For example, the slices in fig. [fig:ms] could correspond, say, to a longitudinal friendship network of a cohort of college students, each slice capturing the offline friendships of the students in each year. If data on the online friendships of the students were also available, corresponding to a different type of friendship, one could then introduce a second stack of four slices encoding those friendships. The four offline slices and the four online slices form a multiplex system, and they would be coupled accordingly. One could further introduce multiple resolution scales: if one were interested in examining the community structure of the students at three different scales, using, say, three distinct values of the resolution parameter, this would result in a three-fold replication of the slice array with each of the three layers having a distinct value of $\gamma_s$. Taken together, this would lead to a three-dimensional array of slices.
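The sketch below evaluates (but does not optimize) the multislice quality function above for a given assignment of node-slice pairs to communities. The two-slice example, the uniform coupling value, and the treatment of each graph as unweighted are illustrative assumptions; the bookkeeping follows the reconstruction of the formula given above, with the coupling dictionary listing each slice pair once and counting it symmetrically.

```python
import networkx as nx

def multislice_modularity(slices, coupling, labels, gamma=1.0):
    """Evaluate Q_multislice for community labels {(node, slice): community}.
    `slices` is a list of graphs on a common node set; `coupling[(j, s, r)]`
    is the weight linking node j in slice s to itself in slice r."""
    intra, mu2 = 0.0, 0.0                        # mu2 accumulates 2*mu
    for s, g in enumerate(slices):
        m2 = 2.0 * g.number_of_edges()
        mu2 += m2
        deg = dict(g.degree())
        for i in g:
            for j in g:
                if labels[(i, s)] == labels[(j, s)]:
                    a_ij = 1.0 if g.has_edge(i, j) else 0.0
                    intra += a_ij - gamma * deg[i] * deg[j] / m2
    inter = 0.0
    for (j, s, r), c in coupling.items():
        mu2 += 2.0 * c                           # coupling counted in both directions
        if labels[(j, s)] == labels[(j, r)]:
            inter += 2.0 * c
    return (intra + inter) / mu2

if __name__ == "__main__":
    g = nx.karate_club_graph()
    slices = [g, g.copy()]                       # two identical "snapshots"
    coupling = {(j, 0, 1): 0.5 for j in g}       # uniform inter-slice coupling
    labels = {(v, s): g.nodes[v]["club"] for v in g for s in (0, 1)}
    print("Q_multislice =", round(multislice_modularity(slices, coupling, labels), 4))
```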
Cliques are (usually small) fully connected subgraphs; a non-directed $k$-clique is a complete subgraph consisting of $k$ nodes connected by $k(k-1)/2$ links. In materials science the term percolation refers to the movement of fluid through porous materials. In mathematics and statistical physics, however, the field of percolation theory considers the properties of clusters on regular lattices or random networks, where each edge may be either open or closed, and the clusters correspond to groups of adjacent nodes that are connected by open edges. The system is said to percolate in the limit of infinite system size if the largest component, held together by open edges, occupies a finite fraction of the nodes. The method of $k$-clique percolation combines cliques and percolation theory, and it relies on the empirical observation that network communities seem to consist of several small cliques that share many of their nodes with other cliques in the same community. In this framework, cliques can be thought of as the building blocks of communities. A $k$-clique community is then defined as the union of all adjacent $k$-cliques, where two $k$-cliques are defined to be adjacent if they share $k-1$ nodes. One can also think about "rolling" a $k$-clique template from any $k$-clique in the graph to any adjacent $k$-clique by relocating one of its nodes and keeping the other $k-1$ nodes fixed. A community, defined through the percolation of such a template, then consists of the union of all subgraphs that can be fully explored by rolling a $k$-clique template. As $k$ becomes larger, the notion of a community becomes more stringent, and small values of $k$ tend to be most appropriate because larger values become unwieldy. The special case $k=2$ reduces to bond (link) percolation, and $k=1$ reduces to site (node) percolation. The $k$-clique percolation algorithm is an example of a local community-finding method. One obtains a network's global community structure by considering the ensemble of communities obtained by looping over all of its $k$-cliques. Some nodes might not belong to any community (because they are never part of any $k$-clique), and others can belong to several communities (if they are located at the interface between two or more communities). The nested nature of communities is recovered by considering different values of $k$, although $k$-clique percolation can be too rigid, because focusing on cliques typically causes one to overlook other dense modules that are not quite as tightly connected. The advantage of $k$-clique percolation is that it provides a successful way to consider community overlap. Allowing the detection of network communities that overlap is especially appealing in the social sciences, as people may belong simultaneously to several communities (colleagues, family, friends, etc.). However, the case can be made that it is the underlying interactions that are different, and one should not combine interactions that are of fundamentally different types. In statistics, this is analogous to using composite variables or scales that combine multiple items in (e.g.) health surveys or questionnaires. If the nature of the interactions is known, the system might be more appropriately described as a multiplex network, where one tie type encodes professional interactions, another tie type corresponds to personal friendships, and a third tie type captures family memberships. The multislice framework discussed above is able to accommodate memberships in multiple communities as long as distinct interaction types are encoded with distinct (multiplex) ties. The latent class models in section [sec:condindep] partition the actors in a network into disjoint groups that can be thought of as communities. The clustering process can be thought of as a search for structural equivalence, in that individuals are likely to be included in the same community if the network around them is similar to that of their neighbors. The criterion for judging the efficacy of the partition of nodes into communities is embedded in the statistical model implied for the network and as such is a balance between all of the terms in the model. This contrasts with a non-model-based objective function such as modularity, which focuses on maximizing, in some sense, the ratio of the density of ties within communities to that between communities. To illustrate the difference, consider a $k$-star. The greater the value of $k$, the greater the discrepancy in the degrees of the actors. Therefore, if $k$-stars occur frequently, the members of the same $k$-star are likely to be included in the same group by the latent class model but, due to the difference in degree, are unlikely to be grouped together under modularity maximization. However, an advantage of the network science approach is that results are likely to be more robust to model mis-specification than under the social network approach.
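$k$-clique percolation is implemented in `networkx` as `k_clique_communities`. The short illustration below runs it for a few values of $k$ on a standard example network (the network and the chosen $k$ values are illustrative) and reports the two features discussed above: nodes covered by no community and nodes with overlapping memberships.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

def clique_percolation_summary(g, ks=(3, 4, 5)):
    """Report k-clique communities, uncovered nodes, and overlapping memberships."""
    for k in ks:
        comms = [set(c) for c in k_clique_communities(g, k)]
        covered = set().union(*comms) if comms else set()
        memberships = sum(len(c) for c in comms)
        print(f"k={k}: {len(comms)} communities, "
              f"{g.number_of_nodes() - len(covered)} nodes in none, "
              f"{memberships - len(covered)} overlapping memberships")

if __name__ == "__main__":
    clique_percolation_summary(nx.karate_club_graph())
```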
in the future it is possible to imagine a bridging of the two approaches to community detection .for example , a model for the network or the component of the model involving the key determinants of network ties , could be incorporated in the modularity function in ( [ eq : modmax ] ) .depending on the specification , the result might be a weighted version of modularity in which a higher penalty is incurred if individuals with similar traits or in structurally equivalent positions with respect to -stars , triadic closure or other local network configurations are included in different communities than if individuals with different traits are in different communities. however , to the best of the author s knowledge , such a procedure is not available .[ sec : discuss ] in this chapter , the dual fields of social networks and network science have been described , with particular focus on sociocentric data . both fields are growing rapidly in methodological results and the breadth of applications to which they are applied . in health applications , social network methods for evaluating whether individuals attributes spread from person - to - person across a population ( social influence ) and for modeling relationship or tie status ( social selection )have been described .models of relationship status have not been applied as frequently in health applications , where focus often centers on the patient . however , is a notable exception . due to the ever - growing availability of data , the interest in peer effects , and the need to design support mechanisms ,the role of social network analysis in health care and medicine is likely to undergo continued growth in the future .a novel feature of this chapter is the attention given to network science .although network science is descriptively - inclined and thus is removed from mainstream translational medical research seeking to identify causes of medical outcomes , the increasing availability of complex systems data provides an opportunity for network science to play a more prominent role in medical research in the future .for example , barabasi and others have created a human disease network by connecting all hereditary diseases that share a disease - causing gene . in other work, they created a phenotypic disease network ( pdn ) as a map summarizing phenotypic connections between diseases .these networks provided important insights into the potential common origins of different diseases , whether diseases progress through cellular functions ( phenotypes ) associated with a single diseased ( mutated ) gene or with other phenotypes , and whether patients affected by diseases that are connected to many other diseases tend to die sooner than those affected by less connected diseases .such work has the potential to provide insights into many previously untested hypotheses about disease mechanisms .for example , they may ultimately be helpful in designing `` personalized treatments '' based on the network position held by an individual s combined genetic , proteomic , and phenotypic information .in addition , they may suggest conditions for which treatments found to be effective on another condition might also be tried .there are several important topics that have not been discussed , notably including network sampling . 
in gathering network data , adaptive methods such as link - tracing designsare often used to identify individuals more likely to know each other and thus to have formed a relationship with other sampled individuals than in a random - probability design .link - tracing and other related designs are often used to identify hard - to - reach populations .however , the sampling probabilities corresponding to link - tracing designs may be difficult to evaluate ( generally requiring the use of simulation ) and it may not be obvious how they should be incorporated in the analysis .the development of statistical methods that account for the sample design in the analysis of social network data has lagged behind the designs themselves .however , recently progress has been made on statistical inference for sampled relational network data . in the future it is likely that more bridges will form between the social network and the network science fields with models or methods developed in one field used to solve problems in the other .furthermore , as these two fields become more entwined , it is likely that they will also become more prominent in the solution to important problems in medicine and health care .the time and effort of dr .omalley and dr .onnela on researching and developing this chapter was supported by nih / nia grant p01 ag031093 ( pi : christakis ) and robert wood johnson award # 58729 ( pi christakis ) .dr onnela was further supported by nih / niaid grant r01ai051164 ( degruttola ) .the authors thank mischa haider , brian neelon , and bruce e. landon for reviewing an early draft of the manuscript and providing several useful comments and suggestions .to help readers familiar with social networks understand the network science component of the chapter and conversely for readers familiar with network science to understand the social network component , the following glossary contains a comprehensive list of terms and definitions . 1 . social network : a collection of actors ( referred to as actors ) and the ( social ) relationships or ties linking them .2 . relationship , tie : a link or connection between two actors .dyad : a pair of actors in a network and the relationship(s ) between them ; two relationships per measure for a directed network , one relationship per measure for an undirected network .triad : a triple of three actors in the network and the relationships between them .scale or valued relationship : a non - binary relationship between two actors ( e.g. , the level of a trait ) .we focused on binary relationships in the chapter .directed network : a network in which the relationship from actor to actor need not be the same as that from actor to actor . 7 .non - directed network : a network in which the state of the relationship from actor to actor equals the state of the relationship from actor to actor .sociocentric network data : the complete set of observations on the relationships in a directed network , or relationships in an undirected network , with actors .collaboration network : a network whose ties represent the actors joint involvement on a task ( e.g. , work on a paper ) or a common experience ( e.g. , treating the same episode of health care for a patient ) . 
10 .bipartite : relationships are only permitted between actors of two different typesunipartite : relationships are permitted between all types of actorssocial contagion , social influence , peer effects : terms used to describe the phenomenon whereby an actor s trait changes due to their relationship with other actors and the traits of those actors . 13 .mutable trait : a characteristic of an actor than can change state .social selection : the phenomena whereby the relationship status between two actors depends on their characteristics , as occurs with homophily and heterophily . 15 .homophily : a preference for relationships with actors who have similiar characteristics . popularly referred to as `` birds of a feather flock together . ''heterophily : a preference for relationships with actors who have different characteristics . popularly referred to as `` opposites attracting . '' 17 . in - degree ,popularity : the number of actors who initiated a tie with the given actor . 18 .out - degree , expansiveness , activity : the number of ties the given actor initiates with other actors .-star : a subnetwork in which the focal actor has ties to other actors . 20 .-cycle : a subnetwork in which each actor has degree 2 that can be arranged as a ring ( i.e. , a -path through the actors returns to its origin without backtracking .for example , the ties a - b , b - c , and c - a form a three - cycle . 21 . degrees of separation : two individuals linked by a -path ( intermediary actors ) that are not connected by any path of length or less . 22 . density : the overall tendency of ties to form in the network .a descriptive measure is given by the number of ties in the network divided by the total number of possible ties .reciprocity : the phenomena whereby an actor is more likely to have a tie with actor if actor has a tie with actor . only defined for directed networks . 24 .clustering : the tendency of ties to cluster and form densely connected regions of the network .closure : the tendency for network configurations to be closed . 26 .transitivity : the tendency for a tie from individual a to individual b to form if ties from individual a to individual c and from individual c to individual b exist .a form of triadic closure commonly stated as `` a friend of a friend is a friend . ''reduces to general triadic closure in an undirected network .centrality : a dimenionless measure of an actors position in the network .higher values indicate more central positions .there are numerous measures of centrality .four common ones are degree , closeness , betweeness , and eigenvalue centrality .degree and eigenvalue centrality are extremes in that degree centrality is determined solely from an actor s degree ( it is internally focused ) while eigenvalue centrality is based on the centrality of the actors connected to the focal actor ( it is externally focused ) .structural balance : a theory which suggests actors seek balance in their relationships ; for example , if a likes b and b likes c then a will endeavor to like c as well to keep the system balanced .thus , the existence of transitivity is implied by structural balance .structural equivalence : the network configuration ( arrangement of ties ) around one actor is similar to that of another actor . 
even though actors may not be connected , they can still be in structurally similar situations .structural power : an actor in a dominant position in the network .such an actor may be one in a strategic position , such as the only bridge between otherwise distinct components .network component : a subset of actors having no ties external to themselves .graph theory : the mathematical basis under which theoretical results for networks are derived and empirical computations are performed . 33 .digraph : a graph in which edges can be bidirectional .unlike social networks , digraphs can contain self - ties .graphs lie in two - dimensional spacehypergraph : a graph in dimension three or higher .maximal subset : a set of actors for whom all ties are intact in a binary - network ( i.e. , has density 1.0 ) .if the set contains actors , the maximal subset is referred to as a -clique .scalar , vector , matrix : terms from linear and abstract algebra .a scalar is a matrix , a vector is a matrix , and a matrix is , where . 37 .adjacency matrix : a matrix whose off - diagonal elements contain the value of the relationship from one actor to another .for example , element contains the relationship from actor to actor .the diagonal elements are zero by definition .matrix transpose : the operation whereby element is exchanged with element for all . 39 . row stochastic matrix : a matrix whose rows sum to 1 and contain non - negative elements .thus , each row represents a probability distribution of a discrete - valued random variable .random variable : a variable whose value is not known with certainty .it can relate to an event or time period that is yet to occur , or it can be a quantity whose value is fixed ( i.e. , has occurred ) but is unknown .parametric : a term used in statistics to describe a model with a specific functional form ( e.g. , linear , quadratic , logarithmic , exponential ) indexed by unknown parameters or an estimation procedure that relies on specification of the complete distribution of the data .non - parametric : a model or estimation procedure that makes no assumption about the specific form of the relationship between key variables ( e.g. , whether the predictors have linear or additivie effects on the outcome ) and does not rely upon complete specification of the distribution of the data for estimation . 43 .outcome , dependent variable : the variable considered causally dependent on other variables of interest .this will typically be a variable whose value is believed to be caused by other variables .independent , predictor , explanatory variable , covariate : a variable believed to be a cause of the outcome .contextual variable : a variable evaluated on the neighbors of , or other members of a set containing , the focal actor .for example , the proportion of females in a neighboring county , the proportion of friends with college degrees .interaction effect : the extent to which the effect of one variable on the outcome varies across the levels of another variable .endogenous variable : a variable ( or an effect ) that is internal to a system .predictors in a regression model that are correlated with the unobserved error are endogeneous ; they are determined by an internal as opposed to an external process . 
by definition outcome variablesare endogenous .exogenous variable : a variable ( or an effect ) that is external to the system in that its value is not determined by other variables in the system .predictors that are independent of the error term in a regression model are exogeneous .instrumental variable ( iv ) : a variable with a non - null effect on the endogeneous predictor whose causal effect is of interest ( the `` treatment '' ) that has no effect on the outcome other than that through its effect on treatment .often - used sufficient conditions for the latter are that the iv is ( i ) marginally independent of any unmeasured confounders and ( ii ) conditionally independent of the outcome given the treatment and any unmeasured confounders . in an iv analysis a set of observed predictors may be conditioned on as long as they are not effects of the treatment and the iv assumptions hold conditional on them . while subject to controversy , iv methods are one of the only methods of estimating the true ( causal ) effect of an endogeneous predictor on an outcome .linear regression model : a model in which the expected value of the outcome ( or dependent variable ) conditional on one or more predictors ( or explanatory variables ) is a linear combination of the predictors ( an additive sum of the predictors multiplied by their regression coefficients ) and an unobserved random error .longitudinal model : a model that describes variation in the outcome variable over time as a function of the predictors , which may include prior ( i.e. , lagged ) values of the outcome .observations are typically only available at specific , but not necessarily equally - spaced , times .longitudinal models make the direction of causality explicit .therefore , they can distinguish between the association between the predictors and the outcome and the effect of a change in the predictor on the change in the outcome . 52 . cross - sectional model : a model of the relationship between the values of the predictors and outcomes at a given time . because one can not discern the direction of causality , cross - sectional models are more difficult to defend as causal .stochastic block model : a conditional dyadic independence model in which the density and reciprocity effects differ between blocks defined by attributes of the actors comprising the network .for example , blocks for gender accomodate different levels of connectedness and reciprocity for men and women .logistic regression : a member of the exponential family of models that is specific to binary outcomes .it utilizes a link function that maps expected values of the outcome onto an unrestricted scale to ensure that all predictions from the model are well - defined .multinomial distribution : a generalization of the binomial distribution to three or more categories .the sum of the probabilities of each category equals 1 .56 . exponential random graph model : a model in which the state of the entire network is the dependent variable .provides a flexible approach to accounting for various forms of dependence in the network . not amenable to causal modeling .degeneracy : an estimation problem encountered with exponential random graph models in which the fitted model might reproduce observed features of the network on average but each actor draw bears no resemblence to the observed network . 
often degenerate draws are empty or complete graphs .latent distance model : a model in which the status of dyads are independent conditional on the positions of the actors , and thus the distance between them , in a latent social space .latent eigenmodel : a model in which the status of dyads are independent conditional on the product of the ( weighted ) latent positions of the actors in the dyad .latent variable : an unobserved random variable .random effects and pure error terms are latent variables . 61 .latent class : an unobserved categorical random variable .actors with the same value of the variable are considered to be in the same latent class .factor analysis : a statistical technique used to decompose the correlation ( or covariance ) matrix of a set of random variables into groups of related items .generalized estimating equation ( gee ) : a statistical method that corrects estimation errors for dependent observations without necessarily modeling the form of the dependence or specifying the full distribution of the data .random effect : a parameter for the effect of a unit ( or cluster ) that is drawn from a specified probability distribution . treating the unit effects as random draws from a common probability distribution allows information to be pooled across units for the estimation of each unit - specific parameter .fixed effect : a parameter in a model that reflects the effect of an actor belonging to a given unit ( or cluster ) . by virtue of modeling the unit effects as unrelated parameters, no information is shared between units and so estimates are based only on information within the unit .ordinary least squares : a commonly - used method for estimating the parameters of a regression model .the objective function is to minimize the squared distance of the fitted model to the observed values of the dependent variable .maximum likelihood : a method of estimating the parameters of a statistical model that typically embodies parametric assumptions .the procedure is to seek the values of the parameters that maximize the likelihood function of the data .likelihood function : an expression that quantifies the total information in the data as a function of model parameters .markov chain monte carlo : a numerical procedure used to fit bayesian statistical models .steady state : the state - space distribution of a markov chain describes the long - run proportion of time the random variable being modeled is in each state .often markov chains iterate through a transient phase in which the current state of the chain depends less and less on the initial state of the chain .the steady state phase occurs when successive samples have the same distribution ( i.e. , there is no dependence on the initial state ) .colinearity : the correlation between two predictors after conditioning on the other observed predictors ( if any ) .when predictors are colinear distinguishing their effects is difficult and the statistical properties of the estimated effects are more sensitive to the validity of the model .normal distribution : another name for the gaussian distribution .has a bell - shaped probability density function .covariance matrix : a matrix in which the element contains the covariance of items and .absolute or geodesic distance : the total distance along the edges of the network from one actor to another .cartesian distance : the distance between two points on a two - dimension surface or grid . 
adheres to pythagorus theorem .count data : observations made on a variable with the whole numbers ( 0 , 1 , 2 , ) as its state space . 77 .statistical inference : the process of establishing the level of certainty of knowledge about unknown parameters ( or hypothesis ) from data subject to random variation , such as when observations are measured imperfectly with no systematic bias or a sample from a population of interest is used to estimate population parameters . 78 .null model : the model of a network statistic typically represents what would be expected if the feature of interest was non - existent ( effect equal to 0 ) or outside the range of interest . 79 .permutation test : a statistical test of a null hypothesis against an alternative implemented by randomly re - shuffling the labels ( i.e. , the subscripts ) of the observations .the significance level of the test is evaluated by re - sampling the observed data 50 100 times and computing the proportion of times that the test is rejected . 1 .network science : the approach developed from 1995 onwards mostly within statistical physics and applied mathematics to study networked systems across many domains ( e.g. , physical , biological , social , etc ) .usually focuses on very large systems ; hence , theoretical results derived in the thermodynamic limit are good approximations to real - world systems .2 . thermodynamic limit : in statistical physics refers to the limit obtained for any quantity of interest as system size tends to infinity .many analytical results within network science are derived in this limit due to analytical tractability .3 . statistical physics : the branch of physics dealing with many body systems where the particles in the system obey a fix set of rules , such as newtonian mechanics , quantum mechanics , or any other rule set . as the number of bodies ( particles ) in a system grows , it becomes increasingly difficult ( and less informative ) to write down the equations of motion , a set of differential equations that govern the motion of the particles over time , for the system .however , one can describe these systems probabilistically . the word `` statistical '' is somewhat misleading as there is no statistics in the sense of statistical inference involved ; instead everything proceeds from a set of axioms , suggesting that `` probabilistic '' might be a better term . statistical physics ,also called statistical mechanics , gives a microscopic explanation to the phenomena that thermodynamics explains phenomenologically .4 . generative model : most network models within network science belong to this category .here one specifies the microscopic rules governing , for example , the attachment of new nodes to the existing network structure in models of network growth .cumulative advantage : a stylized modeling mechanism introduced by price in 1976 to capture phenomena where `` success breeds success . 
''price applied the model to study citation patterns where power - law or power - law like distributions are observed for the distribution of the number of citations and successfully reproduced by the model .polya urn model : a stylized sampling model in probability theory where the composition of the system , the contents of the urn , changes as a consequence of each draw from the urnpower law : refers to the specific functional form of the distribution of quantity .also called pareto distribution .see scale - free network .preferential attachment : a stylized modeling mechanism introduced by barabasi and albert in 1999 where the probability of a new node to attach itself to an existing node of degree is an increasing function of ; in the case of linear preferential attachment , this probability is directly proportional to . in short ,the higher the degree of a node , the higher the rate at which it acquires new connections ( increases its degree )weak ties hypothesis : a hypothesis developed by sociologist mark granovetter in his extremely influential 1973 paper `` the strength of weak ties . ''the hypothesis , in short , states the following : the stronger the tie connecting persons and , the higher the fraction of friends they have in common .modularity : modularity is a quality - function used in network community detection , where its value is maximized ( in principle ) over the set of all possible partitions of the network nodes into communities .standard modularity reads as where is the community assignment of node and is kronecker delta ; other quantities as defined in the text . 11 .rate equations : rate equations , commonly used to model chemical reactions , are similar to master equations but instead of modeling the count of objects ( e.g. , number of nodes ) in a collection of discrete states ( e.g. , the number of -degree nodes for different values of ) , they are used to model the evolution of continuous variables , such as average degree , over timemaster equations : widely used in statistical physics , these differential equations model how the state of the system changes from one time point to the next . for example , if denotes the number of nodes of degree , given the model , one can write down the equation for , i.e. , the number of -degree nodes at time . 13 . fitness or affinity or attractiveness : a node attribute introduced to incorporate heterogeneity in the node population in a growing network model .for example , in a model based on preferential attachment , this could represent the inherent ability of a node to attract new edges , a mechanism that is superimposed on standard preferential attachment .community : a group of nodes in a network that are , in some sense , densely connected to other nodes in the community but sparsely connected to nodes outside the community .community detection : the set of methods and techniques developed fairly recently for finding communities in a given network ( graph ) .the number of communities is usually not specified _ a priori _ but , instead , needs to be determined from data .critical point : the value of a control parameter in a statistical mechanical system where the system exhibits critical behavior : previously localized phenomena now become correlated throughout the system which at this point behaves as one single entityphase diagram : a diagram displaying the phase ( liquid , gas , etc . ) of the system as one or more thermodynamic control parameters ( temperature , pressure , etc . 
) are varied .phase transition : thermodynamic properties of a system are continuous functions of the thermodynamic parameters within a phase ; phase transitions ( e.g. , liquid to gas ) happen between phases where thermodynamic functions are discontinuous .network diameter : the longest of the shortest pairwise paths in the network , computed for each dyad ( node pair ) .hysteresis : the behavior of a system depends not only on its current state but also on its previous state or states . 21 .quality function : typically a real - valued function with a high - dimensional domain that specifies the `` goodness '' of , say , a given network partitioning .for example , given the community assignments of nodes , which can be seen as a point in an -dimensional hypercube , the standard modularity quality function returns a number indicating how good the given partitioning is . 22 .dynamic process : any process that unfolds on a network over time according to a set of pre - specified rules , such as epidemic processes , percolation , diffusion , synchronization , etc .slice : in the context of multi - slice community detection , refers to one graph in a collection of many within the same system , where a slice can capture the structure of a network at a given time ( time - dependent slice ) , at a particular resolution level ( multiscale slice ) , or can encode the structure of a network for one tie type when many are present ( multiplex slice ) . 24 . scale - free network :network with a power - law ( pareto ) degree distribution .erds - rnyi model : also known as poisson random graph ( after the fact that the degree distribution in the model follows a poisson distribution ) , bernoulli random graph ( after the fact that each edge corresponds to an outcome of a bernoulli process ) , or the random graph ( as the progenitor of all random graphs ) . starting with a fixed set of nodes , one considers each node pair in turn independently of the other node pairs and connects the nodes with probability .erds and rnyi first published the model in 1959 , although solomonoff and rapoport published a similar model earlier in 1951 .watts - strogatz model : a now canonical model by watts and strogatz that was introduced in 1998 .starting from a regular lattice structure characterized by high clustering and long paths , the model shows how randomly rewiring only a small fraction of edges ( or , alternative , adding a small number of randomly placed edges ) leads to a small - world characterized by high clustering and short paths .the model is conceptually appealing , and shows how to interpolate , using just one parameter , from a regular lattice structure in one extreme to an erds - rnyi graph in the other .mean - field approximation : sometimes called the zero - order approximation , this approximation replaces the value of a random variable by its average , thus ignoring any fluctuations ( deviations ) from the average that may actually occur .this approach is commonly used in statistical physics .ensemble : a collection of objects , such as networks , that have been generated with the same set of rules , where each object in the ensemble has a certain probability associated with it .for example , one could consider the ensemble of networks that consists of 6 nodes and 2 edges , each begin equiprobable . | this chapter introduces statistical methods used in the analysis of social networks and in the rapidly evolving parallel - field of network science . 
although several instances of social network analysis in health services research have appeared recently , the majority involve only the most basic methods and thus only scratch the surface of what might be accomplished . cutting - edge methods are presented here , using relevant examples and illustrations from health services research . * keywords * : dyad ; homophily ; induction ; network science ; peer - effect ; relationship ; social network . |
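the modularity entry in the glossary above ( item 10 ) lost its displayed formula; as a hedged illustration, the sketch below implements the standard newman - girvan modularity, q = (1/2m) sum_ij [ a_ij - k_i k_j / (2m) ] delta(c_i , c_j), for an undirected graph stored as a numpy adjacency matrix. the function name and the toy two - community graph are purely illustrative and are not taken from the chapter.

import numpy as np

def modularity(A, communities):
    # standard newman - girvan modularity of an undirected, unweighted graph
    # A            : symmetric adjacency matrix without self loops
    # communities  : community label c_i for every node i
    k = A.sum(axis=1)                      # node degrees
    two_m = A.sum()                        # 2m, twice the number of edges
    Q = 0.0
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            if communities[i] == communities[j]:          # kronecker delta term
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m

# toy example: two triangles joined by a single edge, split into their natural communities
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # about 0.36, a strongly modular partition

maximizing this quantity over partitions is what community detection methods, including the multi - slice approach mentioned in the glossary, attempt to do, usually with heuristics rather than exhaustive search.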
probing the statistical properties of the large - scale structure of the universe has a great importance in studying the origin of our universe .recent galaxy surveys have been revealing the statistical properties of distribution of galaxies on very large scales .the number density of galaxies are not necessarily proportional to the density of mass , and this ambiguity is known as galaxy biasing problem ( kaiser 1984 ; davis et al .1985 ; bardeen et al . 1986 ) . since galaxies of different types have different clustering properties ( e.g. , dressler 1980 ; davis & geller 1976 ; giovanelli , haynes , & chincarini 1986 ; santiago & strauss 1992 ; loveday et al .1996 ; hermit et al . 1996 ; guzzo et al .1997 ) , not all types of galaxies can simultaneously be unbiased tracers of mass .this ambiguity is undesirable in extracting cosmological information from the data of galaxy distribution .the simplest model for the galaxy bias is the _ local _ , _ linear _ bias . in this simple model , the number density field of galaxy with a fixed smoothing scale assumed to be proportional to density field of mass with a same smoothing scale : where and are density contrast of galaxy and of mass , respectively , with a fixed smoothing length .this model is viable if ( a ) is dependent only on and ( b ) density contrast of mass is sufficiently small .the latter condition ( b ) is achieved by considering large scales on which density fluctuations are small enough so that only linear term becomes significant .the linear coefficient is called as bias parameter .the bias parameter is often assumed to be a constant , although it can depend on in general .the effect of nonlinearity should be taken into account when we are interested in nonlinear scales .on weakly nonlinear scale , this affect the estimation of higher order statistics ( e.g. , fry & gaztaaga 1993 ) .although the latter condition is reasonably considered to be valid if we are interested in linear scales , the former condition ( a ) is not trivial so far .the non - triviality of condition ( a ) leads us to a concept of stochastic bias which is recently argued ( dekel & lahav 1998 ; pen 1998 ; tegmark & peebles 1998 ; tegmark & blomley 1998 ; taruya , koyama & soda 1998 ; taruya & soda 1998 ; blanton et al .1998 ) . in the stochastic biasing scheme , is not supposed to be determined solely by , but the scatter in - relation is taken into account . in linear regime in which the density contrast of mass is small enough , and is approximated by random gaussian field , the two - point statistics fully characterize the statistics of the scatter . in literatures , the bias parameter and the dimensionless cross correlation are used to characterize the linear stochastic biasing scheme : where and are rms density fluctuations of mass and galaxies , respectively .the bias parameter in the equation ( [ eq1 ] ) is a generalization of the bias parameter in linear deterministic biasing scheme of equation ( [ eq0 ] ) . in deterministic case ,the cross correlation is always unity . from schwarz inequality, can not exceed unity and means that biasing is deterministic , .thus , the deviation from measures the stochasticity .if the smoothing scale is large enough so that and are considered as bivariate gaussian field , these three parameters , and contain all the statistical information about the stochastic biasing . in literatures ,these parameters are sometimes considered as free parameters to be determined by observation . 
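as a concrete illustration of the two parameters just introduced, the hedged sketch below estimates the bias b ( the ratio of rms fluctuations of galaxies and mass ) and the dimensionless cross correlation r from two density - contrast fields smoothed on the same scale. the toy fields, the added scatter, and all variable names are illustrative assumptions, not a prescription from the paper.

import numpy as np

def stochastic_bias_parameters(delta_g, delta_m):
    # b = sigma_g / sigma_m and r = <delta_g delta_m> / (sigma_g sigma_m)
    # for two zero - mean density - contrast fields with the same smoothing length
    dg = delta_g - delta_g.mean()
    dm = delta_m - delta_m.mean()
    sigma_g, sigma_m = dg.std(), dm.std()
    b = sigma_g / sigma_m
    r = np.mean(dg * dm) / (sigma_g * sigma_m)
    return b, r

# toy fields: deterministic linear bias b = 2 plus uncorrelated scatter
rng = np.random.default_rng(0)
delta_m = rng.normal(0.0, 0.05, size=(64, 64, 64))
delta_g = 2.0 * delta_m + rng.normal(0.0, 0.02, size=delta_m.shape)
print(stochastic_bias_parameters(delta_g, delta_m))   # b slightly above 2, r just below 1

by the schwarz inequality the estimated r can not exceed unity, and it reaches unity only when the scatter term vanishes, i.e. when the biasing is deterministic.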
however , if we could know the process of galaxy formation in detail , the parameters and would be derived from some fundamental physical processes .this is because the bias and its stochasticity come from our ignorance of the galaxy formation .therefore , there should be some theoretical constraint between and . at first sight, it seems difficult to find any constraint as we do not exactly know the process of galaxy formation .this is true especially on small scales where the nonlinear and nonlocal characters of galaxy formation plays an important role on the statistics of galaxy distributions .this problem is one of the most important issues in astrophysics and much numerical and analytical work is needed ( e.g. , rees & ostriker 1977 ; white & frenk 1991 ; cen & ostriker 1992 ; mo & white 1996 ) .however , on large scales , such undesirable characters can be expected to be small .scherrer & weinberg ( 1998 ) , based on the local biasing scheme , showed that the stochasticity actually vanishes on large scales and galaxy autocorrelation function behaves exactly as in deterministic biasing scheme .dekel & lahav ( 1999 ) also imply the same property based on a specific simple model . in this paper , based on a general nonlocal method, we show that stochasticity in fourier space asymptotically vanishes on linear scales under a certain condition , explicitly deriving the relation between the stochastic parameters , and the nonlinear , nonlocal functional form of galaxy formation .we will show the first coefficient of generalized wiener - hermite functional , which is defined below , of the nonlocal , nonlinear relations of galaxy formation will contribute to the galaxy statistics on large scales , if that coefficient does not vanish . in the derivation , we can use the technique developed in matsubara ( 1995 ) , in which the diagrammatic methods for the calculation of general nonlocal biasing are introduced .the diagrammatics are useful especially when the non - gaussianity and/or higher - order correlations are interested in . in this paper , however , we derive the result without employing diagrammatics for self - consistency and for the simplicity of the problem .the diagrammatics make it easier to generalize the present results to higher - order statistics . in 2, we revisit the mathematical methods for nonlinear , and nonlocal bias , which have been developed by matsubara ( 1995 ) .then we derive the relation between the stochastic bias parameters and the nonlocality of biasing . in 3 , we examine three types of biasing schemes , i.e. , the local lagrangian bias models , the cooperative model , and the peak model , according to the result of 2 . in 4 , we discuss the results and present the conclusions .in the present paradigm , the distribution of galaxies is determined by initial density fluctuations . whether a galaxy has formed at some place or not should be fully determined by the initial fluctuations . in this sense , the galaxy formation is deterministic , although it nonlinearly , nonlocally , and possibly chaotically depends on initial density fluctuations . in the stochastic biasing scheme ,this complex features of nonlinearity and nonlocality of the galaxy formation are expressed by phenomenological scatter of local relation .thus , in principle , stochasticity can be determined by nonlinear and nonlocal deterministic processes of galaxy formation . 
since both the present mass density field and the density field of galaxies are determined by initial density fluctuations , they are expressed by functionals and . in the following , instead of the initial fluctuations , we will alternatively use linearly extrapolated density fluctuations , where is a linear growth rate .the variable is simply a linear extrapolation of the evolution of density contrast , regardless of whether or not the small scale fluctuations are actually in linear regime .the introduction of the linearly extrapolated field is just for convenience and it simply represents the initial density fluctuations . in this notation , one can write the relations as ,{{\mbox{\boldmath }}}),\qquad { \rho_{\rm g}}({{\mbox{\boldmath } } } ) = f_{\rm g}([{\delta_{\rm l}}],{{\mbox{\boldmath } } } ) , \label{eq2}\end{aligned}\ ] ] where we introduce the notation x x x x x x x x x y y ] : \cdots { \cal p}[{\delta_{\rm l } } ] , \label{eq7}\end{aligned}\ ] ] where = { \cal n } \exp \left [ -\frac12 \int d^3x d^3y { \delta_{\rm l}}({{\mbox{\boldmath } } } ) { \xi_{\rm l}}^{-1}({{\mbox{\boldmath }}},{{\mbox{\boldmath } } } ) { \delta_{\rm l}}({{\mbox{\boldmath } } } ) \right ] .\label{eq8}\ ] ] the formal normalization constant is given by \exp \left [ -\frac12 \int d^3x d^3y { \delta_{\rm l}}({{\mbox{\boldmath } } } ) { \xi_{\rm l}}^{-1}({{\mbox{\boldmath }}},{{\mbox{\boldmath } } } ) { \delta_{\rm l}}({{\mbox{\boldmath } } } ) \right ] \right\}^{-1}. \label{eq9}\ ] ] because the degrees of freedom is infinite , formally diverges , but the proper regularization is always possible by discretizing the three dimensional continuum space .one can prove the orthogonality equation ( [ eq6 ] ) , just generalizing the proof of orthogonality of simple hermite polynomial , which is well known .we consider the functionals of equation ( [ eq5 ] ) as base functionals for the expansion of the functional of mass density field , and of galaxy density field : ) = \sum_{n=1}^\infty \frac1{n ! } \int d^3x_1 \cdots d^3x_n k_{\rm a}^{(n)}({{\mbox{\boldmath }}}-{{\mbox{\boldmath }}}_1,\cdots,{{\mbox{\boldmath }}}-{{\mbox{\boldmath }}}_n ) { \cal h}_{(n)}({{\mbox{\boldmath }}}_1,\ldots,{{\mbox{\boldmath }}}_n ) , \label{eq10}\ ] ]where a = m or g. the reason why term is not appeared in the above expression is that and for [ set in equation ( [ eq6 ] ) ] .this expansion is complete because the kernel is uniquely given by )\right\rangle \label{eq11 } \\ & = & \left\langle \frac{\delta^n \delta_{\rm a}({{\mbox{\boldmath }}},[{\delta_{\rm l } } ] ) } { \delta{\delta_{\rm l}}({{\mbox{\boldmath }}}_1)\cdots \delta{\delta_{\rm l}}({{\mbox{\boldmath }}}_n ) } \right\rangle .\label{eq12}\end{aligned}\ ] ] according to the expansion ( [ eq10 ] ) and the orthogonality relation ( [ eq6 ] ) , the two - point auto - correlation function of matter and of galaxies and the cross correlation function are given by where a , b = m , g , and .note that this expansion is valid as long as is a random gaussian field .if the initial density field is non - gaussian , there are additional terms in equation ( [ eq14 ] ) which depend on initial higher - order correlation functions [ see matsubara ( 1995 ) for detail ] . 
both the nonlocal kernels and do not depend on .thus , on large scales with respect to the separation , we can approximate equation ( [ eq14 ] ) by only considering lower order terms of , provided that the kernel does not have broad profile .if the kernel falls off slowly on large scales , we can not truncate the expansion ( [ eq14 ] ) . in the rest of this paper, we assume the lowest order term in the expansion ( [ eq14 ] ) actually dominates the higher - order terms . before proceeding to the analysis of the lowest order approximation, we consider the cases in which this assumption breaks down .imagine , for example , that , and on large scales. then the -th order term in the integral in expansion ( [ eq14 ] ) is approximately given by the last expression is derived by the fact that the integral is the form of convolution and the ( 3d ) fourier transform of is proportional to ( peebles 1980 ) . as seen by this expression , should be negative to ensure the equation ( [ eq14.5 ] ) actually falls off on large scales . for example, if falls off as , and , then equation ( [ eq14.5 ] ) does not fall off anymore . on the other hand , if , then equation ( [ eq14.5 ] ) falls off rapidly enough so that the higher order terms can be neglected . in the latter case ,the spatial integration of is finite .it is a natural assumption that the spatial integration of is finite .if it is infinite , as seen from equation ( [ eq10 ] ) , the information of density fluctuations of infinitely distant places affect the galaxy formation as much as , or more than the fluctuations of nearer places , which is unlikely in reality .we assume in the rest of this paper .in other words , we assume that fourier transform of higher - order kernels are finite in the limit .note that the expansion ( [ eq14 ] ) is essentially different from usual perturbative approach by taylor expansion of density contrast itself .instead , we employ the orthogonal expansion for galaxy and mass density fields , and the resulting expression , ( [ eq14 ] ) , can be interpreted as an asymptotic expansion by correlation function , .thus , we only assume the smallness of correlation function on large scales , whether the density contrast on small scale is large or not .since our argument in this section is extremely formal , it would be instructive to calculate an example .consider a model in which and is calculated up to second order in perturbation theory [ e.g. , fry ( 1984 ) ] : where e^{i { { \mbox{\boldmath }}}_1 \cdot { { \mbox{\boldmath }}}_1 + i { { \mbox{\boldmath }}}_2 \cdot { { \mbox{\boldmath }}}_2 } , \label{eq - a2}\end{aligned}\ ] ] and we assume the einstein - de sitter universe for simplicity . then , according to equation ( [ eq12 ] ) , we can calculate the kernels as since drops off as on large scales , the discussion at the end of the previous subsection suggests that -th order term in equation ( [ eq14 ] ) actually drops off as on large scales . in the following, we are interested in large scales and consider only the lowest order approximation of equation ( [ eq14 ] ) : assuming that this term does not vanish and that the higher - order terms are negligible . since this expression has the form of convolution , it becomes just products in fourier space . 
from statistical isotropy ,the fourier transform of , denoted as , is a function of the absolute value of the wave vector : where is the linear power spectrum as the true spectrum , there is possibly a constant term in addition to the above equation , which comes from the small scale inaccuracy of expression ( [ eq15 ] ) [ see scherrer & weinberg ( 1998 ) ; dekel & lahav ( 1998 ) ] .however , in the following , we consider as a merely mathematical quantity which represents just the fourier transform of of the equation ( [ eq15 ] ) . ] .the linear - scale power spectrum of mass is simply given by .this means on linear scales .thus , denoting , these equations are valid as long as . if , equation ( [ eq16 ] ) vanishes and is no longer the lowest order in the expansion of equation ( [ eq14 ] ) . in such case, we should consider the higher - order terms . in equations ( [ eq17])([eq19 ] ) , can be identified to the linear bias parameter in fourier space , and is given by , from fourier transform of equation ( [ eq12 ] ) , where is a fourier transform of linear density fluctuations , .these simple equations ( [ eq17])([eq20 ] ) are the primary results of this paper .these equations show , as long as , that there are no residual cross correlation in fourier space in linear regime ( except for the constant term which comes from the small scale behavior of correlation function ) and that the bias parameter in fourier space is which is scale - dependent and is related to nonlinearity and nonlocality of galaxy formation through equation ( [ eq20 ] ) .this means that the fourier - mode stochasticity which arise from the nonlinearity and nonlocality of the galaxy formation vanishes in linear regime .thus , the cross correlation in fourier space should approach to unity in large - scale limit , as long as the galaxy formation is such that .the case , can happen in special cases .it can happen , for example , when the large - scale linear power is completely erased by some peculiar form of nonlocal biasing , and mode - mode coupling from nonlinear scales dominates on linear scales .it also can happen when the biasing is represented by an even function of , e.g. , purely quadratic , , or quartic , , etc .we assume the galaxy formation does not have such special form and satisfies the condition in the rest of this paper .the vanishing stochasticity is an important constraint on large scales .at this point , the naive introduction of stochasticity in linear or quasi - linear regime , as is sometimes done in literatures , should be cautious . 
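a hedged numerical sketch of the statement above: on a periodic grid the auto - and cross spectra of the mass and galaxy fields can be estimated with ffts, and when the galaxy field is a deterministic ( possibly nonlocal ) functional of the linear field the cross correlation r(k) stays at unity while b(k) may run with scale. the binning, the gaussian - smoothed toy galaxy field, and the variable names below are illustrative assumptions and not the authors' procedure.

import numpy as np
from scipy.ndimage import gaussian_filter

def band_powers(delta_a, delta_b, n_bins=16):
    # cross power <delta_a(k) delta_b*(k)> averaged in |k| bins (arbitrary units)
    fa, fb = np.fft.fftn(delta_a), np.fft.fftn(delta_b)
    cross = (fa * np.conj(fb)).real
    freq = np.fft.fftfreq(delta_a.shape[0])
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    kk = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    bins = np.linspace(1e-3, kk.max(), n_bins + 1)
    idx = np.digitize(kk.ravel(), bins)
    return np.array([cross.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])

# toy galaxy field: an amplified, smoothed (hence nonlocal, scale - dependent) copy of the mass field
rng = np.random.default_rng(1)
delta_m = rng.normal(size=(32, 32, 32))
delta_g = 2.0 * gaussian_filter(delta_m, sigma=1.5, mode="wrap")

P_mm = band_powers(delta_m, delta_m)
P_gg = band_powers(delta_g, delta_g)
P_gm = band_powers(delta_g, delta_m)
b_k = np.sqrt(P_gg / P_mm)            # scale - dependent bias factor
r_k = P_gm / np.sqrt(P_gg * P_mm)     # remains close to unity for a deterministic mapping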
on large scales, could not be freely adjusted , but would be close to unity .this fact is already noticed in the paper by dekel & lahav ( 1998 ) , in which only a specific , simple model is considered .the same conclusion is derived by scherrer & weinberg ( 1998 ) based on local galaxy formation scheme .our conclusions apply not only to local schemes , but also to nonlocal schemes of galaxy formation .the difference between local and nonlocal scheme is that local bias generates constant bias factor , while nonlocal bias generates scale - dependent one .if there is a non - unity value of in large - scale limit , it means that there is some exotic stochasticity which is not relevant to the initial density fluctuations , or that .as all the structures in the universe are supposed to be formed from initial density fluctuations , there is no specific reason to introduce such kind of exotic processes , at least in the framework of the present standard theory of structure formation in the universe . of course , in nonlinear regime where the approximation of equation ( [ eq15 ] ) breaks down ,the stochasticity in fourier space arises by mode - mode coupling .since the dynamics of such nonlinear regime is complex enough to trace analytically , the concept of stochastic bias is useful especially in this regime . in the nonlinear regime, one should be aware that the nonlinearity of galaxy - density relation also dominate and that only parameters and are not sufficient to characterize the biasing properties ( dekel & lahav 1998 ) . the equations ( [ eq17])([eq19 ] ) is valid on scales larger than a nonlinear scale , because these equations are derived from lowest order approximation , which assumes is small .if the nonlocality of galaxy formation is small and is localized on some scales smaller than , then its fourier transform , , will be constant for . if , the bias factor is constant in the validity region of the equations .this is equivalent to the purely local galaxy formation .in contrast , if , there appears the scale - dependence of bias parameter besides mode - mode coupling .this scale - dependence comes from the nonlocality of the galaxy formation .the -dependence of thus is not negligible on scales below and its behavior can be describable by equation ( [ eq20 ] ) on scales above .even when there is no stochasticity in fourier space , it may still appear in real space on linear scales because of the scale - dependence of bias parameter , besides mode - mode coupling . 
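to make the last point concrete before the derivation that follows, here is a hedged sketch of how a purely deterministic but scale - dependent fourier - space bias already yields r(R) < 1 once the fields are smoothed: the real - space parameters follow from averages of b and b^2 weighted by the power spectrum times the squared window. the power - law spectrum, the gaussian window, and the toy bias function below are assumptions chosen only for illustration.

import numpy as np

def real_space_bias(b_of_k, P_of_k, W_of_x, R, k=None):
    # sigma_m^2 ~ int P W^2 k^2 dk ; sigma_g^2 ~ int b^2 P W^2 k^2 dk ; sigma_gm ~ int b P W^2 k^2 dk
    if k is None:
        k = np.logspace(-3, 2, 4000)
    def integral(f):
        y = f * P_of_k(k) * W_of_x(k * R) ** 2 * k ** 2
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(k))   # simple trapezoidal rule
    s_mm = integral(np.ones_like(k))
    s_gg = integral(b_of_k(k) ** 2)
    s_gm = integral(b_of_k(k))
    return np.sqrt(s_gg / s_mm), s_gm / np.sqrt(s_gg * s_mm)   # b(R), r(R)

P = lambda k: k ** -1.5                                        # hypothetical power - law spectrum
W_gauss = lambda x: np.exp(-x ** 2 / 2.0)                      # gaussian smoothing window
b_k = lambda k, R_star=5.0: 1.0 + np.exp(-(k * R_star) ** 2)   # toy nonlocal bias running on scale R_star

for R in (1.0, 5.0, 20.0):
    print(R, real_space_bias(b_k, P, W_gauss, R))
# r(R) dips below unity only while the smoothing scale is comparable to R_star,
# and returns to unity well above it, mirroring the behaviour derived in the text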
actually , the galaxy - density cross correlation and galaxy correlation for a smoothed field with linear smoothing length are given by where is a fourier transform of a smoothing function .this expression and equation ( [ eq1 ] ) explicitly show the scale dependence and stochasticity of the biasing in real space .we define the following notation for k - space averaging for an arbitrary function : then , equation ( [ eq1 ] ) reduces to , simply , to obtain more insight on this scale - dependence and stochasticity , let us expand the bias parameter in terms of , where is the scale of nonlocality of galaxy formation as above ( terms of odd power of do not appear for reflection symmetry ) : with this expansion , equation ( [ eq23 - 2 ] ) is expanded in a straightforward manner , and the results are ( r _ * k_{\rm g})^{-4 } + { \cal o}(r _ * k_{\rm g})^{-6 } , \label{eq24 } \\ & & r(r ) = 1 - \frac92 \gamma^2(1-\gamma^2 ) \left ( \frac{b_{\rm f}^{(1)}}{b_{\rm f}^{(0 ) } } \right)^2 ( r _ * k_{\rm g})^{-4 } + { \cal o}(r _ * k_{\rm g})^{-6}. \label{eq25}\end{aligned}\ ] ] in this expression , spectral parameters and ( bardeen et al . 1986 , bbks , hereafter ) are given by where notation of the equation ( [ eq23 - 1 ] ) is applied .the parameter is of order unity , and is of order .for example , if the power spectrum has the form of power - law , , and the smoothing function is gaussian , , these parameters are given by and .equations ( [ eq24 ] ) and ( [ eq25 ] ) give the expression for stochastic bias parameter from nonlocality of galaxy formation .these equations represent the minimal stochasticity which is inevitable when bias is scale dependent .we have neglected the mode coupling near nonlinear scales .the scale - dependence and stochasticity appears on the scale of the nonlocality , . on scales larger than nonlocality, such scale - dependence and stochasticity disappears .especially , stochasticity rapidly vanishes for the lack of term in equation ( [ eq25 ] ) .these results do not depend on specific details of galaxy formation as long as , and higher - order terms are negligible .the information of galaxy formation involves only through two parameters , and , up to the order , or three parameters , and up to the order .these parameters are related to galaxy formation through equation ( [ eq20 ] ) which should be calculable if we could know the details of galaxy formation process .otherwise , they can be considered as free parameters to be fitted by observation , instead of fitting functions and , which have the infinite degrees of freedom .so far the argument is quite formal . in this section ,we consider specific models of galaxy formation , i.e. , the lagrangian local biasing models , the cooperative model , and the peak model , as simple examples .although the quantitative correspondence of these models and the actual galaxy formation still needs investigation , these examples can give qualitative aspects on the nonlocal galaxy formation .in the lagrangian local galaxy formation models , the number density of galaxies is a local function of a smoothed linear density field . 
the smoothed density field , however , is a nonlocal function of linear density field , .thus , in some sense , local galaxy formation models fall in the category of nonlocal models , with particularly simple functional form : where is an usual one - variable function of the smoothed linear density field .the smoothed linear density field , , is defined by where is a smoothing kernel function with smoothing length . with this particularly simple form, the bias parameter in fourier space , given by equation ( [ eq20 ] ) , reduces to where is a fourier transform of the smoothing kernel .all the other higher - order kernels ( ) vanish .this result is essentially the same which is derived previously in real space ( szalay 1988 ; coles 1993 ) : since . in usual nonlocal biasing models , the smoothing length is taken to be in nonlinear regime . as we consider linear scales , andthere is no scale - dependence on and thus there is no stochasticity in real space .this result is consistent with the work by scherrer & weinberg ( 1998 ) , who showed the local models produce constant bias factor and there is negligible stochasticity on large scales .examples of local models are density - threshold bias , , where is a step function ( kaiser 1984 ; jensen & szalay 1986 ) , weighted bias , ( catelan et al .1994 ) , cen - ostriker bias , ^ 2\}$ ] ( cen & ostriker 1992 ) , etc . in nonlinear regime, there is really scale - dependence on ( mann , peacock , & heavens 1998 ; narayanan , berlind , & weinberg 1998 ) .recently , the mo & white model ( mo & white 1996 ) for the clustering of dark matter halos , which is an extension of press - schechter formalism ( press & schechter 1974 ; cole & kaiser 1989 ; bond et al . 1991 ) , is interested in ( e.g. , catelan et al .1998 ; catelan , matarrese & porciani 1998 ; jing 1998 ; sheth & lemson 1998 ) .this model is also an example of the lagrangian local biasing model , because the halo density contrast in their model is determined by linear density field which is smoothed on scale .next , we consider cooperative galaxy formation model introduced by bower et al .( 1993 ) , in which they showed that a large - scale ( ) , but weak modulation of galaxy luminosities can reconcile the discrepancy between the scdm power spectrum and apm galaxy data ( see also babul & white 1991 ) . as a simple example , they identify sites for galaxy formation as places where density contrast satisfies the following relation where is the density contrast smoothed on a scale , which represent the large - scale modulation , and is a constant which is called as `` the modulation coefficient '' .this simple model is mathematically equivalent to the standard density - threshold bias model , but for the new field defined by it is easy to see that this new field is just a new density field with a smoothing function , where and are smoothing function for and , respectively .the bias parameter for the cooperative model in fourier space is similar to equation ( [ eq42 ] ) : \label{eqc03}\\ & = & \sqrt{\frac{2}{\pi } } \frac{e^{-{\nu'}^2/2 } } { { \rm erfc}\left(\nu'/\sqrt{2}\right ) } \left [ w_{\rm s}(ks ) + \kappa w_{\rm mod}(k r_{\rm mod } ) \right ] , \label{eqc04}\end{aligned}\ ] ] where .this expression and equation ( [ eq23 - 2 ] ) enable us to evaluate the stochastic parameters .in figure 1 , we plot the scale dependence of stochastic parameters of the cooperative model . 
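before turning to figure 1, a hedged sketch of how the cooperative - model bias factor written just above can be evaluated numerically. the threshold value, the modulation coefficient, and the use of gaussian windows for both scales are placeholders ( the definition of the effective threshold is garbled in this copy ), so this is an illustration of the functional form only.

import numpy as np
from scipy.special import erfc

def W_gauss(x):
    # gaussian smoothing window in fourier space
    return np.exp(-x ** 2 / 2.0)

def cooperative_bias(k, nu_eff, kappa, s, R_mod):
    # threshold prefactor times [ small - scale window + kappa * large - scale modulation window ]
    prefac = np.sqrt(2.0 / np.pi) * np.exp(-nu_eff ** 2 / 2.0) / erfc(nu_eff / np.sqrt(2.0))
    return prefac * (W_gauss(k * s) + kappa * W_gauss(k * R_mod))

k = np.logspace(-3, 0, 200)                                   # wavenumbers, illustrative units
b_k = cooperative_bias(k, nu_eff=1.5, kappa=10.0, s=0.5, R_mod=30.0)
# the bias is boosted for k below ~1/R_mod, i.e. clustering is enhanced on the modulation scale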
in this figure, we assume a cdm power spectrum of bbks ^{-1/2 } , \label{eq37}\\ & & \qquad\qquad\qquad q = \frac{k}{\gamma h { \rm mpc}^{-1}},\end{aligned}\ ] ] with primordial spectral index and shape parameter and linear amplitude which corresponds to .the top - hat smoothing function is applied for smoothing , while the gaussian smoothing function is applied for smoothing and : and we set , and .the modulation coefficient is adjusted so as to produce the 2.5% rms modulation of the threshold , according to bower et al .( 1993 ) , i.e. , .this required taking , , for , , , respectively and bower et al.s are due to different fitting formula for power spectrum . ] .as one can see from the figure , there appears the scale - dependence of bias parameter on scales of so that the galaxy clustering on large scales are enhanced by cooperative bias .this fact is a main motivation for bower et al .( 1993 ) to introduce the cooperative model .the stochastic parameter is very close to unity except on the modulation scale , where there is weak stochasticity due to scale dependence of bias parameter .next example of nonlocal galaxy formation is the peak model . in the peak model , the sites for galaxy formationare identified as high peaks of initial density field with a fixed smoothing length ( see bbks ) . treating the constraint properly for density peaksis difficult but there are several approximations . in this paper , we approximate the density peaks by density extrema ( otto , politzer & wise 1986 ; cline et al . 1987 ; catelan et al .the number density of density extrema above threshold is given by where is a smoothed linear density field with smoothing length .density extrema are identical to density peaks above some moderate threshold where almost all density extrema would be density peaks . from the general consideration in the previous section , it is straightforward to obtain parameters and of stochastic bias for this model . from equation ( [ eq20 ] ) and after tedious calculation of this model are finite in large - scale limit .] , we obtain w_{\rm s}(ks ) , \label{eq29}\end{aligned}\ ] ] where is a fourier transform of smoothing function for .other quantities in this expression are defined by hermite polynomial are defined with the normalization , equation ( [ eq29 ] ) explicitly show the scale - dependence of bias parameter in fourier space .the scale of nonlocality corresponds to , which is of order of smoothing length for obtaining density peaks .the stochastic parameters , in linear regime for density peaks are derived from the equation ( [ eq23 - 2 ] ) .assuming , the result is where spectral indices of various kind are defined as follows : equations ( [ eq33 ] ) and ( [ eq34 ] ) describe stochastic parameters in real space .it is known that if we take both high threshold limit , , and large scale limit , , the correlation function of the peak model reduces to that of linear bias , ( kaiser 1984 ; bbks 1986 ) .this property is easily confirmed from equation ( [ eq33 ] ) , where for . 
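the cdm spectrum and the smoothing kernels entering figures 1 and 2 can be written out explicitly. the sketch below uses the widely quoted bbks fitting formula for the transfer function together with the standard fourier - space top - hat and gaussian windows; the exact fitting form, shape parameter, and normalization used by the authors may differ from these assumed values ( the displayed equation is garbled in this copy ), so treat the numbers as placeholders.

import numpy as np

def T_bbks(k, Gamma=0.25, h=0.7):
    # commonly quoted bbks transfer function fit; k in 1/Mpc, with q = k / (Gamma * h) as in eq. (37)
    q = k / (Gamma * h)
    poly = 1.0 + 3.89 * q + (16.1 * q) ** 2 + (5.46 * q) ** 3 + (6.71 * q) ** 4
    return np.log(1.0 + 2.34 * q) / (2.34 * q) * poly ** -0.25

def P_cdm(k, n=1.0, A=1.0, Gamma=0.25, h=0.7):
    # linear cdm spectrum: primordial k^n times the squared transfer function (amplitude A arbitrary here)
    return A * k ** n * T_bbks(k, Gamma, h) ** 2

def W_tophat(x):
    # fourier transform of the real - space top - hat window
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3

def W_gauss(x):
    # fourier transform of the gaussian window
    return np.exp(-x ** 2 / 2.0)

k = np.logspace(-3, 1, 300)
Pk = P_cdm(k)   # in practice the amplitude would be fixed by, e.g., the quoted sigma_8 - like normalization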
in figure 2, these equations are plotted for with , where we assume cdm power spectrum of bbks , equation ( [ eq37 ] ) , with primordial spectral index and shape parameter and linear amplitude which corresponds to .the top - hat smoothing function is applied for smoothing , while the gaussian smoothing function is applied for smoothing : the large smoothing lengths , do not correspond to galaxy formation , but rather they correspond to cluster formation , since the cluster of galaxies are density peaks of large smoothing length .as seen in the figure , once the smoothing scale exceeds the smoothing length , which is the scale of galaxy or cluster formation in this model , the parameter of stochasticity rapidly converges to unity , which means there is no stochasticity above that scale . asstochasticity vanishes , the bias parameter converges to a constant on large scales .in this paper we explicitly derive the stochasticity parameters of the bias in linear regime from the nonlocality of the galaxy formation . by using the generalized wiener - hermite functionals, we can derive the two - point correlation on linear scales which is valid even if the galaxy formation process itself is both nonlinear and nonlocal .this is in contrast to the usual taylor expansion which can not treat the strongly nonlinear features of galaxy formation .wiener - hermite functionals are orthogonal functionals and we do not have to assume the smallness of itself , and even do not have to know the exact nonlinear evolution of density contrast , .instead , we assume only the smallness of correlation function on large scales .we show that the stochasticity in fourier space does not exist in linear regime ( except for the constant term which comes from the small scale behavior of correlation function ) , and that the biasing parameter in fourier space is given by .this conclusion is true as long as the galaxy formation process satisfies the relation , , and higher - order kernels ( ) do not increase with scales when .this property in fourier space is simply because the galaxy - galaxy and galaxy - mass correlation functions can be expressed as convolutions of mass correlation function at lowest order of the expansion by [ equation ( [ eq15 ] ) ] .a local model of galaxy formation has a constant bias factor , while the nonlocal model has a scale - dependent one , besides mode - mode coupling . in the linear regime where mode - mode coupling is negligible , stochasticity in real spacecomes simply from the scale - dependence of the biasing when the galaxy formation is nonlocal and .thus , naive introduction of stochasticity in fourier mode in the linear regime should be avoided .one can not introduce stochasticity in fourier mode in the linear regime simply because of the lack of knowledge about galaxy formation .if there is any stochasticity in fourier mode in the linear regime , it means that there are exotic process in the galaxy formation which does not come solely from the initial density field and such process should be correlated on linear scales , unless galaxy formation process has a special form to satisfy .such kind of exotic process is not likely , at least in the present framework of the standard theory of structure formation in the universe .we should note our analyses are restricted to the linear regime . in the nonlinear regime, there are mode coupling from both nonlinearity of density evolution and nonlinearity of galaxy formation and it makes the fourier mode stochastic . 
in the region where stochasticity is prominent , the nonlinear density evolution , which is difficult to track analytically , is also prominent , so that the phenomenological approach of stochastic bias should be effectively applied in nonlinear regime ( dekel & lahav 1998 ) . in strongly nonlinear regime ,phenomenological approach by the ( hyper ) extended perturbation theory ( colombi et al .1997 ; scoccimarro & frieman 1998 ) can shed light on how nonlinearity makes cross correlation deviate from unity .in this theory , the higher order cumulants of mass density field is given by , where is a constant predicted by tree - level perturbation theory , and .although this theory contains an extrapolation of weakly nonlinear result to strongly nonlinear regime , it phenomenologically describe the numerical results . in strongly nonlinear regime ,a mere averaging and cumulant are approximately equivalent in this ansatz : .thus , if , one can obtain and . finally , one has .this value of depends on the scale , and departs significantly from unity on small scales .the conclusion that the stochasticity is weak on linear scales is good news for determining the redshift distortion parameter on linear scales from a redshift survey .the linear redshift distortion of power spectrum is given by , in the plane - parallel limit ( kaiser 1987 ; pen 1998 ) , p_{\rm g}(k ) , \label{eq39}\end{aligned}\ ] ] where is redshift - space power spectrum of galaxies , and is a direction cosine of the angle between the wave vector and the line of sight [ see hamilton ( 1992 ) for an expression for two - point correlation function and szalay , matsubara & landy ( 1998 ) for its generalization to non - plane - parallel case ] .since we see on linear scales except some special cases , we do not need to fit from the observation when we use only the linear redshift distortion .however , the previous analyses so far usually assume the bias parameter as a scale - independent constant .this assumption is justified if the scale of nonlocality of galaxy formation is actually below the linear scale .if it is not , the scale - dependence of should also be determined by observation ( or by theories , if possible ) .the forthcoming large - scale redshift surveys will reveal the galaxy distribution especially on linear scales , on which we have not had sufficient data so far .as shown in this paper , the linear clustering properties are analytically tractable even when the galaxy formation itself is a too complex phenomenon to analytically track .the exploration of linear - scale galaxy distribution can overcome our ignorance of detailed galaxy formation processes , and will give a great insight on the primordial features of our universe .appel , p. , & de friet , j. k. 1926 ; fonctions hypergomtriques et hypersphriques , polynmes dhermite;paris;ganthier - villars ; babul , a. , & white , s. d. m. 1991;mnras;253;31p ; bardeen , j. m. , bond , j. r. , kaiser , n. & szalay , a. s. 1986;apj;304;15 ; blanton , m. , cen , r. , ostriker , j. p. , & strauss , m. a. 1998;apj;submitted;astro - ph/9807029 ; bond , j. r. , cole , s , efstathiou , g. , & kaiser , n. 1991;apj;379;440 ; bower , r. g. , coles , p. , frenk , c. s. & white , s. d. m. 1993;apj;405;403 ; catelan , p. , coles , p. , matarrese , s. , & moscardini , l. 1994;mnras;268;966 ; catelan , p. , lucchin , f. , & matarrese , s. 1988;phys .lett.;61;267 ; catelan , p. , lucchin , f. , matarrese , s. , & porciani , c. 1998;mnras;297;692 ; catelan , p. , matarrese , s. 
, & porciani , c. 1998;apj;502;l1 ; cen , r. y. , & ostriker , j. p. 1992;apj;399;l113 ; cline , j. m. , politzer , h. d. , rey , s .- j . ,& wise , m. b. 1987;commun .phys.;112;217 ; cole , s. , & kaiser , n. 1989;mnras;237;1127 ; coles , p. 1993;mnras;262;1065; colombi , s. , bernardeau , f. , bouchet , f. r. , & hernquist , l. 1997;mnras;287;241 ; davis , m. , & geller , m. j. 1976;apj;208;13 ; davis , m. , efstathiou , g. , frenk , c. s. & white , s. d. m. 1985;apj;292;371 ; dressler , a. 1980;apj;236;351 ; dekel , a. , & lahav , o. 1998;preprint;astro - ph/9806193 ; fry , j. n. 1984;apj;279;499 ; fry , j. n. & gaztaaga , e. 1993;apj;413;447 ; giovanelli , r. , haynes , m. p. , & chincarini , g. l. 1986;apj;300;77 ; guzzo , l. , strauss , m. a. , fisher , k. b. , giovanelli , r. , & haynes , m. p. 1997;apj;489;37; hamilton , a. j. s. 1992;apj;385;l5 ; hermit , s. , santiago , b. x. , lahav , o. , strauss , m. a. , davis , m. , dressler , a. , & huchra , j. p. 1996;mnras;283;709; jensen , l. g. , & szalay , a. s. 1986;apj;305;l5 ; jing , y .- p .1998;apj;503;l9 ; kaiser , n. 1984;apj;284;l9 ; kaiser , n. 1987;mnras;227;1 ; loveday , j. , efstathiou , g. , maddox , s. j. , & peterson , b. a. 1996;apj;468;1 ; matsubara , t. 1995;apjs;101;1 ; mann , r. g. , peacock , j. a. , & heavens , a. f. 1998;mnras;293;209 ; mo , h. j. , & white , s. d. m. 1996;mnras;282;347 ; narayanan , v. k. , berlind , a. a. , & weinberg d. h. 1998;apj;submitted;astro - ph/9812002 ; otto , s. , politzer , h. d. , & wise , m. b. 1986 ; phys .lett.;56;1878 ; peebles , p. j. e. 1980;the large - scale structure of the universe;princeton university press;princeton ; pen , u. 1998;apj;504;601 ; press , w. h. , & schechter , p. 1974;apj;187;425 ; rees , m. j. & ostriker , j. p. 1977;mnras;179;541; santiago , b. x. , & strauss , m. a. 1992;apj;387;9 ; scherrer , r. j. & weinberg , d. h. 1998;apj;504;607 ; scoccimarro , r. , & frieman , j. a. 1998;astro - ph/9811184 ; ; szalay , a. s. 1988;apj;333;21 ; szalay , a. s. , matsubara , t. & landy , s. d. 1998;apj;498;l1 ; taruya , a. , koyama , k. , & soda , j. 1998;apj;in press;astro - ph/9807005 ; taruya , a. , & soda , j. 1998;apj;submitted;astro - ph/9809204 ; tegmark , m. , & blomley , b. c. 1998;apjl;submitted;astro - ph/9809324 ; tegmark , m. , & peebles , p. j. e. 1998;apj;500;l79 ; white , s. d. m. , & frenk , c. s. 1991;apj;379;52 ; | if one wants to represent the galaxy number density at some point in terms of only the mass density at the same point , there appears the stochasticity in such a relation , which is referred to as `` stochastic bias '' . the stochasticity is there because the galaxy number density is not merely a local function of a mass density field , but it is a nonlocal functional , instead . thus , the phenomenological stochasticity of the bias should be accounted for by nonlocal features of galaxy formation processes . based on mathematical arguments , we show that there are simple relations between biasing and nonlocality on linear scales of density fluctuations , and that the stochasticity in fourier space does not exist on linear scales under a certain condition , even if the galaxy formation itself is a complex nonlinear and nonlocal precess . the stochasticity in real space , however , arise from the scale - dependence of bias parameter , . as examples , we derive the stochastic bias parameters of simple nonlocal models of galaxy formation , i.e. , the local lagrangian bias models , the cooperative model , and the peak model . 
we show that the stochasticity in real space is also weak , except on the scales of nonlocality of the galaxy formation . therefore , we do not have to worry too much about the stochasticity on linear scales , especially in fourier space , even if we do not know the details of the galaxy formation process . |
recently , a new breed of topic models , dubbed counting grids ( cg ) , has been shown to have advantages in unsupervised learning over previous topic models , while at the same time providing a natural representation for visualization and user interface design .cg models are _ generative _ models based on a grid of word distributions , which can best be thought of as the grounds for a massive venn diagram of documents .the intersections among multiple documents ( bags of words ) create little intersection units with a very small number of words in them ( or rather , a very sparse distribution of the words ) .the grid arrangement of these sparse distributions , which we will refer to here as _ microtopics _ , facilitates fast cumulative sum based inference and learning algorithms that chop up the documents into much smaller constitutive pieces than what traditional topic models typically do .for example , fig .[ fig : fig0 ] shows a small part of such a grid with a few representative words with greatest probability from each microtopic .each of the science magazine abstracts used to train this grid is assumed to have been generated from a group of microtopics found in a single 4 4 window with equal weight given to all component microtopics .thus , each microtopic can be 16 times sparser than the set of documents grouped into the window .a document may share a window with another very similar document , but it is also mapped so that it only partially overlaps with a window that is the source for a set of slightly less related documents .the varying window overlap literally results in a varying overlap in document themes .this modeling assumption results in a trained grid where nearby microtopics tend to be related to each other as they are often used together to generate a document .consider , e.g. , the lower right 4 window in fig .[ fig : fig0 ] . the word distributions in these 16 cells are such that a variety of science papers on evidence of ancient life on earth could be generated by sampling words from there .( note that each cell , though of very low entropy , contains a distribution over the entire vocabulary . ) in the posterior distribution , this window is by far the most likely source for an article on a bizarre microorganism that produced nitrogen in cretaceous oceans . in the 4 windowtwo cells to the left of this example we find mapped a variety of articles on even more ancient events on earth , e.g. on how sulfur isotopes reveal a deep mantle storage of ancient crust .but there we also start to see words which increase the fit for articles that describe similar events on other planets .further movement to the left gets us away from the earth and into astronomy . to demonstrate the refinement of the microtopics compared to topics from a typical topic model , the color labeling of the grid was created so as to reflect the kullback-leibler ( kl ) divergence of the individual microtopics to the topics trained on the same data through latent dirichlet allocation ( lda ) .the lda topics , hand - labeled after unsupervised training , correspond to fairly broad topics , while the cg represents the data as a group of slowly evolving microtopics .for example , all the yellow coded microtopics map to the `` physics '' lda topic , but they occupy a contiguous area in which from left to right the focus slowly shifts from electromagnetism and particle physics to material science . 
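a hedged sketch of the color - labeling step described above: each microtopic, itself a distribution over the vocabulary, is assigned the label of the lda topic to which its kl divergence is smallest. the array names, the smoothing constant, and the direction of the divergence ( microtopic against lda topic ) are assumptions, since the paper does not spell these details out.

import numpy as np

def label_microtopics(pi, lda_topics, eps=1e-12):
    # pi         : (grid_cells, vocab) microtopic distributions, rows summing to 1
    # lda_topics : (n_topics, vocab) lda topic - word distributions, rows summing to 1
    # returns the index of the nearest lda topic for every grid cell
    p = pi + eps
    q = lda_topics + eps
    # kl(p_i || q_t) = sum_z p_i(z) [ log p_i(z) - log q_t(z) ], computed for every (cell, topic) pair
    kl = (p * np.log(p)).sum(axis=1, keepdims=True) - p @ np.log(q).T
    return kl.argmin(axis=1)

# usage: labels = label_microtopics(pi.reshape(-1, vocab_size), lda_topics)
# reshaping the labels back to the grid shape gives a colored map like the one in the figure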
furthermore , it is interesting to see the microtopics that occupy the boundaries between coarser topics that lda model found , capturing the links among astronomy , physics and biology .it is immediately evident that the 2d cgs can have great use in data visualization , though the model can be trained for arbitrary dimensionality .these models combine topic modeling and data embedding ideas in a way that facilitates intuitive regularization controls and allows creation of much larger sets of organized sparse topics .furthermore , they lend them selves to elegant visualization and browsing strategies , and we encourage the reader to see the example http://research.microsoft.com/en-us/um/people/jojic/cgbrowser.zip. however , the existing em algorithm for cg learning is prone to local minima problems which occasionally lead to under performance .in addition , no direct testing of the microtopic coherence has been performed to date , which makes it unclear if they are meaningful outside their windowed grouping .after all , a variety of sophisticated topic models have been developed and tested by the research community , but lda seems to still beat them often in practice .e.g. , [ 16,17 ] raise doubts that various reported perplexity improvements over the basic lda model are meaningful as they are sensitive to smoothing constants in the model , and also fail to translate to improvements in human judgement of topic quality . in fact , lda usually outperforms more complex models on tasks that involve human judgement , which may be the main reason why practitioners of data science prefer this basic model to others .here we develop hierarchical versions of cg models , which in our experiments produced embeddings of considerably higher quality .we show that layering into deeper architectures primarily aids in avoiding bad local minima , rather than increasing representational capacity : the trained hierarchical model can be collapsed into an original counting grid form but with a much higher likelihood compared to the grids fit to the same data using em with random restarts .the better data fit then translates into quantitatively better summaries of the data , as shown in numerical experiments as well as human evaluations of microtopics obtained through crowdsourcing .[ fig : fig0 ][ [ the - ccg - grids ] ] the ( c)cg grids : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the basic counting grid is a set of distributions on the -dimensional toroidal discrete grid indexed by . the grids in this paper are bi - dimensional and typically from to in size .the index indexes a particular word in the vocabulary $ ] .thus , is the probability of the word at the -dimensional discrete location , and at every location on the grid .the model generates bags of words , each represented by a list of words with each word taking an integer value between and .the modeling assumption in the basic cg model is that each bag is generated from the distributions in a single window of a preset size , e.g. 
, .a bag can be generated by first picking a window at a -dimensional location , denoted as , then generating each of the words by sampling a location for a particular microtopic uniformly within the window , and finally by sampling from that microtopic .because the conditional distribution is a preset uniform distribution over the grid locations inside the window placed at location , the variable can be summed out , and the generation can directly use the grouped histograms where is the area of the window , e.g. 25 when 5 windows are used .in other words , the position of the window in the grid is a latent variable given which we can write the probability of the bag as as the grid is toroidal , a window can start at any position and there is as many distributions as there are distributions .the former will have a considerably higher entropy as they are averages of many distributions .although the basic cg model is essentially a simple mixture assuming the existence of a single source ( one window ) for all the features in one bag , it can have a very large number of ( highly related ) choices to choose from .topic models , on the other hand , are admixtures that capture word co - occurrence statistics by using a much smaller number of topics that can be more freely combined to explain a single document .componential counting grids ( ccg ) combine these ideas , allowing multiple groups of broader topics to be mixed to explain a single document .the entropic distributions are still made of sparse microtopics in the same way as in cg so that the ccg model can have a much larger number of topics than an lda model without overtraining .more precisely , each word can be generated from a different window , placed at location , but the choice of the window follows the same prior distributions for all words . within the window at location word comes from a particular grid location , and from that grid distribution the word is assumed to have been generated .the probability of a bag is now in a well - fit ccg model , each data point has an inferred distribution that usually hits multiple places in the grid , while in a cg , each data point tends to have a rather peaky posterior location distribution because the model is a mixture .both models can be learned efficiently using the em algorithm because the inference of the hidden variables , as well as updates of and can be performed using summed area tables , and are thus considerably faster than most of the sophisticated sampling procedures used to train other topic models. an intriguing property of these models is that even on a grid with microtopics and just as many grouped topics , there is no room for too many independent groups . with a window size , for example , we can place only windows without overlap , and the remaining windows are overlapping the pieces of these 16 . 
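stepping back to the grouped window distributions h of eq. ( [ eq : h ] ): the reason inference and learning are fast is that all of them can be computed from the grid pi at once with cumulative ( summed - area ) sums instead of summing each window separately. the sketch below is a minimal 2 - d toroidal version with illustrative variable names; it is not the authors' implementation.

import numpy as np

def window_histograms(pi, win=(5, 5)):
    # h_k(z) = (1 / |W|) * sum_{i in W_k} pi_i(z) for every window position k on a toroidal grid
    # pi  : (H, W, vocab) grid of microtopic distributions
    w1, w2 = win
    # wrap - pad so that windows crossing the torus seam are covered
    padded = np.pad(pi, ((0, w1), (0, w2), (0, 0)), mode="wrap")
    # summed - area table with a leading row / column of zeros
    sat = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1, pi.shape[2]))
    sat[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    H, W = pi.shape[:2]
    h = (sat[w1:w1 + H, w2:w2 + W] - sat[:H, w2:w2 + W]
         - sat[w1:w1 + H, :W] + sat[:H, :W]) / (w1 * w2)
    return h   # h[k1, k2] averages pi over the w1 x w2 window anchored at (k1, k2)

# sanity check on a random grid of distributions
rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(50), size=(24, 24))
h = window_histograms(pi)
assert np.allclose(h[3, 7], pi[3:8, 7:12].mean(axis=(0, 1)))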
the ratio between grid and window size is referred to as the _ capacity _ of the model , and the training set size necessary to avoid overtraining the model only needs to be 1 - 2 orders of magnitude above the capacity number .thus a grid of 1024 microtopics may very well be trainable with thousands of data points , rather than 100s of thousands that traditional topic models usually require for that many topics .[ fig : dig ] [ [ raw - image - embedding - using - ccgs ] ] raw image embedding using ( c)cgs : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in previous applications of cg models to computer vision , images were represented as spatially disordered bags of features .we experimented with embedding raw images with full spatial information preserved , and we present this here as we feel that the image data helps in illuminating the benefits of hierarchical learning .an image described by a full intensity function could be considered as a set of words , each word being an image location . for a image, we have a vocabulary of size .the number of repetitions of word is then set to be proportional to the intensity i(x , y ) .( in case of color images , the number of features is simply tripled with each color channel treated in this way ) . in other words , an unwrapped image is considered to be a word ( location ) histogram . and distributionscan then also be seen as images , as they provide weights for different image locations .if we tile the image representations of these distributions we get additional insight into cgs as an embedding method .[ fig : dig ] shows a portion of a grid trained on 2000 mnist digits assuming a window averaging . to illustrate the generative model , in c )we show the partial window sums for two overlapping windows over .the green and blue areas form a window that generates a version of digit 3 , which can be seen at the top left of this portion of the grid ( panel b ) ) .the blue and red , on the other hand , combine into a window that represent a digit 2 at the position ( 3,3 ) in panel b ) .partial sums for green , blue and red areas are shown in c ) and these partial sums , color coded and overlapped are also illustrated in d ) .careful observation of b ) or the full grid in the appendix , demonstrates the slow deformation of digits from one to another in the distributions .the appendix has additional examples of image dataset embedding , including rendered 3d head models and images of bold eagles retrieved by internet search .the cg distributions shown here look like little strokes , while distributions are full digits .the ccg model , on the other hand , combines multiple distributions to represent a single image , and so looks like a grid of strokes fig .[ fig : digits2]a , while distributions are even sparser .[ [ hierarchical - grids ] ] hierarchical grids : + + + + + + + + + + + + + + + + + + + by learning a model in which microtopics join forces with their neighbors to explain the data , ( c-)cg models tend to exhibit high degrees of relatedness of nearby topics . as we slowly move away from one microtopic , the meaning of the topics we go over gradually shifts to related narrowly defined topics as illustrated by fig . 
[fig : fig0 ] ; this makes these grids attractive to hci applications .but this also means that simple learning algorithms can be prone to local minima , as random initializations of the em learning sometimes result in grouping certain related topics into large chunks , and sometime breaking these same chunks into multiple ones with more potential for suboptimal microtopics along boundaries .to illustrate this , in fig .[ fig : digits2]a we show a grid of strokes ( eq . [ eq : h ] ) learned from 2000 mnist digits using a ccg model assuming a window averaging .nearby features are highly related to each other as they are the result of adding up features in overlapping windows over ( which is not shown ) .ccg is an admixture model , and so each digit indexed by has a relatively rich posterior distribution over the locations in the grid that point to different strokes .fig : digits2 ] , we show one of the main principal components of variation in as an image of the size of the grid . for three peaks there , we also show -features at those locations .the combination of these three sparse features creates a longer contiguous stroke , which indicates that this longer stroke is often found in the data .thus , the separation of these features across three distant parts of the map is likely a result of a local minimum in basic em training . to transfer this reasoning to text models ,consider the 5th cell in the first row in fig .[ fig : fig0 ] with words hiv , aids , and the blue cell in the middle of the last column with words selection , adaptive .the separation of these two things in faraway locations may very well be a result of a local minimum , which could be detected if location posteriors exhibit correlation .this illustration points to an idea on how to build better models .the distribution over locations that a data point maps to ( a posteriori ) could be considered a new representation of the data point ( digit in this case ) , with the mapped grid locations considered as features , and the posterior probabilities for these locations considered as feature counts .thus another layer of a generative model can be added to generate the locations in the grid below , fig .[ fig : gm]c - d. it is particularly useful to use another microtopic grid model as this added layer , because of the inherent relatedness of the nearby locations in the grid. the layer above can thus be either another admixture grid model ( componential counting grid - ccg ) , or a mixture ( cg ) , and this layering can be continued to create a deep model .as cg is a mixture model , it terminates the layering : its posterior distributions are peaky and thus uncorrelated .however , an arbitrary number of ccgs can be stacked on top of each other in this manner , terminating on top with a cg layer to form a hierarchical cg ( hcg ) model , or terminating in a ccg layer to form a hierarchical ccg ( hccg ) model . in each layer , the pointers to features below are grouped , which should result in creating a contiguous longer stroke as discussed above in a grid cell that contains a combination of pointers to the lower layers . for the sake of brevity, we only derive the hcg learning algorithm with a single intermediate ccg layer . the extension to hccg and higher order hierarchiesis reported in the appendix .variational inference and learning procedure for counting grid - based models utilizes cumulative sums and is only slower than training an individual ( c)cg layer by a factor proportional to the number of layers . 
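for orientation before the hierarchical updates below, the e - step of a plain single - layer cg is just a responsibility computation over window locations, assuming a uniform prior over locations; a hedged sketch, reusing the window_histograms helper from the earlier block, is shown here with illustrative names.

import numpy as np

def cg_e_step(counts, h, eps=1e-12):
    # counts : (n_docs, vocab) word - count matrix
    # h      : (H, W, vocab) grouped window distributions, e.g. from window_histograms above
    # returns q : (n_docs, H, W), the posterior over window locations for every bag
    log_h = np.log(h + eps).reshape(-1, h.shape[-1])      # (H * W, vocab)
    log_post = counts @ log_h.T                           # log p(bag | window at l), up to a constant
    log_post -= log_post.max(axis=1, keepdims=True)       # stabilize before exponentiating
    q = np.exp(log_post)
    q /= q.sum(axis=1, keepdims=True)
    return q.reshape(counts.shape[0], h.shape[0], h.shape[1])

the m - step would then re - estimate pi from these responsibilities, again with cumulative sums; the hierarchical e and m updates that follow have the same structure but pass location posteriors between layers.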
the graphical model for hcg is shown in fig . [ fig : gm]c , where location variables pointing to grids in different layers have the same name , but carry a disambiguating superscript . _ to avoid superscripts in the equations below , we renamed the cg 's location variable from to and dropped the superscript in the layer above _ . the bottom ccg layer follows the standard ccg form , where the prior is a pre - set distribution over the grid locations , uniform inside . instead of this prior , the locations are generated from a top layer cg , indexed by ( in the figure ) ; this also shows that the lower - level grid locations act as observations in the higher level . we use the fully factorized variational posterior to write the negative free energy bounding the non - constant part of the loglikelihood of the data , which we maximize with the em algorithm , iterating e- and m - steps until convergence . the e step updates the variational posteriors , and the m step re - estimates the model parameters using these updated posteriors , $$\pi_{{\rm cg},{\bf i}}({\bf l}) \;\propto\; \hat{\pi}_{{\rm cg},{\bf i}}({\bf l}) \cdot \sum_{t,n} q^t(\ell_n = {\bf l}) \cdot \sum_{{\bf k}\,|\,{\bf i}\in w_{\bf k}} \frac{q^t(k_n = {\bf i})}{\hat{h}_{{\rm cg},{\bf i}}({\bf l})},$$ where the last ( cg ) update is performed analogously with . interestingly , these hierarchical models can be trained stage by stage , reminiscent of deep models where such incremental learning was practically useful . + although it has been shown that a deep neural network can be compressed into a shallow broader one through post - training , the stacked ( c-)cg models can be collapsed mathematically . in this sense we can view hcg and hccg as _ hierarchical learning algorithms _ for cg and ccg , which are easier to visualize than deeper models . for example , for hcg in fig . [ fig : gm]c - d , it is straightforward to see that the following grid , defined over the original features , can be used as a single layer grid that describes the same data distribution as the two - layer model , where the grouped microtopics in the window are those of eq . [ eq : h ] . however , the grids estimated from the hierarchical models should be more compact , as the scattered groups of features are progressively merged in each new layer . _ learning in hierarchical models is thus more gradual and results in better local maxima , and we show below that the results are far superior to regular em learning of the collapsed cg or ccg models . _ in all the experiments we used models with two extra layers , although , in some experiments , we found that three levels worked slightly better . in general , the optimal number of layers will depend on the particular application . [ [ likelihood - comparison ] ] likelihood comparison : + + + + + + + + + + + + + + + + + + + + + + in the first experiment we compared the local maxima of models learned using the ( full ) mnist data set . the two layer hcg model was first pre - trained stage - wise , e.g. by training the higher level on the posterior distribution from the lower level as the input . then , the model was refined by further variational em training . the procedure was repeated 20 times with different random initializations to produce twenty hierarchical models .
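to make the e / m updates above concrete , here is a minimal numpy sketch of one em iteration for a single ( plain cg ) layer , reusing the ` window_histograms ` helper from the earlier sketch . this is our own illustrative rendering of the update quoted above , under the same toroidal - grid assumption , and is not the authors ' code .

```python
import numpy as np

def cg_em_step(pi, counts, win, eps=1e-12):
    # pi     : (E1, E2, Z) microtopic distributions
    # counts : (T, Z) word counts per document
    # win    : window side length
    E1, E2, Z = pi.shape
    h = window_histograms(pi, win)                       # window-averaged distributions h_k
    log_h = np.log(h + eps).reshape(E1 * E2, Z)

    # e-step: cg is a mixture, so each document gets a posterior over window locations k
    log_q = counts @ log_h.T
    log_q -= log_q.max(axis=1, keepdims=True)
    q = np.exp(log_q)
    q /= q.sum(axis=1, keepdims=True)                    # q^t(k), shape (T, E1*E2)

    # m-step: pi_i(z) is proportional to pi_i(z) times the sum, over windows k containing i,
    #         of sum_t q^t(k) c_t(z) / h_k(z), as in the update displayed above
    r = (q.T @ counts).reshape(E1, E2, Z) / (h + eps)
    # summing r over all windows k that contain location i is again a window sum,
    # taken over the reversed offsets (realized here by flipping, summing, flipping back)
    spread = window_histograms(r[::-1, ::-1], win)[::-1, ::-1] * win ** 2
    new_pi = pi * spread
    new_pi /= new_pi.sum(axis=-1, keepdims=True) + eps
    return new_pi
```

in the hierarchical case the same machinery is applied layer by layer , with the location posteriors of one layer playing the role of the count vectors of the next .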
as discussed above , these models can be collapsed to a cg model by integrating out intermediate layers ( [ eq : collapse ] ) . these models were then compared with twenty models learned by directly learning cg models through the previously published standard em learning algorithm starting from twenty random initializations . despite being collapsible to the same mathematical form , the hcg models consistently produced higher likelihood than the cg models directly learned using the standard method . _ in fact , each cg model created by collapsing one of the learned hcg models had a log likelihood at least two standard deviations above the highest log likelihood learned by basic em ( p - value ) . _ both learning approaches used the computation time equivalent to 1000 iterations of standard em , which was more than enough for convergence . [ [ document - classification ] ] document classification : + + + + + + + + + + + + + + + + + + + + + + + + next we ran tests to see if the increased likelihood obtainable with a better learning algorithm translates into increased quality of representation when posterior distributions for individual text documents are considered as features in classification tasks . we considered the 20-newsgroup dataset ( 20n ) and the mastercook dataset ( mc ) , composed of 4000 recipes divided into 15 classes . previous work reduced the 20-newsgroup dataset into subsets with varying similarities , and we considered the hardest subset , composed of posts from the very similar newsgroups ` comp.os.ms-windows ` , ` comp.windows.x ` and ` comp.graphics ` . we considered the same complexities as in , using 10-fold cross validation , and classified test documents using maximum likelihood . results for both datasets are shown in tab . [ tab : doccl ] , whose caption reads : document classification . when bold , hierarchical grids outperformed the basic grids with statistical significance ( hcg p - value .01e-4 , hccg p - values 1e-3 ) . `` linsvm '' stands for linear support vector machines , which we report as a baseline . [ tab : doccl ] [ [ evaluation - of - microtopic - quality - using - quantitative - measures - related - to - the - use - in - visualization - and - indexing ] ] evaluation of microtopic quality using quantitative measures related to the use in visualization and indexing : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we evaluated the coherence and the clarity of the microtopics , comparing the collapsed ( 2 layers ) hierarchical grids - hcg and hccg - with regular grids , latent dirichlet allocation ( lda ) , the correlated topic model ( ctm ) , which allows learning a large set of correlated topics , and a few non - parametric topic models . + generative models are often evaluated in terms of perplexity . however , different models , even different learning algorithms applied to the same model , are very difficult to compare , and better perplexity does not always indicate better quality of topics as judged by human evaluators . on the other hand , the subjective evaluation of topic quality is highly related to measures that have to do with data indexing , e.g.
quality of word combinations when used for information retrieval . thus we start with a novel evaluation procedure for topic models which is strongly related to information indexing , and then show that we obtain similar evaluation results when we use human judgement . in the following experiments , we considered a corpus composed of science magazine reports and scientific articles from the last 20 years . this is a very diverse corpus , similar to the one used in . as a preprocessing step , we removed stop - words and applied the porter stemming algorithm . we considered grids of size , fixing the window size to . ( previous literature showed that counting grids are only sensitive to the ratio between grid and window area , as long as windows are sufficiently big . ) we varied the number of topics for lda and ctm in . for each complexity we trained 5 models starting with different random initializations and we averaged the results . in each repetition , we considered a random third of this corpus , for a total of roughly documents , different words and more than tokens . to evaluate ( micro)topics , we repetitively sampled _k_-tuples of words and checked for consistency , diversity and clarity of the indexed content . in the following , we describe the procedure used for evaluating grids . an equivalent procedure was used to evaluate the other topic models for comparison . + to pick a tuple of words , we sampled a grid location . then , we repetitively sampled the microtopic to obtain the words in the tuple . we did not allow repetitions of words in the tuple . we considered different -tuples , not allowing repeated tuples . + then we checked for consistency , diversity and clarity of the content indexed by each tuple . the * consistency * is quantified in terms of the average number of documents from the dataset that contained _ all _ words in . the * diversity * of indexed content is illustrated through the cumulative graph of acquired unique documents as more and more -tuples are sampled and used to retrieve documents containing them . as this last curve depends on the sample order , we further repeated the process 5 times , for a total of 25k different samples . finally , the * clarity * measures the ambiguity of a query with respect to a collection of documents , and it has been used to identify ineffective queries , on average , without relevance information . formally , the query clarity is measured as the relative entropy between the language model induced by the n - tuple and the collection language model ( unigram distributions ) . we estimated the likelihood of an individual document model generating the tuple and obtain it using uniform prior probabilities for documents that contain a word in the tuple , and a zero prior for the rest . finally , to estimate we employed monte carlo sampling . + results are illustrated in fig . [ fig : results ] and should be appreciated by looking at all three measures together , as some can be over - optimized at the expense of others . the diversity curve that consistently grows as more tuples are sampled indicates that the sampled tuples belong to different subsets of the data , and are thus discriminative in segmenting the data into different clusters . the average tuple consistency , on the other hand , demonstrates that the sampled tuples do occur in large chunks of the data , indicating that the induced clusters are of significant size . the clarity measure shows that the clusters made of texts retrieved using different tuples have clear differentiation from the rest of the dataset in the usage of all the words in the dictionary .
we report results for the grids and the best result of lda and ctm , which peaked respectively at 80 and 60 topics . results for other grid sizes can be found in the additional material ; they are stable across complexities , with slightly better performances for larger grids . + all grid models show good consistency of the words selected , as they are optimized so that document words map into overlapping windows . through the positioning and intersection of many related documents , the words end up being arranged in a fine - grained manner so as to reflect their higher - order co - occurrence statistics . hierarchical learning greatly improved the results , despite the fact that hccg and hcg can be reduced to ( c)cgs through marginalization ( [ eq : collapse ] ) . + overall hccg strongly outperformed all the methods , especially with a total gain of 0.5 bits on clarity , which is around a third of the score for lda / ctm . despite allowing for correlated topics that enable ctm to learn larger topic models , ctm trails lda in these graphs , as topics were over - expanded . we also considered non - parametric topic models such as `` dilan '' and the hierarchical dirichlet process , but their best results were poor and we did not report them in the figure . to get an idea , both models only indexed 25% of the content after 5000 2-tuple samples and had a clarity 0.7 - 1.2 bits lower than other topic models . [ fig : intrusion ] [ [ human - judgments - of - topic - coherence ] ] human judgments of topic coherence : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we next tested the quality of the inferred topics . topic coherence is often measured based on the co - occurrence of the top words per topic . while good as a quick sanity check of a single learned model , when this measure is used to compare models , it will favor models that lock onto top themes and distribute the rest of the words in the tails of the topic distributions . the lda models usually have a large drop off in topic coherence when the number of topics is increased to force the model to address more correlations in the data . indeed , using this measure , lda topics outperform cg topics in the case of small models . but as the number of topics grows , the microtopics trained by hcg significantly outperform both lda and cg ( see the appendix ) . a more interesting measure of topic quality , which not only depends on individual topic coherence but also on meaningful separation of different topics , requires human evaluation of _ word intrusions _ . in a word intrusion task , six randomly ordered words are presented to a human subject who then guesses which word is an outlier . in the original procedure , a target topic is randomly selected and then the five words with _ highest _ probability are picked . then , an intruder is added to this set . it is selected at random from the low probability words of the target topic that have high probability in some other topic .
finally the six words are shuffled and presented to the subject . if the target topic shows a lack of coherence or distinction from the intruding topic , the subject will often fail to correctly identify the intruder . this task is again geared towards only getting the top words right in a topic model and ignoring the rest of the distribution , which makes it unsuitable for comparison with microtopic models , which attempt to extract much more correlation from the data . thus , instead of picking the top words from each topic , we sampled the words from the target topic to create the in - group . after sampling the location of a microtopic from the grid , we picked three randomly chosen words from , or from the small groups of microtopics in the windows of size 2 and 3 around ( the latter is equivalent to computing the window distributions using windows of smaller size than the ones used in training , and should give us an indication of whether the granularity assumed in the window size was exaggerated : if it was , then averaging of nearby topics should significantly reduce the noise due to forced topic splitting ) . for each of these groups we chose the intruder word using the standard procedure . if in this harder task humans can identify intruders better for microtopic models than for lda models , this would indicate that the microtopics are not simply random subsamples of broader topics captured in , and similar in entropy to , lda topics . they would be a meaningful breakup of broad topics into finer ones . we compared lda ( known to perform better than ctm on intrusion tasks ) , hcg , and hccg , on 10k randomly crawled wikipedia articles and used amazon mechanical turk ( 24000 completed tasks from 345 different people ) . the trained grids were of size 32 x 32 and the windows 5 x 5 . the optimal lda size was chosen using likelihood cross - validation over the range of complexities as in the previous experiments ( the peak performance there was at 80 topics ) . results are shown in fig . [ fig : intrusion ] as a function of the euclidean distance on the grid of the intruder word from the topic . hccg outperformed lda ( p - values for the 3 tasks 1.20e-11 , 1.88e-5 , 2.97e-05 ) and hcg ( p - values for the 3 tasks 3.97e-18 , 1.01e-11 , 3.14e-19 ) , indicating that learning microtopics is possible with a good algorithm . overall , _ users were able to correctly solve 71% of hccg problems and only 58% of lda problems _ . interestingly , the performance of hccg and hcg does not seem to depend on the distance of the intruder word : even picking the intruder word from a very close location rather than from a far away one led to no additional confusion for the user . this shows that hccg chops up the data into meaningful microtopics which are then combined into a large number of groups that do not over - broaden the scope . hccg and hcg also outperformed , respectively , cg and ccg ( see the appendix ) . [ [ learning - to - separate - mixed - digits . ] ] learning to separate mixed digits . + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + finally , we show that an hccg model can be used to perform a task that eludes most unsupervised _ and _ supervised models . we created a set of 10000 images , each containing two different mnist digits overlapped , fig . [ fig : digitmix ] . we trained an hccg model consisting of five layers on this data stagewise , by feeding from one layer to the next . windows of size 5 x 5 were used in all layers .
from layer to layer , the new representations of the image consist of growing combinations of low level features from the bottom layer ( the sparseness of which is similar to fig . [ fig : digits2]a ) . the hierarchical grouping is further encouraged by simply smoothing with a gaussian kernel with a deviation of 0.75 , before feeding it to the next layer ( this is motivated by the fact that nearby features in are related , and so if two distant locations should be grouped , so should those locations ' neighbors ) . once the model is collapsed to a single hccg grid , the components no longer look like short strokes but like whole digits , mostly free of overlap : the model has learned to approximately separate the images into constituent digits . reasoning on overlapping digits even eludes deep neural networks trained in a supervised manner , but here we did not use the information about which two digits are present in each of the training images . we show that with new learning algorithms based on a hierarchy of ccg models , possibly terminated on the top with a cg , it is possible to learn large grids of sparse related microtopics from relatively small datasets . these microtopics correspond to intersections of multiple documents , and are considerably narrower than what traditional topic models can achieve without overtraining on the same data . yet , these microtopics are well formed , as both the numerical measures of consistency , diversity and clarity and the user study on 345 mechanical turkers show . another approach to capturing sparse intersections of broader topics is through product - of - experts models , e.g. rbms , which consist of relatively broad topics but model the data through intersections rather than admixing . rbms are also often stacked into deep structures . in future work it would be interesting to compare these models , though the tasks we used here would have to be somewhat changed to focus on the intersection modeling , rather than the topic coherence ( as this is not what rbm topics are optimized for ) . hccg and hcg models have a clear advantage in that it is easy to visualize how the data is represented , which is useful both to end users in hci applications , and to machine learning experts during model development and debugging . another parallel between the stacks of ccgs and other deep models is that the uniform connectivity of units is directly enforced through window constraints , rather than encouraged by dropout . finally , in this specific context we illustrate a broader phenomenon that requires more methodical and broader treatment by the machine learning community . a more complex ( deeper ) model showed large advantages here in terms of training likelihood , but these advantages were _ not _ due to the expanded parameter space , because the resulting model is equivalent to a collapsed single layer model . rather than being a reflection of increased representational abilities of the model , better likelihoods were thus the result of a better fitting algorithm that consists of training a deep model ( and then collapsing it into a simpler but equivalent parameterization ) . similar phenomena are likely regularly encountered elsewhere in machine learning , but not always recognized as such , as in the absence of full knowledge of the extrema of the fitting criterion , an increase in performance is often inappropriately ascribed to better modeling rather than better model fitting . | the counting grid is a grid of _ microtopics _ , sparse word / feature distributions .
the generative model associated with the grid does not use these microtopics individually , but in predefined groups which can only be ( ad)mixed as such . each allowed group corresponds to one of all possible overlapping rectangular windows into the grid . the capacity of the model is controlled by the ratio of the grid size and the window size . this paper builds upon the basic counting grid model and it shows that hierarchical reasoning helps avoid bad local minima , produces better classification accuracy and , most interestingly , allows for extraction of large numbers of coherent microtopics even from small datasets . we evaluate this in terms of consistency , diversity and clarity of the indexed content , as well as in a user study on word intrusion tasks . we demonstrate that these models work well as a technique for embedding raw images and discuss interesting parallels between hierarchical cg models and other deep architectures . |
the discovery that , according to quantum field theory in its general curved spacetime version , black holes should radiate away their energy , has had profound impact on both our understanding of the interplay between gravitation and quantum physics teaching us for instance that the laws of black hole mechanics are in fact the laws of thermodynamics when applied to situations involving black holes and in contributing to our realization that there is much that we still need to understand . regarding the latter we are referring , of course , to what is commonly known as the `` black hole information paradox '' .there have been many attempts to address this issue on the basis of proposed theories incorporating quantum treatments of gravitation , and it is fair to say that none of those seem to offer truly satisfactory resolution . fora review see for instance .in fact , there is even a controversy as to whether there is or there is not , a paradox or some open issue that needs confronting . in , this question has been discussed and clarified .the basic issue that seems to underlie the various postures in this respect is associated to the view that one takes regarding the nature of the singularity that is generically found deep in the black hole interior .if one views this singularity as a fundamental boundary of spacetime , there is in fact no paradox whatsoever , as one can say that information either is registered on " or else escapes through " that boundary . on the other hand ,if one views , as do most researchers working in the various approaches to quantum gravity , the singularity as something that must be ultimately cured " by an appropriate quantum theory of gravitation , in the sense of replacing it by something amenable to treatment by such theories and not as any kind of essential boundary ( for instance the proposal within loop quantum gravity discussed in ) , one must explain how to reconcile the unitarity of quantum mechanical evolution ( a feature that among other things requires reversibility and thus the preservation of information ) with the thermal nature of the hawking radiation . without any reasonable reconciliation of the divergent conclusionsone would be entitled to describe the situation as a paradox . in order to explore the most explicit version of the problem it is customary to consider a black hole formed by the gravitational collapse of a large lump of matter ( with mass of the order of , say , a few solar masses ) characterized by a pure quantum mechanical state .the problem one faces then is to reconcile the purity of the initial state with the thermal nature of the hawking radiation .the issue has been studied extensively ( see for instance the nice reviews , or the works ) as it is considered one of the major challenges of contemporary theoretical physics the approaches that have been considered in the search for such a reconciliation seem to be relatively limited. 
faced with the fundamental assumption ( * ) : _ the validity of quantum field theory in curved space - times , at least in regions where curvature is far from the planckian regime _ , these approaches essentially represent variations of the following ideas : 1 ) somehow , during the `` late time '' part of the black hole evaporation , the hawking radiation is not truly thermal , and is in fact highly correlated with the early - time radiation ( which must remain thermal due to ( * ) ) , so that the full state of the radiation field is pure . 2 ) the black hole evaporation is not complete , and modifications associated with quantum gravity lead to the formation of a stable remnant . such a remnant would have to be in a highly entangled state with the emitted radiation so that the full state of the ( radiation field + remnant ) is pure . 3 ) the hawking radiation is thermal all the way until the eventual evaporation of the black hole , but the information somehow crosses the region where the singularity would have been found , and that is now described in terms of the quantum gravity theory . alternative proposals might involve some combinations of the three proposals above . however , it seems clear that at least one of them should be able to account for the fate of most of the information . that is , if none of them can account for anything more than a very small fraction thereof , then the three alternatives together will not be able to account for more than a slightly larger fraction of the full information that needs to be recovered , if the process is to be compatible with the unitarity of quantum mechanics . the fact is that each of these 3 alternatives has serious drawbacks . 1 ) this idea , which is generally framed within the context of the so called black hole complementarity proposals , has been the subject of recent detailed studies which show , based on the so called monogamy of quantum entanglement , that one of the consequences of such entanglement ( even forgetting for the moment the question of how such correlations would be generated ) would be the formation of `` firewalls '' ( or regions of divergent energy momentum of the quantum field ) around the black hole horizon . 2 ) here the issue is that one would be postulating the existence of peculiar kinds of objects , the remnants , typically with a mass of a few times the planck mass , which must have an enormous number of internal states , essentially as many as those of a large star . that is because the full state of the ( radiation field + remnant ) must be pure while the reduced density matrix characterizing the radiation field is thermal , and has an energy content of a few solar masses . 3 ) this alternative seems to be favored by researchers working in loop quantum gravity , and has been considered in some detail in . here , there are two issues that need to be clarified . first , one needs to explain precisely how the information crosses the quantum gravity region that replaces the classical singularity , in particular given that in the lqg context that region seems to be characterized by signature changes in the metric . secondly , there seems to be an even more problematic aspect of this proposal : the fact that , after the complete evaporation of the black hole , the information missing in the thermal radiation would have to be encoded in the quantum gravity degrees of freedom ( dof ) , which however would have an essentially vanishing associated energy .
that is , the quantum gravity dof would have to be entangled with the hawking radiation in such a way that the complete state of the quantum gravity sector plus the quantum matter field sector would be pure , and yet the energy would be essentially all in the radiated quanta of the field . we must note that there have been other proposals , such as those considered in , but we feel it is fair to say that none of these has gained any kind of universal acceptability within the community interested in the issues , as each faces some difficulties of its own . we want to explore a possible resolution of the paradox , by assuming that qg would indeed replace the singularity by something else , suitably described in terms of the fundamental dof of such a theory , but that quantum theory would have to be modified along the lines of the proposals put forward to address the `` measurement problem '' by treating collapse of the wave function as a physical process , occurring spontaneously and independently of observers or measuring devices , and that the corresponding modification is such , as suggested in , that essentially all the initial information is actually lost . the first aspect we must note about the general proposal is that its setting is within the general context of semi - classical gravity . that is , a scheme where the gravitational degrees of freedom are treated using a classical spacetime metric , while the matter degrees of freedom are treated using the formalism of quantum field theory in curved space - times . the first reaction of many people towards this is to cite the paper which supposedly rules out the viability of semi - classical gravity . here we must first point out the various caveats raised about such a conclusion in , and note , in particular , that the work in centered mainly on the consideration of a formulation in which quantum theory did not involve any sort of collapse of the quantum state , a situation that contrasts explicitly with what we will be focusing on . the other point noted in is that if one wants to consider semi - classical gravity together with a version of quantum theory involving the collapse of the quantum state , one faces the problem that the semi - classical einstein equation cannot hold during a collapse , simply because einstein 's tensor is by construction divergenceless while the expectation value of the energy momentum tensor would generically have a non - vanishing divergence . the point is that one can view semi - classical gravity not as a fundamental theory , but as providing a suitably approximated description in limited circumstances , something akin to , say , the hydrodynamical description of a fluid which , as we know , corresponds only to the description of something that at a deeper level needs to be described in terms of molecules moving and interacting among themselves in rather complex ways .
following the analogy , we view the metric description of gravity and the characterization of the matter sector using quantum field theory ( connected to gravity via einstein 's semi - classical equations ) just as an approximate description of limited validity . in fact this is a point of view that has been explored in the cosmological setting to deal with certain difficulties that arise in the inflationary cosmological account for the emergence of the seeds of cosmic structure . the introduction of dynamical collapse within the general framework can be treated in a scheme where one allows an instantaneous violation of the equations , in association with the collapse of the quantum state taking place on a given spatial hypersurface , and in analogy with israel 's matching conditions requiring continuity of the metric across such a hypersurface . the details of that formalism were first described in . we will not discuss these issues further here as they have been thoroughly treated in the above reference and also in the previous works by some of us on the black hole information problem . here it is worth pausing to reconsider in more detail certain aspects of the discussion around the issues of energy content . the setting of the discussion is that of black holes in asymptotically flat space - times . for these space - times we have a well defined notion of adm mass , which is taken as the covariant measure of the energy content of the spacetime , and the quantity that is conserved in the sense that the evaluation of the adm mass gives the same number when computed using any cauchy hypersurface ( which in the extended spacetime ends at ) . moreover , as the spacetime extensions also include the regions and ( i.e. asymptotic future and past null infinity , respectively ) , one can use the notion of bondi mass associated with any hypersurface ending at a section ( that is , , together with the segment of starting at and ending at , would be a cauchy hypersurface ) . the point is that the bondi mass at should be equal to the initial adm mass of the spacetime minus the amount of energy that has been radiated to the segment of starting at and ending at . we must now clarify in what sense we are going to be using the notions of adm mass and bondi mass as being associated with cauchy hypersurfaces and partial cauchy hypersurfaces . let us concentrate for a moment on the spacetime that lies well to the past of the singularity ( or the would - be singularity that presumably is cured by qg ) . the point is that , although formally the expression for , say , the adm mass is associated with an integral `` at infinity '' , the behavior of the metric variables at infinity is conditioned by the energy momentum content associated with the matter fields through the einstein equation . in other words , we can compute the adm mass using the cauchy data for the gravitational sector on , data which are tied to the energy momentum of the matter fields through the hamiltonian and momentum constraints . in that regard we might want to understand how the various components of the matter field contribute to these constraints on each one of the hypersurfaces in question . we can say , for instance , that associated with the initial set up , we have a cauchy hypersurface ( see fig . [ penrose4d ] ) where we have a large lump of matter with a spacetime that is only very mildly curved and characterized by a pure quantum mechanical state of the matter fields ; the adm mass is which , as we noted , is of the order of a few .
in that case we would say that the energy of the spacetime is represented almost completely by that encoded in the energy momentum of the matter fields . at relatively late times , but still to the past of the singularity , we might consider a cauchy hypersurface that starts at , stays close to , and finally enters the horizon and ends at the center of the gravitationally collapsed lump of matter , which at this stage is well within the black hole horizon . alternatively , we might consider deforming into a hypersurface that ends on the section , which we call the hypersurface ( this hypersurface is , of course , not a cauchy hypersurface ) . if we now want to account for the energy content in terms of data on , as represented by ( which should be equal to ) , we would have to say that there is a very important component of the energy content , corresponding to the energy momentum tensor of the outgoing hawking radiation , located in the part of which lies in the region exterior to the horizon , while the energy contained in the original lump of matter has been red - shifted by the gravitational potential associated with the black hole , and at the same time there is a negative contribution to the energy content associated with the in - falling counterpart of the hawking radiation , which might be considered as also lying in the proximity of the intersection of the event horizon with . the situation is depicted , for the realistic 4 dimensional case , in figure [ penrose4d ] , and for the 2 dimensional cghs model in fig . [ cghs ] . in terms of , we would say that we need to account for the relatively small value of the bondi mass at its endpoint , a value that is obtained by subtracting from the energy carried away by the hawking radiation that has reached to the past of . that small value of the bondi mass would , in turn , be accounted for , in terms of the data on , as resulting from the red - shifted energy of the original lump of matter , and the negative contribution associated with the in - falling counterpart of the hawking radiation . we note that the situation on , as far as energy is concerned , is very similar to that on any hypersurface characterizing the situation well after the evaporation of the black hole , such as in the accompanying figure . now , we might , in a similar way , want to consider the fate of the information in the picture above . that is , we want to consider how the information that was present in the quantum state characterizing the initial state of the system at is accounted for in terms of the quantum state characterizing the system on . the point is that , by deforming into ( where is the part of to the past of ) , we can describe the state as an entangled state , which on is just the hawking thermal radiation , and on is also a highly mixed density matrix , but such that the complete state is pure . the point that we want to make here is that the above situation seems to be afflicted by the same troublesome aspects which were raised in the context of alternative 3 ) above . that is , the state of the system on is one with an enormous number of degrees of freedom and yet a very small value of the energy . it seems therefore that if we have an explanation for the loss of information in the black hole evaporation that relies on losses associated only with the qg region ( i.e.
losses that , in the gr language , would be described as produced by the singularity ) we would still face the uncomfortable aspects that lead to the rejection of alternative 3 ) above , but this time associated with the situation prior to the singularity ( i.e. for instance the situation on ) .we think that in addressing the problem via the introduction of modifications of quantum theory that last problem is dramatically ameliorated .the notion that one could learn to live with information being lost in association with the evaporation of black holes , has been considered in some detail in , where the earlier arguments indicating that such proposals would necessarily involve large violations of known conservation laws or dramatic violations of causal behavior have been dispelled . in that analysis, however , the resulting picture seems to be that all the information loss occurs at the singularity , i.e at the region that would be described in non - metric terms in a quantum theory of gravitation , while , in the regions where the metric description would be appropriate , one would have exact quantum mechanical unitary evolution .our view is that such an approach offers a less unified view of physics than the one we are advancing , and that , as a result , it might be more vulnerable to questions of self consistency .for instance , if we accept that there are violations to the quantum mechanical unitary evolution , but that those only occur in connection with black hole evaporation , we might have problems , with an ultimate quantum gravity theory , that can be expected to include the possibility of processes involving virtual black holes .in other words , we might have to face up to the expected result of such a theory indicating that all physical processes must involve contributions from all possible intermediate states according to a path integral formulation of the process , and that those intermediate states would involve also virtual black holes , which in turn would have associated violations of unitarity .as first discussed in , this kind of problem seems less likely to arise in a more unified version we are considering , where the violation of unitarity is an integral aspect of the fundamental physical laws as envisaged in the various proposals for modifications of quantum theory that have been made in the context of the search for a resolution of the so called measurement problem " - . 
it is worthwhile reminding the reader that the so called measurement problem in quantum theory is tied to the interpretational difficulties that arise when one does not want to introduce , in the treatment , some artificial classical / quantum cut ( sometimes presented as a macro / micro physics cut ) and , instead , one wants to consider that everything , including potential observers and measuring apparatuses , should be treated in a quantum mechanical language . we direct the reader to the works in for a good overview , to for a more extensive collection of postures , or to for a very clear recent analysis . the relevance of this issue to the problem at hand can be seen from the fact that quantum theory calls for purely unitary evolution only when one is dealing with a completely isolated system in which all degrees of freedom are treated quantum mechanically . many proposals to deal with the general interpretational difficulties of quantum theory , and in particular with the measurement problem , have been considered since the inception of the theory , and there is also a good body of literature devoted to the problems of many of these proposals . we want to focus on the dynamical reduction theories , which involve a modification of quantum dynamics incorporating spontaneous reduction of the quantum state . these kinds of proposals are commonly known as `` collapse theories '' and have a rather long tradition . for recent reviews see . recently , various relativistic versions of spontaneous dynamical collapse theories have been put forward . we could not end this introduction without acknowledging the strong inspiration that we have drawn from r. penrose 's discussions connecting foundational aspects of quantum theory to ideas about the nature of quantum gravity . in fact , in a very early analysis r. penrose noted that if one wanted to obtain a self - consistent picture of a situation involving thermodynamical equilibrium that included black holes , one would need to have a theory of quantum mechanics that incorporated some violation of unitarity in ordinary conditions ( involving no black holes ) . we view that analysis as providing further support for our approach , in contrast to those where violation of unitarity is only associated with the singularity in evaporating black holes . some of penrose 's recent works on such issues are also relevant , in a broader sense , to our general views underlying this proposal . we will present here a concrete version of the above approach based on the theory developed in . the article is organized as follows : in section ii we present a brief description of the cghs 2 dimensional model of black hole formation and evaporation , in section iii we present a relativistic model of dynamical collapse , section iv describes the general setting in which we will put together the two elements previously described , and in section v we will use them to describe the evolution of the quantum state of the matter field , thus accounting for the loss of information .
in section vi we discuss some subtle points regarding the energetic aspects of the proposal , and we end in section vii with the general conclusions , indicating what has been achieved and what would need to be left as issues for further research . we have added two appendices for the interested reader 's convenience : appendix a discusses in detail the foliation independence of the proposal , exhibiting its general covariance , and appendix b presents in some detail the manner in which the delicate issue regarding the expectation of unbounded energy creation is resolved by the introduction of the _ pointer field _ . the two dimensional model first introduced by callan - giddings - harvey - strominger ( cghs ) , involving black hole formation , is a very convenient toy model for the study of issues related to the formation and evaporation of two dimensional black holes . we now review the basic features of this model . for more details we refer the reader to . the cghs action is $$ s = \frac{1}{2\pi}\int d^2x \sqrt{-g}\left[ e^{-2\phi}\left( r + 4(\nabla\phi)^2 + 4\lambda^2\right) -\frac{1}{2}(\nabla f)^2\right] , $$ where $ r $ is the ricci scalar for the metric $ g_{\mu\nu} $ , $ \phi $ is the dilaton field , considered in this model as part of the gravity sector , $ \lambda^2 $ is a cosmological constant and $ f $ is a scalar field representing matter . the solution corresponding to the cghs model is shown in fig . [ cg ] . it corresponds to a null shell of matter collapsing gravitationally along the world line and leading to the formation of a black hole . for , this solution is known as the dilaton vacuum ( region i and i ) . the metric is found to be , which is flat , whereas for the solution is described by the black hole metric ( regions ii , iii ) represented by , where . here ( null ) kruskal - type coordinates ( ) are useful to describe the global structure of the spacetime . on the other hand , for physical studies involving quantum field theory ( qft ) in curved spacetime , it is convenient to use special coordinates for the various regions . in the dilaton vacuum region , the natural coordinates are , and thus the metric can be expressed as , with . in the bh exterior ( region ii ) , a natural set of coordinates is provided by , so that the metric in this region is , with and . in order to exhibit the asymptotic flatness , we express the bh metric in schwarzschild - like coordinates ( ) , which are defined through the implicit formulas , so that we get . the temporal and spatial kruskal coordinates can be related to schwarzschild - like time and space coordinates through and . now we consider the quantum treatment of the matter field . we will consider the null past asymptotic regions and as the _ in _ region , and the black hole ( exterior and interior ) region as the asymptotic _ out _ region . in the _ in _ region , the field operator can be expanded as , where , and the basis of functions ( modes ) are : and , with . the superscripts and refer to the right and left moving modes respectively . these modes define the bases of field quantization and thus the right _ in _ vacuum ( ) and the left _ in _ vacuum ( ) , whose tensor product ( ) defines our _ in _ vacuum . as is well known , one might also proceed to the construction of the field theory in terms of modes that are natural in the _ out _ region by expanding the field operator in terms of the complete set of modes having support both outside ( exterior ) and inside ( interior ) the event horizon .
once more we can write the field operator in the form , where in the above we have used the convention whereby modes and operators with and without tildes correspond to the regions inside and outside the horizon , respectively . for the mode functions in the exterior of the horizon we use : , and similarly we can choose the set of modes in the black hole interior , ensuring that the basis of modes in the _ out _ region is complete . the left moving modes are kept the same as before ( since these modes travel from the black hole exterior to the interior ) , while for the right moving modes we take : . following , we now replace the above delocalized plane wave modes by a complete orthonormal set of discrete wave packet modes , given by , where the integers and . these wave packets are naturally peaked about , with widths , respectively . the next step in our analysis is to consider the bogolubov transformations . in our case , the relevant non - trivial one refers to the right moving sector , and the corresponding transformation from _ in _ to _ exterior _ modes is what accounts for the hawking radiation . we note that the initial state , corresponding to the vacuum for the right moving modes and the left moving pulse forming the black hole , can be written as : , where the particle states consist of an arbitrary but _ finite _ number of particles , is a normalization constant , and the coefficients are determined using the bogolubov transformations . their explicit expressions can be seen in . it is well known that , if one ignores the degrees of freedom of the quantum field lying in the black hole interior , and describes just the exterior dof , one ends up ( partially ) describing the state in terms of a density matrix . that is , one obtains the reduced density matrix by tracing over the interior degrees of freedom ( dof ) , and in this case one ends up with a density matrix corresponding to a thermal state . note that , at this point , this density matrix represents , in the language of , an _ improper _ mixture , as it arises after ignoring part of a system which as a whole is in a pure state . we will therefore say that what we obtain at this point is an _ improper thermal state _ . see section for a more exhaustive discussion and analysis of this issue . it is also discussed in previous works that the task of accounting for the information loss in black hole evaporation within the approach we are considering requires , among other things , showing how , as the result of the dynamics , one ends up with a _ proper thermal state _ ( i.e. , one that describes an actual mixed state that is not a partial description of a pure state ) starting with an _ initial pure state _ .
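since the explicit expressions above were lost in extraction , it may help to recall , schematically and in conventions that may differ from the stripped formulas , the standard structure behind the last statements : the _ in _ vacuum restricted to the right moving sector takes a two - mode squeezed form over interior and exterior wave packet modes , and tracing over the interior factor yields a thermal density matrix ,
$$ |\Psi\rangle \;\propto\; \prod_{j}\,\sum_{n_j=0}^{\infty} e^{-\pi n_j \omega_j/\lambda}\,|n_j\rangle_{\rm int}\otimes|n_j\rangle_{\rm ext}\,, \qquad \hat\rho_{\rm ext}\;=\;{\rm tr}_{\rm int}\,|\Psi\rangle\langle\Psi| \;\propto\; \prod_{j}\,\sum_{n_j} e^{-2\pi n_j\omega_j/\lambda}\,|n_j\rangle\langle n_j|\,, $$
i.e. a thermal state at the hawking temperature of the cghs black hole , $ t_h = \lambda/2\pi $ . this is the improper thermal state referred to above .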
as we advance the hypersurface to the future via some arbitrary foliation of spacetime , the state changes according to where is the interaction hamiltonian density .the functional derivative is defined as where is the invariant spacetime volume enclosed by and with ( meaning that no point in is to the future of ) .covariance requires that = 0 ] .this can be formally solved to give = t\exp\left [ -i \int_{\sigma_1}^{\sigma_2 } \hat{\cal h}_{\rm int}(x ) dv\right].\end{aligned}\ ] ] where is the time ordering operator , and .from time to time we suppose that the state undergoes a discrete collapse event associated to a spacetime point .when the hypersurface crosses the point , the state ceases for an instant to satisfy equation ( [ tom0 ] ) and instead changes according to the rule is the collapse operator at and is a random variable which corresponds to the collapse outcome .one normally assumes that there is a fixed probability of a collapse event occurring in any incremental spacetime region of invariant volume .this results in collapse events which have a poisson distribution with density , in any unit volume of spacetime .this distribution of collapse events in spacetime is covariantly defined and makes no reference to any preferred foliation .the collapse operators must satisfy the completeness condition this allows us to define the probability density for the outcome , for a collapse event on the state at point , by the completeness condition ensures that ( [ prob ] ) is normalized .this formula corresponds to the standard formula for the quantum probability of a generalized measurement with measurement operator .the collapse outcomes thus occur with standard quantum probability . in appendix [ax : folindep ] we demonstrate that if the following microcausality conditions hold , = 0 , \label{com2}\end{aligned}\ ] ] and = 0 , \label{com3}\end{aligned}\ ] ] for spacelike separated and , then ( i ) given a poisson distributed set of collapse locations ( with labels , which give an arbitrary total ordering which respects the causal ordering of the spacetime ) occurring between hypersurfaces and , and a compete set of collapse outcomes at these locations , the state dynamics leads to an unambiguous and foliation - independent change of state between and ; and ( ii ) the probability rule specifies the joint probability of complete sets of collapse outcomes independently of spacetime foliation , given only the state on the initial surface .the joint probability density for the set of outcomes can be determined from ( [ prob ] ) by repeatedly making use of the definition of conditional probability , and is given by where depends on as and where the choice of foliation is arbitrary and corresponds to the arbitrary total ordering of . at this pointone can take the view that the resulting state histories with respect to different foliations are merely different descriptions of the same events .alternatively , one can regard the collapse outcomes as the primitives of the theory from which the quantum state histories are derived .the covariant form of the collapse dynamics together with the absence of any foliation dependence result in an adequate framework for a relativistic collapse model . 
to realize such a model we must propose a form for a collapse operator which satisfies the above requirements . we begin by choosing , where is an as yet unspecified hermitian operator , and is a new fundamental parameter . this collapse operator describes a quasi - projection of the state of the system onto an approximate eigenstate of about the point , meaning that , if the state previous to the collapse event was represented in terms of eigenstates of , the collapse effect is to diminish the relative amplitude of eigenstates whose eigenvalues are far from with respect to those that have eigenvalues close to . the effect of many such collapses is to drive the state towards a -eigenstate . this collapse operator automatically satisfies the completeness condition . the microcausality conditions are satisfied if $$ \left[ \hat{b}(x) , \hat{b}(y) \right] = 0 \quad\text{and}\quad \left[\hat{b}(x) , \hat{\cal h}_{\rm int}(y)\right] = 0 , $$ for space - like separated and . we therefore propose that , for a theory of a scalar field such as the one we are considering in this work , , where is the scalar field operator . this meets the above conditions for any interaction hamiltonian given as a function of . however , with this choice we face an immediate problem . if we calculate the average energy change in the field as a result of a collapse event we find $$ \langle \delta \hat{h} \rangle = \frac{1}{2\zeta^2}\,\delta^{d-1}(0)\,\langle|\hat{f}(x)|^2\rangle , $$ for a -dimensional spacetime , where is the hamiltonian operator for the scalar field , and this expression is the first order term in the large expansion . this expression is infinite for a continuum spacetime . this could be ameliorated by a spacetime with fundamental discreteness ( which should not enter in conflict with special relativity ) . with discreteness length scale we could approximate , and might then , by appropriate choices for the parameters of the theory , be able to construct a model in which the collapse of massive objects is sufficiently rapid whilst the average energy increase is sufficiently small to satisfy experimental bounds . ( there are three parameters in this model : ; the discreteness length scale ; and the spacetime density of collapse events , which could possibly be taken to correspond to the effective density of spacetime points , reducing the number of parameters to two . ) alternatively , we propose the use of a new field to mediate the collapse process , with the effect of preventing infinite energy increase . this construction is outlined in appendix [ ax : aux ] , where the effective collapse process satisfied by the scalar field is derived . in either the discrete space model or the auxiliary field model , the end result is a collapse model which drives the scalar field towards eigenstates of the operator . as in any event this is the end result , we will be making free use of it throughout this paper . thus from here on we will mostly ignore the details of precisely how we deal with the problem of energy increase .
to understand the collapse rate we introduce the density matrix representation . a collapse event at point on the surface converts the pure state into another pure state with a smaller uncertainty in . however , as the specific resulting state is stochastically determined , it is convenient to pass to a description in terms of ensembles . that is , we consider the statistical mixture representing the ensemble of a large number of identical systems characterized by the same state just before the collapse event , and their collective change just after such an event . this is thus described by : $$ \hat\rho_{\sigma} \;\rightarrow\; \int dz\, \hat{l}_x(z)\,\hat\rho_{\sigma}\,\hat{l}_x(z) . $$ this equation describes how the pure state at any stage is transformed into an ensemble of possible resultant states , each element of which results from a particular value of the as yet unknown collapse outcome . the change in the statistical density matrix operator characterizing the ensemble is then $$ \delta\hat\rho_{\sigma} = -\frac{1}{8\zeta^{2}}\left[|\hat{f}(x)|^{2},\left[|\hat{f}(x)|^{2},\hat\rho_{\sigma}\right]\right] , $$ in the large limit . if we choose a foliation parametrized by , with lapse function and spatial metric on the timeslices , and assume that there is a spacetime collapse density of , then we can write $$ \frac{d\hat\rho_{t}}{dt} = -i\left[\int d^{d-1}x\,n\sqrt{h}\;\hat{\cal h}_{\rm int}(x)\,,\,\hat\rho_{t}\right] -\int d^{d-1}x\, n\sqrt{h}\, \frac{\mu}{8\zeta^2} \left[|\hat{f}(x)|^2,[|\hat{f}(x)|^2,\hat\rho_{t}]\right] , \label{phicoll} $$ where stands for the determinant of the components of the metric in the coordinates . the first term corresponds to the unitary dynamics of the interaction hamiltonian , which would vanish in the case of a free field theory such as the one we are considering . it is convenient at this point to consider the evolution in terms of a basis of instantaneous field eigenstates for the hypersurface ( corresponding to a leaf of the foliation , _ constant _ ) . that is , are field eigenstates on the hypersurface ( i.e. states which satisfy ) . such states form a complete basis of states for each value of . it thus follows that $$ \frac{d}{dt}\langle f |\hat\rho_t|f'\rangle = -\gamma[f,f']\;\langle f |\hat\rho_t|f'\rangle , \qquad \gamma[f,f'] = \int d^{d-1}x\, n\sqrt{h}\, \frac{\mu}{8\zeta^2} \left[|f(x)|^2-|f'(x)|^2\right]^2 . \label{rate} $$ the coupling parameter is usually taken as a constant , but as first suggested in we will assume it is a local function of curvature scalars . for concreteness we take , where is an increasing function of its argument , and is the weyl tensor for the spacetime metric . this feature ensures not only that the collapse effects will be much larger in regions of high curvature than in regions where the spacetime is close to flat , but it might also be used to ensure that in completely flat regions , where among other things the matter content corresponds to the vacuum , the effects of collapse disappear completely . in the two dimensional setting of the cghs model , the weyl curvature is zero and so as a substitute we take to be an increasing function of the scalar curvature . the upshot is that the particular relativistic collapse model determined by the proposal ( [ propose ] ) leads to collapse in the state basis at a rate given by ( [ rate ] ) . the collapse process will not , however , lead to precise field eigenstates , simply because the collapse is only assumed to narrow the uncertainty , and the free dynamics of the field will cause dispersion of the field state in competition with the collapse .
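ignoring for a moment the dispersion due to the free field dynamics just mentioned , the content of eq . ( [ rate ] ) can be summarized in a schematic formal solution ( our notation ) :
$$ \langle f|\hat\rho_{t}|f'\rangle \;=\; \langle f|\hat\rho_{t_0}|f'\rangle\, \exp\!\left[-\int_{t_0}^{t}dt'\int d^{d-1}x\;n\sqrt{h}\;\frac{\mu}{8\zeta^{2}}\left(|f(x)|^{2}-|f'(x)|^{2}\right)^{2}\right]\,, $$
so off - diagonal elements between appreciably different field configurations decay exponentially , and the decay rate is enhanced wherever $ \mu $ is large , which , with the curvature dependence just assumed , is precisely the high curvature region approaching the would - be singularity .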
in fact , what the result of eq .( [ rate ] ) shows , is that that states with different are distinguished , rather than states with different .this would mean , in principle , that at the end of the collapse process .we would be left with states having a relatively well defined value of but possibly different values of .we do not think this will be a problem , because once the unitary dynamics and the interactions are taken into account , this kind of situation would be very unstable : any kind of unitary process which distinguished from say , would lead to differences which would be subsequently distinguished by the collapse process .in fact , a more realistic analysis , where backreaction effects would have to be considered , indicates that energetics will strongly disfavor field configurations with large spatio - temporal fluctuations in the phase .this follows from the fact such configurations will have a relatively large energy momentum tensor , and thus a large spacetime curvature , and as a consequence , they will be subjected to an increased collapse rate .the ensuing randomness in the dynamics will only decrease when a configuration with a rather smooth is arrived at .this is analogous to the effect considered in .thus it is natural to expect that ultimately the collapse will be to the basis . on the other hand , as we will be assuming that the collapse rate increases with curvature in an unbounded fashion , we can expect that collapse effects will accumulate and dominate over any dispersion in the high curvature region near to the black hole singularity , more precisely as the quantum gravity region is approached , and thus we will assume that the collapse process leads to a field eigenstate on hypersurfaces that are close enough to that region as an idealization . finally , the particular choice can be justified by demonstrating that this reduces to the well established csl model , , in the non relativistic limit of a massive complex scalar field of mass ( the non relativistic limit of a real scalar field or a massless field is less obvious ) .this can be seen from the well know correspondence the collapse basis is then where is the number density of non relativistic particles . 
up to a spatial smearing function , this is the collapse basis for the csl model .the smearing is introduced either by spacetime discreteness or the use of an auxiliary field to mediate the collapse process ( see appendix [ ax : aux ] ) .the situation we want to consider is that corresponding to the formation of a black hole by the gravitational collapse of an initial matter distribution characterized by a pure quantum state describing a relatively localized excitation of the field .the spacetime is supposed to be described by a manifold with a metric defined on except for a compact set corresponding to the region where a full quantum gravity treatment is required and that is taken to just surround the location of the classical singularity .this characterizes the formation and evaporation of an essentially schwarzschild black hole , supplemented by the region that is not susceptible to a metric characterization and where a full quantum theory of gravity is needed to provide a suitable description .we assume that is a compact boundary surrounding the quantum gravity region , which , by assumption , corresponds to that region where otherwise ( i.e , in the absence of a radical modification of gr due to qg effects ) we would have encountered the black hole singularity .we will further make some relatively mild ( and rather common ) assumptions about quantum gravity .\i ) the first assumption , which we have already mentioned , is that qg will cure the singularities of general relativity , however in doing so it will require that there would be regions where the standard metric characterization of spacetime does not apply .this is what in our case was referred as the set \ii ) we will assume that quantum gravity does not lead at the effective level to dramatic violations of basic conservation laws such as energy or momentum .\iii ) we will assume that the spacetime region that results at the other side of the qg region is a reasonable and rather simple spacetime .with these assumptions we can already make some simple predictions about the nature of the full spacetime .given that by assumption the effects of the collapse dynamics will be strong only in the region with high curvature , and more explicitly in the regions where the value of ( in the two dimensional models ) is large , the dynamics characterizing the early evolution of our initial pulse of matter will be essentially the same as that found in the standard accounts of black hole formation and evaporation : the pulse will contract due to its own gravitational pull , and as shown by birkoff s theorem the exterior region will be described by the schwarzschild metric ; the pulse will eventually cross the corresponding schwarzschild radius , and generate a killing horizon for the exterior time - like killing field . 
the early exterior region and even the region to the interior of the killing horizon but close to it at early times are regions of small curvature and thus the picture based on standard quantum field theory in curved spacetime that leads to hawking radiation will remain unchanged .this by itself indicates that essentially all the initial adm mass of the spacetime would be radiated in the form of hawking radiation and will reach ( asymptotic null infinity ) .next let us consider the spacetime that emerges at the other side of the singularity .given that essentially all the initial energy has been radiated to and in light of assumptions ii ) above the resulting spacetime should correspond to one associated with a vanishing mass ( this would be the bondi mass corresponding to a spacetime hypersurface lying to the future of region and intersecting in a segment to the future of that containing the hawking flux ) .this conclusion , together with assumption iii ) indicates that this spacetime region should be a simple vacuum spacetime which we take for simplicity to correspond to a flat minkowski region .let us now focus on the state of the quantum field .the initial state , as we indicated , corresponds to the _ in _ vacuum except for a pulse of matter falling under its own gravity and leading to the formation of a black hole .the state can be represented in the first qft construction in section ii as : where the sum is over the sets of occupation number for all modes ( which indicates that the mode is excited by quanta ) , is the total energy of the state according to the notion of energy associated with the asymptotic region , is the hawking thermal coefficient , and is a normalization constant . at these late timesthe excitations associated with the in - falling pulse are all located in the region interior to the killing horizon so that we can write the state ( [ initial state ] ) simply as : where the part in parenthesis corresponds to the black hole interior region and the rest to the exterior .the point of writing things in this manner is to underscore the fact that both the collapse dynamics and the changes in the state associated with quantum gravity will only affect the modes in the black hole interior region . in the case of the collapse dynamicsthis follows from the assumption that the collapse parameter is strongly dependent of curvature and thus its effects will only be relevant in regions of high curvature .as we have explained , one of the assumptions that underlies the present approach to deal with the information question during the hawking evaporation of the black hole is that the collapse dynamics , although valid everywhere , deviate most strongly from the unitary evolution of standard quantum theory in the regions where curvature becomes large . in the two dimensional context , this is achieved by assuming the that parameter controlling the strength of the modifications is a function of the scalar curvature .thus the changes to the quantum state of the system result mainly from the nontrivial evolution occurring in the region interior to the black hole horizon , and to the future of the matter shell .for simplicity we will therefore ignore the modification of the quantum state of the field resulting from the dynamics in the exterior region and the flat region before the matter shell and focus only in the effects of the collapse dynamics in the interior of the black hole lying to the future .we call this _ the collapse region_. 
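for orientation , the late - time state ( [ initial state ] ) described above can be written , schematically and with the usual two - mode - squeezed ( thermofield - double ) conventions for the hawking state , as

|\Psi\rangle = N \sum_{F} e^{-\pi E_{F}/\kappa}\, |F\rangle_{\rm int}\otimes|F\rangle_{\rm ext} = N \sum_{F} e^{-E_{F}/2T_{H}}\, |F\rangle_{\rm int}\otimes|F\rangle_{\rm ext} ,

where F = \{ n_{j} \} is a set of occupation numbers for the modes , E_{F} is the corresponding total energy with respect to the asymptotic notion of time , \kappa is the surface gravity , T_{H} = \kappa/2\pi the hawking temperature and N a normalization constant ; the interior and exterior factors correspond to the splitting into modes inside and outside the killing horizon used in the text . the precise mode conventions of section ii are not reproduced here , so this should be read as an illustrative form rather than the paper 's exact expression .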
with these considerations we take the initial state at a hypersurface lying well to the past of the collapse region ( for instance on in fig .[ cghs ] ) .this can be expressed in terms of the corresponding density matrix as : we can now simplify things using the basis of eigen - states of collapse operators which we will refer to as _ the collapse basis_. we thus rewrite the above density matrix in the form next we use the fact that , in the collapse region , especially in the late part thereof , the collapse dynamics becomes extremely strong and effective and thus drives the state of the system to one of the eigen - states of the collapse operators .this allows us to write the state representing an ensemble of systems initially prepared in the same state ( [ initial state ] ) , at any hypersurface lying just before the would - be classical singularity or more precisely the quantum gravity region(see fig .[ cghs ] ) , after the complete collapse process has taken place as , finally , we need to consider the system as it emerges on the other side of the quantum gravity region , i.e the state describing the ensemble after the would - be classical singularity .as we have discussed in the introduction we assume that quantum gravity would resolve the singularity and lead on the other side of it , to some reasonable spacetime and state of the quantum fields .we now consider the characterization of the system on a hypersurface lying just to the future of the would - be classical singularity .such a hypersurface would not be a cauchy hupersurface as it would intersect rather than .as such one can partially characterize the state of fields on it by the value of the bondi mass .it is clear , as have argued in the the introduction , that if we assume that quantum gravity does not lead to large violations of energy and momentum conservation laws , the only possible value for this bondi mass would have to be the mass of the initial matter shell minus the energy emitted as hawking radiation , which is present to the past of the singularity on .this remaining mass will thus have to be very small .the task for quantum gravity is to turn the internal state , post singularity into a straightforward low energy state . 
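writing \{ |\chi_{j}\rangle \} for the collapse basis introduced above , the content of the last step can be summarized schematically as follows : an initially pure state with expansion |\Psi\rangle = \sum_{j} c_{j} |\chi_{j}\rangle , i.e. \hat\rho_{\rm in} = \sum_{j,k} c_{j} c^{*}_{k}\, |\chi_{j}\rangle\langle\chi_{k}| , is driven by the collapse dynamics , on hypersurfaces just before the quantum gravity region , towards the dephased ensemble

\hat\rho_{\rm collapse} \approx \sum_{j} |c_{j}|^{2}\, |\chi_{j}\rangle\langle\chi_{j}| ,

with the exterior ( hawking ) factors untouched , since the collapse operators act only on the modes in the black hole interior ; this is the statement of the equation referred to above , written here in a compact notation that is ours rather than the paper 's .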
for simplicityassume that it is the vacuum being chosen by the collapse process .this means that the final state characterizing the ensemble of systems ( on ) should be of the form : that is , the system has evolved from an initially pure state to state representing the proper thermal state of radiation on the early part of and the vacuum state afterwards .one of the most serious challenges one faces when attempting to construct relativistic models of spontaneous dynamical reduction of the wave function , either of the discrete or continuous kind , is their intrinsic tendency to predict the violation of energy conservation by infinite amounts : the problem is resolved in the non - relativistic setting where one can easily control the magnitude of that kind of effect , by relying on suitable spatial smearings of the collapse operators , usually taken to be the position operators for the individual particles that make up the system .when passing to a relativistic context the tendency is for energy violation to become unbounded unless special care is used in the construction of the theory to ensure it does not .this issue becomes relevant in the present context at two places .first and foremost at the point where one wants to consider the back reaction of the spacetime metric to the changes in the quantum state of the field induced by the collapse dynamics. the second place where the issue appears is the point where one considers the role of the quantum gravity region . in a previous treatmentthe argument was that , provided that quantum gravity did not result in large violations of energy conservation one can expect the state after the quantum gravity region to correspond energetically to the content of the region just before the would be singularity , and that this region would have almost vanishing energy content being made up of the positive energy contribution of the collapsing matter shell and the negative energy contribution of the in - falling counterpart to the hawking flux .we would face a serious problem with this argument if the region just before the would - be singularity could contain an arbitrarily large amount of energy as a result of the unboundedness of the violation of energy conservation brought about by the collapse dynamics .in that case we would not be able to reasonably argue for the step ( [ post - singularity - vacuum ] ) .there are various schemes whereby this issue can be tackled : \1 ) we might consider a fundamental discreteness of spacetime ( which however as discussed in should not be tied to violations of special relativity ) .\2 ) we might adjust the choice of collapse operators and provide a sensible spacetime smearing scheme for them that relies on the energy momentum of the matter fields or on the geometric structure of the curved spacetime . in this contextis it worth noting that when one considers that the parameter controlling the strength or intrinsic rate of the collapse dynamics depends on the spatial curvature ( i.e. the ricci scalar in 2 dimensions and something like the weyl tensor , through , in the more realistic 4 dimensional case ) one might assume that in flat space - times the collapse rate actually vanishes removing most concerns about the stability of the vacuum in these theories . 
in that case one would adopt the position that the collapse associated with individual particles in the non - relativistic quantum mechanical context is actually derived from the small deformation of flat spacetime associated to that same particle .that is , one would consider that the particle s energy momentum curves the spacetime and this in turn turns - on the quantum collapse dynamics .this is only a rough idea at this point but one that certainly seems worthy of further exploration .\3 ) we might rely on the effective smearing provided by the use of the auxiliary pointer field as a way to introduce the smearing procedure without seriously affecting the simplicity of the treatment as discussed in the appendix [ ax : aux ] below .we have studied the possibility of accounting for the information loss in the processes of formation and hawking evaporation of a black hole through the explicit use of a relativistic version of a dynamical reduction theory . in previous worksit has been argued that the consideration of theories involving departure from the standard schrdinger unitary dynamics offers a promising path to dealing with what many researchers in the community considered as one of the most challenging paradoxes of modern theoretical physics .those works were carried out using a non - relativistic version of dynamical reduction theories known as continuous spontaneous localization , and one of the open issues in those treatments was whether similar results could be obtained relying on fully relativistic settings .the present work provides a positive answer in the form of proof of existence of a relativistic approach that leads essentially to the same results as those of the previous non - relativistic treatments .however it is clear that we are not yet in the possession of a fully satisfactory scheme . forthat we need to consider in detail the issues related to energy production and its possible back reaction effects .furthermore eventually one would like to consider the issue of uniqueness and completeness in the sense of determining the collapse operators valid for a general setting that reduces to the appropriate ones ( i.e smeared particle position operators ) in the non - relativistic situations ( the ones treated by the standard csl or grw theories ) , and finding the dependence of the parameters such as on the spacetime curvature .db is supported by the templeton world charity foundation .skm is an international research fellow of the japan society for the promotion of science .ds acknowledges partial financial support from dgapa - unam project ig100316 and by conacyt project 101712 .consider two collapse events occurring at spacelike separated points and with ( meaning that the points and are not to the past of and not to the future of ) .an explicit choice of foliation places and in a sequence .suppose that occurs first on surface and occurs second on surface with .we therefore have which follows from ( [ prob ] ) by making use of the definition of conditional probability .now suppose instead that we choose an alternate foliation in which the collapse event at occurs first on surface and occurs second on surface with .now we now show that given the conditions = 0 , \label{acom2}\end{aligned}\ ] ] and = 0 , \label{acom3}\end{aligned}\ ] ] for spacelike separated and , then . 
in order to do thiswe define a surface on which both points and are found .we can then write \hat{u}[\sigma_2,\sigma_{xy}]\hat{l}_y(z_y)\hat{l}_x(z_x)\hat{u}[\sigma_{xy},\sigma_1 ] \hat{u}[\sigma_1,\sigma_i]|\psi_{\sigma_i}\rangle \nonumber\\ & = \hat{u}[\sigma_f,\sigma_{xy}]\hat{l}_x(z_x)\hat{l}_y(z_y)\hat{u}[\sigma_{xy},\sigma_i]|\psi_{\sigma_i}\rangle.\end{aligned}\ ] ] the second line uses the fact that \right ] = 0,\end{aligned}\ ] ] if is found on both and , which follows from ( [ acom3 ] ) .the third line follows from ( [ acom2 ] ) .next we define the surface on which both points and are found and such that , along with a further surface , also containing and and satisfying ; ; and .we then have \hat{l}_x(z_x)\hat{l}_y(z_y)\hat{u}[\sigma''_{xy},\sigma_i]|\psi_{\sigma_i}\rangle \nonumber\\ & = \hat{u}[\sigma_f,\sigma'_{xy}]\hat{u}[\sigma'_{xy},\sigma''_{xy } ] \hat{l}_x(z_x)\hat{l}_y(z_y)\hat{u}[\sigma''_{xy},\sigma_i]|\psi_{\sigma_i } \rangle \nonumber\\ & = \hat{u}[\sigma_f,\sigma'_{xy}]\hat{l}_x(z_x)\hat{l}_y(z_y)\hat{u}[\sigma'_{xy},\sigma_i]|\psi_{\sigma_i}\rangle \nonumber\\ & = \hat{u}[\sigma_f,\sigma'_2]\hat{u}[\sigma'_2,\sigma'_{xy}]\hat{l}_x(z_x)\hat{l}_y(z_y)\hat{u}[\sigma'_{xy},\sigma'_1 ] \hat{u}[\sigma'_1,\sigma_i]|\psi_{\sigma_i}\rangle \nonumber\\ & = \hat{u}[\sigma_f,\sigma'_2]\hat{l}_x(z_x)\hat{u}[\sigma'_2,\sigma'_{xy}]\hat{u}[\sigma'_{xy},\sigma'_1]\hat{l}_y(z_y)\hat{u}[\sigma'_1,\sigma_i]|\psi_{\sigma_i}\rangle \nonumber\\ & = |\psi'_{\sigma_f}\rangle.\end{aligned}\ ] ] using this result it follows from ( [ prob2a ] ) and ( [ prob2b ] ) that the probability density for the pair of collapse outcomes and is independent of the choice of foliation .iteration of the above procedure for further collapses demonstrates foliation independence of the complete set of collapse outcomes occurring at the set of collapse locations between any and .a way to understand the infinite energy increase of the collapse dynamics described in section [ covcoll ] is to notice that each collapse on the quantum state occurs at a single point on the spacetime .this results in sharp spatio - temporal discontinuities in the state of the field , and hence a large energy increase . in order to prevent thisthe collapse should happen smoothly .for a quantum field this means that whenever a local collapse occurs it should act over some spacetime region rather than at an infinitesimal space time point .this requires some form of smeared interaction . in order to facilitate this we use a new type of relativistic quantum field which we call the _ pointer field _ as introduced in field has an independent degree of freedom at each space time point. we will denote it by ( not to be confused with the matter field ) .the commutation properties of the pointer field are as follows : = \frac{1}{\sqrt{g(x)}}\delta^4(x - x ' ) ; \quad \left[\hat\psi(x),\hat\psi(x')\right ] = 0.\end{aligned}\ ] ] notice that the dirac delta extends over the whole space time , not just over a hyper surface . 
given these annihilation and creation operators we can define a smeared field operator .\end{aligned}\ ] ] and a smeared number density operator which will be the collapse basis of equation ( [ l ] ) the smearing functions and assumed to be defined in terms of the ( fixed ) space time properties such as local curvature .they should each satisfy certain properties of smoothness and finiteness under integration to be determined by their consequences for the relativistic collapse theory .we assume that there is an interaction between ordinary matter fields and the pointer field of the form where is a coupling parameter , which , as we will see below , affects the effective collapse rate of the matter field .the state now includes the state of both matter field and the pointer field .the required micro causality conditions ( [ com2 ] ) and ( [ com3 ] ) can be used to begin to constrain the form of and . in generalit follows from the above commutation properties that = 0 ; \quad \left[\hat{b}(x),\hat{b}(x')\right ] = 0 , \label{com0}\end{aligned}\ ] ] for all space time points and .it also follows that for space like separated and , = 0 , \label{com}\end{aligned}\ ] ] provided that the domain of only includes points where is inside the future light cone of , and the domain of only includes points where is inside the past light cone of .this commutation property result in ( [ com3 ] ) . asa concrete proposal for on a spacetime with metric we will take : where the integration measure along the geodesic is the differential of the invariant line element , rather than volume element where is some positive integer , is a suitably chosen dimensional parameter , is the characteristic function of , the chronological future of , that is iff and vanishes otherwise , is the causal geodesic connecting and ( which we will assumed to be unique in a convex normal neighborhood of . for points outside this region we can replace the prescription to one where we replace the integral along a single geodesic by the sum of integrals over all such geodesics . ] ) , is the tangent to the geodesic by proper time , and is the bell tensor of the spacetime metric .we note that as the are future directed time - like vectors the integrand in the above equation is positive semi - definite .that is .in fact , generically this quantity will vanish only along the principal null directions of the weyl tensor , which as we know , are in general just a discrete set of directions ( 4 in 4 spacetime dimensions ). it would be only when such null directions coincide with a tangent along the full null geodesic connecting and that such an integral would vanish .it is therefore , only in those very unusual cases ( where such a exists ) , that for points that approach the points , that the integral might fail to be bounded from below by a positive number . otherwise , in the generic situations , the functions will rapidly decrease as the point gets further from even along the directions that approach those in the null cone " corresponding to the boundary of the chronological future of : . to understand how the collapse mechanism works consider first the interaction term .this has the effect of coupling the state of the field to the state of the field .an excited matter field will lead to an excitation of the pointer field in the local region determined by . 
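the explicit expression for this matter - pointer coupling is not reproduced above ; a form consistent with the partial traces used further below ( a vanishing one - point function of the pointer operator in its vacuum and an approximately local two - point function ) is , schematically ,

\hat{h}_{t} = \nu \int d^{\,3}x \; N\sqrt{h}\;\, |\hat f(x)|^{2}\,\hat{a}(x) ,

where \hat{a}(x) is the smeared pointer - field operator built from \hat\psi and \hat\psi^{\dagger} with the smearing function introduced above ; this is offered as a reading of the elided expressions , not as a quotation .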
for a matter field in a superposition of different statesthis interaction will lead to an entanglement between the different states and different states of the pointer field .the state of the pointer field is the one that is now subjected directly to the collapse dynamics of sction iii .the state of the system is an element of the product space between the hilbert space of the matter fields and that of the pointer field .the action of the collapse operator leads to a collapse of the pointer field in the ( smeared ) number density basis .they act as quasi projectors onto a number density state with a central value determined by the random choice .this causes the pointer field to collapse towards a state of definite number density .since the matter field is entangled with the pointer field , the collapse of the pointer field induces a collapse of the matter field state in the field state basis .now we derive an effective characterization of the resulting collapse dynamics involving just the quantum field . starting with the fully covariant model we will trace out the the pointer field leaving an approximate dynamical equation for the state .we start by choosing a particular foliation parametrized by a time coordinate , and write the spacetime metric in terms of the standard 3 + 1 decomposition as : where is the lapse function and are the components of the shift vector characterizing the foliation and are the components of the induced metric on the corresponding spatial hyper surface .the equation describing the interaction between the scalar field and the pointer field is then where for now we ignore the collapses .the state is always pure and so the density matrix can be written as we further assume that the pointer field is always in an approximate vacuum state .] 
so that we can write this assumption requires that the coupling parameter is weak and that the pointer field collapses are able to resolve very small differences in number density so that the collapse occurs even after a very weak interaction between and .this requires choosing to be sufficiently small .the master equation describing the development of the density matrix is ,\end{aligned}\ ] ] where the interaction hamiltonian is given by we can use the born approximation , valid for weak interactions , to find an approximate solution to the master equation .\end{aligned}\ ] ] this can then be inserted back into the master equation - \int_0^t dt ' \left[\hat{h}_t,\left[\hat{h}_{t'},\hat\rho_t\right]\right].\end{aligned}\ ] ] next we take the partial trace over the pointer field degrees of freedom .to do this we use = \nu \int d^3 x n\sqrt{h } \left[|\hat{f}(x)|^2,\hat\rho^{f}_0\right ] { \rm tr}_{\psi}\left[\hat{a}(x)\hat\rho^{a}_0\right],\end{aligned}\ ] ] and \right ] = \nu^2 \int d^3 x n\sqrt{h}\int d^3 x ' n'\sqrt{h ' } \left[|\hat{f}(x)|^2,\left[|\hat{f}(x')|^2,\hat\rho^{f}_t\right]\right ] { \rm tr}_{\psi}\left[\hat{a}(x)\hat{a}(x')\hat\rho^{a}_t\right].\end{aligned}\ ] ] now assume that the function can be reasonably well approximated by a delta function where is a spacetime dependent scale factor .together with the assumption that the pointer field density matrix is approximately vacuum this results in = 0,\end{aligned}\ ] ] and = \frac{\eta^2(x)}{\sqrt{g(x)}}\delta^4(x - x').\end{aligned}\ ] ] putting all this together we find the master equation for the scalar field to be of the form \right ] .\label{masterxi}\end{aligned}\ ] ] this is of precisely the same form as equation ( [ phicoll ] ) once we identify with .this form of the master equation predicts infinite energy increase .this is tempered by choosing a form for which is not a delta function and ( [ masterxi ] ) should be considered an idealized limit for describing collapse . in section [ covcoll ]we demonstrated that a relativistic master equation of this form reduces to the non relativistic csl model .the correspondence can be made more precise by choosing a suitable frame of reference defined by the coordinates in which [ invariantly defined according to ( [ eq : sa ] ) ] takes the approximate form , where we assume that we have tuned the choice of the parameter so as to ensure that , in ordinary laboratory situations , where the spacetime is almost flat ( except for the curvature induced by the few particles involved ) , reduces approximately to the standard csl smearing function . with the grw length scale .the resulting non relativistic csl model is well known and produces finite energy increase which can be kept suitably small by appropriate choice of the parameters .l. susskind , l. thorlacius and j. uglum , `` the stretched horizon and black hole com- plementarity , '' phys .d 48 , 3743 ( 1993 ) [ hep - th/9306069 ] . c. r. stephens , g. t hooft and b. f. whiting , `` black hole evaporation without infor- mation loss , '' class .11 , 621 ( 1994 ) [ gr - qc/9310006 ] .l. parker and d. toms , `` quantum field theory in curved spacetime , '' cambridge university press ( 2007 ) ; r. m. wald , `` quantum field theory in curved spacetime and black hole thermodynamics , '' the university of chicago press ( 1994 ) .a. perez , h. sahlmman and d. sudarsky , on the quantum mechanical origin of the seeds of cosmic structure , " classical and quantum gravity * 23 * , 2317 ( 2006 ) ; d. 
sudarsky , shortcomings in the understanding of why cosmological perturbations look classical , " international journal of modern physics d * 20 * , 509 ( 2011 ) [ arxiv:0906.0315 [ gr - qc ] ] ; s. j. landau , c. g. scoccola and d. sudarsky , cosmological constraints on nonstandard inflationary quantum collapse models , " physics review d * 85 * , 123001 ( 2012 ) [ arxiv:1112.1830 [ astro-ph.co ] ] ; g. len garca , s. j. landau and d. sudarsky , quantum origin of the primordial fluctuation spectrum and its statistics , " physics review d * 88 * , 023526 ( 2013 ) [ arxiv:1107.3054 [ astro-ph.co ] ] ; p. caate , p. pearle and d. sudarsky , csl quantum origin of the primordial fluctuation , " physics review d * 87 * , 104024 ( 2013 ) [ arxiv:1211.3463[gr - qc ] ]. s. k. modak , l. ortz , i. pea and d. sudarsky , `` non - paradoxical loss of information in black hole evaporation in a quantum collapse model , '' phys .d * 91 * , no .12 , 124009 ( 2015 ) [ arxiv:1408.3062 [ gr - qc ] ] .g. ghirardi , a. rimini , t. weber , a model for a unified quantum description of macroscopic and microscopic systems , " in a. l. accardi ( ed . ) quantum probability and applications , p. p. 223- 232 , springer , heidelberg ( 1985 ) .d. albert , quantum mechanics and experience ( harvard university press , 1992 ) , chapters 4 and 5 ; j. bell , quantum mechanics for cosmologists " , in quantum gravity ii , oxford university press , 1981 ; d. home , conceptual foundations of quantum physics : an overview from modern perspectives ( plenum , 1997 ) .chapter 2 ; e. wigner , `` the problem of measurement , '' am .j. of physics * 31 * , 6 ( 1963 ) ; a. lagget , `` macroscopic quantum systems and the quantum theory of measurement , '' prog .. suppl . * 69 * , 80 ( 1980 ) ; j. s. bell , speakable and unspeakable in quantum mechanics , " ( cambridge university press 1987 ) ; j. s. bell , against measurement , " phys .world * 3 * , 33 ( 1990 ) . for reviews about the various approaches to the measurement problem in quantum mechanicssee , for instance , the classical reference m. jammer , `` philosophy of quantum mechanics .the interpretations of quantum mechanics in historical perspective , '' ( john wiley and sons , new york 1974 ) ; a. peres , quantum theory : concepts and methods " ( kluwer , academic publishers , 1993 ) ; r. omnes , the interpretation of quantum mechanics , " ( princeton university press 1994 ) ; and the more specific critiques s. l. adler why decoherence has not solved the measurement problem : a response to pw anderson , " stud .mod . phys . * 34 * , 135 - 142 ( 2003 ) [ arxiv : quant - ph/0112095 ] ; a. elby , why modal interpretations of quantum mechanics do nt solve the measurement problem , found . of phys .* 6 * , 5 - 19 ( 1993 ) .d. durr , s. goldstein , and n. zangh , `` bohmian mechanics and the meaning of the wave function , '' in cohen , r. s. , horne , m. , and stachel , j. , eds . , experimental metaphysics quantum mechanical studies for abner shimony , volume one ; boston studies in the philosophy of science 193 , ( kluwer academic publishers , 1997 ) ; j. s. bell , on the impossible pilot wave " , foundations of physics 12 ( 1982 ) , pp .989 - 99 ; d. wallace , the emergent multiverse .oxford university press , 2012 ; c. fuchs and a. peres , quantum theory needs no interpretation `` '' .physics today 53(3 ) ( 2000 ) , pp .70 - 71 ; o. lombardi and d. dieks , modal interpretations of quantum mechanics " , the stanford encyclopedia of philosophy , 2014 ; e. 
joos et al , decoherence and the appearance of a classical world in quantum theory , 2nd edition ( springer , 2003 ) ; w. zurek , decoherence and the transition from quantum to classical , " phys .44 , no . 10 , 1991 . a. kent , against many - worlds interpretations " , online at http://xxx.arxiv.org/abs/gr-qc/9703089 ; h. brown , and d. wallace , solving the measurement problem : de broglie - bohm loses out to everett " .foundations of physics 35 ( 2005 ) , pp.517 - 540 ; j. bub , interpreting the quantum world ( cambridge , 1997 ) , chapter 8 , pp .212 - 236 .( rather critical discussion of the decoherence - based approaches ) .r. penrose , `` the emperor s new mind , '' ( oxford university press 1989 ) ; r. penrose , `` on gravity s role in quantum state reduction , '' in physics meets philosophy at the planck scale , callender , c. ( ed . )( 2001 ) .r. penrose , `` on the gravitization of quantum mechanics 1 : quantum state reduction , '' found .phys . * 44 * , 557 - 575 ( 2014 ) ; r. penrose , `` on the gravitization of quantum mechanics 2 : conformal cyclic cosmology , '' found .* 44 * , 873 - 890 ( 2014 ) . c. g. callan ,s. b. giddings , j. a. harvey , and a. strominger , evanescent black holes , " phys .d * 45 * , r1005 ( 1992 ) .a. fabbri and j. navarro - salas , modeling black hole evaporation ( imperial college press , london 2005 ) .j. collins , a. perez , d. sudarsky , l. urrutia , and h. vucetich , lorentz invariance in quantum gravity : a new fine tunning problem ? , " _ phys .lett . _ * 93 * , 191301 , ( 2004 ) ; c. rovelli and s. speziale , reconcile planck - scale discreteness and the lorentz - fitzgerald contraction , _ phys .* d 67 * , 064019 ( 2003 ) [ arxiv : gr - qc/0205108 ] ; f. dowker and r. sorkin , `` quantum gravity phenomenology , lorentz invariance and discreteness , '' [ arxiv : gr - qc/0311055 ] . | we study a proposal for the resolution of the black hole information puzzle within the context of modified versions of quantum theory involving spontaneous reduction of the quantum state . the theories of this kind , which were developed in order to address the so called _ measurement problem _ in quantum theory have , in the past , been framed in a non - relativistic setting and in that form they were previously applied to the black hole information problem . here , and for the first time , we show in a simple toy model , a treatment of the problem within a fully relativistic setting . we also discuss the issues that the present analysis leaves as open problems to be dealt with in future refinements of the present approach . |
many natural phenomena have shown self - similarity or fractal properties in diverse areas .it is well known that time series in neuroimaging such as electroencephalogram , magnetoencephalogram , and functional mri have fractal properties .most of fractal time series can be well modeled as long memory processes .a long memory process generally has scale - invariant properties in low frequencies ; for example , the scale - dependent wavelet variance has asymptotically power law relationship in low - frequency scales , which implies its fractal property .such a property also has been demonstrated by achard __ by exploiting the taylor expansion of wavelet variances ; regrettably , they did not provide the detailed proof . here , we show the detailed procedure of achard _a long memory process can be defined as follows as suggested by moulines __ ; that is , a real - valued discrete process is said to have long memory if it is covariance stationary and have the spectral density for where is a non - negative symmetric function and is bounded on .[ theorem : wv ] let is the wavelet coefficient of the time series at the -th scale and the -th location , and let be the squared gain function of the wavelet filter such that then , satisfies achard _ et al . _ made its taylor expansion to estimate the long memory parameter of as shown in the corollary [ corollary : awv ] .we give the detailed proof of the corollary [ corollary : awv ] .[ corollary : awv ] let .then , where it is well known we have the following taylor series from and , let us define four integrals : since if for then , from and the integrals - , rcl[eq : s(f)-int - sol ] ( j)&=&22^j+1_2/2^j+1 ^ 2/ 2^j ( s(f ) ) df + & = & 2^j+2(1-d ) + & = & 2^j+2(1-d ) + note that and thus , rcl[eq : s(f)-int - sol2 ] ( j)&=&2^j+2(1-d ) + & = & 2^j+2(1-d ) + where rcl[eq : s(f)-int - sol3 ] a_2&:= & ( + d/12 ) m_12 + & = & ( 2)^2 ( + ) . + then , we can have rcl[eq : s(f)-int - sol4 ] ( j)&=&2 ^ 2dj + & = & 2 ^ 2dj + where in the corollary , as the scale increases , the wavelet variance converges ; in other words , it implies that we can simply estimate the long memory parameter by linear regression method if we have the estimates of wavelet variances .we verified achard _ et al . _s proof on asymptotic properties of wavelet variances of a long memory process .unfortunately , achard _s taylor expansion of wavelet variance is not perfectly consistent with our result ; while in our result , they computed as . on the other hand ,the assumption on short memory such that loses generality of their proof since there exist more general classes of short memory ; indeed , moulines __ proved that the asymptotics of wavelet variances hold when the short memory belongs to the function set defined as the set of even non - negative functions on $ ] such that nevertheless , their method is relatively simpler than other mathematical proofs .moreover , we will be able to apply their method to investigate the asymptotic properties of multivariate long memory processes as achard _et al . _ already attempted for bivariate long memory processes .e. moulines , f. roueff , m. taqqu , _ on the spectral density of the wavelet coefficients of long - memory time series with application to the log - regression estimation of the memory parameter _ ,journal of time series analysis . 28 ( 2007 ) 155187 . | a long memory process has self - similarity or scale - invariant properties in low frequencies . 
we prove that the log of the scale - dependent wavelet variance for a long memory process is asymptotically proportional to the scale by using the taylor expansion of wavelet variances . keywords : long memory process , taylor expansion |
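as a practical illustration of the regression estimator discussed in the conclusion above , the following minimal numpy sketch computes haar wavelet variances at dyadic scales and estimates the memory parameter d from the slope of log2 of the variance against the scale index j ( under the corollary 's scaling , that slope equals 2d ) . the orthonormal haar convention and the block - wise estimator are illustrative choices , not taken from the paper :

    import numpy as np

    def haar_wavelet_variances(x, max_level):
        # crude estimates of the wavelet variance nu(j), j = 1..max_level, from
        # non-overlapping dyadic blocks of the series x (orthonormal haar dwt);
        # max_level should stay small compared with log2(len(x)) for usable estimates
        x = np.asarray(x, dtype=float)
        variances = []
        for j in range(1, max_level + 1):
            width = 2 ** j
            half = width // 2
            coeffs = []
            for k in range(len(x) // width):
                block = x[k * width:(k + 1) * width]
                coeffs.append((block[:half].sum() - block[half:].sum()) / np.sqrt(width))
            variances.append(np.var(coeffs))
        return np.array(variances)

    def estimate_memory_parameter(x, max_level=8):
        # regress log2(nu(j)) on the scale index j and halve the slope
        nu = haar_wavelet_variances(x, max_level)
        scales = np.arange(1, max_level + 1)
        slope, _intercept = np.polyfit(scales, np.log2(nu), 1)
        return slope / 2.0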
the rapid development of technology , the distance elimination through communication networks and competitiveness have influenced the field of education .tele - education is a new form of education which is the result of the above factors . in a few words , tele - education aims at education through distance , without the simultaneous presence of the trainers and trainees and the use of new technology such as networks , multimedia etc .it is a form of education which is adjusted to the needs of each trainee and where time and distance do nt matter . on the other side, there is an endless ability of expansion and there is no restriction in the number of the participants .there are many tele - education platforms either synchronous or asynchronous see for example , .nevertheless , there are a few platforms that can support the evaluation of trainees automatically .the s.a.t.e.p .application was designed as an autonomous platform for synchronous/ asynchronous tele - education through which the administrator just inserts the various files and questions and the platform produces the tests randomly .the application was created under the philisophy of open source software , which gives the potential to anyone having the basic knowledge of programming languages , to adjust the platform to his needs with no cost see also and .we make use of html , javascript , mysql , apache server , php and css .the html was used for the creation of the web site design .javascript was used mainly for the control of the elements existing in the forms and generally whenever there is a dynamic content for the local process of the data ( client - side ) .mysql was used for the design of database .mysql is a very powerful tool for the management of database , through which is given the potential to insert , modify and delete tables , data and relations among tables . through mysql server ,is ensured that only authorized users have access to it and that a number of users can work simultaneously on the same database .apache is an open source web server .it s highly adaptive with advanced attributes . through csswere created the menus of the web site , for the formatting of the various elements and also for their behavior in certain events . finally , php is a server side script language which is available for all operating systems . for the creation of the source code of the web sitewere also used open source programs such as cssed and tswebeditor .the s.a.t.e.p .application is used in the `` web programming language '' course of the m.sc .applied informatics course in kozani .in the s.a.t.e.p . application three types of users are supported .there are the guest users , who do nt have access to the system , the registered users who do have access to the platform and finally administrators .the guest users can only see the name of lecture files , apply for registration to the platform and finally communicate with the administrator . the system has been designed in a way such that not everyone can access the platform . only the administrators have the ability to allow the users to have access .when a user wants to have access to the system he fills his personal details in a form .the application makes use of a captcha textbox such that to deny access to bots that make automatic registrations .also there is a check for duplicates not only for the personal details but also for the e - mail . 
if all these informations are unique the administrator grants the user the ability to use the platform .also , at this point if a user wants to recover his password , he can complete his username and the new password will be sent to his e - mail address .in addition there is a second group of users , the registered users . in this groupthey can view and edit some of their account information , can read the lecture files , do the lecture tests in order to test their knowledge and communicate with other users or the trainer through a chat module .finally they can take exams that are defined by the admin . as shown in figure 3 , a registered user can edit his username , e - mail and password .the registration index , the name and the last name can not be edited by the user , but only from the administrator .one of the most important module of the application is the tests and exams procedure .all the questions appear in random order from a pool a large number of questions that are exist in the database .so if someone takes the first lecture test more than once , he might not even have one common question .furthermore , if two users take the same test simultaneously , either the questions do nt appear in the same order or they are nt the same .moreover , in multiple choice questions the answers appear in random order , with the first one always selected .the check of the answers is done automatically and the results are stored in the database . the registered user can communicate with the administrator via a contact form .the day / time , name and e - mail of the user are completed automatically by the system and all he has to do is to compose the message .the third and last group of users are the administrators .they can insert , modify and delete users , lectures , as well as all the types of questions such as gap filling and/or multiple choice questions .they can also communicate with other users with chat or email , define the date of the tests and exams , as well as their duration .after the date/ time duration is defined , the users are informed automatically via email .when the users decide to be members of the platform and apply for , they do nt immediately obtain access .the administrator must authenticate that they are really community `` members '' and then either grant access to them or delete them from the system minimizing the possibility of `` bots '' scripts . moreover , the administrator is able to view all the registered users and their personal details , but he can edit only some of them such as their register number , last name and name .in addition , he has the potential to search for registered users based on their register number , name , last name , username or e - mail by inserting a string of characters .the results are displayed based on the category chosen . in the `` delete users '' section , the users are displayed in order based on their register number and the administrator is able to delete them by choosing the corresponding checkbox .the administrator has also the ability to insert , delete and edit the lectures . due to the fact that the lectures that can be taken in each course vary , the administrator has the ability to add lectures adjusted to the needs of each course .the opposite procedure is the deletion of the lectures . during that ,not only the lecture files but also the questions related to the specific lecture are deleted .furthermore , the administrator is able to display , upload and delete files of each lecture ( figure 12 ). 
he can also select the lecture whose files need to be processed . in figure 13 the name , size and file type of the lecture files are shown . in the column `` action '' the administrator can selectwhich files he wishes to delete from a specific lecture . by pressing the`` submit '' button , the files that have been marked as `` delete '' will be erased from the database .another important module of the application that the administrator has access is the questions category .the questions in the platform are divided in multiple choice questions and gap filling questions . during the insertion of the questions according to the type of question the form of the fields changes .for instance , if the type of question that has been chosen by the administrator is a gap filling question , the result will be as shown in figure 14 .the administrator is asked to fill in both the question and the right answer , but also to choose in which lecture the specific question belongs . in case that the administrator chooses to insert a multiple choice question , the menu is adjusted as shown in figure 15 .the administrator is asked to choose in which lecture the question belongs and fill in the fields of the question , the right answer and at least of one wrong answer .the administrator can also delete one or more questions using a menu . during the view of gap - filling questions for deletion, the questions are ordered according to the lecture .the fields of the question , the answer and the lecture in which they belong are displayed , as well as the checkbox that is chosen for question deletion .the administrator can change the page size , he can also search for some question by entering a string in the textbox .the search is executed either there is a full search string or a part of the word we are looking for . the same procedure also applies to multiple choice and gap - filling questions as well . during processing the administrator can make changes to the questions and to the answers . administrator also has the ability to send massive e - mails to all registered users . finally , the administrator sets the dates and the duration of the tests / exams .the users are informed via e - mail for any changes automatically .mysql is the most well - known database management system .it is an open source tool and its major advantage is that it is constantly improved in regular time intervals . as a result mysql has evolved into a fast and extremely powerful database .it performs all functions like storing , sorting , searching and recall of data in an efficient way .moreover , it uses sql , the standard worldwide query language .finally , it is distributed free .the database which is used in the application has been designed to use the storage engine innodb , so that the restrictions of the foreign keys can be created .it consists of twelve tables , eight of which are the most important as they are connected to each other with relations .its complete diagram is as follows : the table register consists of : field am ( type : integer ) which is the primary key . the fields name , surname , username , password , email and department are varchar type . in this tablethe records which represent the users applications in order to access the system , are stored .the table admins consists of : the field idadm ( type : integer ) which is the primary key .the fields name , surname , username , password , email and department are varchar type . 
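before turning to the database design , it may help to sketch the test - generation and automatic - correction logic described earlier ( random , non - repeating selection from the question pool , shuffled multiple - choice answers , automatic grading ) . the snippet below is purely illustrative python with hypothetical field and function names ; the platform itself implements this logic in php against the mysql database :

    import random

    def build_test(question_pool, n_questions):
        # draw a random subset with no repeated questions and shuffle the choices
        # of each multiple-choice question (sketch: mutates the drawn questions in place)
        test = random.sample(question_pool, n_questions)
        for question in test:
            if question["type"] == "multiple":
                random.shuffle(question["choices"])
        return test

    def grade_test(test, submitted_answers):
        # automatic correction: compare each submitted answer with the stored
        # right answer and return the percentage score
        correct = sum(1 for question, answer in zip(test, submitted_answers)
                      if answer == question["right_answer"])
        return 100.0 * correct / len(test)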
in this table all the administrators of the system , are stored .the table misc consists of fields idmisc ( type : integer ) which is the primary key , time ( type : time , format : hh : mm : ss ) and date ( type : date , format : mmyyyydd ) . in this tableis stored the date and the time that the final and lecture tests will take place .the table history consists of fields id_gen ( type : integer ) which is the primary key , am ( type : integer ) , date ( type : date , format : mmyyyydd ) and percent ( type : float ) . in this tableare stored all the final tests that the user gave , with every date and percentage he achieved each time . in the table lectures are stored only the lectures of the lessons .it consists of idd ( type : integer ) which is the primary key and lecture ( type : varchar ) . in table lecture_filesare stored the file paths that are related with lectures .it consists of i d ( type : integer ) , idd ( type : integer ) , name ( type : varchar ) , type ( type : varchar ) and size ( type : integer ) . the primary key is i d whereas idd is the foreign key which defines in which lecture the file of the lesson is referred . in the filed name is stored the full path of the file . in the field typeis stored the type of the file whereas in the field size its size . in tablesmultiple_questions and filling_questions are stored multiple choice questions and filling questions respectively .the fields of multiple_questions table are ide ( type : integer ) , idd ( type : integer ) whereas question , ra , wa1 , wa2 , wa3 are varchar type .ide is the primary key whereas idd is a foreign key which defines in which lecture each multiple choice question , belongs .the field question contains the question , ra contains the right answer and in the rest three fields are the wrong answers .the fields of the table filling_questions are idf ( type : integer ) , idd ( type : integer ) , question ( type : varchar ) and answer ( type : varchar ) .idf is the primary key whereas idd is a foreign key which defines in which lecture each filling question , belongs . in the field questionsthe question is stored , whereas in the field answer the right answer .each of user_mult_test and user_fill_test tables consist of two fields , where both of these fields are keys .fields idum ( type : integer ) and iduf ( type : integer ) are the primary keys respectively .these two tables are intermediate stops for the creation of the final table user_complete_test that is shown below .their presence is necessary and the reason is , that for the various tests which the system provides the number of questions for each type is different . in the final test of the system for instance are chosen twenty multiple choice questions and ten filling questions . supposing we want to create a test with ten multiple choice and five gapfilling questions .the questions are grouped in the user_multr_test and user_field_test tables and each group is put as a foreign key in the user_complete_test_table . in the table users are stored the registered users of the system .it consists of the fields am ( type : integer ) which is the primary key and name , surname , username , password , email , department which are varchar .the fields are equally important with fields of the register table . 
in the final user_complete_test tableare stored all the tests that the user gave , either they refer to lecture tests or final tests .its fields are idum ( type : integer ) , iduf ( type : integer ) , am ( type : integer ) , date ( type : date , format : mmyyyydd ) and percent ( type : float ) .the table does nt have a primary key of its own , but three foreign keys ( idum , iduf , am ) are its primary . in the fieldam is stored the record number of the student to whom the test corresponds . in the fieldsidum and iduf are stored the multiple choice question and filling groups that have been chosen for the specific am test . in the field dateis stored the date that the test is held and in the field percent the percentage that the student achieved .in the present paper the s.a.t.e.p . :synchronous - asynchronous tele - education platform for educational purposes is developed and can be easily parametrized to meet the needs for any educational inst .the administrator can upload the files that the user intends to read , but also inserts the questions of each lecture .the trainee can read / download the files of each lesson and also test his knowledge on what he has studied through a series of questions / tests .the automatic mix of the exam questions and their automatic correction as well , makes the current assignment different from the rest of the content management systems ( cms ) . unlike cms, the current platform is ready to be used and does nt need the addition of extra plugin .html , xhtml , and css : complete , gary b. shelly and denise m. woods , 2010 .php solutions : dynamic web design made easy , david powers , 2010 .high performance mysql : optimization , backups , and replication , baron schwartz & peter zaitsev & vadim tkachenko , 2012 .apache : the definitive guide 3rd edition , ben laurie and peter laurie , oreilly , 2003 .e - airquality : a dynamic web based application for evaluating the air quality index for the city of kozani , hellas , 15th panhellenic conference on informatics ( pci ) , sept .30 2011-oct . 2 2011 , pp .171 - 174 , 2011 .a.q.m.e.i.s . : air quality meteorological and enviromental information system in western macedonia , hellas , arxiv:1406.0975 , 2014 .tele - learning : the challenge for the third millennium ( ifip advances in information and communication technology ) , edited by don passey and mike kendal , 2011 . international perspectives on tele - education and virtual learning environments , edited by graham orange and dave hobs , 2010 . | s.a.t.e.p . : synchronous - asynchronous tele - education platform is a software application for educational purposes , with a lot of parametrizing ( configuration ) features written entirely from scratch . it aims at the training and examination of computer skills , a platform that can be adjusted to the needs of each lesson . in the application the trainer and the administrator can define the number of the lectures and upload files for each one of them . furthermore , he can insert , modify and delete questions which are used for evaluation tests but also for the trainees examinations . the trainee can read / download the files of each lesson and also test his knowledge on what he has studied through a series of questions / tests . a chat module where registered users as well as system administrator can discuss and solve questions is also developed . web based application ; mysql ; php ; open source software;tele - education ; |
thanks to the development of gravitational wave astronomy , the possibility of studying and understanding new features of the universe is becoming a reality . in this regard, there is an ongoing effort to learn about all the physical aspects of one of the main sources of gravitational waves ( gw ) for the future space - based laser interferometer space antenna ( lisa) : extreme - mass - ratio inspirals ( emris ) .emris are formed when a massive black hole ( mbh ) located at a galactic centre , with masses in the range , captures a stellar - mass compact object ( sco ) , with masses in the range ( white dwarf , neutron star or a stellar black hole ) .the sco inspirals towards the mbh following a long series of highly eccentric orbits that shrink due to the loss of energy and angular momentum through the emission of gws .this inspiral is produced by _effects related to the action of the sco s gravitational field onto its own trajectory . in order to model emri systems and their gw emission in a way that we can produce useful gw templates for analyzing the data stream produced by lisa observations we need to consider the details of the gravitational backreaction responsible of the inspiral .this is a very challenging task for theorists .nevertheless , the problem can be simplified due to the extreme mass ratios of these systems , in the range .this allows us to describe them in the framework of bh perturbation where the sco is modeled as an accelerated point - like mass in the mbh background and the gravitational backreaction is pictured as the action of a local force , the _ self - force_. for the purposes of our work , we can further simplify the problem by studying an analogous emri system which consist on a scalar charged point particle , , orbiting around a schwarzschild mbh ( see , e.g. ) that fixes the spacetime geometry . in this simplified model , the inspiral proceeds due to the emission of a scalar field , , generated by the sco motion and which affects the sco trajectory through the action of a self - force given by the gradient of the scalar field : where and denote the sco s trajectory and unit four - velocity respectively .we use this simplified model as a test bed to illustrate the techniques that we have developed for self - force computations . in the case of non - rotating mbh ,the spherical symmetry of the bh spacetime provides additional simplifications of the problem .in particular , the scalar field can be decomposed into scalar spherical harmonics , , which are decoupled between them . on the other hand, the point - like description of the sco produces divergences in the retarded scalar field and hence we need to regularize it . in order to do so, we use the _ mode sum _ regularization scheme , which provides an analytical expression for the singular contribution of the retarded field , , at the particle location . in this way , by subtracting it from the full retarded field , , we obtain a smooth and differentiable field at the particle location , . then , the meaningful expression for the self - force is : thus , what we need is a ( numerical ) technique to compute the full retarded field , , and to use the mode sum scheme to obtain the regularized self - force . 
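written out , the regularized force and the structure of the mode - sum scheme just described take , schematically , the standard barack - ori form ( only the leading regularization parameters are shown ; the higher - order ones merely accelerate the convergence of the sum ) :

F^{\rm self}_{\alpha} = q\,\nabla_{\alpha}\Phi^{R}\big|_{z(\tau)} = \sum_{\ell=0}^{\infty}\left[\, F^{\ell\,{\rm ret},\pm}_{\alpha} \mp \left(\ell+\tfrac{1}{2}\right) A_{\alpha} - B_{\alpha} \,\right] ,

where F^{\ell\,{\rm ret},\pm}_{\alpha} is the \ell - mode contribution of the full retarded field to the force , evaluated at the particle from outside ( + ) or inside ( - ) its radial position , and A_{\alpha} , B_{\alpha} are the analytically known regularization parameters that encode the singular field \Phi^{S} ; this is quoted as the generic form of the scheme , not as the specific equations of this paper .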
in what follows we report on work we have carried out recently on the development of a new accurate and efficient technique for self - force computations of eccentric emris .this work is the extension of a previous one , where these techniques were introduced for the case of circular emris .our computational scheme consists in dividing the computational domain into a number of subdomains in such a way that the sco ( the particle ) is always located at the interface between two subdomains .this has two main advantages : ( i ) we avoid introducing a spatial scale to resolve the point particle .( ii ) the equations for the scalar field , which nominally have singular source terms , become homogeneous equations at each subdomain .these equations , assuming that appropriate initial data is prescribed , lead to smooth solutions .this fact translates into good convergence properties of the numerical method used to implement this formulation . because of these properties we call this formalism the _ particle - without - particle _ ( pwp ) formulation .we evolve the individual wave - type equations at each subdomain by using time domain methods , which perform well for eccentric orbits .this technique was already implemented for the case of circular orbits and has also been presented at the previous lisa symposium .we have recently extended the method to make computations also in the case of eccentric orbits . in what follows we summarize the main results of this work .we start by describing the multidomain structure of our pwp formulation ( see figure [ multidomain ] ) .once we have expanded the scalar field in spherical harmonics , each mode satisfies an independent ( not coupled to the other modes ) 1 + 1 wave type equation of the regge - wheeler type ( with the potential associated to a scalar field ) .then , the spatial domain is one - dimensional : ] .each of these regions can be in turn divided into more subdomains ( see figure [ multidomain ] ) : , where ] .we can obtain expressions for the jumps by inserting ( [ globalsolution ] ) into the field equations ( see and ) .
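as a schematic illustration of the pwp idea ( in a generic notation that need not match the authors' exact expressions ) , the global solution of each mode can be written as two smooth pieces matched at the particle location , with the delta - function source of the wave equation translated into jump conditions :

```latex
% global solution built from smooth solutions of the homogeneous equation on
% each side of the particle, which sits at the subdomain interface r = r_p(t)
\psi^{\ell m}(t,r) \;=\; \psi^{\ell m}_{-}(t,r)\,\Theta\!\bigl(r_p(t)-r\bigr)
                   \;+\; \psi^{\ell m}_{+}(t,r)\,\Theta\!\bigl(r-r_p(t)\bigr)\,,
% the singular source only enters through the jumps of the field and its
% derivatives across the particle location, e.g.
[\psi^{\ell m}](t) \;=\; \psi^{\ell m}_{+}\bigl(t,r_p(t)\bigr)
                   \;-\; \psi^{\ell m}_{-}\bigl(t,r_p(t)\bigr)\,,
% with analogous jumps for \partial_t\psi^{\ell m} and \partial_r\psi^{\ell m}
% fixed by the source term of the wave equation.
```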
using these junction conditions we communicate the solutions of the homogeneous equations at each subdomain .the numerical method that we use to solve our equations is the pseudospectral collocation ( psc ) method .we discretize each subdomain by using a _ lobatto - chebyshev _ grid .then , in the psc method the variables have a _ spectral _ representation in terms of an expansion in chebyshev polynomials , ( , ) , and a _ physical _ representation in terms of the values of the variables at the collocation points : where is the mapping between the spectral and physical domains ( the time dependence appears only in the eccentric case and is the way in which we keep the particle at the interface between two subdomains ) , are the spectral coefficients , and are cardinal functions associated to the _ lobatto - chebyshev _ grid .one can change from one representation to the other by means either of matrices or fast fourier transforms ( as we do in our implementation ) .an important feature of the psc method is that it provides exponential convergence for smooth functions , which is the case of our solutions after applying the pwp formulation .this is illustrated in figure [ convergencewithparticle ] , where we show some convergence plots .( caption of figure [ convergencewithparticle ] : convergence of the variable for the harmonic modes ( left and right columns ) and the orbital parameters ( top and bottom rows ) , where denotes the eccentricity and the _ semilatus rectum _ ; we observe spectral convergence ( straight line ) until we reach machine roundoff error ( plateau ) ; the data has been obtained from the subdomain at the right of the particle . ) once we have performed the spatial discretization using the psc method , we obtain a set of ordinary differential equations that can be evolved using the method of lines .the particular implementation that we use involves a runge - kutta 4 solver .the evolution must include the communication between subdomains that we have discussed above .we use two different numerical techniques to communicate the subdomains : ( i ) the _ penalty method _ , where the system is driven dynamically to satisfy the junction conditions , and ( ii ) the _ direct communication of characteristic fields _ , where the junction conditions are imposed by communicating the characteristic fields of the first - order system of partial differential equations . to illustrate the ability of our method to resolve the field and its derivatives around the particle we show some snapshots of the evolution in figure [ scalar_variables ] .until now we have introduced the foundations of our method and the particular numerical techniques that we use in order to implement it . we have shown that the pwp formulation leads to smooth solutions at each computational subdomain , which in turn exploits the exponential convergence of the psc method as shown in figure [ convergencewithparticle ] .we have also shown how well this method can resolve the field and its derivatives around the particle location ( see figure [ scalar_variables ] ) .in particular , we can see how the jumps in the time and radial derivatives of the scalar field modes are well resolved ( see the inset plots in figure [ scalar_variables ] where we zoom in the area near the particle ) .finally , we have computed the values of the regularized components of the gradient of the scalar field , , which contain information equivalent to the self - force ( see eq . ( [ self_force ] ) ) , which is the projection orthogonal to the particle four - velocity of . in figure [ self_force_evolution ] we show the evolution of the components of the regularized field obtained by truncating the sum over harmonic modes in such a way that we only include modes with .this figure shows the evolution of the regularized field for two different orbits , which in terms of the eccentricity and semilatus rectum are : ( i ) and ( ii ) . in order to quote some numerical results , we have computed the values of the self - force components using the method of the communication of the characteristic fields described in sec . [ pwpformulation ] .we obtain : , and for the eccentric orbit ( i ) and , and for the eccentric orbit ( ii ) .all our calculations have used between 8 and 39 subdomains and 50 collocation points per domain .the average time for a full self - force calculation ( which involves the calculation of 231 harmonic modes ) in a computer with two quad - core intel xeon processors at 2.8 ghz is always in the range 20 - 30 minutes .these calculations can be further optimized by distributing the subdomains and collocation points so that the resolution is adapted further to our physical problem , and this is the subject of ongoing work .the calculations can be easily parallelized , either by spreading the computational calculations for individual modes or for individual subdomains .
in the last case the parallelization is not trivial as we need to pass the relevant information for subdomain communication .looking to the future , we are currently working on extending these techniques to the gravitational case , where the challenge comes from the fact that each harmonic mode is described by a set of coupled 1 + 1 wave type equations , but where the pwp formulation can be used as it has been described here .pcm is supported by a predoctoral fpu fellowship of the spanish ministry of science and innovation ( micinn ) .cfs acknowledges support from the ramón y cajal programme of the ministry of education and science of spain and by a marie curie international reintegration grant ( mirg - ct-2007 - 205005/phy ) within the 7th european community framework programme . financial support from the contracts esp2007 - 61712 ( mec ) and fis2008 - 06078-c03 - 01 ( ministry of science and innovation of spain ) is gratefully acknowledged .the authors have used the resources of the centre de supercomputació de catalunya ( cesca ) and the centro de supercomputación de galicia ( cesga ; project number icts-2009 - 40 ) . | the gravitational - wave signals emitted by extreme - mass - ratio inspirals will be hidden in the instrumental lisa noise and the foreground noise produced by galactic binaries in the lisa band . then , we need accurate gravitational - wave templates to extract these signals from the noise and obtain the relevant physical parameters . this means that in the modeling of these systems we have to take into account how the orbit of the stellar - mass compact object is modified by the action of its own gravitational field . this effect can be described as the action of a local force , the _ self - force _ . we present a time - domain technique to compute the self - force for geodesic eccentric orbits around a non - rotating massive black hole . to illustrate the method we have applied it to a testbed model consisting of a scalar charged particle orbiting a non - dynamical black hole . a key feature of our method is that it does not introduce a small scale associated with the stellar - mass compact object . this is achieved by using a multidomain framework where the particle is located at the interface between two subdomains . in this way , we just have to evolve homogeneous wave - like equations with smooth solutions that have to be communicated across the subdomain boundaries using appropriate junction conditions . the numerical technique that we use to implement this scheme is the pseudospectral collocation method . we show the suitability of this technique for the modeling of extreme - mass - ratio inspirals and show that it can provide accurate results for the self - force . |
ieee 802.11a / b / g / n based wireless local area networks ( lans ) in `` infrastructure mode '' are very common in many places . in this paper, we are concerned with an analytical model for evaluating the performance of tcp - controlled downloads in a wlan .`` tcp '' is the transmission control protocol , which is regarded as the workhorse of the internet ; numerous applications , including web browsing , file transfer , and secure e - commerce , rely on tcp as the transport protocol .the system we consider is shown in figure [ fig : ap_sta ] .a detailed analysis of the aggregate throughput of tcp - controlled file downloads for a `` single rate '' access point ( ap ) is given in kuriakose et al . ; in this , all stas are assumed to be associated with the ap at the same rate . in practice , many data rates are possible and hence considering multiple rates is important .the aggregate download throughput is evaluated for the _ two rates _ case in krusheel and kuri . in this paper, we consider an arbitrary but finite number of possible rates of association between stations ( sta ) and a single ap .we are motivated to study an analytical model because of the improved understanding that it leads to , and the useful insights that it can provide .closed - form expressions or numerical calculation procedures are helpful because other features and capabilities can be built upon them .one possible application , which we are studying now , is to utilize the results reported here in devising a better ap - sta association policy .our approach is to model the number of stas with tcp acknowledgements ( acks ) in their medium access control ( mac ) queues as an embedded discrete time markov chain ( dtmc ) , embedded at the instants of successful transmission events .we consider a successful transmission from the ap as a reward .this leads to viewing the aggregate tcp throughput in the framework of renewal reward theory given in kumar .almost the entire existing literature considers a _ single _ rate of association only .this is rather limiting , because in practice , it is extremely likely that a wlan will have stas associated at a number of rates allowed by the technology ( for example , one of 6 mbps , 12 mbps , 18 mbps , 24 mbps , 30 mbps , 36 mbps , 48 mbps or 54 mbps in 802.11 g , and one of 1 mbps , 2 mbps , 5.5 mbps or 11 mbps in 802.11b ) . a first step towards considering multiple association rateswas taken in krusheel and kuri , but there , only 2 possible association rates were considered . in this paper , _any _ number of association rates is allowed . because of this , our model is applicable to any variant of wlan technology , for example : 802.11a / b / g / n .the contributions of this paper are as follows .we present a model for analyzing the performance of tcp - controlled file transfers with rates of association .this generalizes earlier work .secondly , our model incorporates tcp - specific aspects like `` delayed acks ; '' this is a technique to reduce the frequency of tcp ack generation by a tcp receiver . in most implementations ,a tcp receiver generates a tcp ack for every tcp packet received ; our model is general , and considers that one tcp ack is generated for every tcp packets .our analytical results are in excellent agreement with simulations , with the discrepancy being less than % in all cases .the paper is organized as follows : in section [ sec : related_work ] , related works are discussed . 
in section[ sec : system_model ] , we state the assumptions in first part and then present our analysis . in section [ sec : evaluation ] , we present performance evaluation results . in section [ sec : discussion ]we discuss the results .finally , the paper is concluded in section [ sec : conclusion ] .the literature on throughput modelling in a wlan can be classified into several groups depending on the approach . in the first group , all wlan entities ( stas and the ap )are assumed to be _ saturated _ , _ i.e. _ , each entity is backlogged permanently .bianchi , kumar et al . and cali et al . consider this saturated traffic model .however , our interest is in modelling aggregate _throughput , and the saturated traffic model does not capture the situation well . to see why unsaturated traffic makes a difference , we consider figure [ fig : unsaturated ] .the left part shows a saturated traffic scenario , where all wlan entities have packets to transmit ; therefore , entities contend for the channel .the right part shows the situation with tcp in the picture .essentially , for many tcp connections , the entire window of packets sits at the ap , leaving the corresponding stas with nothing to send .this means that the number of contending wlan entities is much smaller as mentioned in kuriakose et al .this indicates why approaches relying on a model with saturated nodes are inadequate .the second group considers tcp traffic .kuriakose et al . propose a model for tcp - controlled file downloads in a _ single rate _ wlan ; _ i.e. , _ one in which all stas are associated with a single ap at the _ same _ rate .bruno et al . , , , , and vendictis et al . generalize this and analyze tcp - controlled file uploads as well as downloads ; however , it is assumed again that all stas are associated at the same rate .similarly , yu et al . provide an analysis for a given number of stas and a maximum tcp receive window size by using the well - known -persistent model proposed in cali et al .as noted above , these papers analyze tcp - controlled file transfers ( in some cases udp traffic is allowed as well ) but limit themselves to a single rate of association . in the third category ,bharadwaj et al . consider _ finite _ ap buffers , in contrast to the previous two , where ap buffers were assumed to be infinite ; however , the single rate assumption is retained .the three groups mentioned above focus on _ long _ file transfers , where the tcp sender is assumed to have a file that is infinite in size .miorandi et al . model a different situation motivated by web browsing over a wlan . a queuing modelis proposed to compute the mean session delay for short - lived tcp flows .the impact of maximum tcp congestion window size on this delay is studied as well . even though a fair amount of work modelling tcp - controlled transfers has been done , we are unaware of any work that allows _ multiple _ ap - sta association rates . clearly , this is the situation observed most often in practice , where the distance between the ap and a sta governs the rate of association . 
in this paper , we consider an arbitrary ( but finite ) number of rates of association between stas and the ap ; to the best of our knowledge , this is the first paper to consider this general model .we consider stations associated with an ap as shown in figure [ fig : ap_sta ] .all the nodes contend for the channel using the dcf mechanism as given in ieee 802.11a / b / g / n .the stations are associated with the ap at different physical rates ( stas at rate , stas at rate , stas at rate ) .we assume that there are no link errors .this is not merely a simplifying assumption ; the `` auto rate fallback '' mechanism , implemented widely in stas and aps , is intended to ensure that we have an _ error - free but lower rate _channel rather than a higher rate but error - prone channel .thus , our assumption of no link errors is consistent with the usual mode of wlan operation .packets in the medium are lost only due to collisions .each station has a single tcp connection to download long files from the server and all tcp connections have equal window sizes .the ap transmits tcp packets for the stations and the stations return tcp - ack packets .further , we assume that the ap uses the rts - cts mechanism while sending packets to stations and stations use basic access to send ack packets ( rts : request to send , cts : clear to send ; these are control packets that reserve the wireless medium for the subsequent long data packet ) . in ieee802.11 wlans , the parameter determines whether the rts - cts exchange will precede a packet transmission . in most operational wlans ,the rts threshold is set such that tcp data packets are larger than the rts threshold , and hence sent after rts - cts exchange , while tcp ack packets are smaller than the rts threshold , and hence sent without rts - cts exchange ( the latter is referred to as `` basic access '' ) . upon reception of data packets, a sta generates an ack packet and it is enqueued at the mac layer for transmission .we assume that all nodes have sufficiently large buffers , so that packets are not lost due to buffer overflow .also , tcp timeouts do not occur .tcp start - up transients are ignored by considering all connections to be in congestion avoidance . 
for long file transfers ( which we are considering in this paper ) , this is a reasonable assumption because the initial start - up phase of a tcp connection lasts for a time that is completely negligible compared to the connection lifetime ; the tcp connection moves quickly to the congestion avoidance phase and remains there .the value of rtt is very small , since files are downloaded from a server located on the lan as shown in figure [ fig : ap_sta ] .thus , several tcp connections exist simultaneously and all stas with tcp ack packets , and the ap ( which is full of tcp data packets for the stas ) , contend for the channel .since no preference is given to the ap , and it has to serve all stas , the ap becomes a bottleneck , and it is modelled as being backlogged permanently .the aggregate throughput of the ap is shared equally among all stations .let be the number of stations associated with the ap at the physical transmission rate , where with .given that the ap wins the channel , the conditional probability that it sends a tcp data packet to a station at rate is .we assume that is large .our results will show that or , ( with ) suffices for the analysis to be applicable .figure [ fig : channel_activity ] shows a possible sample path of the events on the wlan channel .the random epochs indicate the end of the successful transmission from either the ap or one of the stations .we observe that most stas have empty mac queues , because , in order for many stas to have tcp - ack packets , the ap must have had a long run of successes and this is unlikely because no special preference is given to the ap .so when the ap succeeds in transmitting , the packet is likely to be for a sta with an empty mac queue . at epoch , let be the number of stations at rate , ready with an ack .let be the number of nonempty stas .if there are nonempty stas and a nonempty ap , each nonempty wlan entity attempts to transmit with probability as in kumar et al . , where is the attempt probability with _ saturated _ entities .it can be seen that evolves as a discrete time markov chain ( dtmc ) over the epochs .this allows us to consider as a markov renewal sequence , and as a semi - markov process .stations associated with the ap at different data rates . ]we have a multidimensional dtmc which is shown in figure [ fig : markovchain ] ; transition probabilities are indicated as well .a sta generates a tcp ack after receiving tcp packets ; this is incorporated in our model in the following way . when the dtmc state is , there are backlogged wlan entities ( including the ap ) ; so , the probability that a station at rate wins the channel is , since each entity is equally likely to win the contention .further , given that a sta at rate wins the channel , the conditional probability that it will generate a tcp - ack is . by inspection, we can say that the dtmc is irreducible ; further , the detailed balanced equation holds for a properly chosen set of equilibrium probabilities . the detailed balance equation ( dbe )is here , is the stationary distribution of the dtmc . from the set of equations given in and the stationary distribution is to obtain the throughput, we use markov regenerative analysis , culminating in the renewal reward theorem given in wolff , kumar . for a given state , successive entries into state form renewal epochs . 
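before deriving the mean cycle lengths , a minimal numerical sketch of this renewal - reward computation may be helpful ; the transition matrix , per - state cycle lengths and ap success probabilities below are hypothetical placeholders ( the exact expressions involve the attempt probability and the 802.11 timing parameters discussed later ) , so this only illustrates the structure of the calculation , not the authors' implementation :

```python
import numpy as np

def aggregate_tcp_throughput(P, mean_cycle_len, ap_success_prob, payload_bits):
    """Renewal-reward sketch: long-run throughput = E[reward per cycle] / E[cycle length].

    P               -- transition matrix of the embedded DTMC (hypothetical),
    mean_cycle_len  -- mean sojourn time in each state (seconds),
    ap_success_prob -- probability that the AP wins the channel in each state,
    payload_bits    -- TCP data payload carried by one successful AP transmission.
    """
    n = P.shape[0]
    # stationary distribution: solve pi P = pi together with the normalization sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    mean_reward = np.dot(pi, ap_success_prob) * payload_bits   # bits earned per cycle
    mean_length = np.dot(pi, mean_cycle_len)                   # seconds per cycle
    return mean_reward / mean_length                           # bits per second
```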
to obtain the mean time between successive entries ( the mean renewal cycle lengths ) , we obtain the mean sojourn time in the state .let be the sojourn time in a state .conditioning on various events ( idle slot , collision or successful transmission ) that can happen in the next time slot , the following expression for the mean cycle length can be written down : in the above expression , is the probability of the slot being idle , is the probability that the ap wins the contention and transmits the data packet at rate and is the probability that a sta associated at rate wins the channel ( `` s '' in the suffix stands for `` success '' ) .correspondingly , the conditional expected sojourn times in state , given the events are , respectively , , and .detailed expressions for these quantities are provided in the appendix .the and terms on the right side of correspond to collision events .the third term in arises when the ap transmits a tcp data packet to a station at rate and some other stations are involved in a collision ; in other words , the third term captures the situations in which the ap is involved in a collision .the fifth term in captures collision events in which the ap is _ not _ involved ; we have a sta transmitting a tcp ack packet to the ap at rate and one or more other stas transmitting simultaneously .the various probabilities have been obtained by using the attempt probability , when there are contending nodes . from equation we have . we are interested in finding the long run time average of successful transmissions from the ap .we obtain this by applying the renewal reward theorem of wolff . to get the mean renewal cycle length , we can use the mean sojourn time given in equation and use theorem 5.3 in .the mean reward in a cycle can be obtained as follows .a reward of 1 is earned when the ap transmits a tcp data packet successfully by winning the channel .the probability of the ap winning the channel is .similarly , a reward of 0 is earned with probability .therefore , the expected reward is . putting all this together , the aggregate tcp throughput can be calculated as . to verify the accuracy of the model , we performed experiments using the qualnet 4.5 network simulator .we considered 802.11b physical data rates : 1mbps , 2mbps , 5.5mbps and 11mbps ; higher rates correspond to smaller distances between the stas and the ap . in table [ table : aps_stas ] , results are given for a few cases of this multirate scenario .for example , the first row considers a total of 10 stas associated with the ap , out of which 2 stas are associated at 11 mbps and 2 mbps respectively , while 3 stas are associated at 5.5 mbps and 1 mbps , respectively .the values of in equation are calculated by using the number of stas associated with the ap .for example , in the first row in table [ table : aps_stas ] , ( for 11 mbps ) is . | we consider several wlan stations associated at rates , , ... , with an access point . each station ( sta ) is downloading a long file from a local server , located on the lan to which the access point ( ap ) is attached , using tcp . we assume that a tcp ack will be produced after the reception of packets at an sta . we model these simultaneous tcp - controlled transfers using a semi - markov process . our analytical approach leads to a procedure to compute aggregate download , as well as per - sta throughputs , numerically , and the results match simulations very well . wlan , association , access points , infrastructure mode . |
over the past ten years , the study of phase transition phenomena has been one of the most exciting areas in computer science and artificial intelligence .numerous studies have established that for many np - complete problems ( e.g. , sat and csp ) , the hardest random instances occur , while a control parameter is varied accordingly , between an under - constrained region where all instances are almost surely satisfiable and an over - constrained region where all instances are almost surely unsatisfiable . in the transition region , there is a threshold where half the instances are satisfiable . generating hard instances is important both for understanding the complexity of the problems and for providing challenging benchmarks .another remarkable progress in artificial intelligence has been the development of incomplete algorithms for various kinds of problems . and, since this progress , one important issue has been to produce hard satisfiable instances in order to evaluate the efficiency of such algorithms , as the approach that involves exploiting a complete algorithm in order to keep random satisfiable instances generated at the threshold can only be used for instances of limited size . also , it has been shown that generating hard ( forced ) satisfiable instances is related to some open problems in cryptography such as computing a one - way function . in this paper, we mainly focus on random csp ( constraint satisfaction problem ) instances .initially , four `` standard '' models , denoted a , b , c and d , have been introduced to generate random binary csp instances .however , have identified a shortcoming of all these models .indeed , they prove that random instances generated using these models suffer from ( trivial ) unsatisfiability as the number of variables increases . to overcome the deficiency of these standard models ,several alternatives have been proposed . on the one hand , have proposed a model e and a generalized model .however , model e does not permit to tune the density of the instances and the generalized model requires an awkward exploitation of probability distributions .also , other alternatives correspond to incorporating some `` structure '' in the generated random instances .roughly speaking , it involves ensuring that the generated instances be arc consistent or path consistent .the main drawback of all these approaches is that generating random instances is no more quite a natural and easy task . on the other hand , , and have revisited standard models by controlling the way parameters change as the problem size increases .the alternative model d scheme of guarantees the occurrence of a phase transition when some parameters are controlled and when the constraint tightness is within a certain range .the two revised models , called rb and rd , of provide the same guarantee by varying one of two control parameters around a critical value that , in addition , can be computed . also , identify a range of suitable parameter settings in order to exhibit a non - trivial threshold of satisfiability .their theoretical results apply to binary instances taken from model a and to `` symmetric '' binary instances from a so - called model b which , not corresponding to the standard one , associates the same relation with every constraint . the models rb and rd present several nice features : * it is quite easy to generate random instances of any arity as no particular structure has to be integrated , or property enforced , in such instances . 
* the existence of an asymptotic phase transition can be guaranteed while applying a limited restriction on domain size and on constraint tightness . for instances involving constraints of arity , the domain size is required to be greater than the k - th root of the number of variables and the ( threshold value of the ) constraint tightness is required to be at most . * when the asymptotic phase transition exists , a threshold point can be precisely located , and all instances generated following models rb and rd have the guarantee to be hard at the threshold , i.e. , to have an exponential tree - resolution complexity . * it is possible to generate forced satisfiable instances whose hardness is similar to unforced satisfiable ones .this paper is organized as follows . after introducing models rb and rd , as well as some theoretical results ( section [ sec : the ] ) , we provide a formal analysis about generating both forced and unforced hard satisfiable instances ( section [ sec : generating ] ) .then , we present the results of a large series of experiments that we have conducted ( section [ sec : experimental ] ) , and , before concluding , we discuss some related work ( section [ sec : related ] ) .a constraint network consists of a finite set of variables such that each variable has an associated domain denoting the set of values allowed for , and a finite set of constraints such that each constraint has an associated relation denoting the set of tuples allowed for the variables involved in .a solution is an assignment of values to all the variables such that all the constraints are satisfied .a constraint network is said to be satisfiable ( sat , for short ) if it admits at least one solution .the constraint satisfaction problem ( csp ) , whose task is to determine if a given constraint network , also called csp instance , is satisfiable , is np - complete . in this section , we introduce some theoretical results taken from .first , we introduce a model , denoted rb , that represents an alternative to model b. note that , unlike model b , model rb allows selecting constraints with repetition .but the main difference of model rb with respect to model b is that the domain size of each variable grows polynomially with the number of variables .a class of random csp instances of model rb is denoted rb(,,,, ) where : * denotes the arity of each constraint , * denotes the number of variables , * determines the domain size of each variable , * determines the number of constraints , * denotes the tightness of each constraint . to build one instance rb(,,,, ) , we select with repetition constraints , each one formed by selecting distinct variables and distinct unallowed tuples ( as denotes a proportion ) . when fixed , and give an indication about the growth of the domain sizes and of the number of constraints as increases since and , respectively .it is then possible , for example , to determine the critical value of where the hardest instances must occur .indeed , we have which is equivalent to the expression of given by .another model , denoted model rd , is similar to model rb except that denotes a probability instead of a proportion . for convenience , in this paper , we will exclusively refer to model rb although all given results hold for both models .
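to make the construction concrete , the following sketch generates one instance ; the scalings d = n**alpha for the domain size and m = r * n * ln ( n ) for the number of constraints are the ones usually quoted for model rb and are assumed here , so the parameter names and the code itself should be read as a plausible reconstruction rather than the authors' generator :

```python
import math
import random
from itertools import product

def generate_rb_instance(k, n, alpha, r, p, seed=None):
    """Sample one random CSP instance from model RB (sketch).

    Assumed scalings: domain size d = n**alpha, number of constraints
    m = round(r * n * ln n); each constraint picks k distinct variables
    and forbids round(p * d**k) distinct tuples, constraints drawn with repetition.
    """
    rng = random.Random(seed)
    d = int(round(n ** alpha))            # common domain {0, ..., d-1}
    m = int(round(r * n * math.log(n)))   # number of constraints
    constraints = []
    for _ in range(m):
        scope = tuple(rng.sample(range(n), k))          # k distinct variables
        all_tuples = list(product(range(d), repeat=k))  # d**k candidate tuples
        nogoods = set(rng.sample(all_tuples, int(round(p * d ** k))))
        constraints.append((scope, nogoods))
    return d, constraints

def satisfies(assignment, constraints):
    """A solution is an assignment that avoids every forbidden tuple."""
    return all(tuple(assignment[v] for v in scope) not in nogoods
               for scope, nogoods in constraints)
```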
in , it is proved that model rb , under certain conditions , not only avoids trivial asymptotic behaviors but also guarantees exact phase transitions .more precisely , with pr denoting a probability distribution , the following theorems hold .[ the:1 ] if , and are constants then = \left \ { \begin{array}{cc } 1 & if~ r < r_{cr } \\ 0 & if~ r > r_{cr } \end{array } \right.\end{aligned}\ ] ] where .[ the:2 ] if , and are constants then = \left \ { \begin{array}{cc } 1 & if~ p < p_{cr } \\ 0 & if~ p > p_{cr } \end{array } \right.\end{aligned}\ ] ] where .remark that the condition is equivalent to given in .theorems [ the:1 ] and [ the:2 ] indicate that a phase transition is guaranteed provided that the domain size is not too small and the constraint tightness or the threshold value of the constraint tightness not too large . as an illustration , for instances involving binary ( resp .ternary ) constraints , the domain size is required to be greater than the square ( resp .cubic ) root of the number of variables and the constraint tightness or threshold value of the tightness is required to be at most ( resp . ) .the following theorem establishes that unsatisfiable instances of model rb almost surely have the guarantee to be hard . a similar result for modela has been obtained by with respect to binary instances .[ the:3 ] if rb(,,,, ) and , , and are constants , then , almost surely as the number of variables tends to infinity . ], has no tree - like resolution of length less than .the proof , which is based on a strategy following some results of , is omitted but can be found in . to summarize , model rb guarantees exact phase transitions and hard instances at the threshold .it then contradicts the statement of about the requirement of an extremely low tightness for all existing random models in order to have non - trivial threshold behaviors and guaranteed hard instances at the threshold .for csp and sat , there is a natural strategy to generate _ forced _satisfiable instances , i.e. , instances on which a solution is imposed .it suffices to proceed as follows : first generate a random ( total ) assignment and then generate a random instance with variables and constraints ( clauses for sat ) such that any constraint violating is rejected . is then a _ forced _ solution .this strategy , quite simple and easy to implement , allows generating hard forced satisfiable instances of model rb provided that theorem or holds .nevertheless , this statement deserves a theoretical analysis .assuming that denotes the domain size ( = for sat ) , we have exactly possible ( total ) assignments , denoted by , and possible assignment pairs where an _ assignment pair _ is an ordered pair of two assignments and .we say that satisfies an instance if and only if both and satisfy the instance .then , the expected ( mean ) number of solutions ] denotes the probability that satisfies a random instance .note that ] and ] is exponentially greater than ] is asymptotically equal to below the threshold , where almost all instances are satisfiable , i.e. /e^{2}[n]\approx 1 ] . 
in other words , when using model rb , the strategy has almost no effect on the number of solutions and does not lead to a biased sampling of instances with many solutions .in addition to the analysis above , we can also study the influence of the strategy on the distribution of solutions with respect to the forced solution .we first define the distance between two assignments and as the proportion of variables that have been assigned a different value in and .we have .for forced satisfiable instances of model rb , with ] , for or , is asymptotically maximized when takes the largest possible value , i.e. . for unforced satisfiable instances of model rb , with ] is asymptotically maximized when .intuitively , with model rb , both unforced satisfiable instances and instances forced to satisfy an assignment are such that most of their solutions distribute far from .this indicates that , for model rb , the strategy has little effect on the distribution of solutions , and is not biased towards generating instances with many solutions around the forced one . for random 3-sat , similarly , we can show that as ( the ratio of clauses to variables ) approaches , ] are asymptotically maximized when and , respectively .this means , in contrast to model rb , that when is near the threshold , most solutions of forced instances distribute in a place much closer to the forced solution than solutions of unforced satisfiable instances .as all introduced theoretical results hold when , the practical exploitation of these results is an issue that must be addressed . in this section , we give some representative experimental results which indicate that practice meets theory even if the number of variables is small . note that different values of parameters and have been selected in order to illustrate the broad spectrum of applicability of model rb . first , it is valuable to know in practice , to what extent , theorems [ the:1 ] and [ the:2 ] give precise thresholds according to different values of , and .the experiments that we have run wrt theorem [ the:2 ] , as depicted in figure [ fig : diff ] , suggest that all other parameters being fixed , the greater the value of , or is , the more precise theorem [ the:2 ] is . more precisely , in figure [ fig : diff ] , the difference between the threshold theoretically located and the threshold experimentally determined is plotted against , against and against .figure [ fig : above ] displays the search effort of a mac algorithm to solve such instances against the number of variables .it is interesting to note that the search effort grows exponentially with , even if the exponent decreases as the tightness increases .also , although not currently supported by any theoretical result ( theorems and of hold only for forced instances below the threshold ) it appears here that forced and unforced instances have a similar hardness .
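the forcing strategy analysed above is equally simple to implement ; the sketch below ( using the same hypothetical scalings as the earlier generator sketch , so again an illustration rather than the authors' code ) rejects any forbidden tuple that the hidden assignment would select :

```python
import math
import random
from itertools import product

def generate_forced_rb_instance(k, n, alpha, r, p, seed=None):
    """Model RB instance forced to admit a hidden solution (sketch)."""
    rng = random.Random(seed)
    d = int(round(n ** alpha))                     # assumed scaling d = n**alpha
    m = int(round(r * n * math.log(n)))            # assumed scaling m = r*n*ln(n)
    hidden = [rng.randrange(d) for _ in range(n)]  # the forced solution
    constraints = []
    for _ in range(m):
        scope = tuple(rng.sample(range(n), k))
        protected = tuple(hidden[v] for v in scope)   # tuple selected by the forced solution
        candidates = [t for t in product(range(d), repeat=k) if t != protected]
        nogoods = set(rng.sample(candidates, int(round(p * d ** k))))
        constraints.append((scope, nogoods))
    return d, constraints, hidden   # 'hidden' satisfies every constraint by construction
```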
( figure caption : search effort of solving instances in rb . ) finally , figure [ fig : tabu ] shows the results obtained with a tabu search with respect to the binary instances that have been previously considered with mac ( see figure [ fig : phase1 ] ) .the search effort is given by a median cost since when using an incomplete method , there is absolutely no guarantee of finding a solution in a given limit of time .remark that all unsatisfiable ( unforced ) instances below the threshold have been filtered out in order to make a fair comparison .it appears that both complete and incomplete methods behave similarly . in figure [ fig : tabu ] , one can see that the search effort grows exponentially with and that forced instances are as hard as unforced ones .( figure [ fig : tabu ] caption : search effort of solving instances in rb using a tabu search . ) as related work , we can mention the recent progress on generating hard satisfiable sat instances . have proposed to build random satisfiable 3-sat instances on the basis of a spin - glass model from statistical physics .another approach , quite easy to implement , has also been proposed by : any 3-sat instance is forced to be satisfiable by forbidding the clauses violated by both an assignment and its complement .finally , let us mention which propose to build random instances with a specific structure , namely , instances of the quasigroup with holes ( qwh ) problem .the hardest instances belong to a new type of phase transition , defined from the number of holes , and coincide with the size of the backbone .in this paper , we have shown , both theoretically and practically , that the models rb ( and rd ) can be used to produce , very easily , hard random instances .more importantly , the same result holds for instances that are forced to be satisfiable . to perform our experimentation , we have used some of the most efficient complete and incomplete csp solvers .we have also encoded some forced binary csp instances of class rb(,,,, ) with ranging from to into sat ones ( using the direct encoding method ) and submitted them to the sat competition 2004 . about of the competing solvers have succeeded in solving the sat instances corresponding to ( and ) whereas only one solver has been successful for ( and ) .although there are some other ways to generate hard satisfiable instances , e.g. qwh or 2-hidden instances , we think that the simple and natural method presented in this paper , based on models with exact phase transitions and many hard instances , should be well worth further investigation .the first author is partially supported by nsfc grant 60403003 and fanedd grant 200241 .other authors have been supported by the cnrs , the `` programme cocoa de la région nord / pas - de - calais '' and by the `` iut de lens '' . | in this paper , we try to further demonstrate that the models of random csp instances proposed by are of theoretical and practical interest . indeed , these models , called rb and rd , present several nice features . first , it is quite easy to generate random instances of any arity since no particular structure has to be integrated , or property enforced , in such instances . then , the existence of an asymptotic phase transition can be guaranteed while applying a limited restriction on domain size and on constraint tightness . in that case , a threshold point can be precisely located and all instances have the guarantee to be hard at the threshold , i.e. , to have an exponential tree - resolution complexity .
next , a formal analysis shows that it is possible to generate forced satisfiable instances whose hardness is similar to unforced satisfiable ones . this analysis is supported by some representative results taken from an intensive experimentation that we have carried out , using complete and incomplete search methods . |
in 1966 , forney introduced the concept of concatenated codes .it was generalized in 1976 by blokh and zyablov to generalized concatenated ( gc ) codes .the gc approach allows the design of powerful codes with large block lengths using short , and thus easily decodable , component codes . the designed distance and the performance of gc codes can be easily estimated theoretically .this allows designing gc codes for applications like optical lines , where block error rates in the order of are required , a region where simulations are not feasible .gc codes can be decoded up to half their minimum distance using a sufficiently large number of decoding attempts for each outer code in the blokh zyablov dumer algorithm ( bzda ) , see also dumer .we should also mention papers by sorger , and kötter who suggested interesting modifications of a bmd decoder in such a way that multi - attempt decoding of the outer code can be made in one step .nielsen suggested in to use the guruswami sudan list decoding algorithm for decoding the outer codes and has shown that in this case also one decoding attempt is sufficient to allow decoding up to half the minimum distance of the gc code . in this paper , we employ another idea , which allows us to decrease the number of outer decodings and also to skip many decodings of the inner code .this idea is based on interleaved reed solomon ( irs ) codes .other aspects of using irs codes in concatenated codes were considered in .the rest of our paper is organized as follows : in section [ section : gcc ] we explain gc codes as well as the required notations and assumptions . section [ section : irs ] explains irs codes and where they appear within gc codes .section [ section : bzouterrs ] gives an overview of the bzda as introduced in .a generalization of this is given in section [ section : bzouterirs ] , leading to a new algorithm which maintains the error - correcting performance of gc codes while reducing the number of outer decodings and skipping many inner decodings . in section [ section : example ] we illustrate our results by means of some examples .encoding of a gc code of order is as follows , where we restrict ourselves to outer rs codes , , over the binary extension field and an inner binary block code with dimension .the first step is outer encoding . for this we take codewords of the outer codes and put them as rows into an matrix over .the second step is inner encoding , where the binary counterparts of the columns of are encoded by the inner code to obtain code word columns .the result of this procedure is a binary matrix , which in turn is a code word of the gc code .the inner code has the following nested structure : the code words are obtained by encoding an arbitrary binary information vector .if we fix groups of information bits starting from and encode , we obtain a subcode with distance .note that due to the special choice of encoders for the subcodes .obviously , . the minimum distance of the gc code is then lower bounded by its designed distance . the channel adds a binary error matrix of weight to the transmitted concatenated code matrix , resulting in a received matrix at the receiver . as output of gc decoding with the bzda we want to obtain a matrix over , which is an estimate of the matrix .decoding consists of iterations , where iteration is as follows : a. decoding of all columns , , of the received matrix by bmd decoders for the nested subcodes , correcting up to errors and altogether yielding a new estimate for the -th row of .b.
execution of attempts of decoding with and a different number of erased symbols in each attempt , which yields a candidate list of at most size . finally , the best candidate from this list has to be selected using some criterion and inserted into . note that because of this recurrent structure it is sufficient to consider the bzda only for ordinary concatenated codes , as decoding of gc codes simply means repeated application of this special case for the sequence of the nested inner subcodes , , and the corresponding outer codes .the details of the bzda for ordinary concatenated codes are described in section [ section : bzouterrs ] . in is shown that if for even , the bzda can decode up to errors in the -th iteration and thus by ( [ eqn : designeddistance ] ) also up to errors in a gc code word .observe that the matrix is just an interleaved set of different rs codes , hence an irs code .for irs codes an efficient decoding algorithm was suggested in , which has only times the complexity of the berlekamp massey algorithm for decoding one single rs code .the algorithm allows to correct at most erroneous columns of , where is the average minimum distance of the interleaved set of rs codes and the irs decoding algorithm from yields a decoding failure with some probability .however , this probability can be made small and is neglected here .if the complete matrix does not satisfy ( [ eqn : l1 ] ) , we can split it into a number of submatrices with the same length as , which all fulfill ( [ eqn : l1 ] ) and thus can be decoded by the irs decoding algorithm from .assume that is such a submatrix of and forms an irs code with average minimum distance , that satisfies both constraint ( [ eqn : l1 ] ) and .the main idea of applying the irs decoding algorithm from to gc codes is as follows : we can replace iterations of the bzda by the following single iteration : a. decoding of all columns , , of the received matrix by bmd decoders for the subcodes , correcting up to errors and yielding an estimate for the submatrix .b. execution of attempts of irs decoding with a different number of erased columns in each attempt , which yields a candidate list of at most size .finally , the best candidate from this list has to be selected using some criterion and inserted into . as a result of this method , we skipped inner decodings andwe will show that the number of required decoding attempts for the outer code to guarantee decoding up to channel errors is much smaller than in the original bzda , which in practice means .eventually , our modified algorithm corrects up to half the minimum distance of the gc code .in this section , we consider decoding of a _ simple _ concatenated code , which consists of the outer rs code and the inner binary code .this corresponds to the iteration of the refined bzda from for a gc code where it holds w.l.o.g , and .first ( step ( i ) of the bzda ) , we decode the columns of the received matrix by a bmd decoder for , correcting up to errors and yielding code word estimates or decoding failures .decoding of the outer code is performed with respect to the ordered set of thresholds with . for each decoding attempt the decoding results of the inner decoder depend on the threshold in the following manner : the symbols delivered to the outer decoder at position are where is the received word in the -th column , is the result of inner decoding , maps code words of to the corresponding -ary information symbols , and is the symbol for an erasure . 
as result from outer decoding we obtain the outer code word estimate . from ( [ eqn : rtilde ] ) follows that thresholds with equal integers parts yield equal decoding attempts , so the number of actual attempts may be smaller than the number of thresholds , i.e. . the numbers of decoding errors and erasures occurring at decoding are denoted by and , respectively .the outer rs code can successfully decode as long as , since we assume outer bmd decoding in this section . for a fixed number of thresholds , the following theorem fixes the optimum values of the thresholds such that the decoding bound of the bzda is maximized .[ theorem : thresholdsind ] for a concatenated code with outer bmd - decoded rs code and inner bmd - decoded code , the set of thresholds which maximizes the decoding bound is determined by .if the thresholds are chosen according to ( [ eqn : thresholdsind ] ) , the decoding bound is given by the following theorem in a sense that the transmitted code word is among the elements of the result list .[ theorem : boundind ] for a concatenated code with outer bmd - decoded rs code and inner bmd - decoded code , the decoding bound is in figure [ fig : outer_irs_even ] , the decoding bound ( [ eqn : boundind ] ) is plotted with circles versus the number of thresholds for an example with , .it can clearly be seen that the bound reaches , i.e. half the minimum distance of the concatenated code with increasing number of thresholds .the bound obviously only depends on the greatest threshold .if we hypothesize that the number of thresholds tends to infinity , we can see that for the greatest threshold but as , we know that the greatest possible integer threshold is if is even , and if is odd .this allows to state the following theorem , which confirms our observation from figure [ fig : outer_irs_even ] .[ theorem : maxboundind ] if the number of thresholds tends to infinity , the decoding bound of the bzda for a concatenated code with outer bmd - decoded rs code and inner bmd - decoded code is the decoding bound ( [ eqn : boundind ] ) is non - decreasing in , hence it assumes its maximum at .consider two cases : a. is even , thus the greatest possible integer threshold is and .b. is odd , hence the greatest possible integer threshold is and . in the following ,we restrict ourselves to binary error matrices meeting ( [ eqn : maxboundind ] ) . to obtain decoding bound ( [ eqn : maxboundind ] ) , the greatest possible integer threshold needs to be among the threshold set . for even greatest integer threshold is , which is strictly smaller than the limit . by the following lemmait can be reached already for a rather small value of .[ lemma : indeven ] for a concatenated code with inner bmd - decoded code and outer bmd - decoded rs code the greatest possible integer threshold is reached if .solve for .we can thus obtain ( [ eqn : maxboundind ] ) with only thresholds according to ( [ eqn : thresholdsind ] ) if is even .if however is odd , the greatest possible integer threshold is , i.e. the limit itself .it can obviously only be reached for an infinte number of thresholds .but the number of integers below is , hence the number of actual decoding attempts is upper bounded by .it follows that even though in the odd case the number of required thresholds is infinite , only outer decoding attempts are sufficient to achieve decoding bound ( [ eqn : maxboundind ] ) . 
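to make the role of the thresholds concrete , a small sketch of the multi - attempt outer decoding loop is given below ; the inner and outer decoders are hypothetical placeholder functions ( a real implementation would use a bmd decoder for the inner code and an error - and - erasure rs decoder for the outer code ) , and the erasing rule is only one plausible reading of ( [ eqn : rtilde ] ) , so this illustrates the structure of the procedure rather than reproducing the exact algorithm :

```python
ERASURE = None  # erasure symbol

def bzda_outer_attempts(received_columns, thresholds,
                        inner_bmd_decode, to_outer_symbol, outer_ee_decode):
    """One iteration of multi-attempt outer decoding (sketch).

    inner_bmd_decode(col) -> (codeword, num_corrected) or None on decoding failure,
    to_outer_symbol(codeword) -> q-ary information symbol of the outer code,
    outer_ee_decode(symbols) -> outer codeword estimate or None.
    """
    inner_results = [inner_bmd_decode(col) for col in received_columns]
    candidates = []
    for t in sorted(set(int(x) for x in thresholds)):  # equal integer parts coincide
        symbols = []
        for res in inner_results:
            # erase positions where inner decoding failed or corrected more than t errors
            if res is None or res[1] > t:
                symbols.append(ERASURE)
            else:
                symbols.append(to_outer_symbol(res[0]))
        estimate = outer_ee_decode(symbols)
        if estimate is not None:
            candidates.append((t, estimate))
    return candidates  # the best candidate is then selected from this list
```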
up to now , we only know that the transmitted outer code word is _ somewhere _ within the result list of the bzda if ( [ eqn : maxboundind ] ) is fulfilled . the following lemma provides a means of exactly determining its position among the elements of .[ lemma : selection ] let with and .assume and that is a threshold with .then and the lemma guarantees that only the transmitted outer code word fulfills ( [ eqn : selection ] ) , i.e. that no further decoding attempts have to be executed as soon as ( [ eqn : selection ] ) is fulfilled for the smallest threshold index .then , we set and choose .now we consider the case where is an irs code , i.e. a row - wise arrangement of rs codes of equal length but potentially different dimensions , which are decoded collaboratively as described in section [ section : irs ] .this allows to correct a larger number of errors leading to a decoding success whilst , where .this means bounded distance ( bd ) decoding .our aim now is to derive formulae corresponding to ( [ eqn : thresholdsind ] ) and ( [ eqn : boundind ] ) for this specific case . in doing so, we generalize the approach for outer bmd decoding from .the procedure is as follows : let be the smallest number of channel errors for a given set of thresholds , such that all decoding attempts fail , i.e. such that we determine under the condition that ( [ eqn : errorforall ] ) is fulfilled .then , we find the set of thresholds which maximizes this minimum , i.e. the set of thresholds , which maximizes the decoding bound .this set is determined by the expression the detailed derivation is too involved to be presented here , so we confine ourselves to the results in form of the following theorems . [theorem : thresholds ] for a concatenated code with outer collaboratively decoded irs code consisting of rs codes and inner bmd - decoded code , the set of thresholds which maximizes the decoding bound is defined by with , where is the number of thresholds and .[ theorem : boundcol ] for a concatenated code with outer collaboratively decoded irs code consisting of rs codes and thresholds chosen as in ( [ eqn : thresholdscol ] ) , the decoding bound is given by by theorem [ theorem : boundcol ] the decoding bound only depends on threshold , the greatest one among the ordered threshold set . hence to maximize the decoding bound ( [ eqn : boundcol ] ) we have to maximize . since the threshold location function ( [ eqn : thresholdscol ] )is non - decreasing , the greatest threshold occurs for , and is .the following theorem states the decoding bound for this greatest possible threshold .[ theorem : irshd ] let be a concatenated code with inner bmd - decoded code and outer irs code with . if the maximum possible integer threshold is among the threshold set , the decoding bound is given by inserting the the integer parts and , respectively , of the greatest possible thresholds into bound ( [ eqn : boundcol ] ) proves the statement . for even , the greatest possible integer threshold already is reached considering a finite number of thresholds , i.e. if .thus for even the finite threshold set is sufficient to obtain the maximum of ( [ eqn : boundcol ] ) . 
if on the other hand is odd , the greatest possible integer threshold is itself , hence the number of required thresholds in fact is infinite .but since we know that decoding attempts corresponding to thresholds with equal integer parts coincide , we can skip all thresholds within the interval by the following lemma .[ lemma : neededthresholds ] for a concatenated code with inner bmd - decoded code and and outer irs code with collaboratively decoded rs codes is reached if .if , then the threshold location function ( [ eqn : thresholdscol ] ) becomes .but . by lemma [ lemma : neededthresholds ]we know that all thresholds in the range have equal integer parts and therefore can be omitted .thus , instead of the infinite threshold set it is equivalent to consider the finite set with only elements . we knowthat if we utilize the sets and of thresholds according to ( [ eqn : thresholdscol ] ) for even and odd inner minimum distance , respectively , we can decode up to half the minimum distance of the concatenated code .however , the integer parts not of all the thresholds among the sets are necessarily pairwise different . since decoding attempts in respect to thresholds with equal integer parts coincide , the number of actual decoding attempts which need to be executed to decode up to half the minimum distance of may be smaller than the number of thresholds .we can calculate it explicitly by results are subsumed using the following examples . we assume a concatenated code consisting of an inner code and an outer code consisting of rows containing code words of the rs code . for _even _ inner minimum distance the decoding bounds ( [ eqn : boundind ] ) for independent outer decoding and ( [ eqn : boundcol ] ) for collaborative outer decoding , respectively , depending on the number of thresholds are shown in figure [ fig : outer_irs_even ] . according to lemma [ lemma : indeven ] , for independent outer decoding thresholds are sufficient to decode up to half the minimum distance of .if collaborative decoding of outer rs codes is applied , we can calculate the number of required thresholds by and get .both values are confirmed by the bounds in figure [ fig : outer_irs_even ] .if the rs codes are outer codes of a gc code as described in section [ section : gcc ] , which fulfill ( [ eqn : l1 ] ) , the saving in terms of operations is even greater .besides the saved outer decoding attempts , the number of inner decodings can then be cut down by . note that decoding one irs code with interleaved rs codes with the algorithm from has the same complexity as decoding the rs codes independently .thus , our comparison of both constructions is fair in terms of complexity .figure [ fig : attempts ] shows the number of actual decoding attempts as well as the number of thresholds for some reasonable _ odd _ inner minimum distances .collaborative decoding of and outer rs codes is considered . for independent outer decodingas described in section [ section : bzouterrs ] , grows linearly with .it diminishes to at most already for an outer irs code with . for an outer irs code with already decoding attempts are sufficient to decode up to half the minimum distance of over the full range of all considered odd inner minimum distances $ ] .g. schmidt , v. r. sidorenko , and m. bossert , `` interleaved reed solomon codes in concatenated code designs , '' in _ proc .ieee itsoc inform .theory workshop _ , ( rotorua , new zealand ) , pp . 187191 , august 2005 .g. schmidt , v. r. sidorenko , and m. 
bossert , `` collaborative decoding of interleaved reed solomon codes and concatenated code designs . '' preprint , available online at arxiv , arxiv : cs.it/0610074 , 2006 . | generalized concatenated codes are a code construction consisting of a number of outer codes whose code symbols are protected by an inner code . as outer codes , we assume the most frequently used reed solomon codes ; as inner code , we assume some linear block code which can be decoded up to half its minimum distance . decoding up to half the minimum distance of generalized concatenated codes is classically achieved by the blokh zyablov dumer algorithm , which iteratively decodes by first using the inner decoder to get an estimate of the outer code words and then using an outer error / erasure decoder with a varying number of erasures determined by a set of pre - calculated thresholds . in this paper , a modified version of the blokh zyablov dumer algorithm is proposed , which exploits the fact that a number of outer reed solomon codes with average minimum distance can be grouped into one single interleaved reed solomon code which can be decoded beyond . this allows a number of decoding iterations to be skipped on the one hand and the complexity of each decoding iteration to be reduced significantly while maintaining the decoding performance on the other . |
by nature a fault causes a system to behave abnormally but does not cause the system to shut down , however if left unattended a fault may lead to a system failure , hence the importance of a fault tolerant control ( ftc ) system . while faults can cause instability in a system , the integration of an ftc scheme significantly increases the ability of the system to maintain overall stability in the presence of a fault . in this paperwe develop a new active ftc system using an unscented kalman filter ( ukf ) for fault detection and identification ( fdi ) where parameter updates made by the ukf are sent to the nonlinear model predictive control ( nmpc ) based control system . to test the ftc systemthe design is applied to the control of a 2d robot model .+ there are numerous solutions to the ftc design problem with the ftc research community classifying these designs as either _ active _ or _passive_. passive ftc systems are based on fixed controllers and are designed to be resilient against known faults .they are designed using robust control techniques for worst case scenarios .active ftc systems `` actively '' seek the fault and try to gather as much information about it as possible to help the controller overcome any consequential instabilities and are also known as self - repairing , self - designing or fault detection , identification ( diagnosis ) and accommodation schemes .active schemes are made up of an fdi component and are based on controller redesign , or selection / mixing of pre - designed controllers .+ the main difficulty with active ftc is on - line reconfiguration which requires detailed information about changes in system parameters .hence the main role of the fdi subsystem is the gathering of information on parameter changes to assist in controller reconfiguration .fdi is a key component in an active ftc system and is the most difficult aspect of ftc . in the early daysmost research on fdi was done independently of the controller design and no combined design existed .recently there has been some research on the integration of fdi and ftc ; however much remains to be done .+ techniques commonly used for fdi include artifical intelligence based fdi schemes , , , and multiple model based methods where different models are used to describe the dynamics of the system for different operating regimes . sliding mode observers for fault detection have also been used because of their strong robustness to a particular class of uncertainty .other methods include analytical fdi methods that are based on estimating the fault through matrix algebra .observer based fdi remains the most common approach studied in recent literature where an extended state observer is used to estimate faults .+ quite often in the design of an ftc system two controllers are utilised ; the main ( or nominal ) controller is designed for the faultless case with the second controller being a compensator designed to handle the faulty case . in this papera number of observer based fdi schemes are investigated and integrated with the nmpc ( nonlinear model predictive control ) controller design of to develop a full active ftc system . 
in our design only one controller needs to be designed capable of handling the faultless as well as the fault cases .the 2d robot model from is used as a test bed to assist in the development of the ftc system design .+ this paper is organised as follows ; section [ section : chap4_fdi_tandi ] presents an in - depth look at the fdi methods chosen for investigation , namely the ekf ( extended kalman filter ) , the ukf ( unscented kalman filter ) and the imm ( integrated multiple models ) . an ekf filter , ukf filter , ekf based imm filter and a ukf based imm filter are each designed in section [ section : chap4_probform ] for the purposes of fdi using the 2d robot model of .four different active fault tolerant control systems using nmpc ( nonlinear model predictive control ) as the controller design are formulated and implemented in matlab for the 2d robot model .each of the four different active ftc systems developed are then tested under different conditions in section [ section : chap4_nranda ] .the tests are designed to evaluate the performance of the filters as well as the interaction of the filter and controller designs .section [ section : chap4_nranda ] provides a detailed analysis of the test results and concludes with a summary of findings .based on the findings the ukf filter was chosen as part of the final design of the active ftc system .section [ subsec : lmpc ] concludes with a comparison between linear mpc and nonlinear mpc as the controller showing that an active ftc system with a nonlinear mpc controller has better overall performance in the event of a failure .conclusions are given in section [ section : chap4_conclusion ] .the fault detection schemes considered here are all based on filtering techniques , namely the ekf , the ukf , the ekf based imm and the ukf based imm .these filters are used to sequentially estimate the state of a dynamic system using a sequence of noisy measurements made on the system .the state estimates are then utilised to aid in fault detection and control reconfiguration . a general overview and key mathematical concepts are provided for each method below .the ekf is an extension of the well known kalman filter .one of the drawbacks of the kalman filter is that it does not provide good estimations for nonlinear systems .the ekf approximates ( or linearises ) the nonlinear functions in the state dynamic and measurement models .there are two main stages during an ekf ( and the general kalman filter ) cycle : predict and update . during the prediction stagethe filter states and covariances are predicted forward one time step as are the measurement predictions . 
during the update stage correctionsare made to the state predictions via noisy measurements .a summary of the ekf ( ristic et .al ) equations is given below .the target state and measurement equations propagate according to : where and are random sequences and are mutually independent with zero - mean , white gaussian with covariances and respectively .the ekf is based on the assumption that local linearisation of the above equations may be a sufficient description of nonlinearity .the mean and covariance of the underlying gaussian density are computed recursively in a two stage process ( ristic et .al ) : + stage 1 : prediction stage 2 : update / correction where , and .the parameter is commonly known as the kalman gain and is referred to as the innovation covariance .the innovation is the error between the predicted measurement and the actual measurement of the system .the matrices and are the local linearisation of the nonlinear functions and . the two matrices are defined as jacobians evaluated at and respectively ( ristic et .al ) .the non - gaussianity of the true posterior density is more evident , for example becomes bimodal or heavily skewed , when the model is highly nonlinear . in this eventthe performance of the ekf will significantly degrade .the ukf addresses the issue of non - gaussianity .the ukf is a part of a family of nonlinear filters , referred to as linear regression kalman filters , that are based on statistical linearisation rather than analytical linearisation .the key concept behind these filters is to perform nonlinear filtering using a gaussian representation of the posterior through a set of deterministically chosen sample points .the true mean and covariance of the gaussian density are completely captured by these sample points up to the second order of nonlinearity , with errors introduced in the third and higher order when propagated through a nonlinear transform .the ekf on the other hand is only of first order with errors introduced in the second and higher orders .the filters belonging to this family differ only by the method used to select the sample points i.e. their number , weights and values in the filtering equations are identical and are given below .the ukf uses an unscented transform for the selection of points in an ekf framework ( ristic et .al ) . + we assume that at time the posterior is gaussian : .the very first step is representing this density via a set of sample points and their weights .the ukf uses the unscented transform to select the sample points and weights .the prediction step is as follows : \left[\mathbf{f}_{k-1}\left(\mathcal{x}_{k-1}^i\right ) - \mathbf{\hat{x}}_{k\vert k -1}\right]^\top.\end{aligned}\ ] ] a set of sample points : are used to represent the predicted density : and the predicted measurement becomes : the update step is defined as : where : as can be seen from the above filter equations there is no explicit calculation of jacobians .consequently these filters can be utilised even when the nonlinear functions and have discontinuities .the imm belongs to a class of filters called the gaussian sum filters . the main concept here is the approximation of the required posterior density by a gaussian mixture ( ristic et .al ) : where are weights that are normalised , .gaussian sum filters such as the imm are ideal when the posterior density is multimodal because for multimodal densities there is a performance degradation in both the ekf and ukf . 
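before turning to the imm , the ukf cycle just described can be condensed into a short numpy sketch . this is a generic , textbook - style implementation with the common scaling parameters , not the code used for the experiments in this paper ; f and h stand for the state transition and measurement functions and q and r for the corresponding noise covariance matrices .

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma points and weights of the unscented transform (common scaling parameters)."""
    n = x.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)                       # matrix square root
    pts = np.column_stack([x, x[:, None] + S, x[:, None] - S])  # shape (n, 2n+1)
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return pts, wm, wc

def ukf_step(x, P, z, f, h, Q, R):
    """One UKF predict/update cycle for x_k = f(x_{k-1}) + v_k, z_k = h(x_k) + w_k."""
    # --- prediction ---
    X, wm, wc = sigma_points(x, P)
    Xp = np.column_stack([f(X[:, i]) for i in range(X.shape[1])])
    x_pred = Xp @ wm
    P_pred = Q + sum(wc[i] * np.outer(Xp[:, i] - x_pred, Xp[:, i] - x_pred)
                     for i in range(Xp.shape[1]))
    # --- update (sigma points are redrawn around the predicted density) ---
    X, wm, wc = sigma_points(x_pred, P_pred)
    Zp = np.column_stack([np.atleast_1d(h(X[:, i])) for i in range(X.shape[1])])
    z_pred = Zp @ wm
    S = R + sum(wc[i] * np.outer(Zp[:, i] - z_pred, Zp[:, i] - z_pred)
                for i in range(Zp.shape[1]))                    # innovation covariance
    C = sum(wc[i] * np.outer(X[:, i] - x_pred, Zp[:, i] - z_pred)
            for i in range(X.shape[1]))                         # cross covariance
    K = C @ np.linalg.inv(np.atleast_2d(S))                     # Kalman gain
    innovation = np.atleast_1d(z) - z_pred
    x_new = x_pred + K @ innovation
    P_new = P_pred - K @ np.atleast_2d(S) @ K.T
    return x_new, P_new, innovation, S
```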
at time the state estimate is calculated for each possible current model using filters , with each filter using a different combination of the previous model - conditioned estimates called mixed initial condition .the algorithm as outlined in bar - shalom et .al is : * calculation of the mixing probabilities .the probability that mode was in effect at time given that is in effect at conditioned on is given by : the above can be written as : where the normalising constants are : * mixing . the mixed initial condition for the filter matched to is calculated starting with : and the corresponding covariance is given by : \\ & \cdot \left[\hat{x}^i\left(k-1 \vert k-1\right ) -\hat{x}^{0}\left(k-1\vert k-1\right)\right]^\intercal\bigg\},\quad\quad i , j = 1,\hdots , r . \end{split}\ ] ] * mode - matched filtering .the estimates of the states and covariances calculated in step 2 above are used as inputs to the filter matched to which uses to determine and .the likelihood functions associated to the filters : ,\ ] ] are calculated using the mixed initial condition and covariance from step 2 : ,\quad\quad j = 1,\hdots , r,\ ] ] that is : ,s^j\left[k;p^{0j}\left(k-1\vert k-1\right)\right]\right],\quad\quad j = 1,\hdots , r.\ ] ] * mode probability update .the mode probabilities are then updated via : where is the normalisation constant and is given by * estimations and covariance combination .the output is obtained by combining the model - conditioned estimates and covariances : \left[\hat{x}^j\left(k\vert k\right ) - \hat{x}\left(k\vert k\right)\right]^\intercal\right\rbrace.\end{aligned}\ ] ] this section outlined the details of the the methods chosen for further investigation , the ekf , the ukf and the imm filters .these techniques are applied to the 2d robot model in the next section .to test the different filtering techniques in an fdi context the robot model of is used for development and testing purposes and forms the plant that is to be controlled as well as the process model of the nmpc controller ( see figure [ fig : chap3_robot eom ] ) .the equations for this robot model are : where is the -coordinate of the point , is the -coordinate of the point , is the heading angle , is the right wheel angular velocity , is the left wheel angular velocity and is the speed given by .+ the nmpc controller is based on the design given in which solves the following optimal control problem using pseudospectral discretisation : subject to full details of the design are given in . fifty coincidence points ( )are used along with a prediction window length of 5 secs ( ) .the weights and are diagonal matrices with the diagonal values set to 10 , 1 and 1 respectively and found through trial and error . 
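as a concrete reference , a minimal implementation of the robot kinematics used both as the simulated plant and as the nmpc process model might look as follows . the forward speed and turn rate expressions are the standard differential - drive relations and are stated here as an assumption , since the symbols of the model equations above are not reproduced .

```python
import numpy as np

def robot_derivatives(state, omega_r, omega_l, r_right, r_left, wheel_base):
    """Kinematics of the two-wheeled robot in the assumed differential-drive form.

    state = [x, y, psi]; omega_r, omega_l are the wheel angular velocities,
    r_right, r_left the wheel radii and wheel_base the distance between the wheels.
    """
    _, _, psi = state
    v = 0.5 * (r_right * omega_r + r_left * omega_l)               # forward speed
    psi_dot = (r_right * omega_r - r_left * omega_l) / wheel_base  # turn rate
    return np.array([v * np.cos(psi), v * np.sin(psi), psi_dot])

def propagate(state, omega_r, omega_l, r_right, r_left, wheel_base, dt=0.01):
    """Euler step at the 100 Hz rate used for the filters in the simulations."""
    return np.asarray(state, dtype=float) + dt * robot_derivatives(
        state, omega_r, omega_l, r_right, r_left, wheel_base)
```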
+ the fault , to be simulated and tested for , is a punctured wheel .if a wheel is punctured the radius of the wheel will decrease and so the filters are set up to estimate the radius of the wheel .four different filters have been designed , the ekf , the ukf , the ekf imm and the ukf imm .+ the robot parameters used for all simulations are : right wheel radius , , left wheel radius , , distance between wheels , , speed demand is and the input constraints on and are .the filters are updated at 100hz while the controller is updated at 10hz .all work was developed using matlab with snopt as the nonlinear programme ( nlp ) solver .the following subsections detail the design of each of the filters .the state vector for the ekf consists of the following states : ^\intercal,\ ] ] where , and are the robot states and and are the right wheel and left wheel radii respectively .the measurements are assumed to be of the speed , , of the robot : where is additive white noise .the initial state vector and initial covariance matrix are : ^\intercal,\quad\quad p(0 ) = \begin{bmatrix } ( 0.5)^2 & 0 & 0 & 0 & 0 \\ 0 & ( 0.5)^2 & 0 & 0 & 0 \\ 0 & 0 & ( 1\pi/180)^2 & 0 & 0 \\ 0 & 0 & 0 & ( 0.5)^2 & 0 \\ 0 & 0 & 0 & 0 & ( 0.5)^2 \end{bmatrix}.\ ] ] the and noise matrices were chosen to be : where is the update rate of the filter .for the prediction cycle an euler integration scheme is used to predict the states of the ekf forward .the predicted measurement is given by : given the above information the kalman filter equations given in section [ subsection : chap4_ekf_equations ] are applied to estimate the radius of each wheel in the experiments conducted in section [ section : chap4_nranda ] .the general structure of the ekf and ukf are very similar in that they both have a prediction and update cycle and produce a single state vector and a corresponding covariance matrix . for the robot modelthe state vector is the same as the one given in equation .the initial state vector , initial covariance matrix , the process noise matrix and the noise covariance matrix all remain the same as those given in subsection [ subsec : chap4_ekf_equations_sims ] .the ukf algorithm given in subsection [ subsec : chap4_ukf_equations ] is applied to the robot model with .the interacting multiple model method , as the name suggests , is made up of multiple models where each model tests a different hypothesis .four different models ( the terms mode and model are used interchangeably and have the same meaning in the context of imms ) have been designed where : * no fault case . * right wheel deflation , left wheel no fault . * right wheel no fault , left wheel deflation .* right wheel deflation , left wheel deflation . 
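returning to the single - filter designs , the speed measurement model shared by the ekf and ukf is simple enough to state explicitly . the sketch below assumes the measured speed is the average of the two wheel rim speeds , consistent with the kinematic sketch above ; the wheel rates enter as known inputs .

```python
import numpy as np

def h_speed(state, omega_r, omega_l):
    """Predicted speed measurement from the augmented state [x, y, psi, r_R, r_L].

    The speed is assumed to be the average of the two wheel rim speeds.
    """
    _, _, _, r_right, r_left = state
    return 0.5 * (r_right * omega_r + r_left * omega_l)

def H_jacobian(state, omega_r, omega_l):
    """Jacobian of h_speed with respect to the five filter states (a 1 x 5 row vector)."""
    # The measurement depends only on the two wheel-radius states.
    return np.array([[0.0, 0.0, 0.0, 0.5 * omega_r, 0.5 * omega_l]])
```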
during step 3 of the imm algorithm given in section [ subsection : chap4_imm_equations ] a filter such as the ekf is used to update the states and covariances , and both an ekf based imm filter and a ukf based imm filter have been designed .the initial covariance matrix for each filter and each mode are the same as equation .the and matrices are those given in equation and the initial state vectors for each filter and mode are : ^\intercal,\\ \mathbf{x}_2(0 ) & = & \left[x_0,\,\,\,y_0\,\,\,\psi_0\,\,\,1\,\,\,2\right]^\intercal,\\ \mathbf{x}_3(0 ) & = & \left[x_0,\,\,\,y_0\,\,\,\psi_0\,\,\,2\,\,\,1\right]^\intercal,\\ \mathbf{x}_4(0 ) & = & \left[x_0,\,\,\,y_0\,\,\,\psi_0\,\,\,1\,\,\,1\right]^\intercal.\end{aligned}\ ] ] the mixing probabilities or mode probabilities are initially set to : ^\intercal,\ ] ] and the mode transition probabilities matrix is set to : robot is required to follow a circular path for all experiments . to simulate the measurement additive white noiseis added to the speed of the robot which is calculated as a part of the truth simulations of the robot movement .to test the filters four different test cases were set up and each test case was run twice . during the first run the fdi feedback loop is not closed and the filters are used for estimation only . the fdi loop is closed during the second run to investigate the behaviour of the full active ftc controller .the test cases are as follows : + 1 . no fault .the objective is to investigate how well the filters estimate the radii of the tyres in a no fault situation .2 . left wheel puncture . in this casea puncture is simulated to occur 10 secs into the simulation .the wheel is assumed to deflate to of its original value instantaneously .3 . left and right wheel puncture . in this test casea left wheel puncture is simulated 5 secs into the run and a right wheel puncture is simulated to occur 10 secs into the simulation .both punctures are assumed to cause an instantaneous reduction of the respective wheel radius to of the original wheel radius .4 . left wheel linear puncture . in this test caseonce again the left wheel is punctured 10 secs into the run however this time the puncture is assumed to follow a linear reduction in wheel radius according to , where represents the left wheel radius reduced from its original value of 2 m down to 0.1 m at a rate of 0.1 m/s and t is the current time . the radius does not drop to 0 m as this caused a complete system failure .+ the results for each filter are presented in the next four subsections : plots of the speed innovation were produced ( but have been omitted due to space constraints ) where the innovations were plotted along with the calculated uncertainty bounds .the uncertainty bounds are a confidence interval and the solution ( innovations in this case ) must remain within the bounds of the time . 
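for reference , the simulated wheel - radius fault profiles of the test cases above can be written compactly as follows . the instantaneous deflation fraction of scenarios 2 and 3 is not reproduced in the text above , so it appears here as an explicit placeholder .

```python
def left_radius_scenario4(t, r0=2.0, t_fault=10.0, rate=0.1, r_min=0.1):
    """Scenario 4: linear deflation of the left wheel at 0.1 m/s after t = 10 s, floored at 0.1 m."""
    if t < t_fault:
        return r0
    return max(r_min, r0 - rate * (t - t_fault))

def left_radius_scenario2(t, r0=2.0, t_fault=10.0, deflated_fraction=0.5):
    """Scenario 2: instantaneous puncture of the left wheel at t = 10 s.

    deflated_fraction is a placeholder; the value used in the experiments is
    not reproduced in the text above.
    """
    return r0 if t < t_fault else deflated_fraction * r0
```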
the results for the ekf and ukf filters showed that the speed innovations remained well within the uncertainty bounds throughout the duration of the run both with and without feedback .the speed innovation plots for the imm filters were also produced .the results for both the ukf and ekf based imm filters showed that the filters were able to very quickly detect the correct mode of operation .+ plots of the wheel radius estimates were also produced .all filters do a very good job of estimating the radius of the tyres with and without feedback .the imm filters initially have a higher error in the tyre estimate as the estimation is based on a mixture of all the models , however it took only one update for the imm to reach the correct estimate .+ the wheel speed estimates ( or angular rates ) were also analysed and it was found that the estimates for all four filters were very similar . in the case of no feedbackthe estimate was the same as the actual speed ; however in the case where feedback is provided the wheel speeds were quite noisy .this is a result of calculations based on noisy measurements which is a consequence of feedback .+ plots of the robot trajectory for all four filters showed that the robot remained on the path with and without feedback which is to be expected in the no fault case .the speed innovations for all four filters were plotted and the results showed that all filters were able to detect the fault .the fault occurred at 10 secs into the run , and the plots show that at 10 seconds there was a peak change in the innovation curves .the corrections / innovations were seen to increase at the time the fault occurred and settle again near zero once the correct estimate was reached .+ for the imm filters model 3 is the correct match for scenario 2 and both of the imm filters were seen to find the correct mode immediately as mode 3 is the most confident in its estimate .mode 3 and mode 4 both hypothesise a failure in the left wheel of which is why after the occurrence of the fault the uncertainties do not increase .however because mode 1 and mode 2 do not hypothesise a fault in the left wheel the uncertainties can be seen to increase once the fault has occurred . the uncertainty in mode 3was seen to decrease twice as much compared to mode 4 after the fault occurred .this is because mode 4 hypothesises that both wheels are punctured whereas mode 3 predicts the puncture of only the left wheel .another point to note is that in the single filter cases feedback did not have much effect on the innovations .however , in the case of the imm filters the results show that with feedback the filter errors do not grow as rapidly between updates .the errors are seen to grow very quickly when no feedback is present which is evident in figures [ fig : chap4_scenario2_immukf_velocityerrors_nfb ] and [ fig : chap4_scenario2_immukf_velocityerrors_wfb ] .uncertainty bounds ( red ) , speed innovations ( blue ) ] uncertainty bounds ( red ) , speed innovations ( blue ) ] the results for the wheel radius estimates showed that the ukf estimates were closer to the actual wheel radius compared to those produced by the ekf . turning on feedback results in the filters reaching a steady estimate faster when compared to the no feedback case .the imm filters produced slightly better estimates than the single ukf filter and there was very little improvement on the imm estimates compared to the no feedback case . 
+the wheel speeds were also analysed and the results for all four filters presented the same trends . without feedback there is much more activity present compared to turning on the feedback .once the fault occurs the robot yaws to the side with the punctured wheel and demands a faster speed to compensate for the loss in radius . +the robot trajectories were simulated and all results showed that the robot was only able to remain on the path if feedback from the filter was provided to reconfigure the controller .although all filter estimates without feedback were excellent , without reconfiguration of the controller the robot could not be made to follow the desired path ( see figure [ fig : chap4_scenario2_traj_ukf ] ) .the plots of the speed innovations for all filters clearly indicated , from the sudden changes in innovations , the detection of both faults , left wheel at 5 secs and right wheel at 10 secs .the ekf innovations were found to be consistent , however the innovations produced by the ukf show that , with feedback , the innovation uncertainty begins to grow rapidly between updates whereas without feedback the uncertainty remains constant ( see figure [ fig : chap4_scenario3_ukf_velocityerrors ] ) .the imm filters show that after 5 secs model 3 is the best model .however , once the second fault occurs the filters do an excellent job of recognising that mode 4 is the correct match and uncertainties in mode 4 are seen to decrease ( see figure [ fig : chap4_scenario3_immekf_velocityerrors_wfb ] ) .it was again observed in the no feedback case that the uncertainties on the imm filters grow rapidly between updates and many more corrections are required .+ uncertainty bounds ( red ) , speed innovations ( blue ) ] uncertainty bounds ( red ) , speed innovations ( blue ) ] the wheel radius estimates produced by the filters showed that the imm filters produce the best estimates of the radii .the ukf performs slightly better than the ekf , and turning feedback on results in a faster settling time ( see figure [ fig : chap4_scenario3_ukf_wheelradius ] ) .+ plots of the angular rates achieved by the robot via the ekf showed that once a fault occurred the punctured wheel is required to spin faster in order to compensate for the loss in radius .the angular rates produced as a result of the ukf , with feedback , resulted in operation at the angular rate constraints .both imm filters displayed similar behaviour in that once a wheel was punctured it was required to rotate faster to compensate for the loss in radius . + the trajectories produced by each filter for scenario 3 showed that without feedback it is impossible to maintain the robot on the path . reconfiguring the controller on the other hand with estimates from the filters allowed the robot to easily follow the reference path .an anomaly occurred with the ukf filter where even turning the feedback on did not result in the robot following the path after the occurrence of the second fault .this could possibly be the result of poor tuning of the filter ( see figure [ fig : chap4_scenario3_traj_ukf ] ) .+ scenario 4 velocity innovations are presented in figures [ fig : chap4_scenario4_ekf_velocityerrors ] and [ fig : chap4_scenario4_ukf_velocityerrors ] for the ekf and ukf respectively .the ekf breaks down 5 seconds after the fault occurs , when feedback is turned on as can be seen by the innovations exceeding the covariance bounds . 
without feedback however the innovations remain within the uncertainty bounds .the results from the ukf are much better as it produces innovations which remain well within the uncertainty bounds with and without feedback .both of the imm filters failed , because the fault type of scenario 4 was not modelled as a part of the filter design , i.e. the hypothesis for this type of failure is not accounted for and so no model exists for this failure ( see figure [ fig : chap4_scenario4_immukf_velocityerrors_wfb ] ) .uncertainty bounds ( red ) , velocity innovations ( blue ) ] uncertainty bounds ( red ) , velocity innovations ( blue ) ] uncertainty bounds ( red ) , velocity innovations ( blue ) ] the wheel radius estimates showed that without a hypothesis on the imm filters the ukf was the only filter able to produce the correct estimates of the wheel radii .the ukf results clearly indicated that reconfiguring the controller resulted in a faster convergence to the correct estimate .+ plots of wheel speed produced by all four filters with and without feedback show that with this type of fault both wheels were required to work at their constraints .+ the robot trajectories as a result of the different filter information were examined and it was found that none of the filters show full compliance with the reference trajectory. however the ukf was able to maintain the robot on the path the longest .- this section looks at the behaviour of the imm filters by re - designing the filters to accommodate the fault type covered by scenario 4 .the filters were modified by adding a fifth model , which hypothesises the left wheel puncturing in the manner described by scenario 4 .the and matrices remain the same and the initial state vectors for the fifth filter and mode are : ^\intercal.\end{aligned}\ ] ] the mixing probabilities or mode probabilities become : ^\intercal,\ ] ] and the mode transition probabilities matrix is redefined as : the speed innovations for the imm ekf are presented in figures [ fig : chap4_scenario4_immekf_velocityerrors_nfb_5models ] and [ fig : chap4_scenario4_immekf_velocityerrors_wfb_5models ] with no feedback and with feedback respectively . as predicted the results show that if a hypothesis is made the imm performs very well .the ekf as part of imm is able to predict this type of error which it was unable to do as a single filter .the imm ukf speed innovation plots showed a higher level of confidence in its estimates compared to its ekf counterpart as the uncertainty is lower and consistent . 
providing feedback in both cases ( ekf and ukf imms )was shown to increase confidence in the filter estimates .uncertainty bounds ( red ) , velocity innovations ( blue ) ] uncertainty bounds ( red ) , velocity innovations ( blue ) ] the ukf based imm estimates of the wheel radii for the 5 mode imm are shown in figure [ fig : chap4_scenario4_immukf_wheelradius_5models ] .the results of both ekf and ukf based imms showed that in both cases the filters do an excellent job of making the correct estimations on the radius of the wheel .the angular rate plots of the 5 mode imms again showed that the control inputs are required to work at the constraints the majority of the time once the fault occurred .this result is not dependent on the filter type but rather the fault that has occurred makes it impossible for the robot to achieve the desired task while at the same time respecting its constraints .+ the trajectory plots showed that for this type of fault where the wheel radius had almost approached zero , the wheels are unable to maintain the reference .the ukf imm is again able to keep the robot on the path for a longer time than the ekf imm .the results of the previous section reveal that each of the filters considered exhibits excellent qualities for fault detection and identification .figure [ fig : radiicomparisons ] shows a comparison of the different filters in terms of wheel radius estimation .the plot shows the radius estimates for scenario 4 which is the worst case scenario out of the 4 scenarios considered .the output of the ekf and the 4 mode imm filters have been omitted from the plots as the errors were very high and as a consequence the remaining results were invisible .the plot shows that in terms of wheel radius estimate performance the ukf performed equally as well as the 5 mode imm filters .the imm filters were able to reach the correct estimate faster than the single ukf particularly when a sudden change is present as is evident at the 10 second mark .the ekf performance also dramatically increases when used within an imm configuration .the drawback of the imm filter is that all possible scenarios must be accounted for .the method used in this research illustrated the concept of the imm , by showing easy adaptability to different situations , and the speed with which it is able to identify and reach the correct estimate .however predicting exactly how a fault will occur ( in this case how a tyre will puncture ) is impractical . a more practical implementationwould have been to develop a number of filters each with different process noises that could adapt to all different situations .the number of filters and the process noise values would have to be determined by trial and error . in any casean imm performs well if and only if it is equipped to make a hypothesis on the current situation . if the given situation is unaccounted for the filter breaks down . in terms of the single filter , in general the ukf displayed better performance than the ekf , especially in the case of scenario 4 where the non - linearities of the fault caused the single ekf to breakdown . for these reasonsthe ukf has been chosen for the final ftc system design . 
as a result of the findings given in subsection [ subsection : chap4_fandc ]only the ukf with feedback is implemented to compare nonlinear mpc with linear mpc .the results given in the next two sections are for scenarios 2 and 4 respectively however plots for only scenario 2 are given due to space constraints .the velocity innovation plots in figure [ fig : chap4_scenario2_ukf_velocityerrors_linearmpc ] produced by the linear mpc controller show that the innovations remain well within the uncertainty bounds and are approximately zero .however the uncertainty was seen to be double that produced by the nonlinear controller .uncertainty bounds ( red ) , speed innovations ( blue ) ] the wheel radii plots give in figure [ fig : chap4_scenario2_ukf_wheelradius_linearmpc ] show that the estimations produced by a nonlinear controller are the same as those produced by the linear controller .hence the filter performs well even with linear mpc .the angular rate plots for the linear mpc controller ( figure [ fig : chap4_scenario2_ar_ukf_linearmpc ] ) show that five seconds after the fault occurred the linear controller pushes the wheels to operate at their constraints and is unable to tolerate the faulty condition .the angular rates produced by the nmpc controller ( figure [ fig : chap4_scenario2_ar_ukf_nmpc ] ) show that it was able to easily adapt to the fault .this is further illustrated in the trajectory plot given in figure [ fig : chap4_scenario2_traj_ukf_linearmpc ] which clearly shows that the nonlinear mpc controller does an excellent job of keeping the robot on the path despite a faulty wheel whereas the solution produced by the linear controller has diverged .a point to note here is that while the trajectory tracking is good , the switching behaviour observed in wheel rotation rates is highly undesirable , and in the real world would produce wheel slippage , and high levels of wear on tyres and mechanicals .the wheel rates produced by the nonlinear mpc controller show a dramatic decrease in this switching behaviour .it is , however , still present ( figure [ fig : chap4_scenario2_ar_ukf_nmpc ] ) .this limit cycle behaviour is a negative characteristic that needs to be addressed before this technique can be applied to real systems .this exercise was purely for proof of concept and is not a practical application .the model is entirely kinematic and for it to represent a more practical scenario further work is required to eliminate the limit cycling behaviour ; for example adding actuation activity to the cost function and an angular acceleration term to avoid wheel slippage issues .the speed innovations were plotted for scenario 4 and showed similar trends to those above in that the innovations are quite small ; however the uncertainties with the linear controller are higher than those produced as a result of nonlinear mpc .the estimates of the wheel radii however are very good and were seen to be the same for both linear and nonlinear controllers . 
+as expected the linear controller was unable to maintain the robot on the path .neither of the controllers were able to drive the robot to the path as the wheel radius had almost reached 0 m making it infeasible for the robot to continue .the angular rates showed that the linear controller constantly demanded operation at the constraints oscillating between the upper and lower limits continuously .the nonlinear mpc controller results for wheel angular rates showed that the right wheel oscillates between the upper and lower bounds , however the left wheel is required to constantly work at the upper bound .the analysis from this work has proven the feasibility of the nmpc controller design with filter estimates for controller reconfiguration as a viable solution to fault tolerant control .four different filters were compared , the ekf , the ukf , the imm ekf and the imm ukf filters .the results showed that in terms of fault detection performance the ukf is the best candidate in a trajectory tracking scenario .+ comparisons were also made between the performance of nonlinear mpc and linear mpc .the results clearly show that for the purposes of reconfigurable fault tolerant control the nonlinear mpc controller has better performance .+ the next phase of this research is to implement the nmpc pseudospectral controller with a ukf based fdi subsystem to an aircraft , for fault tolerant flight control .9 khan , r. , williams , p. , riseborough p. , rao , a. and hill , r. , _ designing a nonlinear model predictive controller for fault tolerant flight control_. arxiv e - prints 2016 ; http://adsabs.harvard.edu/abs/2016arxiv160901529k , [ last accessed : 07 - 09 - 2016 ] .cork lr , walker r , dunn s. _ fault detection , identification and accommodation techniques for unmanned airborne vehicles_. australian international aerospace congress ( aiac ) 2005 ; http://eprints.qut.edu.au/1729/ , [ last accessed : 07 - 01 - 2016 ] .boskovic jd ,li sm , mehra rk ._ on - line failure detection and identification ( fdi ) and adaptive reconfigurable control ( arc ) in aerospace applications_. proceedings of the 2001 american control conference 2001 ; 4:26252626 .jiong - sang y , bin j , jian lw , yan l. _ discrete - time actuator fault estimation design for flight control application_. 7th international conference on control , automation , robotics and vision 2002 ; 3:1287 - 1292 .doi:10.1109/icarcv.2002.1234958 .wang w , hameed t , ren z. _ extended state observer - based robust fault - tolerant controller for flight control surface failures_. icemi09 .9th international conference on electronic measurement & instruments 2009 ; 3610 .wan ea , van der merwe r. _ the unscented kalman filter for nonlinear estimation_. the ieee 2000 adaptive systems for signal processing , communications , and control symposium 2000 ; 153 - 158 . doi:10.1109/asspcc.2000.882463 | this paper develops a new active fault tolerant control system based on the concept of analytical redundancy . the novel design consists of an observation filter based fault detection and identification system integrated with a nonlinear model predictive controller . a number of observation filters were designed , integrated with the nonlinear controller and tested before reaching the final design which comprises an unscented kalman filter for fault detection and identification together with a nonlinear model predictive controller to form an active fault tolerant control system design . 
keywords : fault tolerant control , fault detection and identification , model predictive control , nonlinear control , observation filters . |
knowing the redshifts of extragalactic objects is vital for understanding their physical properties as well as for statistical cosmology measurements . the primary method employed by researchers to measure galaxy redshifts is spectroscopy ; however , spectroscopy is available for only a small fraction of all imaged galaxies , on the order of 1% . this low percentage has necessitated a search for alternative methods of redshift estimation . much progress has been made with photometric redshift estimation methods , which primarily rely on the colors of galaxies to estimate redshifts . while such methods are used frequently , they face inherent limitations , resulting in redshift estimates with large associated error values . consequently , methods of redshift estimation using the angular clustering of galaxies have been proposed and explored . because these methods require only the angular positions of galaxies , they are considered distinct from photometric redshift methods per the working definition argued by . the `` clustering - based '' redshift estimation methods rely on the spatial correlations and directly use the angular two - point correlation function , which expresses the excess probability of finding a galaxy at an angular separation from another galaxy . while these methods are promising , they provide only redshift distributions and thus are agnostic to the redshifts of specific galaxies . therefore , a natural question arising from clustering - based redshift estimation is whether galaxy clustering can be used not only to produce redshift distributions of galaxy samples but also to determine the redshifts of individual galaxies . in this paper , we propose a new method that uses combinatorial optimization techniques applied to the angular two - point correlation function to divide galaxy samples into separate redshift bins , thereby constraining the redshifts of individual galaxies . creating thin slices is not only possible but potentially makes the optimization faster . ultimately , the final method is expected to balance the statistical noise and the optimization cost . here we explore the simplest case . we aim to partition a set of ( simulated ) galaxies into two subsamples such that their correlation functions match the observations . this case would be particularly useful in creating a hard boundary between overlapping photo - z redshift bins . if it works , the optimization can be repeated iteratively in order to separate a galaxy sample into narrow redshift slices . of course , even this simplest case presents computational challenges . we present a formalism using integer linear programming that , when implemented , makes this optimization tractable for statistically relevant sample sizes , . in section [ sec : formalism ] , we provide a detailed description of our method , including a derivation of the relevant formalism . in section [ sec : results ] , we present results from our tests of the method . in section [ sec : applicability ] , we discuss the applicability of this method . before introducing our formalism , let us introduce the notation that will be used throughout the rest of this paper . let and be two datasets with the same sky coverage , and let be a random dataset with the same sky coverage as and .
in accordance with we define as the set of all unordered pairs of galaxies in , and we define as the set of all unordered pairs of galaxies such that one galaxy is from and the other is from .we also define and analogously .because the correlation function is estimated over a set of bins ( i.e. , intervals ) of angular separation , we must introduce notation related to pair counts within these bins .we define as the set of unordered data - data pairs such that the angular separation between the members of the pair is in bin , .the corresponding terms for , , etc . , are defined analogously .lastly , we define as the size of , as the size of , etc . in this paper , we develop our formalism using two estimators of correlation functions given two samples with the same sky coverage : the natural estimator and the estimator .the natural estimator is the least expensive of the correlation function estimators to compute and hence expected to yield the simplest optimization formulas . on the other hand ,the landy - szalay estimator has been shown to be the most accurate estimator by . using the natural estimator , the cross - correlation and autocorrelation function estimates of the two samples and in bin are given by : in the main body of the paper , we focus on these equations but provide the alternative method for the landy - szalay estimator in appendix [ sec : ls ] .one of the most powerful tools in discrete optimization is _ integer linear programming ( ilp)_. _ linear programming _ is an optimization model in which one seeks to optimize a linear function of finitely many variables , subject to linear inequality constraints on the variables .integer linear programming generalizes this by only admitting solutions where some ( or all ) of the variables are constrained to take only integer values .such models are an indispensable tool for optimizing over discrete solution spaces , and state - of - the - art software codes have been developed to handle numerous large scale problems arising in technological , scientific , and economic applications . in order to build an integer linear programming model ,we must build a model consisting of a cost function and constraints , where our cost function and constraints are constructed such that the minimization of our cost function yields the optimal solution to our problem .we define our optimal solution as a partition of our sample into two subsamples and its complement such that : \ ] ] is minimized , where , , and are our correlation function estimates in bin , as calculated using our choice of estimator , and and are target values for , , and , respectively .informally , this function is minimized when , , and are pulled as close to our target values and as possible , across all bins . the purpose of the following formalism is to translate equation [ eq : tot ] into an integer linear programming model .the construction of such a model requires translating all unknowns into variables , creating a cost function using a linear combination of these variables , and adding constraints to the model to enforce certain relationships between the variables .we must also specify which variables are integer or continuous . in current software, one can even specify if an integer variable is binary , i.e. 
, takes only values or .our integer variables will , in fact , be binary variables to accommodate the problem of classifying objects into two redshift slices .to simplify the optimization further , we fix the size of and its complement , and , to prevent runaway solutions in which one subsample contains a large majority of the galaxies in .moreover , this enables us to keep the cost function and constraints linear .consequently , we treat and as constants throughout .this , however , poses no real threat to generality because often good initial estimates are available , and further optimization can be performed along the size dimension if needed .we begin by defining variables for each of our galaxies .we introduce the binary variable that encodes whether a galaxy is a member of or : these variables serve as the bridge between the cost function and the partitioning of : we will construct the cost function in such a way that minimizing it sets each to either 0 or 1 and thus assigns each galaxy to or according to the optimal partition .we now add a constraint to our model in order to enforce that must be fixed to a pre - determined positive integer by using the fact that is precisely the sum of that evaluate to : thus , we have fixed , and because , where is fixed , we have also fixed .next , we introduce variables for unordered pairs of galaxies in that encode whether the galaxies in each pair are from the different subsamples . for each unordered pair of galaxies, we define the binary variable as follows : where is symmetric in and .significantly , can be expressed in terms of the boolean ( xor hereafter ) of the and variables , where each xor is encoded through four linear constraints of and that we add to our model as we reiterate that these constraints establish the relationship between s and s , as given in their definitions in and . before proceeding , we introduce more notation - related summations over pair counts .we use the summation notation to denote summing over all unordered pairs of galaxies in such that the angular separation between and falls into bin .we begin by translating the natural estimator to its linear programming equivalent in this section and generalize to the landy - szalay estimator in the appendix . using the natural estimator, we first translate into its cost function equivalent , . for cross - correlation, we seek to minimize : where is an estimate of cross - correlation in bin .we can express in terms of previously - defined quantities : because is fixed , and are constants .furthermore , and are fixed .thus , we can combine these constants into a single weight for each bin : we therefore seek to minimize where is the only non - constant term within the minimization for each bin .we can now reformulate this expression using our previously - defined binary variables . 
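a minimal gurobipy sketch of this part of the model might look as follows ; the variable and constraint names are illustrative , and in practice the pair list would be restricted to the pairs whose angular separations fall into one of the bins .

```python
import gurobipy as gp
from gurobipy import GRB

def build_membership_model(galaxies, pairs, subsample_size):
    """Binary membership variables, fixed subsample size, and pair variables y_uv = x_u XOR x_v.

    galaxies       : iterable of galaxy identifiers
    pairs          : unordered pairs (u, v) whose separations fall into one of the bins
    subsample_size : the pre-determined size of the first subsample
    """
    model = gp.Model("redshift_partition")
    x = model.addVars(galaxies, vtype=GRB.BINARY, name="x")

    # Fix the size of the first subsample: sum_u x_u equals the chosen size.
    model.addConstr(gp.quicksum(x[u] for u in galaxies) == subsample_size, name="size")

    # y_uv = 1 exactly when u and v lie in different subsamples; the XOR is
    # enforced by four linear inequalities per pair.
    y = model.addVars(pairs, vtype=GRB.BINARY, name="y")
    for (u, v) in pairs:
        model.addConstr(y[u, v] <= x[u] + x[v])
        model.addConstr(y[u, v] >= x[u] - x[v])
        model.addConstr(y[u, v] >= x[v] - x[u])
        model.addConstr(y[u, v] <= 2 - x[u] - x[v])
    return model, x, y
```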
is precisely equal to the number of unordered pairs of galaxies in such that and are in different subsamples and the angular separation between and falls into bin ; thus , because we are summing over all unordered pairs in , as opposed to just unordered pairs in , we have eliminated any dependence on the partitioning of except in the variables themselves .the expression that we seek to minimize for cross - correlation optimization now simplifies to : because each is the xor of and , we have expressed cross - correlation optimization entirely in terms of binary variables for each galaxy , and minimization of the above expression will assign each galaxy to either or according to the optimal partition of .the expression in is not a linear function of the variables due to the absolute value function .however , this expression can be modeled by a linear function by introducing auxiliary continuous variables for each bin and relating them to the variables as follows . where for each , we add the following two constraints : we have now incorporated cross - correlation optimization into our linear programming model .the key insight here is that any solution that minimizes the expression in must satisfy one of the inequalities in at equality depending on which left hand side is larger , and thus in any optimum solution minimizing .we mention here the conscious choice of using the norm in , as opposed to the norm : the norm leads to a formulation with a linear objective like and linear constraints like , as opposed to the norm which would give a convex quadratic objective . as per folk wisdom in integer programming ,we prefer the linear formulation , and hence we go with the norm .next , we formalize autocorrelation optimization using the natural estimator . in this paper , we formalize two different approaches : combining the autocorrelations of and in such a way that the autocorrelation target values and of and , respectively , are set to be equal , and implementing separate target values for the autocorrelation of and , thereby allowing and to be potentially distinct for any bin .although the latter method is advantageous because it allows for independent target values , the downside is that one needs more variables in the integer linear program to model this , compared to the former method in which no new variables need to be introduced into our model .we introduce the former method next and introduce the latter method in the appendix in [ sec : independent ] . to derive a parameterized autocorrelation for the two samples , let us consider the variable defined as from the definition we can see that . although is agnostic as to which sample and belong, it does encode whether the pair contributes to an autocorrelation calculation .we can naturally extend the notion of autocorrelation for and into a combined autocorrelation given by : where is the weighted average of and : with in this combined autocorrelation model , we seek to minimize where is the target value for the combined autocorrelation in bin .we introduce the weight to replace constants in : furthermore , we can express entirely in terms of variables that have already been introduced because this sum is precisely equal to the sum of unordered pairs of galaxies in such that and are in the same subsample and the angular separation between and falls into bin : our expression takes the form : - ( 1 + \alpha_i ) \bigg|\ ] ] as with cross - correlation optimization , the final step is to eliminate absolute values from the cost function . 
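before doing so , the analogous construction already carried out for the cross - correlation terms can be sketched in gurobipy ; the bin dictionaries , the constant per - bin weights and the target values are assumed to have been precomputed , and the key names are illustrative only .

```python
import gurobipy as gp
from gurobipy import GRB

def add_abs_deviation(model, expr, target, name):
    """Continuous phi with phi >= expr - target and phi >= target - expr.

    Minimizing phi therefore drives it to |expr - target|, exactly as argued above.
    """
    phi = model.addVar(lb=0.0, name=name)
    model.addConstr(expr - target <= phi)
    model.addConstr(target - expr <= phi)
    return phi

def cross_correlation_terms(model, y, bins):
    """One deviation variable per angular bin for the cross-correlation part of the cost.

    bins : list of dicts with keys "pairs" (the pairs falling into the bin),
           "weight" (the constant per-bin prefactor) and "target" (one plus the
           cross-correlation target value for the bin).
    """
    phis = []
    for i, b in enumerate(bins):
        expr = b["weight"] * gp.quicksum(y[u, v] for (u, v) in b["pairs"])
        phis.append(add_abs_deviation(model, expr, b["target"], "phi_{}".format(i)))
    return phis

# The combined autocorrelation terms are built identically with (1 - y_uv) in place
# of y_uv; the objective is then the sum of all deviation variables, e.g.
#   model.setObjective(gp.quicksum(cross_phis + auto_psis), GRB.MINIMIZE)
#   model.optimize()
```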
for each bin , we add a continuous variable . the portion of the cost function corresponding to combined autocorrelation optimization takes its final form : where for each , we add the following two constraints :
\[
\begin{split}
\bigg[ b_i \sum_{(u , v ) \in vv_i } ( 1 - y_{uv } ) \bigg] - ( 1 + \alpha_i ) & \le \psi_i \\
( 1 + \alpha_i ) - \bigg[ b_i \sum_{(u , v ) \in vv_i } ( 1 - y_{uv } ) \bigg] & \le \psi_i
\end{split}
\]
we can now express our entire model using combined autocorrelation and the natural estimator . the model consists of the cost function : and all associated constraints . this model can be generalized to the landy - szalay estimator by modifying the cost function slightly ; we provide the details in the appendix in [ sec : ls ] . furthermore , in all of the above formalism , we have used a uniform weighting across all bins for cross - correlation and autocorrelation in the cost function for notational simplicity . we note that one can easily weight each bin individually . to test the effectiveness of the integer linear programming method described in section [ sec : formalism ] , we used the python interface for gurobi , a solver available for both academic and commercial use . we ran all tests on a dell poweredge r815 machine with 512 gb of ram and 4 amd opteron 6272 processors . the machine ran scientific linux 7 , python 2.7.5 , and gurobi 6.0.4 . it is worth noting that none of the following tests required gurobi to use more than 1% of ram at any given time ; therefore , our machine 's specifications far exceed the specifications necessary to reproduce the following results . for the two datasets and , we used two mock catalogs , each 1 square degree in size and each consisting of 10,000 galaxies , generated using a cox point process , as described in heinis et al . the samples were produced using different random seeds , and consequently , they behaved as uncorrelated samples , meaning that the theoretical cross - correlation was 0 . in both samples , we selected the thin redshift cut of , the densest redshift slice in both samples , in order to select samples with strong angular autocorrelation signals . these cuts left slightly over 1000 galaxies per catalog . we then randomly selected 2000 galaxies in total from these two samples , yielding and such that . for our random catalog , we generated a 1 square degree poisson sample consisting of 200,000 galaxies . all bins for cross - correlation and autocorrelation were given equal weights in the cost function according to the formalism derived in section [ sec : formalism ] . furthermore , we chose to use binning schemes with an equal number of galaxy pairs per bin across all bins . with this choice , the poisson noise was equal across all bins , unlike binning schemes set by uniform increments in angular separation . to test the effectiveness of the optimization itself , we fed gurobi ground - truth target values by pre - computing all s and s using the real partition of into and ; we also fixed and according to equation [ eq : ssize ] using the real partition of v . given the ground - truth as target values , we then tested : ( 1 ) the time required for gurobi to complete the optimization , and ( 2 ) the fraction of galaxies assigned correctly . accordingly , these are the main metrics by which we compare different optimization runs in this section .
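the equal - pairs - per - bin binning scheme mentioned above can be constructed directly from the sorted pair separations . a minimal numpy sketch , with the choice of pair list and the edge conventions left as assumptions :

```python
import numpy as np

def equal_pair_bin_edges(separations, pairs_per_bin):
    """Angular bin edges such that every bin contains the same number of galaxy pairs.

    separations   : angular separations of all pairs under consideration
    pairs_per_bin : desired number of pairs in each bin
    """
    s = np.sort(np.asarray(separations, dtype=float))
    n_bins = s.size // pairs_per_bin
    edges = [0.0]
    for k in range(1, n_bins):
        # Place each interior edge halfway between consecutive blocks of sorted pairs.
        edges.append(0.5 * (s[k * pairs_per_bin - 1] + s[k * pairs_per_bin]))
    edges.append(s[n_bins * pairs_per_bin - 1])
    return np.array(edges)
```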
in the limit of sufficiently few pairs per bin , the optimization is computationally feasible and recovers the ground - truth solution exactly . for instance , with a binning scheme of 5 pairs per bin and bins , the optimization recovers the ground - truth partition in seconds . however , in this case of only 5 pairs per bin , the optimization almost certainly couples to the noise of the ground - truth correlation function target values . this motivated an analysis of a binning scheme of 100 pairs per bin , which improves upon the poisson noise per bin while still retaining resolution in the autocorrelation signal at short scales by having sufficiently narrow bins . in order to reduce computational load and make the optimization feasible , we fixed a fraction of the total galaxy population before beginning the optimization by adding extra constraints to our model that if and if for a fraction of galaxies ; including a fixed sample is reasonable because a spectroscopic sample could be used as a fixed population in an application of this method to real data . in order to explore the optimization with these settings , we ran a series of tests using gurobi with of galaxies fixed and a binning scheme of 100 pairs per bin . three distinct regimes emerge . the first regime occurs in the limit of few bins , or short maximum angular scales , in which there are many solutions with 0 cost . because there are so many optimal solutions , gurobi is capable of finding one such optimal solution in a short amount of time ; however , for this same reason , this solution behaves only slightly better than random in assigning the unfixed galaxies correctly . it is worth noting that even in this regime , the amount of time required to complete the optimization increases exponentially as a function of the number of bins . this increase in runtime is expected because increasing the number of bins increases the number of target values and thus decreases the number of solutions with 0 cost . the second regime occurs in the limit of many bins , in which the optimization is communicated a sufficient amount of information about the ground - truth solution through the s and s that it is able to find the ground - truth solution in a time on the order of a minute . consequently , gurobi assigns 100% of the unfixed galaxies correctly . the third regime is the peak in the amount of time required for the optimization to complete , located between the other two regimes . as seen in , the full features of the peak can not be determined due to the exponential growth in runtime . the points correspond to optimization runs that were terminated before completion after seconds , the maximum allowed time in our study ; the fraction of galaxies correctly assigned corresponds to that of the optimal solution found before termination . this peak is the result of a trade - off between the other two regimes : there are relatively few solutions with 0 cost , and they can not be found quickly by gurobi ; however , there are not enough bin target values to constrain the ground - truth solution immediately .
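the fixing of a known subset of galaxies described above amounts to adding equality constraints on the corresponding binary variables ; a minimal sketch , assuming the model m and variables x from the listing above and hypothetical index sets fixed_a and fixed_b ( e.g. , from a spectroscopic sample ) :
\begin{verbatim}
# Pin a fraction of the galaxies to their known subsamples before optimizing.
for v in fixed_a:
    m.addConstr(x[v] == 1, name="fix_a_%d" % v)
for v in fixed_b:
    m.addConstr(x[v] == 0, name="fix_b_%d" % v)

# Runs in the study are terminated after a maximum allowed time; one way to
# express such a cap in gurobi is the TimeLimit parameter (value illustrative).
m.Params.TimeLimit = 3600.0
m.optimize()
\end{verbatim}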
figure [ fig:100leftright ] presents the same analysis for a binning scheme with 200 pairs per bin . the same three characteristic regimes are present ; however , the second and third regimes begin at shorter maximum angular binning scales for the binning scheme of 100 pairs per bin than for the binning scheme of 200 pairs per bin . this is a consequence of the fact that for a given maximum angular binning scale , the binning scheme of 100 pairs per bin is fed twice the number of s and s as the binning scheme of 200 pairs per bin and thus has twice the information . the results presented in figure [ fig:100leftright ] are highly sensitive to the fraction of galaxies fixed before optimization . for example , in the regime of many bins , the optimization requires an order of magnitude more time to complete for 75% of galaxies fixed than for 80% of galaxies fixed ; this optimization does not complete within seconds for 70% of galaxies fixed . reducing the total number of galaxies to , the optimization requires a lower percentage of galaxies to be fixed in order to recover the ground truth solution in the limit of many bins . fixing 65% of galaxies and using 100 edges per bin and 600 bins , the optimization completes in 195 seconds . lowering the percentage fixed to 60% , the optimization completes in 695 seconds . thus , reducing the total number of galaxies still allows for unfixed galaxies to be assigned , confirming that the percentage of fixed galaxies necessary for the optimization to complete within a given time frame is largely dependent on the total number of galaxies in . the results presented in section [ sec : results ] reveal that in the appropriate regimes , the optimization is computationally feasible when ground - truth values are fed in as the target values . in any real application of this method , the ground - truth values of the s and s would only be known approximately , and fiducial values are good approximations in the case of many pairs in the bins . the determination of the appropriate fiducial values is a question of physics rather than of linear programming , and because we have chosen to test only the optimization itself in this paper , so far we have omitted the exploration of the effects of inputting fiducial values on the optimization . we test the optimization's response to inexact target values by perturbing the s toward values taken from a power law fit of the combined autocorrelation function and perturbing the s toward 0 , the expected cross - correlation . we accomplish this by setting the target values through interpolation , varying the interpolation parameter according to the following equations : where is the ground - truth combined autocorrelation value in bin , is the ground - truth cross - correlation value in bin , and is the value of the power - law fit in bin , as seen in figure [ fig : corrfn ] . this power law fit was used only for the purposes of perturbing the ground - truth combined autocorrelation function by small amounts , as seen in figure [ fig : interpolation ] . in figure [ fig : interpolation ] , we present the time required for the optimization to complete when varying the interpolation parameter up to 0.04 for 80% of galaxies fixed , 200 pairs per bin , and 300 bins .
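written out , the perturbation of the targets is a simple per - bin blend ; the sketch below is one natural reading of the interpolation described above , with illustrative array names rather than the symbols used in the text .
\begin{verbatim}
import numpy as np

def blended_targets(alpha_truth, alpha_fit, w_truth, lam):
    # lam = 0 reproduces the ground-truth targets; increasing lam pulls the
    # autocorrelation targets toward the power-law fit and the cross-correlation
    # targets toward 0, the value expected for uncorrelated samples.
    alpha_target = (1.0 - lam) * np.asarray(alpha_truth) + lam * np.asarray(alpha_fit)
    w_target = (1.0 - lam) * np.asarray(w_truth)
    return alpha_target, w_target
\end{verbatim}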
by varying the interpolation parameter only by small amounts , we lessen the impact of an incorrect power law fit . furthermore , these binning settings were chosen because the optimization recovered the exact ground - truth partition in 44 seconds given these settings and ground - truth target values . for all runs in this figure , the optimization exactly recovered the ground - truth solution . the optimization thus still completes and recovers the real partition when given inexact target values for small values of the interpolation parameter ; for larger values , the optimization does not complete within the maximum allowed time . the observed increase in runtime with the interpolation parameter is possibly due to the fact that the model might not have a perfect solution and the cost function could become shallow , which leads to a large number of `` nearly optimal '' solutions . these solutions must be pruned by the solver to find the global minimum conclusively . however , the pruning methodologies of ip solvers often need to spend a significant amount of time to discard the `` nearly optimal '' solutions , even though they have arrived at a stage of the optimization where all the solutions that are being considered have values that are very close to the true optimal value . finally discovering the true optimal value by weeding through the large number of `` nearly optimal '' ones occupies the bulk of the time for the solver in such situations ( see chapter 2 of for a discussion of these pruning strategies for integer linear programming ) . by interpolating a power law fit , we have in effect pulled the target values to a slightly less - noisy correlation function . in order to resolve fully the question of whether the optimization is finding the ground - truth solution by coupling to noise , the interpolation parameter would have to be driven closer to 1 ; of course , in doing so , the target values would become more dependent on the fiducial values , and physically - correct fiducial values would have to be chosen . however , a visual representation of the noise in each bin of the ground - truth combined autocorrelation function is nonetheless instructive and can be seen in figure [ fig : corrfn ] . our ultimate goal is to use this procedure in conjunction with photometric redshift results to sort galaxies into thin redshift slices using only photometric data . the true redshift distribution of photo - z bins will have overlapping tails due to the uncertainties in the estimation . starting with designations from the photo - z catalog , one can apply our procedure to create a sharper boundary between the bins . repeating this procedure will yield much improved redshifts .
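schematically , the iteration envisioned here could look like the loop below ; solve_partition and estimate_targets are hypothetical placeholders standing for the ilp solve of this paper ( applied to the union of two neighboring slices ) and for whatever fiducial target estimator is adopted , so this is only a sketch of the intended workflow .
\begin{verbatim}
def refine_redshift_slices(initial_photoz_bins, solve_partition, estimate_targets,
                           n_iterations=3):
    # initial_photoz_bins: dict mapping a slice label to the set of galaxy ids
    # currently assigned to that photo-z slice (placeholder input).
    bins = dict(initial_photoz_bins)
    for _ in range(n_iterations):
        labels = sorted(bins)
        for lo, hi in zip(labels[:-1], labels[1:]):
            sample = bins[lo] | bins[hi]            # galaxies near the slice boundary
            targets = estimate_targets(lo, hi)      # fiducial auto-/cross-correlation targets
            part_lo, part_hi = solve_partition(sample, targets)
            bins[lo], bins[hi] = part_lo, part_hi   # sharpened boundary between slices
    return bins
\end{verbatim}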
we have presented a novel integer linear programming method that enables a galaxy sample to be partitioned into two subsamples such that the angular two - point cross- and autocorrelations of the subsamples are optimized to pre - determined target values . our approach is the first application of integer linear programming to this problem , which is expected to find other applications as we explore large statistical samples . we tested this optimization method using mock catalogs and gurobi , an optimization solver , and verified that this optimization technique is not only feasible in certain regimes but also provides good solutions . this is due to the formulation of the problem using only linear equations . we explored the applicability of this method and have described how it could be used in the future to estimate the redshifts of individual galaxies using only their celestial coordinates . evidently , much more remains to be explored with the applicability of this method , a significant portion of which relates to fiducial values . for instance , the question of the extent to which the optimization is coupling to noise in the correlation function can only be resolved by analyzing runs with fiducial target values . furthermore , much would be learned from re - generating figure [ fig:100leftright ] using fiducial values as target values once the best method for selecting fiducial values is identified . other variables such as the optimal binning scheme and maximum angular binning separation for the autocorrelation and cross - correlation when using fiducial values would also have to be explored . ideally , for real applications , the number of galaxies per sample could be pushed higher , and the fraction of galaxies fixed could be pushed lower ; given the rapid improvement in linear programming algorithms and the increase in computing power , these limitations may very well resolve themselves with time . additions to the outlined method could be explored in order to improve runtimes and accuracy . some of these are mathematical issues , and others are physical . \(1 ) in a real application of this method , the optimization could be terminated when the relative gap between the upper and lower bounds of the optimization is below a threshold value , instead of forcing this gap to reach zero in order for the optimization to complete and find the true _ mathematically _ optimal solution .
in this case , the solver may report a `` nearly optimal '' solution of the optimization model , as opposed to the true _ mathematical _ optimum of the model .nevertheless , we feel that from a physical standpoint , the `` nearly optimal '' solutions may be more meaningful than the mathematical optimal which could be influenced by noise .such a strategy would also make sense when we use fiducial values instead of values from a simulation and would help to deal with the explosion in running time as discussed in section [ sec : toward_fiducial ] .\(2 ) other than through the s , the power law nature of the autocorrelation function is never explicitly leveraged ; it could potentially be exploited in a greedy algorithm , for example , by fixing pairs of shortest angular separations to the same subsample before beginning optimization .the autocorrelation signal could also potentially be leveraged to a greater extent by using a binning scheme set by uniform increments in angular binning separation , which could provide more bins at the shortest angular scales .\(3 ) it is possible that subsampling the pairs in each bin could decrease runtime without sacrificing accuracy .considering that the true underlying variables are the classes to which the objects belong ( ) , a subset of the pairs ( ) could provide enough constraints at a reduced computational cost .the sampling , however , will probably have to be carefully constructed to optimize performance .the authors would like to thank sbastien heinis for providing mock catalogs and brice mnard for helpful discussions .lance joseph helped with computational resources .b.l . was supported by the 2015 herchel smith - harvard undergraduate science research fellowship .a.b . gratefully acknowledges partial support from nsf grant cmmi1452820 .we formalize the independent autocorrelation method for the natural estimator and provide the formalism for the landy - szalay estimator .the mathematical notation for these are somewhat more complicated but the complexity of the algorithm does not increase .here , we introduce the formalism for autocorrelation optimization with the natural estimator in which target values and can be set independently , as opposed to the combined autocorrelation method described in [ sec : combined ] .the independence of and comes at the expense of model complexity : we must introduce the variables and and associated constraints for each unordered pair , where neither nor can be expressed in terms of .therefore , this method effectively triples the number of variables in our model in comparison to the combined autocorrelation model .we designate the cost function equivalent of the autocorrelation of and using the natural estimator as and , respectively . in order to formalize the autocorrelation of , we introduce the variable . for each unordered pair , we define as follows : in addition , to formalize the autocorrelation of , we introduce the analogous variable . for each unordered pair , we define as follows : just as with s , we must add constraints to relate s to s and s : for the autocorrelation of , we minimize : and for the autocorrelation of , we minimize : thus , takes the final form of : where for each , we add the following two constraints : and takes the final form of : where for each , we add the following two constraints : in order to convert and into their final forms , we must eliminate the absolute values using the method described in sections [ sec : cross ] and [ sec : combined ] . 
in this formalization using independent autocorrelations for and and using the natural estimator , the full model consists of the cost function : and all of the associated constraints .here , we introduce the formalism for optimization with the landy - szalay estimator . using the landy - szalay estimator , the autocorrelation and cross - correlation function estimates of two samples and in bin are given by : in building the ilp model for the natural estimator , we have already modeled the first terms in all three expressions , , and .thus , we only need to translate the second terms in these expressions .fortunately , they can be expressed entirely in terms of constants and s .we begin with the term . in and, this term in the bin is defined as : furthermore , we know that is defined as the number of unordered pairs such that the angular separation between and lies in bin . defining for a given as the number of ordered pairs such that and the angular separation between and lies in bin , we can re - express : we can in turn express this in terms of our variables : by generalizing to summing over all galaxies in as opposed to just galaxies in , we have eliminated any dependence on the partitioning of except in the variables themselves .furthermore , for a given galaxy , is a constant that can be pre - computed before optimization .we now define the weight to absorb all of these constants : thus , in the bin , the term involving becomes : we can define the term involving in the bin analogously : where : thus , referring to , , , we can now express , , and , respectively , in terms of our binary variables : where has been defined in equation [ eq : a ] , and and have been defined in equations [ eq : as ] and [ eq : asbar ] , respectively .the cost function equivalents of , , and , given by , , and , respectively , can now be converted to their final forms by eliminating the absolute values using the method described in sections [ sec : cross ] and [ sec : combined ] .thus , our full model consists of the cost function : and all associated constraints .+ bentez , n. 2000 , apj , 536 , 571 benjamin , j. , van waerbeke , l. , mnard , b. , & kilbinger , m. 2010 , mnras , 408 , 1168 brammer , g. , van dokkum , p. , & coppi , p. 2008 , apj , 686 , 1503 budavri , t. , szalay , a. , connolly , a. , csabai , i. , & dickinson , m. 2000 , aj , 120 , 1588 budavri , t. , csabai , i. , szalay , a. , connolly , a. , & szokoly , g. 2001 , aj , 122 , 1163 budavri , t. 2008 , apj , 695 , 747 conforti , m. , cornujols , g. , zambelli , g. , integer programming , _ graduate texts in mathematics , springer - verlag _ , 2015 .connolly , a. , csabai , i. , szalay , a. , koo , d. , kron , r. , & munn , j. 1995 , aj , 110 , 2655 feldmann , r. , et al .2006 , mnras , 372 , 565 gurobi optimization , inc . , 2015 ,gurobi optimizer reference manual , http://www.gurobi.com heinis , s. , budavri , t. , & szalay , a. 2009 , apj , 705 , 739 kerscher , m. , szapudi , i. , & szalay , a. 2000 , apj , 535 , l13 koo , d. 1985 , aj , 90 , 418 koo , d. 1999 , in asp conf .191 , photometric redshifts and high redshift galaxies , ed .weymann , r. , storrie - lombardi , l. , sawicki , m. , & brunner , r. ( san francisco , ca : asp ) , 3 landy , s. d. , & szalay , a. s. 1993 , apj , 412 , 64 mnard , b. , scranton , r. , schmidt , s. , morrison , c. , jeong , d. , budavri , t. , & rahman , m. 2013 , arxiv e - prints , arxiv:1303.4722 newman , j. 2008 , apj , 684 , 88 rahman , m. , mnard , b. , & scranton , r. 
2015a , arxiv e - prints , arxiv:1508.03046 rahman , m. , mnard , b. , scranton , r. , schmidt , s. , & morrison , c. 2015b , mnras , 447 , 3500 rahman , m. , mendez , a. j. , mnard , b. , et al .2016 , , 460 , 163 schmidt , s. j. , mnard , b. , scranton , r. , morrison , c. , & mcbride , c. k. 2013 , mnras , 431 , 3307 schmidt , s. j. , mnard , b. , scranton , r. , et al .2015 , mnras , 446 , 2696 | we propose a new method of the redshifts of individual extragalactic sources based on . techniques from integer linear programming are utilized to optimize simultaneously for the angular two - point cross- and autocorrelation functions . our novel formalism introduced here not only transforms the otherwise hopelessly expensive , brute - force combinatorial search into a linear system with integer constraints but also is readily implementable in off - the - shelf solvers . we adopt and use python to build the cost function dynamically . the preliminary results on simulated data show for future applications to sky surveys by complementing and enhancing photometric redshift estimators . our approach is the first |
the design of spectrum sharing and access mechanisms for cognitive radio networks ( crns ) has attracted much attention in the last few years .the interest in crns is mainly attributed to their ability to enable efficient spectrum utilization and provide wireless networking solutions in scenarios where the un - licensed spectrum bands are heavily utilized .furthermore , due to their cognitive nature , crns are more spectrum efficient and robust than their non - cognitive counterparts against spectrum unavailability , and have the capability to utilize different frequency bands and adapt their operating parameters based on the surrounding radio frequency ( rf ) environment .specifically , cr is considered as the key technology to effectively address the inefficient spectrum utilization in legacy licensed wireless communication systems by providing opportunistic on - demand access .cr technology enables unlicensed users to opportunistically utilize the idle pr channels ( so - called spectrum holes ) .the spectrum holes represent the pr channels that are currently under - utilized . in order to utilize these spectrum opportunities without interfering with the prns, cr users should perform accurate spectrum sensing , through which idle channel lists are identified .in addition , the cr users should be flexible enough to quickly vacate the operating channel when a pr user reclaims it . in this case, cr users should quickly and seamlessly switch their operating channel(s ) .while large - scale deployment of crns is still to come , extensive research attempts are currently underway to improve the effectiveness of spectrum sharing protocols and improve the spectrum management and operation of such networks .two of the most crucial challenges in deploying crns are the needs to maximize spectrum efficiency and minimize the caused interference to prns . on other words, providing efficient communication and spectrum access protocols that provide high throughput while protecting the performance of licensed prns is the crucial design challenge in crns .the main objective of this paper is to overview and analyze the key schemes and protocols for spectrum access / sharing / managemnent that have been developed for crns in the literature .furthermore , we briefly highlight a number of opportunistic spectrum sharing and management schemes and explain their operation details . as indicated later , it follows logically that cross - layer design , link quality / channel availability tradeoff and interference management are the key design principles for providing efficient spectrum utilization in crns .we start by describing the main crn architectures and operating environment .then , the spectrum sharing problem is stated .the various objectives used to formulate the spectrum sharing problem in crns are summarized .we then point out the several design challenges in designing efficient spectrum sharing and access mechanisms .the tradeoffs in selecting the operating channel(s ) in crns are discussed .a number of spectrum sharing design categories are then surveyed .various complementary approaches , new technologies and optimization methods that have great potential in facilitating the design of efficient crn communication protocols are highlighted and discussed . 
finally , concluding remarks are provided with several open research challenges . a typical crn environment consists of a number of different types of prns and one or several crns . the pr and cr networks geographically co - exist within the same area . in terms of network topology , two basic types of crns are proposed : centralized multi - cell crns and infrastructure - less ad hoc crns . figure [ fig : crnc ] depicts a composite view of a crn operating environment consisting of an ad hoc crn and a multi - cell centralized crn that coexist with two different types of prns . the different prns are licensed to transmit over orthogonal non - overlapping spectrum bands , each with a different licensed bandwidth . pr users of a given prn operate over the same set of licensed channels . cr users can opportunistically utilize the entire pr licensed and unlicensed spectrum . for ad hoc multi - hop crns without a centralized entity , it is necessary to provide distributed spectrum access protocols that allow each cr user to separately access and utilize the available spectrum . furthermore , for centralized multi - cell crns , it is desirable to provide ( 1 ) centralized spectrum allocation protocols that allocate the available channels to the different cr cells , and ( 2 ) centralized channel assignment mechanisms that enable efficient spectrum reuse inside each cell . in general , the channel availability model of each pr channel in a given locality is described by a two - state on / off markov process . this model describes the evolution between idle ( off ) and busy ( on ) states ( i.e. , the on state of a pr channel indicates that the pr channel is busy , while the off state reveals that the pr channel is idle ) . the model is further described by the stochastic distributions of the busy and idle periods , which are generally distributed . the distributions of the idle and busy states depend on the pr activities . we note here that the on and off periods of a given channel are independent random variables . for a given channel , the average idle and busy periods are and , respectively . based on this model , the idle and busy probabilities of a pr channel are respectively given by and . figure [ fig : onoff ] shows a transition diagram of a two - state busy / idle markov model of a given pr channel . we note here that neighboring cr users typically have similar views of spectrum availability , while non - neighboring cr users have different channel availability conditions . the spectrum sharing problem ( including spectrum management and decision ) can be stated as follows : given the output of spectrum sensing , the main goal is to determine which channel(s ) to use , at what powers , and at what rates , such that a given performance metric ( objective function ) is optimized . this is often a joint optimization problem that is very difficult to solve ( often it constitutes an np - hard problem ) . recently , several spectrum assignment strategies have been proposed for crns . these strategies are designed to optimize a number of performance metrics including : * maximizing the cr throughput ( individual users or network - level ) based on shannon capacity or a realistic staircase rate - sinr function ( e.g. , ) . * minimizing the number of assigned channels for each cr transmission ( e.g. , ) . * maximizing the cr load balance over the different pr channels ( e.g. , ) . * minimizing the probability of pr disruption , i.e. , minimizing the pr outage probability ( e.g.
, ) . * minimizing the average holding time of selected channels ( minimizing the pr disruption ) ( e.g. , ) . * minimizing the frequency of channel switching due to pr appearance by selecting the channel with maximum residual idle time , i.e. , minimizing the cr disruption in terms of forced - termination rate ( e.g. , ) . * maximizing the cr probability of success ( e.g. , ) . * minimizing the spectrum switching delay for cr users ( e.g. , ) . * minimizing the expected cr waiting time to access a pr channel ( e.g. , ) . * minimizing crn overhead and providing cr qos ( e.g. , ) . * minimizing the overall energy consumption ( e.g. , ) . * achieving fair spectrum allocation and throughput distribution in the crn ( e.g. , ) . * maintaining crn connectivity with predefined qos requirements ( e.g. , ) .we note here that the spectrum sharing problem for any of the aforementioned objectives is , in general , np - hard .therefore , several heuristics algorithms and approximations have been proposed to provide suboptimal solutions for the problem in polynomial - time .these heuristics and approximations can be classified based on their adopted optimization method as : graph theory - based algorithms ( e.g. , ) , game theory - based algorithms ( e.g. , ) , genetic - based algorithms ( e.g. , ) , linear programming relaxation - based algorithms ( e.g. , ) , fuzzy logic - based algorithms ( e.g. , ) , dynamic programming - based algorithms ( e.g. , ) , sequential - fixing - based algorithms ( e.g. , ) .the coexistence problem is one of the most limiting factors in achieving efficient cr communications . in a crn environment, there are three kinds of harmful interference that should be considered : pr - to - cr / cr - to - pr interference ( the so - called pr coexistence ) and cr - to - cr interference ( the so - called self - coexistence ) . while several mechanisms have been proposed to effectively deal with the pr - to - cr interference problem based on cooperative ( e.g., ) or noncooperative ( e.g., ) spectrum sensing , the cr - to - cr and cr - to - pr interference problems are still challenging issues . to address the cr - to - cr interference problem in ad hoc crns ,several channel allocation and self - coexistence management mechanisms have been proposed based on either ( 1 ) exclusive channel assignment or ( 2 ) joint channel assignment and power control . on the contrary, the cr - to - cr interference problem has been addressed in multi - cell centralized crns based on either fixed channel allocation or adaptive traffic - aware / spectrum - aware channel allocation .it has been shown that the cr - to - pr interference is the most crucial interference in crn environment , because it has a direct effect on the performance of prns .hence , the transmission power of cr users over the pr channels should be adaptively computed such that the performance of the prns is protected .based on the outcomes of spectrum sensing , two different power control strategies can be identified : binary and multi - level transmission power strategies . 
according to the binary - level strategy (the most widely used power control strategy in crns ) , cr users can only transmit over idle channels with no pr activities .specifically , for a given pr channel , a cr transmits power if the channel is busy , and uses the maximum possible power if the pr channel is idle .while this strategy ensures collision - free spectrum sharing between the cr and pr users , it requires perfect spectrum sensing .worse yet , the binary - level strategy can lead to non - optimal spectrum utilization . on the other hand , using a multi - level adaptive frequency - dependent transmission power strategy allows the cr and pr users to simultaneously share the available spectrum in the same locality , which can significantly improve spectrum utilization . by allowing cr users to utilize both idle and partially - occupied pr channels, much better spectrum utilization can be achieved .the multi - level power strategy can also be made time - dependent to capture the dynamics of pr activities . under this strategy ,controlling the cr - to - pr interference is nontrivial .in addition , computing the appropriate and adequate multi - level power strategy is still a challenging issue , which has been studied under some simplified assumptions .specifically , the authors in proposed an adaptive multi - level frequency- and locality - dependent cr transmission power strategy that provides a soft guarantee on prns performance . this adaptive strategy is dynamically determined according to the pr traffic activities and interference margins .in this section , we review several well - known distributed coordination mechanisms designed for crns .we note that control channel designs for crns can be loosely classified into six different categories : * dedicated pre - specified out - of - band common control channel ( ccc ) design . *non - dedicated pre - specified in - band ccc design . * hybrid out - of - band and in - band ccc design .* hopping - based control channel design .* spread - spectrum - based control channel design . * cluster - based local ccc design .despite the fact that using a dedicated out - of - band ccc is straightforward , it contradicts the opportunistic behavior of crns , may result in a single - point - of - failure ( spof ) and performance bottleneck due to ccc saturation under high cr traffic loads .similarly , using a pre - specified non - dedicated in - band ccc is not a practical solution due to spectrum heterogeneity and , if exists , such solution can result in a spof , become a performance bottleneck , and introduce security challenges .another approach that can effectively deal with the ccc saturation issue ( bottleneck problem ) is to use a hybrid out - of - band and in - band ccc ( simultaneous control communications over in - band pr channels and dedicated out - of - band cccs ) .this approach exploits the strengths of out - of - band and in - band signaling and , hence , can significantly enhance the performance of multi - hop crns . using a hopping - based control channelcan address the spof , bottleneck and security issues .however , in such type of solutions , the response to pr appearance is challenging as cr users can not use a pr channel once reclaimed by pr users . 
in addition , this type of solution is generally spectrum unaware .another key design issue in such solutions is the communication delay that heavily depends on the time to rendezvous .using cluster - based coordination solutions , where neighboring cr users are dynamically grouped into clusters and establish local cccs , can provide reliable distributed coordination in crns .however , adopting this type of solutions in a multi - hop crn is limited by several challenges , such as providing reliable inter - cluster communication ( i.e. , different cluster may consider different cccs ) , maintaining connectivity , broadcasting control information , identifying the best / optimal cluster size , and maintaining time - synchronization .finally , using spread - spectrum - based distributed coordination is a promising solution to most of the aforementioned design challenges , but the practicality and design issues of such solution need to be further investigated . according to this solution , the control information is spread over a huge pr bandwidth with a very low transmission power level ( below the noise level ) .consequently , with a proper design , an efficient ccc design can be implemented using spread spectrum with minor effect on prns performance . in conclusion, various distributed coordination mechanisms have been developed to provide reliable communications for crns , none of which are totally satisfactory . hence , designing efficient distributed coordination schemes in crns should be based on novel coordination mechanisms along with effective transmission technologies that enable effective , robust and efficient control message exchanges .the spectrum ( channel ) assignment problem in crns has been extensively studied in the literature .existing channel assignment / selection solutions can loosely be classified into three categories : best link - quality schemes , larger availability - period schemes , and joint link - quality and channel - availability - aware schemes .it has been shown ( e.g., ) that using the best link - quality schemes in crns , where the idle channel(s ) with the highest transmission rate(s ) are selected , can only provide good performance under relatively static pr activities with average pr channel idle durations that are much larger than the needed transmission times for cr users . under highly dynamic pr activities ,this class of schemes can result in increasing the cr termination rate , leading to a reduction in crn performance as a cr user may transmit over a good - quality pr channel with relatively short availability time ( short channel - idle period ) . on the other hand , employingthe larger availability - period schemes in crns ( e.g. 
, ) can result in increasing the cr forced - termination rate as an idle pr channel of very poor link - quality ( low transmission rate ) may be chosen , resulting in a significant reduction in crn performance .we note here that the interaction between the crn and prns is fundamental for conducting channel assignment in crns .the above discussion presents sufficient motivation to jointly consider the link - quality and average idle durations of pr channels when assigning operating channels to cr users .however , several open questions in this domain still need to be addressed ; possibly the most challenging one is how to jointly consider the link - quality and average idle durations into one metric to perform channel assignment .other important questions are : how can a cr user estimate the distribution of the idle periods of the different pr channels ?what are the implications of the interaction between the crn and the prns? how can a cr user determine the link - quality conditions over the various ( large number ) pr channels ? some of these questions have been addressed in by introducing the cr packet success probability metric .this metric is derived based on stochastic models of the time - varying pr behaviors .the probability of success over a given channel is a function of both the link - quality condition and the average - idle period of that pr channel .it has been proven that it is necessary to jointly consider the link - quality conditions and availability times of available pr channels to improve the overall network performance .there are several attempts have been made to design spectrum sharing protocols with the objective of improving the overall spectrum utilization while protecting the performance of licensed prns .existing spectrum sharing / access protocols and schemes for crns can loosely be categorized into four main classes based on : the number of radio transceivers per cr user ( single - transceiver , dual - transceiver , or multiple transceiver ) , their reaction to pr behavior ( reactive , proactive , or interference - based threshold ) , their spectrum allocation behavior ( exclusive or non - exclusive spectrum occupancy model ) , and the guardband considerations ( guardband - aware or guardband - unaware ) .spectrum sharing protocols and schemes for crns can also be categorized based on the number of radio transceivers per a cr user ( i.e. , single transceiver , dual transceivers , and multiple transceivers ) . using multiple ( or dual ) transceiversgreatly simplifies the task of spectrum access design and significantly improve system performance .this is because a cr user can simultaneously utilize multiple channels ( the potential benefits of utilizing multi - channel parallel transmission in crns were demonstrated in ) .in addition , the spectrum access issues such as hidden / exposed terminals , transmitter deafness and connectivity can be easily overcome as one of the transceivers can be switched to the assigned control channel ( i.e. , cr users can always receive control packet over the ccc even when they are operating over the data channels ) .however , the achieved performance gain of using multiple transceivers ( multi - channel parallel transmission ) comes at the expense of extra hardware .worse yet , the optimal joint channel assignment and power control problem in multi - transceiver crns is , in general , np - hard . 
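returning briefly to the joint link - quality / channel - availability view motivated above , the following toy sketch scores each idle pr channel by the product of ( i ) the probability that the channel remains idle long enough to finish a packet , assuming exponentially distributed idle periods as in the on / off channel - availability model described earlier , and ( ii ) a simple link - quality term ; this is only an illustrative stand - in and not the exact probability - of - success metric of the cited works .
\begin{verbatim}
import math

def channel_score(mean_idle_time, rate_bps, packet_bits, ber=1e-5):
    # Toy joint metric: P(idle period outlasts the transmission) * P(packet received).
    # Exponential idle periods and independent bit errors are simplifying assumptions
    # made only for illustration.
    tx_time = packet_bits / float(rate_bps)            # airtime needed for the packet
    p_idle_long_enough = math.exp(-tx_time / mean_idle_time)
    p_packet_ok = (1.0 - ber) ** packet_bits           # link-quality term
    return p_idle_long_enough * p_packet_ok

# a high-rate channel with short idle periods vs. a slower channel with long idle periods
channels = {
    "ch1": dict(mean_idle_time=0.05, rate_bps=6e6, packet_bits=8000),
    "ch2": dict(mean_idle_time=2.00, rate_bps=1e6, packet_bits=8000),
}
best = max(channels, key=lambda name: channel_score(**channels[name]))
\end{verbatim}
a selection rule of this kind favors neither the best - quality nor the longest - idle channel in isolation , which is precisely the tradeoff discussed above .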
on the other hand , it has been shown that the design of efficient channel assignment schemes for single - transceiver single - channel low - cost crns is simpler than that of the multi - transceiver counterpart . while single - transceiver designs can greatly simplify the task of finding the optimal channel assignment , the aforementioned channel access issues are not trivial and the performance is limited to the capacity of the selected channel .spectrum sharing schemes in the crns can also be classified based on their reaction to the appearance of pr users into three main groups : ( 1 ) proactive ( e.g. , ) , ( 2 ) reactive ( e.g. , ) , and ( 3 ) interference threshold - based ( e.g. , ) . in reactive schemes ,the active cr users switch channels after the pr appearance .on the other hand , in proactive schemes , the cr users predict the pr appearance and switch channels accordingly .the threshold - based schemes allow the cr users to share the spectrum ( both idle and partially - occupied pr channels ) with pr users as long as the interference introduced at the pr users is within acceptable values .existing threshold - based schemes attempt at reducing the impacts of the un - controllable frequency - dependent pr - to - cr interference on crn performance through proper power control based on either ( 1 ) the instantaneous sensed interference , ( 2 ) the average measured pr interference , or ( 3 ) using stochastic pr interference models .the spectrum sharing model represents the type of interference model used to solve the channel and power assignment problem .there are two different spectrum sharing models : protocol ( interference avoidance ) and physical ( interference ) models .the former employs an exclusive channel occupancy strategy , which eliminates the cr - to - cr interference and simplifies the management of the cr - to - pr interference .however , it does not support concurrent cr transmissions over the same channel , which may reduce the spectrum efficiency . on the other hand ,the overlay physical model allows for multiple concurrent interference - limited cr transmissions to simultaneously proceed over the same channel in the same locality , which improves spectrum efficiency .however , the power control issue ( cr - to - cr and cr - to - pr interference management ) under this model is not trivial .worse yet , using this model requires a distributed iterative power adjustment for individual cr users , which was shown that it results in slow protocol convergence .most of existing spectrum sharing protocols for crns were designed assuming orthogonal channels , where the adjacent channel interference ( aci ) is ignored ( e.g. , ) .however , this requires using ideal sharp transmit and receive filters , which is practically not feasible . 
in practice ,frequency separation ( guard bands ) between adjacent channels is needed to mitigate the effects of aci and protect the performance of ongoing pr and cr users operating over adjacent channels .it has been shown that introducing guard bands can significantly impact the spectrum efficiency , and hence it is very important to account for the guard - band constraints when designing spectrum sharing protocols for crns .few number of crn spectrum access and sharing protocols have been designed while accounting for the guard band issue .guard band - aware strategies enable effective and safe spectrum sharing , have a great potential to enhance the spectral efficiency , and protect the receptions of the ongoing cr and pr transmissions over adjacent channels .the need for guard band - aware spectrum sharing mechanisms and protocols was discussed in .specifically , the authors , in , have investigated the aci problem and proposed guard - band - aware spectrum access / sharing protocols for crns .the main objective of their proposed mechanism is to minimize the total number of reserved guard - band channels such that the overall spectrum utilization is maximized . in , the authors showed that selecting the operating channels on per block ( group of adjacent channels ) basis instead of per channel basis ( unlike the work in ) provides better spectrum efficiency .the work in attempts at selecting channels such that at most one guard band is introduced for each new cr transmission .in , the authors proposed two guard - band spectrum sharing mechanisms for crns .the first mechanism is a static single - stage channel assignment that is suitable for distributed multi - hop crns .the second one is an adaptive two - stage channel assignment that is suitable for centralized crns .the main objective of the proposed mechanisms is to maximize spectrum efficiency while providing soft guarantees on cr performance in terms of a pre - specified rate demand .in this section , we discuss and explain several methods and optimizations that interact with spectrum sharing protocols to further improve spectrum utilization in crns . the resource virtualization concept has been extensively discussed in the literature , which refers to the process of creating a number of logical resources based on the set of all available physical resources .this concept allows the users to utilize the logical resources in the same way they are using the physical resources .this leads to a better utilization of the physical resources as virtualization allows more users to share the available physical resources .in addition , virtualization introduces an additional layer of security as user s application can not directly control the physical resources .the concept of virtualization was originally used in computer systems to better utilize the available physical resources ( e.g. , processors , memory , storage units , and network interfaces ) .these resources are virtualized into separate sets of logical resources , each set of these virtual resources can be assigned to different users .using system virtualization can achieve : ( 1 ) users isolation , ( 2 ) customized services , and ( 3 ) improved resource efficiency .virtualization was also been introduced in wired networks by introducing the framework of virtual private networks ( vpns ) . 
recently, several attempts have been made to implement the virtualization concept in wireless crns .we note here that employing virtualization in crns is daunted by several challenges including : spectrum sharing , limited infrastructure , different geographical regions , self co - existence , pr co - existence , dynamic spectrum availability , spectrum heterogeneity , and users mobility . in , a single cell crn virtualization framework was introduced .according to this framework , a network with one bs and physical radio nodes ( pns ) with varying sets of resources are considered .the resources include the number of radio interfaces at each pn , the set of orthogonal idle channels at each pn , and the employed coding schemes .each pn hosts a set of virtual nodes ( vns ) .the vns located in the different pns can communicate with each other . to facilitate such communications , vns request resources from their hosting pns .simulation results have demonstrated the effectiveness of using network visrtualization in improving network performance . in ,the authors have proposed a virtualization framework for multi - channel multi - cell crns . in this work ,a virtualization based semi - decentralized resource allocation mechanism for crns using the concept of multilayer hypervisors was proposed .the main objective of this work is to reduce the overall cr control overhead by minimizing the cr users reliance on the base - station in assigning spectrum resources .simulation results have indicated significant improvement in crn performance ( in terms of control overhead , spectrum utilization and blocking rate ) is achieved by the virtualized framework compared to non - virtualized resource allocation schemes .the problem of computing the optimal spectrum access strategy for cr users has been well investigated in , but for cr users that are equipped with half - duplex ( hd ) transceivers .it has been shown that using hd transceivers can significantly reduce the achieved network performance .motivated by the recent advances in full - duplex ( fd ) communications and self - interference suppression ( sis ) techniques , several attempts have been made to exploit the fd capabilities and sis techniques in designing communication protocols for crns .the main objective of these protocols is to improve the overall spectrum efficiency by allowing simultaneous transmission and reception ( over the same channel or over different channels ) at each cr user .these protocols , however , require additional hardware support ( i.e. , duplexers ) .the practical aspects of using fd radios in crns need to be further investigated .the design of effective channel / power / rate assignment schemes for fd - based crns is still an open problem .beamforming techniques are another optimization that can enable efficient spectrum sharing . 
according to beamforming ,the transmit and receive beamforming coefficients are adaptively computed by each cr user such that the achieved cr throughput is maximized while minimizing the introduced interference at the cr and pr users .furthermore , the performance gain achieved by using beamforming in crns can be significantly improved by allowing for adaptive adjustment of the allocated powers to the transmit beamforming weights .the operation details of such an approach need to be further explored .the use of variable channel widths through channel aggregation and bonding is another promising approach in improving spectral efficiency .however , this approach has not given enough attention .based on its demonstrated excellent performance ( compared to using fixed - bandwidth channels ) , variable channel widths has been chosen as an effective spectrum allocation mechanism in cellular mobile communication systems , including the recently deployed 4 g wireless systems .thus , it is very important to use variable - bandwidth channels in crns .more specifically , in crns , assigning variable bandwidth to different cr users can be achieved through channel bonding and aggregation .this has a great potential in improving spectrum efficiency .the use of variable bandwidth transmission in crns is not straightforward due to the dynamic time - variant behavior of pr activities and the hardware nature of most of existing cr devices , which make it very hard to control the channel bandwidth .so far , most of cr systems have been designed with the assumption that each cr user is equipped with single or several radio transceivers .using hardware radio transceivers can limit the number of possibly assigned channels to cr users and can not fully support variable - width channel assignment .one possible approach to enable variable - width spectrum assignment and increase network throughput is to employ software defined radios ( sdrs ) .the use of the sdrs enables the cr users to bond and/or aggregate any number of channels , thus enabling variable spectrum - width cr transmissions .thus , sdrs support more efficient spectrum utilization , which significantly improves the overall crn performance and provides qos guarantees to cr users .cross - layer design is essential for efficient operation of crns .spectrum sharing protocols for crns should select the next - hop and the operating pr frequency channel(s ) using a cross - layer design that incorporates the network , mac , and physical layers .a cross - layer routing metric called the maximum probability of success ( mpos ) was proposed in .the mpos incorporates the link quality conditions and the average availability periods of pr users to improve the crn performance in terms of the network throughput .the metric assigns operating channels to the candidate routes so that a route with the maximum probability of success and minimum cr forced termination - rate is selected .the main drawback of the mpos approach is its requirement of known pr channel availability distributions ( the probability density function of idle periods of the pr channels ) .based on the spectrum availability conditions and to enable efficient crn operation , a cr user may need to utilize multiple adjacent ( contiguous ) idle pr channels ( the so - called spectrum bonding ) or non - adjacent ( non - contiguous ) idle pr channels ( the so - called spectrum aggregation ) .spectrum bonding and aggregation can be realized using either the traditional frequency division multiplexing ( fdm ) 
or the discontinuous - orthogonal frequency division multiplexing ( d - ofdm ) technology .the former technology requires several half - duplex transceivers and tunable filters at each cr user , where each assigned channel will use one of the available transceivers .while this approach is simple , it requires a large number of transceivers and does not provide the enough flexibility needed to implement channel aggregation and bonding at a large - scale .the d - ofdm is a novel wireless radio technology that allows a cr transmission to simultaneously take place over several ( adjacent or non - adjacent ) channels using one half - duplex ofdm transceiver . according to d - ofdm ,each channel includes a distinct equal - size group of adjacent ofdm sub - carriers . according to d - ofdm, spectrum bonding and aggregation with any number of channels can be realized through power control , in which the sub - carriers of a non - assigned channel will be assigned power and all the sub - carries of a selected channel will be assigned controlled levels of powers .we note here that the problem of assigning different powers to different ofdm symbols within the same channel is still an open issue . multiple input multiple output ( mimo )is considered as a key technology to increase the achieved wireless performance .specifically , mimo can be used to improve spectrum efficiency , throughput performance , wireless capacity , network connectivity , and energy efficiency .the majority of previously proposed works on mimo - based crns ( e.g. , ) have focused on the physical layer and addressed a few of the challenging issues at the upper layers , but certainly more effort is still required to investigate the achieved capacity of mimo - based crns , the design of optimal channel / power / rate assignment for such crns , the interoperability with the non - mimo crns , and many other challenging issues .one of the main challenges in the design of crns communication protocols is the time - varying nature of the wireless channels due to the pr activities and the multi - path fading .cooperative communication is a promising approach that can deal with the time - varying nature of the wireless channels , and hence improve the crn performance .cooperative communication can create a virtual mimo system by allowing cr users to assist each other in data delivery ( by relaying data packets to the receiver ) .hence , the received data packets at the cr destination traverse several independent paths achieving diversity gains .cooperative communication can also extend the coverage area .the benefits of employing cooperative communication , however , are achieved at the cost of an increase in power consumption , an increase in computation resources and an increase in system complexity .it has been shown that cooperation may potentially lead to significant long - term resource savings for the whole crn .an important challenge in this domain is how to design effective cooperative mac protocols that combine the cooperative communication with cr multiple - channel capability such that the overall network performance is improved .the cr relay selection is another challenging problem that needs to be further investigated .therefore , new cooperative crn mac protocols and relay selection strategies are needed to effectively utilize the available resources and maximize network performance .network coding in crns is another interesting approach that has not yet explored in crns . 
based onits verified excellent performance in wireless networks , it is natural to consider it in the design of cooperative - based crns .the packet relaying strategies in cooperative communication are generally implemented on a per packet basis , where a store - and - forward ( sf ) technique is used ( the received packets at the cr relays are received , stored and retransmitted towards the receiver ) .while this type of relaying mechanisms is simple , it has been shown that it provides a sub - optimal performance in terms of the overall achieved crn throughput ( especially , in multi - cast scenarios ) .instead of using sf , network coding can be used to maximize the crn performance . with network coding , the intermediate relay cr users can combine the incoming packets using mathematical operations ( additions and subtractions over finite fields ) to generate the output packets .one drawback in using network coding is that the computational complexity increases as the finite field size increases .the higher the field size , the better is the network performance .however , the tradeoff should be further investigated and more efforts are required to identify and study the benefits and drawbacks of increasing the field - size in crns .in addition , the performance achieved through network coding can be further enhanced in crns by dynamically adapting the total number of coded packets that need to be sent by the source cr user .such adaptation adjustment is yet to be explored , which should be based on the pr activities , link loss rates , link correlations , and nodes reachability .cr technology has a great potential to enhance the overall spectrum efficiency . in this paper, we first highlighted the main existing crn architectures .then , we described the unique characteristics of their operating rf environment that need to be accounted for in designing efficient communication protocols and spectrum assignment mechanisms for these networks .we then surveyed several spectrum sharing approaches for crns .we showed that these approaches differ in their design objectives .ideally , one would like to design a spectrum sharing solution that maximizes spectrum efficiency while causing no harmful interference to pr users .we showed that interference management ( including self - coexistence and the pr coexistence ) and distributed coordination are the main crucial issues in designing efficient spectrum sharing mechanisms .the key idea in the design of effective spectrum sharing and assignment protocols for crns is to jointly consider the pr activities and cr link - quality conditions .the reaction to pr appearance is another important issue in designing spectrum sharing schemes for crns. 
currently , most of spectrum sharing schemes are either reactive or proactive schemes .interference threshold - based schemes are very promising , where more research should be conducted to explore their advantages and investigate their complexities .another crucial and challenging problem is the incorporation of the guard - band constraints in the design of spectrum sharing schemes for crns .a huge amount of interference is leaked into the adjacent channels when guard bands are not used .this can significantly reduce spectrum efficiency and cause harmful interference to pr users .the effect of introducing guard - bands on the spectrum sharing design has not been well explored .many interesting open design issues still to be addressed .variable - width spectrum sharing approach is very promising , but their design assumptions and feasibility should be carefully investigated .resource virtualization is another important concept that can significantly improve the overall spectrum utilization .beamforming and mimo technology have recently been proposed as a means of maximizing spectrum efficiency .the use of beamforming in crns with mimo capability can achieve significant improvement in spectrum efficiency .however , the spectrum sharing problem becomes more challenging due to the resurfacing of several design issues such as the determination of the beamforming weights , the joint channel assignment and power control , etc ., which need to be further addressed .research should focus also on the cooperative cr communication and cross - layer concepts .using fd radios versus using hd radios is another interesting issue .moreover , utilizing network coding is very promising in improving the crn s performance .finally , we showed that channel bonding and aggregation can be realized through the use of d - ofdm technology .this technology allows cr user to simultaneous transmit or receive over multiple channels using a single radio transceiver . 2 federal communications commission , spectrum policy task force , " et docket no .02 - 135 , nov . , 2002 .h. bany salameh and m. krunz , channel access protocols for multihop opportunistic networks : challenges and recent developments , " ieee network , vol .23 , no . 4 , 2009 . h. bany salameh , throughput - oriented channel assignment for opportunistic spectrum access networks , " mathematical and computer modelling , vol . 53 , iss .11 - 121 , pp .2108 - 2118 , june 2011 .h. salameh , rate - maximization channel assignment scheme for cognitive radio networks " , in proceedings of the ieee globecom conference , florida , 2010 . l. tan and l. le , channel assignment with access contention resolution for cognitive radio networks , " ieee transactions on vehicular technology , vol .61 , no . 6 , pp . 2808 - 2823 , 2012 .k . tseng , w .- h .chung , h. chen , and c .- s .wu , distributed energy - efficient cross - layer design for cognitive radio networks , " in proceedings of the 23rd international symposium on personal indoor and mobile radio communications ( pimrc12 ) , sydney , australia , 2012 , pp .161 - 166 .e. anifantis , v. karyotis , and s. papavassiliou , a markov random field framework for channel assignment in cognitive radio networks , " in proceedings of the ieee international conference on pervasive computing and communications workshops ( percom workshops12 ) , lugano , switzerland , 2012 , pp .770 - 775 .h. bany salameh , m. krunz , and d. 
manzi , an efficient guard - band - aware multi - channel spectrum sharing mechanism for dynamic access networks , " in proceedings of the ieee globecom conference , dec .m. ahmadi , y. zhuang , and j. pan , distributed robust channel assignment for multi - radio cognitive radio networks , " in proceedings of the 76th ieee vehicular technology conference ( vtc12-fall ) , quebec city , canada , 2012 , pp . 1 - 5 . h. bany salameh , m. krunz , and d. manzi , spectrum bonding and aggregation with guard - band awareness in cognitive radio networks , " ieee transactions on mobile computing , 2014 . h. bany salameh , m. krunz , and o. younis , dynamic spectrum access protocol without power mask constraints " , in proceedings of the ieee infocom conference , brazil , april 2009 .q. zhao , l. tong , a. swami , and y. chen , decentralized cognitive mac for opportunistic spectrum access in ad hoc networks : a pomdp framewrok , " ieee journal on selected areas in communications , vol .589 - 600 , 2007 .q. xiao , y. li , m. zhao , s. zhou , and j. wang , opportunistic channel selection approach under collision probability constraint in cognitive radio systems , " computer communications , vol .18 , pp . 1914 - 1922 , 2009 .q. zhao , s. geirhofer , l. tong , and b. m. sadler , opportunistic spectrum access via periodic channel sensing , " ieee transactions on signal processing , vol .785 - 796 , 2008 .h. bany salameh , m. krunz , and o. younis , mac protocol for opportunistic cognitive radio networks with soft guarantees , " ieee transactions on mobile computing , vol .10 , 2009 .wang , c .- w .wang , and f. adachi , load - balancing spectrum decision for cognitive radio networks , " ieee journal on selected areas in communications , vol .4 , pp . 757 - 769 , april 2011 .p. a. k. acharya , s. singh , and h. zheng , reliable open spectrum communications through proactive spectrum access , " in proceedings of the tapas conference , 2006 .m. bkassiny and s. k. jayaweera , optimal channel and power allocation for secondary users in cooperative cognitive radio networks , " in proceedings of the 2nd international conference on mobile lightweight wireless systems ( mobilight ) , 2010 .h. wang , j. ren , and t. li , resource allocation with load balancing for cogntive radio networks , " in proceedings of the ieee globecom conference , 2010 .u . yoon and e. ekici , voluntary spectrum handoff : a novel approach to spectrum management in crns , " in proceedings of the ieee icc conference , 2010 .y. song and j. xie , common hopping based proactive spectrum handoff in cogntive radio ad hoc networks , " in proceedings of the globecom conference , 2010 .o. badarneh and h. bany salameh , quality - aware routing in cognitive radio networks under dynamically varying spectrum opportunities , " computers and electrical engineering journal , vol .38 , iss . 6 , pp . 1731 - 1744 , november 2012 .h. bany salameh , resource management with probabilistic performance guarantees in opportunistic networks , " international journal of electronics and communications aeu , vol .67 , iss . 7 , pp .632 - 636 , 2013 .h. bany salameh and o. badarneh , opportunistic medium access control for maximizing packet delivery rate in dynamic access networks , " journal of network and computer applications , vol .523 - 532 , 2013 .lee , s. member , and i. f. akyildiz , a spectrum decision framework for cognitive radio networks , " ieee transaction on mobile computing , vol .161 - 174 , 2011 .i. malanchini , m. cesana , and n. 
gatti , on spectrum selection games in cognitive radio networks , " in proceedings of the ieee globecom conference , 2009 , pp . 1 - 7 .q. d. xue feng , s. guangxi , and l. yanchun , smart channel swiching in cognitive radio networks , " in proceedings of the cisp conference , 2009. a. c .- c .hsu , d. s .- l .wei , and c .- c .j. kua , a cognitive mac protocol using statistical channel allocation for wireless ad - hoc networks , " in proceedings of the ieee wcnc conference , 2007 .hsu , and k .-feng , a pomda - based spectrum handoff protocol for partially observable cognitive radio networks , " in proceedings of the ieee wcnc conference , 2009 .p. zhu , j. li , and x. wang , a new channel parameter for cognitive radio , " in proceedings of the crowncom conference , 2007 .l. yu , c. liu , and w. hu , spectrum allocation algorithm in cognitive ad - hoc networks with high energy efficiency , " in proceedings of the international conference on green circuits and systems ( icgcs ) , 2010 .s. byun , i. balasingham , and x. liang , dynamic spectrum allocation in wireless cognitive sensor networks : improving fairness and energy efficiency , " in proceedings of the 68th ieee vehicular technology conference ( vtc 2008-fall ) , 2008 , pp . 1 - 5 .s. gao , l. qian , and d. vaman , distributed energy efficient spectrum access in wireless cognitive radio sensor networks , " in proceedings of the ieee wcnc conference , 2008 , pp .1442 - 1447 .x. li , d. wang , and j. mcnair , residual energy aware channel assignment in cognitive radio sensor networks , " in proceedings of the ieee wcnc conference , 2011 , pp . 398 - 403. t. zhang , b. wang , and z. wu , spectrum assignment in infrastructure based cognitive radio networks , " in proceedings of the ieee national aerospace and electronics conference ( naecon ) , 2009 , pp .l. le and e. hossain , resource allocation for spectrum underlay in cognitive radio networks , " ieee transactions on wireless commuications , vol . 7 , no . 12 , pp . 5306 - 5315 , 2008 .g. yildirim , b. canberk , and s. oktug , enhancing the performance of multiple ieee 802.11 network environment by employing a cognitive dynamic fair channel assignment , " in proceedings of the 9th ifip annual mediterranean ad hoc networking workshop ( med - hoc - net ) , 2010 , pp . 1 - 6. y. ge , j. sun , s. shao , l. yang , and h. zhu , an improved spectrum allocation algorithm based on proportional fairness in cognitive radio networks , " in proceedings of the 12th ieee international conference on communication technology ( icct ) , 2010 , pp .742 - 745 .y. li , z. wang , b. cao , and w. huang , impact of spectrum allocation on connectivity of cognitive radio ad - hoc networks , " in proceedings of the ieee globecom conference , pp . 1 - 5 , dec . 2011. m. ahmadi and j. pan , cognitive wireless mesh networks : a connectivity preserving and interference minimizing channel assignment scheme , " in proceedings of the ieee pacific rim conference on communications , computers and signal processing , pp .458 - 463 , aug . 2011 .h. m. almasaeid and a. e. kamal , receiver - based channel allocation for wireless cognitive radio mesh networks , " in proceedings of the ieee symposium on new frontiers in dynamic spectrum ( dyspan ) , pp . 1 - 10 , apr .2010 . c. zhao , b. shen , t. cui , and k. 
kwak , graph - theoretic cooperative spectrum allocation in distributed cognitive networks using bipartite matching , " in proceedings of the ieee 3rd international conference on communication software and networks ( iccsn ) , 2011 , pp .223 - 227 .f. ye , r. yang , and y. li , genetic algorithm based spectrum assignment model in cognitive radio networks , " in proceedings of the 2nd ieee information engineering and computer science international conference ( iciecs10 ) , china , 2010 , pp . 1 - 4. f. hou and j. huang , dynamic channel selection in cognitive radio network with channel heterogeneity , " in proceedings of the ieee globecom conference , florida , 2010 .p. kaur , m. uddin , and a. khosla , adaptive bandwidth allocation scheme for cognitive radios , " international j. advancements in computing technology , vol .35 - 41 , 2010 . l. gao , s. cui , power and rate control for cognitive radios : a dynamic programming approach , " in proceedings of the 3rd international conference on cognitive radio oriented wireless networks and communications ( crowncom 2008 ) , pp. 1 - 7 , 2008 .e. peh , y - c liang , y. guan , y. zeng , optimization of cooperative sensing in cognitive radio networks : a sensing - throughput trade off view " , ieee transactions on vehicular technology , vol .5294 - 5299 , 2009 .w. wang , j. cai , b. kasiri , and a.s .alfa , channel assignment of cooperative spectrum sensing in multi - channel cognitive radio networks , " in proceedings of the ieee international conference on communications ( icc ) , 2011 , pp . 1 - 5 .y. zeng , y - c liang , e. peh , a. hoang , cooperative covariance and eigenvalue based detections for robust sensing " , in proceedings of the ieee globecom conference , december 2009 , hawaii , usa .j. unnikrishnan , v. veeravalli , cooperative sensing for primary detection in cognitive radio " , ieee journal on selected topics in signal processing , vol 2 , no .18 - 27 , 2008 .k. seshukumar , r. saravanan , m. suraj , spectrum sensing review in cognitive radio , " in proceedings of the ieee international conference on emerging trends in vlsi , embedded system , nano electronics and telecommunication system conference ( icevent ) , pp . 1 - 4 , jan .s. haykin , d.j .thomson , j.h .reed , spectrum sensing for cognitive radio , " proceedings of the ieee , vol .849 - 877 , 2009 .z. quan , s. cui , a.h .sayed , h.v .poor , optimal multiband joint detection for spectrum sensing in cognitive radio networks , " ieee transactions on signal processing , vol .3 , pp . 1128 - 1140 , march 2009 .k. bian and j .- m .park , a coexistence - aware spectrum sharing protocol for 802.22 wrans , " in proceedings of the 18th internatonal conference on computer communications and networks ( icccn ) , aug .2009 , pp . 1 - 6. v. gardellin , s.k .das , and l. lenzini , a fully distributed game theoretic approach to guarantee self - coexistence among wrans , " in proceedings of the ieee infocom conference , march 2010 , ppfranklin , s .- j .you , j .- s .pak , m .- s .song , and c .- j .channel management in ieee 802.22 wran systems , " ieee communications magazine , vol .88 - 94 , 2011 .p. camarda , c. cormio , and c. passiatore , an exclusive self - coexistence ( esc ) resource sharing algorithm for cognitive 802.22 networks , " in proceedings of the 5th ieee international symposium on wireless pervasive computing ( iswpc ) , may 2010 , pp .128 - 133 .w. hu , d. willkomm , m. abusubaih , j. gross , g. vlantis , m. gerla , and a. 
wolisz , cognitive radios for dynamic spectrum access - dynamic frequency hopping communities for efficient ieee 802.22 operation , " ieee communications magazine , vol .80 - 87 , may 2007 .h. bany salameh , y. jararweh , t. aldalgamouni , and a. khreishah , traffic - driven exclusive resource sharing algorithm for mitigating selfcoexistence problem in wran systems , " in proceeding of the ieee wcnc conference , april 2014 , pp .1933 - 1937. s. debroy and m. chatterjee , intra - cell channel allocation scheme in ieee 802.22 networks , " in proceedings of the 7th ieee conference on consumer communications and networking conference ( ccnc ) , 2010 , pp .284 - 289 .m. bani hani , h. bany salameh , y. jararweh , and a. bousselham , traffic - aware self - coexistence management in ieee 802.22 wran systems , " in proceeding of the 7th ieee gcc conference and exhibition , nov 2013 , pp .507 - 510 .lo , a survey of common control channel design in cognitive radio networks , " physical communication , vol .26 - 39 , 2011 .k. chowdhury and i. akyildiz , ofdm based common control channel design for cognitive radio ad hoc networks , " ieee transactions on mobile computing , vol .10 , pp . 228 - 238 , 2011 .h. su and x. zhang , cross - layer based opportunistic mac protocols for qos provisionings over cognitive radio wireless networks , " ieee journal on selected areas in communications , pp .118 - 129 , 2008 .y. yuan , p. bahl , r. chandra , t. moscibroda , and y. wu ., allocating dynamic time - spectrum blocks in cognitive radio networks , " in proceedings of the acm international symposium on mobile and ad - hoc networking and computing ( mobihoc ) , sept .j. jia , q. zhang , and x. shen , hc - mac : a hardware - constrained cognitive mac for efficient spectrum management , " ieee journal on selected areas in communications , vol .106 - 117 , jan . 2008 .a. masri , c .- f .chiasserini , and a. perotti , control information exchange through uwb in cognitive radio networks , " in proceedings of the ieee international symposium on wireless pervasive computing conference ( iswpc ) , 2010 , pp .110 - 115 .j. marinho , e. monteiro , corhys : hybrid signaling for opportunistic distributed cognitive radio " , computer networks , february 2015 .park , r. chen , k. bian , control channel establishment in cognitive radio networks using channel hopping , " ieee journal on selected areas in communications , vol .689 - 703 , 2011 .u. tefek , t. lim , channel - hopping on multiple channels for full rendezvous diversity in cognitive radio networks , " in proceedings of ieee globecom conference , pp .4714 - 4719 , dec . 2014 j .- m.j .park , r. chen , k. bian , a quorum - based framework for establishing control channels in dynamic spectrum access networks , " in proceedings of the annual international conference on mobile computing and networking conference ( mobicom ) , 2009 , pp .n. baldo , m. zorzi a. asterjadhi , a distributed network coded control channel for multihop cognitive radio networks , " ieee network , vol .23 , no . 4 , 2009 , pp .n. baldo , a. asterjadhi , and m. zorzi , dynamic spectrum access using a network coded cognitive control channel , " ieee transactions on wireless communications , vol . 9 , no . 8 , pp . 2575 - 2587 , 2010 .c. cormio and k.r .chowdhury , common control channel design for cognitive radio wireless ad hoc networks using adaptive frequency , " ad hoc networks , vol .430 - 438 , 2010 .dasilva and i. 
guerreiro , sequence - based rendezvous for dynamic spectrum access , " in proceedings of the ieee dyspan conference , 2008 , pp . 1 - 7 .h. bany salameh , spread spectrum - based coordination design for multi - hop spectrum - agile wireless networks , " in proceedings of the ieee 81th vehicular technology conference ( vtc15-spring ) , scotland , 2015 , pp . 1 - 5. s. perez - salgado , e. rodriguez - colina , m. pascoe - chalke , and a. prieto - guerrero , underlay control channel using adaptive hybrid spread spectrum techniques for dynamic spectrum access , " in proceedings of the international symposium on performance evaluation of computer and telecommunication systems , july 2013 , pp .t. chen , h. zhang , g.m .maggio , and i. chlamtac , cogmesh : a cluster - based cognitive radio network , " in proceedings of the ieee dyspan conference , 2007 , pp .168 - 178 .t. chen , h. zhang , m.d .katz , and z. zhou , swarm intelligence based dynamic control channel assignment in cogmesh , " in proceedings of the ieee icc workshops , 2008 , pp . 123 - 128. l. lazos , s. liu , and m. krunz , spectrum opportunity - based control channel assignment in cognitive radio networks , " in proceedings of the ieee secon conference , 2009 , pp . 1 - 9. s. liu , l. lazos , and m. krunz , cluster - based control channel allocation in opportunistic cognitive radio networks , " ieee transactions on mobile computing , vol .10 , pp . 1436 - 1438 , 2012 .v. gardellin , s. das , and l. lenzini , coordination problem in cognitive wireless mesh networks , " pervasive and mobile computing , vol . 9 , no .1 , pp . 18 - 34 , 2013 .s. jones , n. merheb , i .- j . wang , an experiment for sensing - based opportunistic spectrum access in csma / ca networks " , in proceedings of the ieee dyspan conference , 2005 , pp .593 - 596 .j. jia , j. zhang , and q. zhang , cooperative relay for cognitive radio networks , " in proceedings of the ieee infocom conference , 2009 .g. uyanik , m. abdel - rahman , and m. krunz , optimal guard - band - aware channel assignment with bonding and aggregation in multi - channel systems , " in proceedings of the ieee globecom conference , atlanta , usa , 2013 , pp . 4898 - 4903. m. abdel - rahman , f. lan , and m. krunz , spectrum - efficient stochastic channel assignment for opportunistic networks , " in proceedings of the ieee globecom conference , atlanta , usa , 2013 , pp .1272 - 1277 .h. bany salameh and m. krunz , adaptive power - controlled mac protocols for improved throughput in hardware - constrained cognitive radio networks , " ad hoc networks journal , vol .9 , no . 7 , pp .1127 - 1139 , sep . 2011 .j. r. gallego , m. canales , and j. ortin , flow allocation with joint channel and power assignment in multihop cognitive radio networks using game theory , " in proceedings of the 9th ieee international symposium on wireless communication systems ( iswcs12 ) , paris , france , 2012 , pp . 91 - 95. j. wu , y. dai , and y. zhao , effective channel assignments in cognitive radio networks , " computer communications , vol .411 - 420 , 2013 . c. zheng , r. p. liu , x. yang , i. b. collings , z. zhou , and e. dutkiewicz , maximum flow - segment based channel assignment and routing in cognitive radio networks , " in proceedings of the 73rd vehicular technology conference ( vtc11-spring ) , budapest , hungary , 2011 , pp . 1 - 6 y. dai and j. 
wu , efficient channel assignment under dynamic source routing in cognitive radio networks , " in proceedings of ieee 8th international conference on mobile adhoc and sensor systems ( mass11 ) , valencia , spain , 2011 , pp .550 - 559 .junior , m. fonseca , a. munaretto , a. viana , and a. ziviani , zap : a distributed channel assignment algorithm for cognitive radio networks , " eurasip journal on wireless communications and networking , vol .2011 , no .1 , pp . 1 - 11 , 2011 .y. xing , c. mathur , m. haleem , r. chandramouli , k. subbalakshmi , dynamic spectrum access with qos and interference temperature constraints " , ieee transactions on mobile computing , vol .423 - 433 , 2007 .t. shu , s. cui , m. krunz , medium access control for multi - channel parallel transmission in cognitive radio networks , " in proceedings of globecom conference , pp . 1 - 5 , 2006 . j. vartiainen , m. hoyhtya , j. lehtomaki , and t. braysy , priority channel selection based on detection history database , " in proceedings of the crowncom conference , 2010 .l. yang , l. cao , and h. zheng , proactive channel access in dynamic spectrum networks , " elsevier physical communications journal , vol .103 - 111 , 2008 . t. c. clancy and b. d. walker , predictive dynamic spectrum access , " in proceedings of the sdr forum technical conference , florida , usa , 2006 .a. mishra and r. brodersen , cooperative sensing among cognitive radios , " in proceeding of the ieee icc conference , 2006 .x. liu and s. n. , sensing - based opportunistic channel access , " mobile networks and applications , vol .577 - 591 , 2006 .s. fourati , s. hamouda , and s. tabbane , rmc - mac : a reactive multi - channel mac protocol for opportunistic spectrum access , " in proceedings of the 4th ifip international conference on new technologies , mobility and security ( ntms ) , 2011 , pp . 1 - 5 .x. wang , p. krishnamurthy , and d. tipper , wireless network virtualization " , in proceedings of the ieee international conference on computing , networking and communications ( icnc ) , pp .818 - 822 , 2013 .y. jararweh , m. al - ayyoub , a. doulat , a. abed al aziz , h. bany salameh , a. khreishah , software defined cognitive radio network framework : design and evaluation " , international journal of grid and high performance computing ( ijghpc ) , 2014 .a. doulat , a. abed al aziz , m. al - ayyoub , y. jararwah , h. bany salameh , a. khreishah : software defined framework for multi - cell cognitive radio networks , " in proceedings of the ieee 10th international conference on wireless and mobile computing , networking and communications ( wimob ) , larnaca , cyprus , 2014 . q. zhao and j. ye , quickest detection in multiple on - off processes , " ieee transactions on signal processing , vol .5994 - 6006 , dec . 2010 .w. afifi , a. sultan , and m. nafie , adaptive sensing and transmission durations for cognitive radios , " in proceedings of the ieee dyspan11 conference , 2011 , pp .380 - 388 .s. huang , x. liu , and z. ding , optimal sensing - transmission structure for dynamic spectrum access , " in proceedings of the ieee infocom09 conference , april 2009 , pp .2295 - 2303 .w. afifi and m. krunz , adaptive transmission - reception - sensing strategy for cognitive radios with full - duplex capabilities , " on proceedings of the ieee dyspan 2014 conference , mclean , va , april 2014 . w. afifi and m. krunz , exploiting self - interference suppression for improved spectrum awareness / efficiency in cognitive radio systems . 
" in proceedings of the ieee infocom13 conference , turin , italy , 2013 , pp .1258 - 1266 . w. cheng , x. zhang , and h. zhang , full duplex spectrum sensing in non - time - slotted cognitive radio networks , " in proceedings of the milcom11 conference , nov .2011 , pp .1029 - 1034. m. hassan , md.j .hossainr , cooperative beamforming for cognitive radio systems with asynchronous interference to primary user , " ieee transactions on wireless communications , vol .11 , pp.5468 - 5479 , 2013 . c. zhang , l. guo , r. hu , j. lin , opportunistic distributed beamforming in cognitive radio networks with limited feedback , " in proceedings of the ieee wcnc conference , pp.893 - 897 , 2014 .j. zhang , l. guo , t. kang , p. zhang , cooperative beamforming in cognitive radio network with two - way relay , " in proceedings of the ieee 79th vehicular technology conference ( vtc spring ) , pp . 1 - 5 ,may 2014 . j .-h . noh , s .-oh , beamforming in cognitive radio with partial channel state information , " in proceedings of the ieee globecom conference , pp. 1 - 6 , 2011 . j. poston and w. horne , discontiguous ofdm considerations for dynamic spectrum access in idle tv channels , " in proceedings of the ieee dyspan conference , 2005 , pp .607 - 610 .g. scutari , d. palomar , and s. barbarossa , cognitive mimo radio , " ieee signal procesing . magazine ,46 - 59 , nov . 2008 .l. bixio , g. oliveri , m. ottonello , m. raffetto , and c. s. regazzoni , cognitive radios with multiple antennas exploiting spatial opportunities , " ieee transaction on signal processing , vol .58 , no . 8 , pp . 4453 - 4459 , aug .s. hua , h. liu , m. wu , and s. panwar , exploiting mimo antennas in cooperative cognitive radio networks , " in proceeding of the ieee infocom conference , pp . 2714 - 2722 , shanghai , 2011 . | in this paper , we investigate the issue of spectrum assignment in crns and examine various opportunistic spectrum access approaches proposed in the literature . we provide insight into the efficiency of such approaches and their ability to attain their design objectives . we discuss the factors that impact the selection of the appropriate operating channel(s ) , including the important interaction between the cognitive link - quality conditions and the time - varying nature of prns . protocols that consider such interaction are described . we argue that using best quality channels does not achieve the maximum possible throughput in crns ( does not provide the best spectrum utilization ) . the impact of guard bands on the design of opportunistic spectrum access protocols is also investigated . various complementary techniques and optimization methods are underlined and discussed , including the utilization of variable - width spectrum assignment , resource virtualization , full - duplex capability , cross - layer design , beamforming and mimo technology , cooperative communication , network coding , discontinuous - ofdm technology , and software defined radios . finally , we highlight several directions for future research in this field . |
query log analysis has received extensive research attention, since the exploitation of user feedback from query logs has been proven to be an effective and non-intrusive way to improve search quality. search engines record user click-through information all the time, and this information can be represented as a bipartite graph. in most cases the bipartite graph is defined over queries and urls: an edge connects a query and a url, and the edge value generally corresponds to the click frequency. many query log analysis models are based on the click graph, in that a certain url has been clicked for different queries (issued by users) and hence provides information about the relevance of url, query and user. a query can be represented as a vector in which each dimension corresponds to the edge value between the query and a url. traditional models make use of the raw click frequency (the number of clicks or users) between a query and a url, which suffers from two problems: first, raw click frequency does not favor unpopular queries or urls; moreover, ranking models based on raw click frequency often favor already frequently clicked urls because of the inherent bias of clicks. it is therefore worth researching how to improve the representation of the click graph before developing any analysis method. to mitigate the influence of highly clicked urls, an entropy-biased model has been proposed to address the disadvantages of raw click frequency by weighting the raw click frequency with _ inverse query frequency _ (iqf), under the assumption that less clicked urls are more relevant to a given query than heavily clicked ones. the inverse query frequency is inspired by _ inverse document frequency _ (idf) in text retrieval, and is combined with the user frequency in the same manner as tf-idf combines its two factors. although there are many interpretations of why tf-idf has proved to be extraordinarily robust in text retrieval, utilizing user click frequency in the same manner as tf-idf may not be appropriate in the context of the user click graph. different from content-aware text retrieval, click-through information is implicit feedback from users, in that each click denotes a potential association between a query and a url. a click tends to be more informative than a term occurrence in text mining, since a document can contain much irrelevant information. therefore, our observation is that user frequency and inverse query frequency should be treated differently during query representation in the context of the click graph. consistent with the assumption that less clicked urls tend to be more relevant to a given query, the inverse query frequency should be more informative than the user frequency according to our observation. moreover, if the inverse query frequency can be considered a global property of each url on the click graph, it is intuitive to develop the global consistency model for query representation, which utilizes the user frequency and the global weight of the url on the user click graph in a consistent way to achieve better performance, as described in this paper. the contributions of this paper are: 1) we observe that the global nature of the url plays a central role in query representation on the user click graph; 2) a new scheme called _ inverse url frequency _ (iuf) is presented to specify the global weight of each url on the click graph, and results show that iuf is superior to iqf in the context of global consistency on the click graph; 3) we define the rules for
achieving global consistency on the click graph, and develop the framework of the global consistency model for query representation. the rest of this paper is organized as follows: related work is introduced in section [ sec : rl ], and we illustrate global consistency on the click graph in section [ sec : gcongraph ]. various query representation models are presented in section [ sec : queryrepresent ], while section [ sec : datacollection ] and section [ sec : exp ] present the experimental analysis of the performance of different models under query similarity. the conclusion is given in section [ sec : conclusion ]. extensive research has been conducted on click graphs to exploit implicit feedback. frequently studied topics include agglomerative clustering, query clustering for url recommendation, query suggestion, which used hitting time to generate semantically consistent suggestions, and rare query suggestion. moreover, prior work addressed query classification by increasing the amount of training data through semi-supervised learning on the click graph instead of enriching the feature representation. while there are works studying different aspects of user click information, it has been revealed that the click probability of a webpage is influenced by its position on the result page. the sequential nature of user clicks has been considered, and other work combined both the click and skip information from users. in addition, having noticed that click graphs are very sparse and that the click frequency follows a power law, co-click information has been used for document annotation. random walks have been applied to click graphs to improve the performance of image retrieval, and have also been employed to smooth the click graph to tackle the sparseness issue. in contrast, less work has been carried out on query representation on click graphs. one line of work represented each query as a point in a high-dimensional space, with each dimension corresponding to a distinct url; another introduced the query-set based model for document representation, using query terms as features for summarizing the clicked webpages. the entropy-biased model for query representation has been proposed to replace the raw click frequency on the click graph. it assumed that less clicked urls are more effective in representing a given query than heavily clicked ones; thus, the raw click frequency was weighted by the inverse query frequency of the url. however, the entropy-biased model utilized raw click frequency and inverse query frequency in the same manner as tf-idf does, which may not be appropriate in the context of the click graph, because user click information is content-ignorant while text retrieval is content-aware. our work is closely related to this line, while our contribution is to study how to combine the raw click frequency and the global weight of the url in a consistent way for query representation. the user click graph is generally regarded as a bipartite graph. in this paper, we consider a bipartite graph involving queries and urls: g = (q ∪ d, e), where the query set q and the document (url) set d are connected by the edges in e. supposing there are m queries and n documents in total, the bipartite graph can be represented as an m × n matrix c, with the entry (i, j) being the edge value of (q_i, d_j).
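to make the matrix representation and the inverse-frequency weighting concrete, the following small sketch is an illustration only (the toy data, function names, and the tf-idf-style weighting shown here are assumptions; the paper's own iuf and global consistency definitions are given in the sections below). it builds c from raw click counts and applies an iqf weighting in the entropy-biased manner.

```python
import numpy as np

# toy click graph: 3 queries x 4 urls, entry (i, j) = raw click frequency
C = np.array([[10,  2,  0,  0],
              [ 4,  0,  6,  0],
              [ 0,  1,  5,  3]], dtype=float)

m, n = C.shape

# inverse query frequency of a url: log(m / number of distinct queries clicking it),
# analogous to idf in text retrieval
query_freq = (C > 0).sum(axis=0)        # distinct queries per url
iqf = np.log(m / query_freq)

# entropy-biased style representation: raw frequency weighted by iqf (tf-idf manner)
C_iqf = C * iqf

# each query is the corresponding row vector; similarities (e.g., cosine) are
# computed between these rows
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

print(cosine(C_iqf[0], C_iqf[1]))
```

the global consistency model discussed next keeps the same matrix but combines the user frequency with a global url weight (iuf) in a consistent way rather than in the tf-idf manner.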
in most cases, the edge value corresponds to the raw click frequency between a query and a document (url), i.e., the number of times users click on the document when it is presented as a result for the query. thus, a query can be represented as a row vector of c, and a document (url) corresponds to a column vector of c. [ table omitted : preliminaries and notations ] [ table omitted : average length of results at rank 10 with personalized pagerank ( tab : length - pagerank ) ] the average precisions of the different models under personalized pagerank are very close compared with the performance under cosine or jaccard similarity; instead, it is the behavior of each model that draws our attention. with more steps walked on the click graph, the result approaches the stationary distribution. for the jumping constant used in figure [ fig : pagerank ](a), the global consistency model (ufw-iqf) experiences a significant drop of average precision at step 2, while the behavior of the other models follows the convention of better precision at step 2. if the jumping constant is chosen to imply an extremely low rate of propagation on the graph, the first step becomes the dominant factor, since the result varies little in subsequent steps. as shown in figure [ fig : pagerank ](b), the average precision of each model is then determined by the first step. personalized pagerank reveals the nature of the different models: for the global consistency model, most of the relevant queries have already been retrieved during the first step, while subsequent steps are needed for the other models to retrieve the most relevant results with a reasonable jumping constant. in other words, the conventional models need a proper propagation rate to walk on the click graph in order to achieve their best performance, while within the global consistency model the best results stay close to the initial preferred vertex without much propagation. in table [ tab : length - pagerank ] we also list the average length of results at rank 10 with personalized pagerank. we observe that, on one hand, a smaller jumping constant (lower propagation rate) favors long tail results within each model, since the propagation on the graph heads towards the queries with larger transition probability, which tend to be the popular short tail queries; on the other hand, the global consistency model (ufw-iqf) demonstrates its superiority in boosting long tail queries over the other models despite the different jumping constants. in this paper we propose a novel model for query representation on the user click graph, which is based on the observation that, for a certain query, the global nature of the urls is more informative than the local user frequency. the global consistency model identifies the inverse query frequency from previous work as a global property of the url, based on which we suggest a more effective scheme called inverse url frequency, which further considers the similarities among queries for capturing the global nature.
besides , we formalize the framework of utilizing user frequency in tune with the global nature of the url .the global consistency model consistently demonstrates better performance over current models for query representation under popular query similarities , in terms of better precision and long tail result boost .hence many query log analysis tasks can benefit from this query representation model .this work is supported by the national natural science foundation of china ( grant nos .61103185 , 61073118 and 61003247 ) , the start - up foundation of nanjing normal university ( grant no .2011119xgq0072 ) , natural science foundation of the higher education institutions of jiangsu province , china ( grant no .11kjb520009 ) , the 9th six talents peak project of jiangsu province ( grant no .dzxx-043 ) , and major program of national natural science foundation of jiangsu province ( grant no .bk2011005 ) . | extensive research has been conducted on query log analysis . a query log is generally represented as a bipartite graph on a query set and a url set . most of the traditional methods used the raw click frequency to weigh the link between a query and a url on the click graph . in order to address the disadvantages of raw click frequency , researchers proposed the entropy - biased model , which incorporates raw click frequency with inverse query frequency of the url as the weighting scheme for query representation . in this paper , we observe that the inverse query frequency can be considered a global property of the url on the click graph , which is more informative than raw click frequency , which can be considered a local property of the url . based on this insight , we develop the global consistency model for query representation , which utilizes the click frequency and the inverse query frequency of a url in a consistent manner . furthermore , we propose a new scheme called inverse url frequency as an effective way to capture the global property of a url . experiments have been conducted on the aol search engine log data . the result shows that our global consistency model achieved better performance than the current models . |
in cosmology we attempt to draw large conclusions from limited and often ambiguous data. i am impressed at how well the enterprise is succeeding, to the point that we have an established standard model for the hot expanding universe ( peebles _ et al ._ 1991 ). which elements to include in the standard model is a matter for ongoing debate, of course. i am inclined to take a conservative line if only to avoid giving misleading impressions to our colleagues with deconstructionist tendencies. for example, the adiabatic cold dark matter model for structure formation has been more successful than i expected, and as a result is rightly the model most commonly used in studies of how structure might have formed. simon white calls this model a paradigm, which i take to mean a pattern many find useful and convenient in their research. i like this use of the term, provided we agree to distinguish it from a well-established standard model. i think we cannot count the adiabatic cold dark matter paradigm as part of the standard model for cosmology because, as argued here, there is a viable and perhaps even more attractive alternative. i organize this discussion around the issues of the weight of the evidence on whether galaxies are good tracers of mass, what we are learning from the cosmological tests, and the elements of the standard model for structure formation on the scale of galaxies and larger. i begin with another question, whether einstein's introduction of the cosmological principle set a good example for research in our field. the hurried reader will find the main points summarized in section 6. the tension between caution and adventure in the advance of science is well illustrated by the histories of these two principles. einstein ( 1917 ) introduced modern cosmology with his application of general relativity theory to a universe that is spatially homogeneous on average ( that is, a stationary random process ). milne gave the homogeneity assumption its name, einstein's cosmological principle. it is difficult to find in the published literature evidence that einstein was aware of the observational situation on the distribution of matter. astronomers had established that we live in a bounded island universe of stars, and some had speculated that the spiral nebulae are other island universes. de sitter ( 1917 ) was willing to consider the possibility that the nebulae are uniformly distributed in the large-scale mean, and that their mass constitutes einstein's near-homogeneous world matter. on the other hand, de sitter was well aware that the distribution of the nearby nebulae is decidedly clumpy; indeed, charlier ( 1922 ) pointed out that it resembles a clustering hierarchy ( what we would now call a fractal ). that is, the conservative advice from the astronomical community would have been that the observations do not support einstein's world picture, that he would do well to consider a fractal model instead.
but now einstein s cosmological principle is well established and part of the standard model : fluctuations from homogeneity on the scale of the hubble length are less than one part in ( from the isotropy of the x - ray background , and about one part in in the standard relativistic model ; peebles 1993 ) .this is a magnificent triumph of pure thought !just as the cosmological principle was introduced by hand to solve a theoretical problem , the violation of mach s principle in asymptotically flat spacetime , the biasing principle was introduced to reconcile the low relative peculiar velocities of the galaxies with the high mass density of the theoretically preferred einstein - de sitter world model ( davis _ et al .there never has been any serious observational evidence for biasing , but the idea rightly was taken seriously because it is elegant and plausible .but i do not include biasing in the standard model ; we have no very strong evidence for it and the following three arguments against it .first , there is no identification of a population of void irregular galaxies , remnants of the assumed suppression of galaxy formation in the voids ( peebles 1989 ) .the first systematic redshift survey showed that the distributions of low and high luminosity galaxies are strikingly similar ( davis _ et al .i know of no survey since , in 21-cm , infrared , ultraviolet , or low surface brightness optical , that reveals a void population .there is a straightforward interpretation : the voids are nearly empty because they contain little mass .second , the improving suite of cosmological tests listed in the next section suggests the mean mass density is well below the einstein - de sitter value .if the density is low it means galaxies move slowly because there is not much mass to gravitationally pull them , not because they are biased tracers of the mass . third , the galaxy autocorrelation function at low redshift has a simple form , quite close to the power law , with , over three orders of magnitude in separation , .carlberg shows in these proceedings that the index is quite close to constant back to redshifts near unity . on the theoretical side, simon white describes elegant numerical simulations of the adiabatic cdm model . in these simulationsthe mass autocorrelation function is not close to a power law , and the slope of increases with increasing time .the two functions allow us to define a bias parameter , b(r , t ) = ^1/2 .[ eq : bias ] in the adiabatic cdm model this is a function of separation and time .one interpretation is galaxies are biased tracers of mass , the bias depending on scale and time . but why should the biased tracer exhibit a striking regularity , in and the three - and four - point functions , that is not a property of the mass that is driving evolution ?the more straightforward reading is that the regularity in reflects a like regularity in the behaviour of the mass , and that there is a slight flaw in the model . 
given the enormous step we are taking in analyzing the growth of the structure of the universe it surely would not be surprising to learn that we have not yet got it exactly right. in the standard friedmann-lemaître cosmological model coordinates can be assigned so the mean line element is $ds^2 = dt^2 - a(t)^2\,dl^2$, where $dl$ is the comoving spatial interval. the mean expansion rate satisfies the equation $h^2 = (\dot a/a)^2 = \frac{8}{3}\pi g\rho + \frac{1}{3}\lambda$, which can be approximated as $h^2 = h_o^2\left[\omega_m(1+z)^3 + \omega_k(1+z)^2 + \omega_\lambda\right]$. [ eq : cos_pars ] this defines the fractional contributions $\omega_m$, $\omega_k$, and $\omega_\lambda$ to the square of the expansion rate by matter, space curvature, and the cosmological constant ( or a term in the stress-energy tensor that acts like one ). the time-dependence assumes pressureless matter and constant $\lambda$. other notations are in the literature; one that is becoming popular adds the matter and $\lambda$ terms, as in michael turner's contribution to these proceedings. to avoid confusion with the definitions in equation ( [ eq : cos_pars ] ) we might express turner's convention as $\omega \equiv \omega_m + \omega_\lambda$. this isolates the curvature term, which is useful. and since the evidence is that $\omega_m$ is small it certainly helps rescue our theoretical preference for a density parameter equal to unity. i find it unsatisfying, however: what became of the intense debates we had on biasing and the other systematic errors in the measurement of $\omega_m$? i can get more excited about the full monty: let $\hat\omega = \omega_m + \omega_k + \omega_\lambda = 1$. [ eq : fm ] each of the terms on the right-hand side of this equation is measurable in principle, and if the applications of the cosmological tests continue to improve at the present rate it may not be many more years before we have ten percent measurements of the three numbers. if they add to unity we will have a test of general relativity theory applied on large scales in the strong curvature limit. _ here's my problem: the conference wants latex, the table is set in plain tex because it's much too fiddly for latex, and lanl won't accept a postscript file of the compiled file of the table. what is a computer-challenged person to do?
_ the point is illustrated another way in table 1 .the lines represent quite different ways to probe the standard relativistic model , and the columns are grades for how well three sets of parameter choices fit the results .as the observations improve we may find that only one narrow range of parameters is consistent with all the constraints .if so we will have settled two issues .first , it surely will continue to be difficult to use internal evidence to rule out systematic errors in astronomical observations .for example , can astronomers unambiguously demonstrate that sneia in a given class of light curve shape really are drawn from the identical population at redshifts and ?a consistent story from independent tests is strong evidence the measurements have not been corrupted by some subtle systematic error .second , a consistent story will be a strong positive test of the standard relativistic cosmological model , as in equation ( [ eq : fm ] ) .the successful parameter set could be quite different from any of the choices in the table , of course ; we may be driven to a dynamical , for example ( peebles & ratra 1988 ; huey _ et al ._ 1998 ) .the classical cosmological tests based on measures of the spacetime geometry have been supplemented by a new class of tests based on the condition that the cosmology admit a consistent and observationally acceptable model for structure formation ( categories 3 and 4 in table 1 ) .i comment on some aspects of structure formation in 4 and 5 .the constraint from the rate of lensing of quasars by foreground galaxies does not comfortably fit the curvature of the redshift - angular distance relation .the analysis of lensing by falco _( 1998 ) , for a combined sample of lensing events detected in the optical and radio , indicates that if the universe is cosmologically flat then at one standard deviation , and at .the sneia redshift magnitude relation , from the magnificent work of perlmutter _ et al . _( 1998 ) and reiss _ et al . _( 1998 ) , seems best fit by , .the discrepancy is not far outside the error flags , but i think that if the lensing rate were the only available cosmological test we would greet it as confirmation of the einstein - de sitter model and another success for pure thought .the lensing constraint depends on the galaxy mass function .the predicted peak of the lensing rate at angular separation arc sec is dominated by the high surface density branch of early - type galaxies at luminosities .the number density of these objects is not well known , and an improved measurement is an important goal for the new generations of surveys of galaxies . if further tests of the lensing and redshift - magnitude constraints confirm an inconsistency for constant the lesson may be that the cosmological constant is dynamical , rolling to zero , as ratra & quillen ( 1992 ) point out .the einstein - de sitter model is not yet ruled out , but i think most of us would agree that consideration of structure formation in low density cosmological models is well motivated .we have good reason to think galaxies grew by gravity out of small initial departures from homogeneity , but the nature of the initial conditions is open to discussion . to illustrate this i present some elements of an isocurvature model .details are in peebles ( 1998 , ) . 
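the redshift-magnitude test just described compares the apparent brightness of standard candles in models with different parameter choices in equation ( [ eq : cos_pars ] ). the small sketch below is an illustration only (the parameter values, function names, and use of scipy are assumptions, not taken from the text); it computes the distance modulus in two spatially flat models to show the sense of the effect behind the sneia constraint.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 2.998e5  # speed of light, km/s

def luminosity_distance(z, omega_m, omega_l, h0=70.0):
    """luminosity distance in mpc for a spatially flat model (omega_m + omega_l = 1)."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_l)
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C_KM_S / h0) * comoving

def distance_modulus(z, omega_m, omega_l):
    return 5.0 * np.log10(luminosity_distance(z, omega_m, omega_l) * 1e6 / 10.0)

z = 0.5
dm_eds    = distance_modulus(z, 1.0, 0.0)    # einstein-de sitter
dm_lambda = distance_modulus(z, 0.25, 0.75)  # low-density flat lambda model
print(dm_lambda - dm_eds)  # positive: the lambda model predicts fainter sne ia at this z
```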
inthe paradigm simon white describes in these proceedings structure grows out of an adiabatic departure from homogeneity as would be produced by local reversible expansion or contraction from exact homogeneity that is a spatially stationary isotropic random gaussian process .another possibility is that the primeval mass distribution is exactly homogeneous there is no perturbation to spacetime curvature and structure formation is seeded by an inhomogeneous composition . in the isocurvature model presented herethe initial entropy per baryon is homogeneous , to preserve the paradigm for element formation , and homogeneity is broken by the distribution of cold dark matter . in both models the present mass of the universe is dominated by nonbaryonic cold dark matter ( cdm ) ; i shall call them acdm and icdm models . in the acdm modelthe primeval mass density fluctuation ( defined as the most rapidly growing density perturbation mode in time - orthogonal coordinates ) has a close to power law power spectrum , . in the icdm modelthe primeval distribution of the cdm is close to a power law , , in a homogeneous net mass distribution .it is an interesting exercise to check that in linear perturbation theory the evolution from the initial radiation - dominated universe to the present cdm - dominated epoch bends the spectra to where is the wavenumber appearing at the hubble length at the redshift of equality of mass densities in matter and radiation .the similarity of equations ( [ eq : aspect ] ) and ( [ eq : ispect ] ) for extends to roughly similar spectra of the angular distribution of the thermal cosmic background radiation ( the cbr ) in the adiabatic and isocurvature cdm models .the status of acdm model fits to the fluctuation spectra of galaxies and the cbr is discussed in these proceedings by bond .figures 1 and 2 show the icdm model predictions for the parameters m = -1.8,= 0.2 , = 0.8 , h = 0.7 , [ eq : parameters ] with the normalization p(k ) = 6300h^-3 ^ 3 k = 0.1h^-1 , [ eq : pnorm ] where hubble s constant is km s mpc .the data in figure 1 are from the iras psc - z ( point source catalog ) redshift survey of saunders _ et al . _this is the real space spectrum after correction for peculiar velocity distortion represented by the density - bias parameter .there are good measurements of the spectrum of the galaxy distribution on smaller scales , mpc , but this approaches the nonlinear sector , and it seems appropriate to postpone discussion of the small - scale mass distribution until we have analyses of nonlinear evolution from the non - gaussian initial conditions of the model in equation ( [ eq : icdmrho ] ) . since the psc - z catalog is deep , with good sky coverage , it promises to be an excellent probe of the large - scale galaxy distribution , and it is a very useful normalization .figure 2 shows second moments of the angular distribution of the cbr , where t ( , ) = a_l^m y_l^m ( , ) , t_l = ^1/2 |a_l^m|^2 ^ 1/2 .[ eq : tl ] in the approximation of the sum over as an integral the variance of the cbr temperature per logarithmic interval of is . in place of , as does bond in his contribution to these proceedings , but since i am considering icdm the convention in equation ( [ eq : tl ] ) , which i prefer because it reflects the components for each value of , may not be unreasonable . 
] the measured are from the compilation of ratra ( 1998 ) .the second moments of the large - scale distributions of mass and radiation in the icdm model agree with the data as well about as could be expected given the state of these difficult measurements .the same is true of acdm models considered by bond ; both cases pass . atmost one will pass the improved measurements expected from work in progress , but that is for the future .simple and arguably natural realizations of the inflation concept lead to adiabatic initial conditions ; others to isocurvature initial conditions . in the example of the latter in peebles ( 1998 ) the cdm is a scalar field that ends up after inflation in a squeezed state as a gaussian random process with mass density ( * x * ) = m^2(*x*)^2/2 , [ eq : icdmrho ] for field mass . in a simple casethe field satisfies = 0 , ( * x*_1)(*x*_2 ) x_12 ^ - , [ eq : icdme ] and the power spectrum of the mass distribution in equation ( [ eq : icdmrho ] ) is a power law with index .the model requires , or .the `` tilt '' from the scale - invariant case is not difficult to arrange ; whether it might be considered natural has yet to be debated .the primeval density fluctuations in the model in equations ( [ eq : icdmrho ] ) and ( [ eq : icdme ] ) are non - gaussian and scale - invariant : the frequency distribution of the density contrast averaged through a window and scaled by the standard deviation is independent of the window size .the evidence discussed in peebles ( 1998 ) indicates the model with these initial conditions is viable but subject to serious tests from improvements from observational work in progress .the same is true of the acdm models , of course .i turn now to one of the tests , the redshift of assembly of the galaxies .the power law model for the primeval cdm fluctuation spectrum ( eqs .[ [ eq : ispect ] ] and [ [ eq : parameters ] ] ) is a good approximation for the residual cdm mass distribution at redshifts less than the epoch of equality of mass densities in matter and radiation and on scales small compared to the hubble length at and large compared to the scale of nonlinear clustering . 
within these bounds the spectrum varies as $p \propto k^m d(t)^2$, [ eq : scaling ] where $d(t)$ is the solution to the linear equation for the evolution of the density contrast in an isothermal perturbation of the cdm. the rms contrast $\delta$ through a window of comoving radius $x$ varies as $\delta \propto x^{-(3+m)/2} d(t)$. gravitational structure formation is triggered by passage of upward fluctuations of $\delta$ through unity, and the threshold is not sensitive to $\lambda$ in a cosmologically flat model. this means the characteristic physical length, mass, and internal velocity of newly forming structures scale with time as $r_{nl} \propto (1+z)^{-1} d^{2/(3+m)}$, $m \propto d^{6/(3+m)}$, $\sigma \propto (1+z)^{1/2} d^{2/(3+m)}$. these relations neglect nongravitational interactions; they may be expected to be useful approximations on scales much larger than the half-light radii in galaxies, where the cdm halo dominates the mass in the standard model. we can normalize to the great clusters of galaxies, with $r_a = 1.5 h^{-1}$ mpc, $\sigma_{cl} = 750$ km s$^{-1}$, $m_{cl} = 4\times 10^{14} h^{-1} m_\odot$, $n_{cl} = (2\pm 1)\times 10^{-6} h^3$ mpc$^{-3}$. [ eq : mcl ] the abell radius is $r_a$, $\sigma_{cl}$ is an rms mean line of sight velocity dispersion for clusters, $m_{cl}$ is the mean mass within the abell radius, and $n_{cl}$ is the present number density of clusters with mass $m > m_{cl}$ ( bahcall & cen 1993 ). clusters are relaxing at the abell radius, and the merging rate is significant, but it is generally agreed that internal velocities typically are close to what is needed for support against gravity at the abell radius. in the power law model in equation ( [ eq : scaling ] ) these quantities scaled back in time characterize objects in a like state of early development in the past. with the parameters used in figures 1 and 2 ( eq. [ [ eq : parameters ] ] ) the scaling relations applied at expansion factor $1+z_g$ give $r_g = 15 h^{-1}$ kpc, $\sigma_g = 140$ km s$^{-1}$, $m_g = 1.3\times 10^{11} h^{-1} m_\odot$. [ eq : young_galaxies ] the present characteristic separation of clusters and the scaled comoving separation at $z_g$ are $d_{cl} = n_{cl}^{-1/3} = 80 h^{-1}$ mpc, $d_g = 5 h^{-1}$ mpc. in this model an astronomer sent back in time to $z_g$ would see objects with the somewhat disordered appearance of present-day clusters, merging at a significant rate, but with internal motions typically close to what is needed for virial support. the characteristic size, mass, and comoving distance between objects would be seen to be characteristic of the luminous parts of present-day galaxies. our time traveller might well be inclined to call these objects young galaxies, already assembled at $z_g$. at a still earlier expansion factor the scaling relations give $r \sim 1$ kpc, $\sigma \sim 40$ km s$^{-1}$, $m \sim 1\times 10^{9}\, m_\odot$, numbers characteristic of dwarf galaxies. i have to assume many merge to form the giants, and that the merging rate eases off at $z \lesssim z_g$, perhaps because the dissipative settling of the baryons has progressed far enough to lower the cross section for merging, so later structure formation can build the present-day galaxy clustering hierarchy. if galaxies were assembled as mass concentrations at $z_g$, as this model suggests, how would they appear at the lower redshifts now being observed? internal velocities ought to be characteristic of present-day galaxies. that is not inconsistent with the properties of the damped lyman-$\alpha$ absorbers studied by wolfe & prochaska ( 1998 ), though haehnelt, steinmetz & rauch ( 1998 ) show other interpretations are possible. the expected optical appearance depends on how feedback affects the rate of conversion of gas to stars, a delicate issue i am informed.
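the numbers in eq. ( [ eq : young_galaxies ] ) follow from pure scaling arithmetic. the sketch below is an illustrative check only (the growth-factor ratio and assembly epoch used here are assumptions, roughly approximating $d \propto 1/(1+z)$; the precise values depend on the full growth factor in the low-density $\lambda$ model).

```python
# scale the cluster normalization of eq. (mcl) back in time with the
# power-law exponents for m = -1.8 (illustrative check, not the paper's own code)
m = -1.8
p_r = 2.0 / (3.0 + m)   # length exponent on d    -> 5/3
p_M = 6.0 / (3.0 + m)   # mass exponent on d      -> 5
p_s = 2.0 / (3.0 + m)   # velocity exponent on d  -> 5/3

# present-day cluster normalization (bahcall & cen 1993 values quoted in the text)
r_cl_kpc, sigma_cl, M_cl = 1.5e3, 750.0, 4.0e14   # h^-1 kpc, km/s, h^-1 Msun

# assumptions for illustration: growth factor five times smaller at an
# assembly epoch 1 + z_g ~ 6
D_ratio, one_plus_zg = 5.0, 6.0

r_g     = r_cl_kpc * D_ratio**(-p_r) / one_plus_zg      # ~17 h^-1 kpc
M_g     = M_cl * D_ratio**(-p_M)                        # ~1.3e11 h^-1 Msun
sigma_g = sigma_cl * one_plus_zg**0.5 * D_ratio**(-p_s) # ~126 km/s

print(r_g, M_g, sigma_g)
```

with these rough inputs one recovers numbers close to the $r_g$, $\sigma_g$, and $m_g$ quoted in eq. ( [ eq : young_galaxies ] ), which is all the scaling argument claims.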
in these proceedings steidelpresents elegant optical observations of high redshift galaxies that reveal strong spatial clustering .steidel points out this could signify strong biasing at formation. the interpretation could be slightly different in the non - gaussian icdm model , where high density fluctuations tend to appear in concentrations ( peebles 1998 ) .structure formation happens later in acdm .i have expressed doubts that late assembly could produce the high density contrasts of normal present - day galaxies , but the numerical simulations white describes seem not to find this a problem .if dense galaxies can be assembled at low redshift , when the mean mass density is low , one might have thought that protogalaxies assembled at high redshift and high mean mass density would be unacceptably dense .but nature was able to form clusters of galaxies that are close to virial equilibrium at modest density contrast at the abell radius , and well enough isolated that they seem likely to remain part of the clustering hierarchy rather than merging into larger monolithic superclusters . under the scaling argumentthe same would be true of protogalaxies assembled at in the icdm model .i arrived at the isocurvature model in 4 ( and peebles 1998 and ) through a search for a model for galaxy formation at high redshift , when the cosmic mean density is comparable to that of the luminous parts of a normal large galaxy .the argument traces back to partridge & peebles ( 1967 ) , a recent version is in peebles ( 1998 ) , and elements are reviewed here .the solid line in figure 3 is the distance between the andromeda nebula m31 and our milky way galaxy in a numerical solution for the motions of the galaxies in and near the local group ( peebles 1996 ) .the orbits are constrained to arrive at the present positions at expansion parameter from initial motions at consistent with the homogeneous background cosmological model .this uses the einstein - de sitter model , so the solution may be scaled with time .the dashed line in the figure assumes spherical symmetry with no orbit crossing , expansion from cosmological initial conditions , and collapse to half the maximum radius , at which point the kinetic energy in the spherical model has reached half the magnitude of the gravitational potential energy .the solid line for the motion of m31 relative to the milky way has a similar shape but with significant differences . in the numerical solution neighbouring galaxies are close to the milky way and m31 at , so the solid line is more strongly curved than a spherical solution with fixed mass .the solid line is less strongly curved at larger expansion factor because the interaction with neighbouring mass concentrations has given the milky way and m31 substantial relative angular momentum .the present transverse relative velocity of m31 is comparable to the radial velocity of approach , the minimum separation is about half the present value , and the mean separation in the future is larger than the present value . as we all know , nonradial motions tend to suppress collapse .now let us consider the spherical solution as a model for young galaxies .let be the proper radius of a sphere that is centred on the young galaxy and contains the mass in equation ( [ eq : young_galaxies ] ) . 
in the spherical solution , which ignores nonradial motion and the motion of mass across the surface of the sphere ,the radius varies with time as , where and .this ignores the cosmological constant , which has little effect on the orbit .if spherical collapse stops at radius at redshift , then in the spherical model the collapse factor from maximum expansion is r_g / r_max = ( 1 - _ g)/2 , [ eq : cf ] and an adequate approximation to is = 89(_g h_or_g ) ^2(1+z_g)^-3 = 410 ^ 4(1+z_g)^3 , [ eq : etao ] for the numbers in equation ( [ eq : young_galaxies ] ). for the collapse factor in the dashed line in figure 3 equation ( [ eq : etao ] ) says , not far from the value in equation ( [ eq : young_galaxies ] ) . in a model for late galaxy assembly , at , equations ( [ eq : cf ] ) and ( [ eq : etao ] ) say .this is the dotted line in figure 3 .the pronounced collapse could result from exchange of energy among lumps settling out of a more extended system , as happens in numerical simulations ( navarro , frenk , & white 1996 ) , but i think there are two reasons to doubt it happens in galaxy formation .first , in a hierarchical model for structure formation collapse to kpc at traces back to a cloud of subgalaxy fragments star clusters at radius kpc .i know of no evidence of such clustering ( apart from the usual power law correlation functions ) in deep samples .second , the scaled process of formation of rich clusters of galaxies shows no evidence of pronounced collapse : clusters seem to be close to stable at the abell radius and present in significant numbers at redshift .these are arguments , not demonstrations .i consider them persuasive enough to lend support to the isocurvature model that leads from the fit to measures of large - scale structure in figures 1 and 2 to the scaling model for early galaxy assembly in equation ( [ eq : young_galaxies ] ) .this depends on whether galaxies really were assembled early , of course , and are fortunate that the observations steidel describes in there proceedings may well be capable of telling us when the galaxies formed .it is inevitable that the exciting rush of advances in this subject has left ideas unexplored .i have attempted to identify some roads not taken , less popular lines of thought that seem worth considering .the main points are summarized in the following questions .einstein s brilliant success in establishing key elements of the standard cosmological model is an example of why we pay serious attention to elegant ideas even in the face of contrary empirical indications .but i think this is not an entirely edifying example : einstein s intuition was not always so successful , and most of us are not einsteins . 
in the present still crude state of cosmologyit is better to be led by the phenomenology from astronomy and from particle physics ( that may teach us the identity of the dark matter , for example ) .most of us agree that the einstein - de sitter model is the elegant case , and it makes sense that the community has given it special attention despite the long - standing indication from galaxy peculiar velocities that the einstein - de sitter density is too high .now other lines of evidence are pointing in the same direction , as summarized in table 1 , and i think there is general agreement in the community that we must give serious consideration to the possibility that nature has other ideas about elegance .i count this as a cautionary example for the exploration of ideas on how the galaxies formed .the galaxy two - point correlation function is quite close to a power law , , over three orders of magnitude of separation at low redshift , and the index is quite close to constant back to redshifts approaching unity .this is not true of the mass autocorrelation function in the adiabatic cold dark matter ( acdm ) model .thus we have a measure of bias , ^{1/2}$ ] ( eq . [[ eq : bias ] ] ) , that depends on position and time .should we take this as evidence galaxies are biased mass tracers ? since the regularity is in the galaxies surely the first possibility to consider is that is revealing a like regularity in the behaviour of the mass , that the bias is in the model .this reading is heavily influenced by a related issue : if much of the cdm is in the voids defined by normal galaxies where are the remnants of the void galaxies ?surely they are not entirely invisible ?i am impressed by the elegant simulations of the acdm models simon white presents in these proceedings , and have to believe they reflect aspects of reality .but the curious issue of leads me to suspect there is more to the story .would the isocurvature variant do better ?that awaits searching tests by numerical simulations of the kind that that have been applied to the adiabatic case .one often reads that it is to determine how the world ends .but should we trust an extrapolation into the indefinitely remote future of a theory that we know can only be a good approximation to reality ? for a trivial example , suppose the universe has zero space curvature and the present value of the density parameter in matter capable of clustering is , with the rest of the contribution to in a term that acts like a cosmological `` constant '' that is rolling toward zero ( peebles & ratra 1988 ; huey _ et al .if the final value of is identically zero then the world ends as minkowksi spacetime ( after all the black holes have evaporated ) . 
if ends up at a permanent negative value , no matter how close to zero , the world ends in a big crunch .should we care which it is ?i would consider a bare answer an empty advance , because the excitement of physical science is in discovering the interconnections among phenomena .perhaps the excitement of knowing how the world ends will be in what it teaches us about how the world began .the classical cosmological tests , that probe spacetime geometry , have been greatly enriched by tests based on the condition that the cosmology admit a consistent and observationally acceptable theory for structure formation .the structure formation theory in turn tests ideas about what the universe was like before it was well described by the classical friedmann - lematre model , and may eventually allow us to enlarge the standard model to include the story of how the world begins and ends .generally accepted elements are the gravitational growth of small primeval departures from homogeneity , that may be described as a stationary isotropic random process , in a universe with present mass that is dominated by cdm and maybe a term that acts like a cosmological constant .the most striking piece of evidence for the gravitational instability picture is the agreement between the primeval density fluctuations needed to produce the cbr anisotropy and the present distribution and motion of the galaxies .precision measurements in progress should allow us to fix many of the details of this gravitational instability picture , but within present constraints we can not say that the primeval density fluctuations are gaussian , or adiabatic , because we have a viable alternative , the non - gaussian isocurvature model mentioned in 4 .the main piece of evidence for the cdm is the mismatch between the baryon mass density in the standard model for the origin of the light elements and the mass density indicated by dynamical analyses of relative motions of the galaxies .our reliance on hypothetical mass is embarrassing ; a laboratory demonstration of its existence would be an exceedingly valuable advance .it is a sign of the growing maturity of our field that we can pose questions that are motivated by specific theoretical issues and can be addressed by feasible observations .but i think our subject still is immature enough that we should be quite prepared for surprises .my favorite example is shaver s ( 1991 ) demonstration that the radio galaxies within mpc distance are close to the plane of the local supercluster , even though the plane is not apparent in the general distribution of galaxies at this depth .if the clusters and radio sources were produced by a pancake collapse why do we not see it in the general galaxy distribution ?maybe a better picture is that in the early universe a nearly straight cosmic string passed by , piling mass in its wake into a sheet that fragmented into the seeds of engines of active galaxies .i think the most surprising outcome of the new surveys would be that there are no major corrections to what we think we know .this work was supported in part by the usa national science foundation . | i argue that the weight of the available evidence favours the conclusions that galaxies are unbiased tracers of mass , the mean mass density ( excluding a cosmological constant or its equivalent ) is less than the critical einstein - de sitter value , and an isocurvature model for structure formation offers a viable and arguably attractive model for the early assembly of galaxies . 
if valid these conclusions complicate our work of adding structure formation to the standard model for cosmology , but it seems sensible to pay attention to evidence . |
our knowledge of solar magnetism relies heavily on our ability to detect and interpret the polarization signatures of magnetic fields in solar spectral lines . since the identification of sunspots as regions of strong magnetism by means of the observation of the zeeman effect ,the number of physical mechanisms applied to the interpretation of solar spectral line profiles has increased significantly .the hanle effect observed in linear polarization has been exploited to diagnose weak turbulent magnetic fields .a rich spectrum of linear polarization observed near the solar limb named the `` second solar spectrum '' has been used to constrain magnetic fields and scattering physics .observations of lines like mn which display hyperfine splitting have been used to break the degeneracy between magnetic flux and field for weak magnetic fields .information about plasma kinetics during solar flares is encoded in the linear polarization of h lines through impact polarization .other important justifications for multi - line observations exist .foremost among these is that observations of spectral lines formed at different heights in the solar atmosphere are needed to constrain the magnetic field geometry in three dimensions .secondly , simultaneous observation of multiple spectral lines and the application of a line - ratio technique have been shown to greatly enhance the diagnostic potential of zeeman - effect observations . finally , improving technologies ( e.g. ir detector arrays )have made it possible to take advantage of the increased sensitivity of the zeeman effect with wavelength through the observation of infrared spectral lines .it follows that the next generation of stokes polarimeters must have the capability to observe the solar atmosphere in a variety of spectral lines over a wide wavelength range , coupled with the ability to observe several lines simultaneously .this is reflected in the design of the recently completed spectro - polarimeter for infrared and optical regions ( spinor , ) , installed at the dunn solar telescope ( dst ) of the national solar observatory on sacramento peak ( nso / sp , sunspot , nm ) , that can observe between 430 and 1600 nm .post - focus instruments at the recently completed new solar telescope will observe between 400 and 1700 nm .the suite of spectro - polarimeters planned for the 4-m advanced technology solar telescope will cover the wavelength range between 380 and 2500 nm . andthe proposed science for the planned 4-m european solar telescope emphasizes multi - wavelength observations from the uv to the near - ir .one immediate instrument requirement stemming from this need for wavelength diversity is that the polarization modulation scheme must be _ efficient _ at all wavelengths of interest .typically , one attempts to achieve this goal by achromatizing the polarimetric response of a modulator .this , for instance , is the rational behind the design of super - achromatic waveplates . in this paper , instead , we present a new paradigm for the design of efficient polarization modulators that are not achromatic in the above sense , since they have polarimetric properties that vary with wavelength .however , they do so in such a way that they can be operated over a wide wavelength range with near optimal polarimetric efficiency .we refer to these modulators as _ polychromatic_. because of their performance , these are directly applicable to the next generation of multi - line stokes polarimeters . 
in the next section , we summarize the basic theory behind polarization modulators .next , we present example designs using existing technologies , including a discussion of the method used to obtain them .finally , the performance properties of a prototype polychromatic modulator are presented .most polarimeters operate by employing retarders and polarizers in a configuration that encodes polarization information into a modulated intensity signal .this intensity modulation is measured with a detector and analyzed to infer the input polarization state .an extensive treatment of the theoretical operation of stokes polarimeters has been given in .this work draws extensively from that formalism .the complete polarization state of an input light beam can be described by the stokes vector : where is the intensity , and are the net linear polarizations measured in two coordinate frames that are rotated by with respect to each other , and is the net circular polarization .since the stokes vector contains four unknown parameters , any polarimeter must make a minimum of four measurements to determine it .the polarization properties of a polarimeter in any state can be conveniently described by a 4 mueller matrix in the usual way .the first row of any mueller matrix captures the transformation of a stokes vector into intensity . assuming that the detector is sensitive to intensity alone, then only the first row of the polarimeter mueller matrix is important .the modulation of intensity by a -state polarimeter is captured in the modulation matrix , . for each state , the corresponding row of the modulation matrix ( the _ modulation vector _ )is given by the first row of the mueller matrix of the polarimeter in that state .the modulation matrix is generally normalized by the element .note that , for a retarder - based modulator , the first element of the modulation vector is the same in all modulation states , so after normalization the first column of the modulation matrix is all made of 1s .the operation of a polarimeter can then be represented by : and the input stokes vector is obtained by where is the demodulation matrix . in it is shown that , for a given modulation matrix , , the optimal demodulation matrix is .the polarimetric efficiency of a modulation scheme can be derived from the demodulation matrix itself , and is given by : where the subscript varies over the four elements of the stokes vector .the polarimetric efficiency is important in that it quantifies the noise propagation through the demodulation process .the noise in the -th element of the inferred stokes vector , , depends directly on the efficiency as : where is the uncertainty on the measured intensity , assumed constant , during the measurement cycle .equation 5 implies that an efficient polarization modulator is required to measure a stokes vector with high precision .the efficiency on any stokes parameter can be between 0 and 1 , subject to the two independent constraints , and .many so - called magnetographs are designed to modulate only stokes with high efficiency . in this paper, we are concerned with modulation schemes that are balanced , in the sense that the efficiencies of stokes , , and are equal . in this case , the maximum ( i.e. 
, optimal ) modulation efficiency for stokes is 1 , and for stokes , , and , is .cccc modulator type&retardance&orientation&modulation vector + stepped retarder& & & + & & & + & & & + & & & + lcvrs ( # 1,#2 ) & & & + & & & + & & & + & & & + & & & + & & & + flcs ( # 1,#2 ) & & & + & & & + & & & + & & & + there are several configurations for optimally efficient polarization modulators at a single wavelength .these may require different numbers of modulation states to achieve the full measurement of the stokes vector .in table [ tab : monochrome ] , we present examples of optimally efficient and balanced modulators at a single wavelength , employing three different modulator technologies .these are a rotating waveplate of fixed retardation , a pair of liquid crystal variable retarders ( lcvrs ) with variable retardation and fixed orientation of the fast axis ( e.g. , ) , and a pair of ferroelectric liquid crystals ( flcs ) with fixed retardance but variable orientation of the fast axis . the particular solution presented here for the flc modulatoris implemented in the diffraction - limited spectro - polarimeter ( dlsp , ) , which is also installed at the nso / sp dst .an example of a flc modulator that is optimally efficient but with higher efficiency for circular polarization is given in .the lcvr modulator presented here uses 6 modulation states , although it is possible to create a lcvr modulator that is balanced and optimally efficient at one wavelength requiring only 4 states .we also note that a single lcvr can not modulate all stokes parameters , regardless of the number of modulation states .modulation efficiency curves for the four stokes parameters between 400 and 1100 nm for modulators with two flcs ._ continuous curve _ : optimal and balanced solution at 630 nm ( indicated by the central arrow ) corresponding to the nso / dlsp modulator . _ dotted curve _ : polychromatic solution corresponding to a simple modification of the nso / dlsp modulator , where the first flc is rotated by an additional angle of ._ dashed curve _ : polychromatic solution obtained from the former solution through the addition of a fixed quartz retarder between the two flcs .this solution was optimized between 500 and 900 nm ( indicated by the two outmost arrows ) .the horizontal solid lines in the four panels indicate the maximum theoretical modulation efficiencies that can be achieved simultaneously for the four stokes parameters ( for , and for , , and ).,width=332 ] the retardance of a given device is wavelength dependent so these standard recipes generally work over a limited wavelength range .one could change the recipe to access other wavelengths but simultaneous spectro - polarimetric observations in different spectral ranges would not be possible .the chromatic nature of the example flc modulator is illustrated in fig .[ fig : flc ] showing the variation of the efficiency with wavelength . in the plot, we assume a target wavelength of 630 nm and that the intrinsic birefringence of the flcs is not a function of wavelength ( i.e. , no dispersion of the birefringence ) .we define the region of acceptable performance of a polarimeter as the region over which the efficiency is greater than the optimal efficiency divided by , that is , . at this point , the increase in noise is equivalent to a reduction in the photon flux by a factor of 2 , assuming poisson statistics . 
by this definition ,we find that the stepped - retarder solution is efficient over a spectral region of 268 nm , the lcvr solution over 287 nm , and the flc solution over 128 nm .to create polarimeters that operate over a large wavelength range , instrument developers have typically tried to select optical materials in order to make the polarimeter achromatic ; that is , to make the polarimeter modulation matrix as independent of wavelength as possible .the success of such an approach depends on the choice of materials and on the desired wavelength coverage . in principle, this affords an important simplification , which is the possibility of applying a single demodulation scheme to infer the stokes vector at any wavelength . in practice, the application of a single demodulation scheme can only give a zeroth - order approximation to the true stokes vector , because of the unavoidable residual wavelength dependence of the mueller matrix of the modulator across the spectral range of interest .for example , the achromatic modulator of nso / spinor shows a 15% variation of its retardance properties over its spectral range of operation ( 430 - 1600 nm , ) .therefore one still needs to perform a careful wavelength calibration of the polarimeter in order to infer the true stokes vector within the requirements of polarimetric sensitivity dictated by the science .previous work has also attempted to achromatize modulators by adding combinations of fixed and variable retarders , also with the goal of minimizing the wavelength dependence of the resulting mueller matrix . however , the preservation of a given form of the polarimeter s mueller matrix with wavelength is a very limiting , and arguably unnecessary , constraint .the fundamental driver in the design of a polarization modulator for multi - line applications is the achievement of near - optimal modulation efficiencies in all stokes parameters at all wavelengths of interest .the mueller matrix of such a modulator can be completely arbitrary , and even strongly dependent on wavelength .since calibration with wavelength is required , such wavelength dependence will not impact the precision of the polarimetric measurements .in addition , since the demodulation matrix can be calculated theoretically from the modulator design , it is straightforward to provide real - time approximations of the measured stokes vector at all wavelengths of operation . 
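For instance, given any candidate modulation matrix O, the demodulation matrix and the efficiencies follow from a few lines of code. The Python/NumPy sketch below assumes the standard expressions of the formalism summarized above, namely D = (O^T O)^{-1} O^T and eff_k = [ n * sum_j D_kj^2 ]^{-1/2} with n the number of modulation states; applied to an ideal balanced four-state scheme it recovers the optimal values (1, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)).

```python
# Compute the optimal demodulation matrix and the polarimetric efficiencies
# for a candidate modulation matrix O (one modulation vector per row).
import numpy as np

def demodulation_matrix(O):
    return np.linalg.inv(O.T @ O) @ O.T

def efficiencies(O):
    D = demodulation_matrix(O)
    n_states = O.shape[0]
    return 1.0 / np.sqrt(n_states * np.sum(D ** 2, axis=1))

if __name__ == "__main__":
    a = 1.0 / np.sqrt(3.0)
    # An ideal balanced 4-state scheme: every state measures I with weight 1
    # and Q, U, V with weights +/- 1/sqrt(3).
    O = np.array([[1.0,  a,  a,  a],
                  [1.0,  a, -a, -a],
                  [1.0, -a,  a, -a],
                  [1.0, -a, -a,  a]])
    print(efficiencies(O))   # -> [1.0, 0.5774, 0.5774, 0.5774]
```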
in designing the polarimeter for the 1-m yunnan solar telescope ( yunnan astronomical observatory , china ) optimized the efficiency of their polarimeter over a broad wavelength range by adjusting the orientation of the fast axes of the retarding elements , and in our opinion created the first polychromatic modulator .however , did not fully optimize their design by allowing the retardations of the elements to vary , they did not adopt an optimal demodulation scheme as described in which could result in lowering the efficiency that is achieved , and they stressed the importance of minimizing crosstalk which is not necessary to achieve an optimally efficient polarimeter .we propose a new paradigm where full - stokes polarization modulators are designed to satisfy the constraint of having optimal and balanced polarimetric efficiency at all wavelengths of interest .this generally results in polarimeters that have modulation and demodulation matrices that are strong functions of wavelength .we refer to these modulators as polychromatic .we have developed a technique for optimizing polarization modulators using combinations of fixed and variable retarders .we find that varying the retardance and orientation of the optical components that comprise a modulator , and maximizing the polarimetric efficiency for all wavelengths of interest , results in realizable configurations with a high degree of wavelength diversity .we employ an optimization code that systematically searches the parameter space for a solution that maximizes the stokes modulation efficiencies over the desired spectral range .the search is performed following the strategy of latin hypercube sampling of the parameter space .this significantly improves the convergence speed of the search for the optimal solution , compared to a direct monte carlo sampling .although the solutions found through this method are highly optimized , they may not be unique .thus it is not possible to conclude with certainty that they represent the solutions with the highest possible efficiency for a given type of modulator .different optimization schemes based on gradient methods have also been used with success ( e.g. , powell , levenberg - marquardt ) , although they need realistic first guesses in order to converge properly .we illustrate this concept with three different types of polychromatic modulators for polarimetric applications .for the first type of modulator , we consider a stack of two flcs , with the possible addition of a fixed retarder to broaden the wavelength range .for this type of modulator , there are four modulation states .the broadening of the wavelength range of a flc modulator by the addition of fixed retarders has been suggested with the intention to minimize off - diagonal elements of the mueller matrix , although that study did not result in efficient modulators over a wide wavelength range . 
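A minimal sketch of this kind of search is given below (Python/NumPy). It is not the optimization code used for the designs in this paper: the two FLCs are assumed to switch their fast axes by exactly 45 degrees, the fixed retarder and the figure of merit (worst-case Q/U/V efficiency over the band) are illustrative choices, the Latin hypercube sampler is a bare-bones implementation, and birefringence dispersion is neglected so that each retardance simply scales as lambda_ref/lambda.

```python
# Sketch of a Latin-hypercube search for a polychromatic two-FLC + fixed
# retarder modulator. All parameter ranges and design choices are illustrative.
import numpy as np

def rot(theta):
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1.0]])

def retarder(delta, theta):
    c, s = np.cos(delta), np.sin(delta)
    m = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, s], [0, 0, -s, c]])
    return rot(-theta) @ m @ rot(theta)

POLARIZER = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0],
                            [0, 0, 0, 0], [0, 0, 0, 0.0]])

def modulation_matrix(deltas_ref, thetas_per_state, lam, lam_ref=630e-9):
    """One modulation vector per state: first row of the polarimeter Mueller matrix."""
    rows = []
    for thetas in thetas_per_state:
        m = np.eye(4)
        for d_ref, th in zip(deltas_ref, thetas):
            m = retarder(d_ref * lam_ref / lam, th) @ m   # no dispersion assumed
        v = (POLARIZER @ m)[0]
        rows.append(v / v[0])
    return np.array(rows)

def worst_efficiency(O):
    g = O.T @ O
    if np.linalg.cond(g) > 1e8:          # degenerate modulation scheme
        return 0.0
    D = np.linalg.inv(g) @ O.T
    eff = 1.0 / np.sqrt(O.shape[0] * np.sum(D ** 2, axis=1))
    return eff[1:].min()                 # worst of Q, U, V

def latin_hypercube(n_samples, bounds, rng):
    d = len(bounds)
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.random((n_samples, d))) / n_samples
    lo, hi = np.array(bounds).T
    return lo + u * (hi - lo)

# Free parameters: two FLC retardances, one fixed retardance, and three angles.
bounds = [(0, 2 * np.pi), (0, 2 * np.pi), (0, 2 * np.pi),
          (0, np.pi), (0, np.pi), (0, np.pi)]
wavelengths = np.linspace(500e-9, 900e-9, 21)
rng = np.random.default_rng(0)
best_score, best_p = -1.0, None
for p in latin_hypercube(2000, bounds, rng):
    d1, d2, d3, t1, t2, t3 = p
    # 4 states: each FLC switches its fast axis by 45 degrees (an assumption).
    states = [(a1, a2, t3) for a1 in (t1, t1 + np.pi / 4)
                           for a2 in (t2, t2 + np.pi / 4)]
    score = min(worst_efficiency(modulation_matrix((d1, d2, d3), states, lam))
                for lam in wavelengths)
    if score > best_score:
        best_score, best_p = score, p
print("best worst-case Q/U/V efficiency over the band:", best_score)
```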
as a first example, we start with the flc modulator used by the nso / dlsp and presented in table 1 .the dlsp was designed to operate at 630 nm only .figure [ fig : flc ] ( continuous curve ) shows that the dlsp , as configured , has optimal and balanced efficiency at 630 nm with a spectral range of acceptable performance of 128 nm .we then optimized the efficiency of the dlsp modulator between 500 and 900 nm , by allowing the orientation angles of the flcs to vary .the resulting solution ( dotted curve ) is a simple modification of the nso / dlsp configuration ( cf .table 1 ) , where the first flc is rotated such that the new switching angles are and . with this new configuration , the modulator still provides optimal and balanced efficiency at 630 nm , however the usable range is now twice as large as before ( 259 nm ) .next , we added a fixed retarder between the two flcs , and allowed the orientations of the flcs and the retardance and orientation of the fixed retarder to vary .the resulting solution ( dashed curve ) yields a polarimeter with optimal and balanced efficiency at 630 nm , but the usable range now spans 553 nm .it must be noted that one can obtain even larger usable ranges with this type of modulator , when the optimization is extended to all the retarding devices .a clear example of this ( even if still subject to some external constraints ) is the promag modulator illustrated in sect .[ sec : promag ] . as fig .[ fig : flc ] , but now for a modulator consisting of a stack of three quartz retarders rotated over 8 discrete steps of 22.5 degrees .the modulator was optimized between 380 and 1600 nm ( indicated by the two arrows).,width=332 ] the second type of modulator is based on a stack of retarders glued in a fixed set of relative orientations , that is then rotated as a whole to the required set of positions to perform the measurement of the stokes vector .we consider a stepping modulator , rather than a continuously rotating one .the adoption of a continuous integration scheme results in a small reduction of the modulation efficiency .we refer to for a typical estimation of the reduction of modulation efficiency for continuously rotating modulators . for this type of rotating modulator, we consider the usual scheme with 8 measurements at positions , with .the example in fig .[ fig : ret ] shows the modulation efficiency of a stack of three retarders . as fig .[ fig : ret ] , but now for a modulator consisting of two lcvrs followed by a fixed quartz retarder , and using a 6-state modulation scheme .the modulator was optimized between 450 and 1600 nm ( indicated by the two arrows).,width=332 ] the third type is based on a stack of lcvrs followed by a fixed retarder .the number of states of this kind of modulator is a free parameter .figure [ fig : lcvr ] considers a modulator with 6 states .the parameter space for this kind of modulator has a much higher dimensionality than for designs based on flcs or fixed retarders , and therefore multiple solutions with comparable efficiencies are often found . the example shown in fig .[ fig : lcvr ] was determined by running the optimization several times and selecting the solution with the smallest deviation from optimal efficiency averaged over the optimization range . in the solutions shown in figs .[ fig : ret][fig : lcvr ] , the fast axis of the first device is conventionally fixed at .we verified that this constraint does not affect the search of an optimally polychromatic solution for the modulator configuration . 
for practical purposes, we also assume that the fixed retarders are made of quartz , while for the lcvrs we assume no wavelength dispersion of the birefringence . while this limits the applicability of the recipes given in this paper , the implementation of realistic birefringence dispersion curves is possible .preliminary tests show that this is a mildly limiting factor in the search of high - efficiency modulator configurations , typically resulting in a narrower spectral range of optimization for a given number of retarding devices .specific science applications of polarimeters may require giving a higher priority to selected spectral lines and/or stokes polarization parameters .for this reason , in our optimization code both the optimizing wavelengths and the four stokes parameters can be attributed different weights in the search of a high - efficiency modulator configuration .the solutions shown here were derived without any specific application in mind , and therefore they had equal weights for all optimizing wavelengths and stokes parameters . in pratical situations , tweaking of some of the weights may be necessary to correct the behavior of the modulation efficiency in a particular interval of the spectral range .this is not surprising , especially in the optimization of the modulation efficiency over very large spectral ranges , since one needs to use a correspondingly large number of optimizing wavelengths ( typically , of the order of 50 ) , and the number of nearly equivalent solutions increases dramatically because of the increased number of degrees of freedom in the optimization problem . in such cases ,the use of different weights can help direct the search towards particular types of nearly equivalent solutions while excluding others , at the user s discretion .theoretical efficiency curves for the promag modulator , consisting of two flcs followed by a quartz retarder .the modulator was optimized at 587.6 , 656.3 , 769.9 , and 1083.0 nm ( indicated by arrows ) .the crosses indicate the measured efficiencies at 587.6 and 1083.0 nm after deployment of the instrument.,width=332 ] as a first application of polychromatic modulators , we re - designed the polarimeter of the hao prominence magnetometer ( promag , ) .this instrument was conceived to observe solar prominences and filaments in the spectral lines of he i at 587.6 and 1083.0 nm , and also in h ( 656.3 nm ) .the re - design was prompted by a series of failed attempts at fabricating the promag modulator following the original design , which was based on a stack of six flcs .thus , the original design was replaced by a much simpler scheme , utilizing only two flcs plus a fixed retarder .the configuration was determined with the optimization method described above and constrained to use one of the flcs from the original design .retardances of the second flc and the fixed quartz retarder were specified according to the optimized solution , and the retardances of these devices were measured in the laboratory . 
since the measured retardances differed somewhat from the specified ones , a second optimization was performed varying only the orientations of the acquired devices using their measured retardances .the modulator was then constructed according to the design resulting from the second optimization .we estimate that the elements were oriented to the design values to within approximately .figure [ fig : promag ] shows the predicted efficiencies for the stokes parameters resulting from the optimization .actual efficiencies at 587.6 and 1083.0 nm were measured after deployment of the instrument at the evans solar facility of the nso / sp , and they are in good agreement with the theory ( see crosses in fig . [ fig : promag ] ) .the discrepancy observed at 1083.0 nm is likely caused by a less precise determination of the retardances of the modulator optics at that wavelength , possibly due to an undetected leak in the interference filter used during the measurement in the laboratory , combined with the different spectral responses of the promag alpha - nir camera and the photo - diode used in the lab measurement .theoretical modulation matrix of promag .the crosses indicate the measured modulation amplitudes at 587.6 and 1083.0 nm .the first column of the matrix is identically equal to 1 and is not shown.,width=332 ] the theoretical variation of the modulation matrix , , with wavelength is shown in fig .[ fig : promag_matrix ] .the first column is unity and therefore is omitted .measured modulation amplitudes at 587.6 and 1083.0 nm are also shown to be in close agreement with the theory .in this paper we presented a new paradigm for the design of polarization modulators that responds to the needs of multi - line spectro - polarimetry .we argued that the effort of achromatizing the response matrix of a polarimeter that is , trying to make the mueller matrix as little dependent on wavelength as possible is both a very limiting and unnecessary constraint .the only requirement that should be imposed on a polarimeter , in order to reach a given target of polarimetric sensitivity , is that its modulation efficiency be sufficiently high at all wavelengths of interest .as these wavelengths may be distributed over very large spectral ranges ( of the order of 1000 nm or larger ) , and because of the wavelength dependence of the optical properties of retarding devices , the design of optimally efficient polarization modulators from first principles is a particularly arduous , if not impossible , task . for this reason, we have developed an improved monte carlo search technique in order to explore the parameter space of a polarimeter , and identify optimally efficient solutions compatible with the particular type of modulator .we illustrated the power of this method by improving the modulation efficiency of existing polarimetric instruments , as well as by designing new modulators with near optimal and balanced efficiency over very large spectral ranges .finally we have demonstrated the applicability of this new paradigm by showing the measured performance of a prototype modulator , which was designed and optimized through the method presented in this paper .we thank d. elmore for helpful discussions during the early stages of this work , s. sewell for helpful comments on the manuscript and frans snik for pointing us to the yunnan polarimeter paper .the national center for atmospheric research is sponsored by the national science foundation .a. asensio ramos , m. martnez gonzlez , a. lpez ariste , j. 
trujillo bueno , and m. collados , `` a near - infrared line of mn i as a diagnostic tool of the average magnetic energy in the solar photosphere , '' * 659 * , 829 - 847 ( 2007 ) .j. tpn , p. heinzel , and s. sahal - brchot , `` hydrogen h line polarization in solar flares . theoretical investigation of atomic polarization by proton beams considering self - consistent nlte polarized radiative transfer , '' astron . astrophys . * 465 * , 621 - 631 ( 2007 ) .i. redi , s. k. solanki , w. livingston and j. w. harvey , `` interesting lines in the infrared solar spectrum .iii . a polarimetric survey between 1.05 and 2.50 m , '' astron . astrophys . * 113 * , 91 - 106 ( 1995 ) .h. socas - navarro , d. f. elmore , a. pietarila , t. darnell , b. w. lites , s. tomczyk , and s. hegwer , `` spinor : visible and infrared spectro - polarimetry at the national solar observatory , '' solar phys ., * 235 * , 55 - 73 ( 2006 ) . s. l. keil , t. rimmele , c. u. keller , f. hill , r. r. radick , j. m. oschmann , m. warner , n. e. dalrymple , j. briggs , s. hegwer and d. ren , `` design and development of the advanced technology solar telescope ( atst ) , '' proc .spie * 4853 * , 240 - 251 ( 2003 ) .m. collados , `` high resolution spectropolarimetry and magnetography , '' in _ 3rd advances in solar physics euroconference_,b .schmieder , a. hoffman , j. staude , eds .. ser . * 184 * ) , pp . 3 - 22 ( 1999 ) .s. tomczyk , g. l. card , t. darnell , d. f. elmore , r. lull , p. g. nelson , k. v. streander , j. burkepile , r. casini and p. g. judge , `` an instrument to measure coronal emission line polarization , '' solar phys . * 247 * , 411 - 428 ( 2008 ) .k. sankarasubramanian , b. lites , c. gullixson , d. f. elmore , s. hegwer , k. v. streander , t. rimmele , s. fletcher , s. gregory and m. sigwarth , `` the diffraction limited spectro - polarimeter , '' in _ solar polarization 4 _ , r. casini and b. lites , eds .. ser . * 358 * ) , pp .201 - 204 ( 2006 ) .m. d. mckay , r. j. beckman and w. j. conover , `` a comparison of three methods for selecting values of input variables in the analysis of output from a computer code , '' technometrics * 21 * , 239 - 245 ( 1979 ) .d. f. elmore , r. casini , g. l. card , m. davis , a. lecinski , r. lull , p. g. nelson and s. tomczyk , `` a new spectro - polarimeter for solar prominence and filament magnetic field measurements , '' proc .spie * 7014 * , 701416 ( 2008 ) . | information about the three - dimensional structure of solar magnetic fields is encoded in the polarized spectra of solar radiation by a host of physical processes . to extract this information , solar spectra must be obtained in a variety of magnetically sensitive spectral lines at high spatial , spectral , and temporal resolution with high precision . the need to observe many different spectral lines drives the development of stokes polarimeters with a high degree of wavelength diversity . we present a new paradigm for the design of polarization modulators that operate over a wide wavelength range with near optimal polarimetric efficiency and are directly applicable to the next generation of multi - line stokes polarimeters . these modulators are not achromatic in the usual sense because their polarimetric properties vary with wavelength , but they do so in an optimal way . thus we refer to these modulators as _ polychromatic_. 
we present here the theory behind polychromatic modulators , illustrate the concept with design examples , and present the performance properties of a prototype polychromatic modulator . |
the bilateral filter was proposed by tomasi and maduchi as a non - linear extension of the classical gaussian filter .it is an instance of an edge - preserving filter that can smooth homogenous regions , while preserving sharp edges at the same time .the bilateral filter has diverse applications in image processing , computer vision , computer graphics , and computational photography .we refer the interested reader to for a comprehensive survey of the working of the filter and its various applications .the bilateral filter uses a spatial kernel along with a range kernel to perform edge - preserving smoothing . before proceeding further, we introduce the necessary notation and terminology .let be a vector - valued image , where is some finite rectangular domain of .for example , for a color image .consider the kernels and given by where and is some positive definite covariance matrix .the former bivariate gaussian is called the spatial kernel , and the latter multivariate gaussian is called the range kernel .the output of the bilateral filter is the vector - valued image : \omega \rightarrow \mathbb{r}^d ] around the pixel of interest , where is the standard deviation of the spatial gaussian .thus , the direct computation of and requires operations per pixel .in fact , the direct implementation is known to be slow for practical settings of . for the case ( grayscale images ) , researchers have come up with several fast algorithms based on various forms of approximations .a detailed account of some of the recent fast algorithms , and a comparison of their performances , can be found in .a straightforward way of extending the above fast algorithms to vector - valued images is to apply the algorithm separately on each of the components .the output in this case will generally be different from that obtained using the formulation in . in this regard , it was observed in that the component - wise filtering of rgb images can often lead to color distortions .it was shown that such distortions can be avoided by applying in the cie - lab color space , where the covariance is chosen to be diagonal . in this paper, we present a fast algorithm for computing .the core idea is that of using raised - cosines to approximate the range kernel .this approximation was originally proposed in for deriving a fast algorithm for gray - scale images .it was later shown in that the raised - cosine approximation can be extended for performing high - dimensional filtering using the product of one - dimensional approximations .unfortunately , this did not lead to a practical fast algorithm .the fundamental difficulty in this regard is the so - called `` curse of dimensionality '' .namely , while a raised - cosine of small order , say , suffices to approximate a one - dimensional gaussian , the product of such approximations result in an order of in dimensions .a similar bottleneck arises in the context of computing using the raised - cosine approximation .nevertheless , we will demonstrate how this problem can be circumvented using monte carlo approximation .the contribution and organization of the paper are as follows . in section [ pa ] ,we extend the shiftable approximation in for the bilateral filtering of vector - valued images given by . in this direction, we propose a stochastic interpretation of the raised - cosine approximation , and show how it can be made practical using monte carlo sampling . based on this approximation, we develop a fast algorithm in section [ fa ] . 
as an application, we use the proposed algorithm for filtering color images in section [ results ] .the results reported in this section demonstrate the accuracy of the approximation , and the speedup achieved over the direct implementation .we conclude the paper in section [ conclusion ] . over the dynamic range ] is with respect to .recall the identity , where .we use this along with the binomial theorem , and get notice that the coefficient in corresponding to a given is simply the probability that a random variable takes on the value . in other words , we can write where ] for .in this section , we present some results on natural color images for which .in particular , we demonstrate that the proposed ` mcsf ` algorithm is both fast and accurate for color images in relation to the direct implementation . to quantify the approximation accuracy for a given color image , we used the mean - squared error between ] ( the latter is the output of algorithm [ algo ] ) given by (\i)-\mathcal{s}[\f]_k(\i ) \big)^2,\ ] ] where ] are the -th color channel . on a logarithmic scale , this corresponds to db . for the experiments reported in this paper , we used an isotropic gaussian kernel corresponding to in .in other words , , and for .the accuracy of the proposed algorithm is controlled by the order of the raised - cosine ( ) and the number of trails ( ) .it is clear that we can improve the approximation accuracy by increasing and .we illustrate this point with an example in figure [ errorplot ] .we notice that ` mcsf ` can achieve sub - pixel accuracy when and .we have noticed in our simulations that , for a fixed , the accuracy tends to saturate beyond a certain .this is demonstrated in figure [ errorplot ] using and ..comparison of the run - time of the direct implementation and algorithm [ algo ] on the _ peppers _ image .the range parameter used is .the parameters of ` mcsf ` are and . the computations were performed using matlab on a ghz intel -core machine with gb memory . [ cols="^,^,^,^,^,^,^",options="header " , ] [ table1 ] a comparison of the run - time of the direct implementation of and that of the proposed algorithm is provided in table [ table1 ] .we notice that ` mcsf ` is few orders faster than the direct implementation , particularly for large .indeed , following the fact that the convolutions in step [ conv ] of algorithm [ algo ] can be computed in constant - time with respect to , our algorithm has complexity with respect to . as against this , the direct implementation scales as .finally , we present a visual comparison of the filtering for rgb images in figures [ dome ] and [ peppers ] .notice that the outputs are visually indistinguishable .as mentioned earlier , the authors in have observed that the application of the bilateral filter in the rgb color space can lead to color leakage , particularly at the sharp edges .the suggested solution was to perform the filtering in the cie - lab space . in this regard ,a comparison of the filtering in the cie - lab space is provided in figure [ house ] . 
in this case , we first performed a color transformation from the rgb to the cie - lab space , performed the filtering in the cie - lab space , and then transformed back to the rgb space .the filtered outputs are seen to be close , both visually and in terms of the mse .we proposed a fast algorithm for the bilateral filtering of vector - valued images .we applied the algorithm for filtering color images in the rgb and the cie - lab space .in particular , we demonstrated that a speedup of few orders can be achieved using the fast algorithm without introducing visible changes in the filter output ( the latter fact was also quantified using the mse ) .an important theoretical question arising from the work is the dependence of the order and the number of trials on the filtering accuracy . in future work, we will investigate this matter , and also look at various ways of improving the monte carlo integration .we also plan to test the algorithm on other vector - valued images . | in this paper , we consider a natural extension of the edge - preserving bilateral filter for vector - valued images . the direct computation of this non - linear filter is slow in practice . we demonstrate how a fast algorithm can be obtained by first approximating the gaussian kernel of the bilateral filter using raised - cosines , and then using monte carlo sampling . we present simulation results on color images to demonstrate the accuracy of the algorithm and the speedup over the direct implementation . bilateral filter , vector - valued image , color image , monte carlo method , approximation , fast algorithm . |
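As a reference point for the run-times quoted in Table [table1], the following Python/NumPy sketch implements the direct (brute-force) vector-valued bilateral filter defined in this paper's introduction, assuming an isotropic range covariance C = sigma_r^2 I and the window of +/- 3 sigma_s mentioned in the text. It is the slow baseline, not the proposed MCSF algorithm, and the boundary handling is an implementation choice.

```python
# Direct (brute-force) vector-valued bilateral filter: the slow baseline only.
import numpy as np

def bilateral_direct(f, sigma_s, sigma_r):
    """f: (H, W, d) float image; window of +/- 3*sigma_s around each pixel."""
    H, W, d = f.shape
    w = int(3 * sigma_s)
    ax = np.arange(-w, w + 1)
    xx, yy = np.meshgrid(ax, ax, indexing="ij")
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_s ** 2))
    out = np.zeros_like(f)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(i - w, 0), min(i + w + 1, H)
            j0, j1 = max(j - w, 0), min(j + w + 1, W)
            patch = f[i0:i1, j0:j1, :]
            ws = spatial[i0 - i + w:i1 - i + w, j0 - j + w:j1 - j + w]
            diff = patch - f[i, j, :]
            wr = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma_r ** 2))
            wgt = ws * wr
            out[i, j, :] = np.tensordot(wgt, patch,
                                        axes=([0, 1], [0, 1])) / wgt.sum()
    return out
```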
Recently, social networks have become a popular tool for sharing images. When a social network user uploads an image, the image is usually associated with a tag/keyword which is used to describe the semantic content of this image. The tags provided by the users are usually incomplete. Zhang et al. designed and implemented a fast motion detection mechanism for multimedia data on mobile and embedded environments. Recently, the problem of image tag completion has been proposed in the computer vision and machine learning communities to learn the missing tags of images. This problem is defined as the problem of completing the missing elements of the tag vector of a given image automatically. In this paper, we investigate the problem of image tag completion and propose a novel and effective algorithm for this problem based on local linear learning. Instead of completing the missing tag association elements of each image directly, we introduce a tag scoring vector to indicate the scores of assigning the image to the tags in a given tag set. We propose to study the tag scoring vector learning problem in the neighborhood of each image. For each image, we propose to learn a local linear function that predicts the tag scoring vectors of the images in its neighborhood from their visual feature vectors. We propose to minimize the prediction error, measured by the squared norm distance, over each neighborhood, and also to minimize the squared norm of the linear function parameters. Besides the local linear learning, we also propose to regularize the learning of the tag scoring vectors by the available tags of each image. We construct a unified objective function to learn both the tag scoring vectors and the local linear functions, and we develop an iterative algorithm to optimize the proposed problem. In each iteration of this algorithm, we update the tag scoring vectors and the local linear function parameters alternately. The rest of this paper is organized as follows: in section [sec:method] we introduce the proposed method, in section [sec:exp] we evaluate the proposed method on benchmark data sets, and in section [sec:conclusion] the paper is concluded with future work. We assume that we have a data set of $n$ images, and their visual feature vectors are $\mathbf{x}_1, \cdots, \mathbf{x}_n$, where $\mathbf{x}_i \in \mathbb{R}^d$ is the $d$-dimensional feature vector of the $i$-th image. We also assume that we have a set of $m$ unique tags, and for each image a tag vector $\mathbf{t}_i = [t_{i1}, \cdots, t_{im}]^\top \in \{+1,-1\}^m$, where $t_{il} = +1$ if the $l$-th tag is available for the $i$-th image, and $t_{il} = -1$ if it is missing. We propose to learn a tag scoring vector $\mathbf{g}_i = [g_{i1}, \cdots, g_{im}]^\top \in \mathbb{R}^m$, where $g_{il}$ is the score of assigning the $l$-th tag to the $i$-th image. The set of the nearest neighbors of each image is denoted as $\mathcal{N}_i$, and we assume that the tag scoring vector of an image in this neighborhood can be predicted from its visual feature vector using a local linear function $f_i(\mathbf{x}) = W_i \mathbf{x}$, where $W_i$ is the parameter matrix of the local linear function. To learn the tag scoring vectors and the local function parameters, we propose the minimization problem in ([equ:obj3]), where two tradeoff parameters weight its terms. The objective function in ([equ:obj3]) is a sum of three terms over all the images in the data set. The first term is the prediction error of the local linear predictor over the neighborhood of each image. The second term reduces the complexity of the local linear predictor. The last term is a regularization term that regularizes the learning of the tag scoring vectors by the incomplete tag vectors, so that the available tags are respected.
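A minimal numerical sketch of this model, together with the kind of alternating gradient-descent updates developed below, is given here. The exact form of the objective in ([equ:obj3]) is not reproduced; the version coded below is an assumed instantiation, J = sum_j [ sum_{i in N_j} ||g_i - W_j x_i||^2 + alpha ||W_j||_F^2 ] + beta sum_i ||lambda_i * (g_i - t_i)||^2 with lambda_il = 1 for the available tags, and all hyper-parameters, the step size, and the neighborhood size are illustrative.

```python
# Sketch of local linear tag completion (assumed objective, illustrative values).
import numpy as np

def complete_tags(X, T, k=5, alpha=0.1, beta=1.0, eta=1e-3, n_iter=100):
    """X: (n, d) visual features; T: (n, m) tag vectors in {+1, -1}
    (+1 = tag available, -1 = missing). Returns tag scoring vectors G (n, m)."""
    n, d = X.shape
    m = T.shape[1]
    Lam = (T > 0).astype(float)                 # weight only the available tags
    # k nearest neighbors by Euclidean distance (including the image itself)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    N = np.argsort(D2, axis=1)[:, :k]
    G = Lam.copy()                              # initialize scores from known tags
    W = np.zeros((n, m, d))                     # one local linear predictor per image
    for _ in range(n_iter):
        # update the local linear predictors W_j by gradient descent
        for j in range(n):
            idx = N[j]
            grad_W = 2 * (W[j] @ X[idx].T - G[idx].T) @ X[idx] + 2 * alpha * W[j]
            W[j] -= eta * grad_W
        # update the tag scoring vectors g_i by gradient descent
        grad_G = 2 * beta * Lam * (G - T)
        for j in range(n):
            idx = N[j]
            grad_G[idx] += 2 * (G[idx] - X[idx] @ W[j].T)
        G -= eta * grad_G
    return G
```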
to optimize the minimization problem in ( [ equ : obj3 ] ), we propose to use the alternate optimization strategy in an iterative algorithm . * * optimization of * in each iteration , we optimize one by one , and the minimization of ( [ equ : obj3 ] ) with respect to can be achieved with the following gradient descent update rule , + + where is the sub - gradient function of , with respect to , + + and is the descent step . * * optimization of * in each iteration , we also optimized one by one . when is optimized , are fixed .gradient descent method is also employed to update to minimize the objective in ( [ equ : obj3 ] ) , + + where is the sub - gradient function with respect to , + the experiments , we used two publicly accessed image - tag data sets , which are corel5k data set and iapr tc12 data set . in the corel5k data set , there are 4,918 images , and 260 tages .we extract density feature , harris shift feature , harris hue feature , rgb color feature , and hsv color feature as visual features for each image .moreover , we remove 40% of the elements of the tag vectors to make the incomplete image tag vectors . in the iaprtc12 data set , there are 19,062 images , and 291 tags .we also remove 40% elements of the tag elements to construct the incomplete tag vectors .to evaluate the tag completion performances , we used the recall - precision curve as performance measure .we also use mean average precision ( map ) as a single performance measure .we compared the proposed method to several state - of - the - art tag completion methods , including tag matrix completion ( tmc ) , linear sparse reconstructions ( lsr ) , tag completion by noisy matrix recovery ( tcmr) , and tag completion via nmf ( tc - nmf ) .the experimental result on two data sets are given in fig .[ fig : corel5k ] and fig .[ fig : iapr ] . from these figures, we can see that the proposed method loctc performs best .its recall - precision curve is closer to the top - right corner than any other methods , and its map is also higher than maps of other methods . in this section , we will study the sensitivity of the proposed algorithm to the two parameters , and .the curves of and on different data sets are given in fig .[ fig : corel_alpah ] and fig .[ fig : iapr_alpah ] . from these figures, we can see that the performances are stable to different valuse of both and .in this paper , we study the problem of tag completion , and proposed a novel algorithm for this problem . we proposed to learn the tags of images in the neighborhood of each image .a local linear function is designed to predict the tag scoring vectors of images in each neighborhood , and the prediction function parameter is learned jointly with the tag scoring vectors .the proposed method is compared to state - of - the - art tag completion algorithms , and the results show that the proposed algorithm outperforms the compared methods . in the future , we will study how to incorporate these connections into our model and learn more effective tags . in this paper , we used one single local function for each neighborhood , and in the future , we will use more than than regularization to regularized the learning of tags , such as usage of wavelet functions to construct the local function .moreover , correntropy can also be considered as a alternative loss function to construct the local learning problem . 
in the future, we also plan to extend the proposed algorithm for completion of tags of large scale image data set by using high performance computing technology , and completion of tags of gene / protein functions of bioinformatics problems . | the problem of tag completion is to learn the missing tags of an image . in this paper , we propose to learn a tag scoring vector for each image by local linear learning . a local linear function is used in the neighborhood of each image to predict the tag scoring vectors of its neighboring images . we construct a unified objective function for the learning of both tag scoring vectors and local linear function parameters . in the objective , we impose the learned tag scoring vectors to be consistent with the known associations to the tags of each image , and also minimize the prediction error of each local linear function , while reducing the complexity of each local function . the objective function is optimized by an alternate optimization strategy and gradient descent methods in an iterative algorithm . we compare the proposed algorithm against different state - of - the - art tag completion methods , and the results show its advantages . tagging , tag completion , local learning , gradient descent |
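the abstract above describes an alternating scheme: tag scoring vectors on one side, local linear functions on the other, both updated by gradient descent. a minimal numpy sketch of that kind of scheme is given below; the use of image features, the neighbourhood size, the trade-off weights alpha / beta / gamma and all function and variable names are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

def tag_completion(X, T, O, k=5, alpha=1.0, beta=1.0, gamma=0.1,
                   lr=0.05, iters=200, seed=0):
    """Toy alternating optimization for tag completion.

    X : (n, d) image feature vectors (assumed available)
    T : (n, m) known tag indicators (1 = tag present, 0 = unknown)
    O : (n, m) mask, 1 where the entry of T is actually observed
    Returns S : (n, m) learned tag scoring vectors.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = T.shape[1]
    S = T.astype(float).copy()                    # initialize scores with the known tags
    W = rng.normal(scale=0.01, size=(n, m, d))    # one local linear map per image

    # k nearest neighbours of every image in feature space (including itself)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(dist, axis=1)[:, :k]

    for _ in range(iters):
        # gradient step on each local linear function (prediction error + complexity)
        for i in range(n):
            J = nbrs[i]
            err = X[J] @ W[i].T - S[J]            # (k, m) local prediction error
            W[i] -= lr * (err.T @ X[J] / k + gamma * W[i])
        # gradient step on the tag scoring vectors
        G = alpha * O * (S - T)                   # consistency with known associations
        for i in range(n):
            J = nbrs[i]
            G[J] += beta * (S[J] - X[J] @ W[i].T) / k
        S -= lr * G
    return S
```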
nonnegative datasets are everywhere ; from term by document matrix induced from a document corpus , gene expression datasets , pixels in digital images , disease patterns , to spectral signatures from astronomical spectrometers among others .even though diverse , they have one thing in common : all can be represented by using nonnegative matrices induced from the datasets .this allows many well - established mathematical techniques to be applied in order to anayze the datasets .there are many common tasks associated with these datasets , for example : grouping the similar data points ( _ clustering _ ) , finding patterns in the datasets , identifying important or interesting features , and finding sets of relevant data points to queries ( _ information retrieval _ ) . in this paper, we will focus on two tasks : clustering and latent semantic indexing a technique that can be used for improving recall and precision of an information retrieval ( ir ) system .clustering is the task of assigning data points into clusters such that similar points are in the same clusters and dissimilar points are in the different clusters .there are many types of clustering , for example supervised / unsu - pervised , hierarchical / partitional , hard / soft , and one - way / many - way ( two - way clustering is known as co - clustering or bi - clustering ) among others . in this paper ,clustering term refers to unsupervised , partitional , hard , and one - way clustering .further , the number of cluster is given beforehand .the nmf as a clustering method can be traced back to the work by lee & seung .but , the first work that explicitly demonstrates it is the work by xu et al . in which they show that the nmf outperforms the spectral methods in term of _ purity _ and _ mutual information _ measures for reuters and tdt2 datasets .clustering aspect of the nmf , even though numerically well studied , is not theoretically well explained .usually this aspect is explained by showing the equivalence between nmf objective to either k - means clustering objective or spectral clustering objective .the problem with the first approach is there is no obvious way to incorporate the nonnegativity constraints into k - means clustering objective . andthe problem with the second approach is it discards the nonnegativity constraints , thus is equivalent to finding stationary points on unbounded region .accordingly , the nmf which is a bound - constrained optimization turns into an unbounded optimization , so there is no guarantee the stationary point being utilized in proving the equivalence is located on the feasible region indicated by the constraints . in the first part of this paper, we will provide a theoretical support for clustering aspect of the nmf by analyzing the objective at the stationary point using the karush - kuhn - tucker ( kkt ) conditions without setting the kkt multipliers to zeros .thus , the stationary point under investigation is guaranteed to be located on the feasible region .latent semantic indexing ( lsi ) is a method introduced by deerwester et al . 
to improve recall and precision of an ir system using truncated singular value decomposition ( svd ) of the term - by - document matrix to reveal hidden relationship between documents by indexing terms that are present in the similar documents and weakening the influences of terms that are mutually present in the dissimilar documents .the first capability can solve the _synonymy_different words with similar meaning problem , and the second capability can solve the _ polysemy_words with multiple unrelated meanings problem .thus , lsi not only is able to retrieve relevant documents that do not contain terms in the query , but also can filter out irrelevant documents that contain terms in the query .lsi aspect of the nmf is not well studied .there are some works that discuss the relationship between the nmf and probabilistic lsi , e.g. , .but the emphasize is in clustering capability of probabilistic lsi , not lsi aspect of the nmf .motivated by the svd which is the standard method in clustering and lsi , in the second part of this paper , lsi aspect of the nmf will be studied , and the results will be compared to the results of the svd .the nmf was popularized by the work of lee & seung in which they showed that this technique can be used to learn parts of faces and semantic features of text .previously , it has been studied under the term positive matrix factorization .mathematically , the nmf is a technique that decomposes a nonnegative data matrix into a pair of other nonnegative matrices : where ] denotes the basis matrix , ] , where is the -th largest eigenvector of .normalize every row of , i.e. , .apply k - means clustering on the row of to obtain the clustering indicator matrix . 1 .input : rectangular data matrix with data points , # cluster , and gaussian kernel parameter .2 . construct symmetric affinity matrix from by using gaussian kernel .3 . compute and by using nmf algorithm ( nmfls or nmfjk ) so that .assume is used , then clustering assignment of data point , , can be computed by . 
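the three numbered steps above translate almost directly into code. the sketch below is only an illustration: it uses the plain lee–seung multiplicative updates in place of the specific nmfls / nmfjk algorithms studied in this paper, and the gaussian kernel width, the iteration count and the toy data are arbitrary assumptions.

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Symmetric affinity matrix A built from the rows of X via a gaussian kernel."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def nmf_cluster(X, r, sigma=1.0, iters=500, eps=1e-9, seed=0):
    """Cluster the rows of X into r groups by factorizing the affinity A ~ B C."""
    rng = np.random.default_rng(seed)
    A = gaussian_affinity(X, sigma)
    n = A.shape[0]
    B = rng.random((n, r)) + eps
    C = rng.random((r, n)) + eps
    for _ in range(iters):                      # standard multiplicative updates for ||A - BC||_F
        B *= (A @ C.T) / (B @ C @ C.T + eps)
        C *= (B.T @ A) / (B.T @ B @ C + eps)
    return np.argmax(B, axis=1)                 # cluster label of each data point

# toy usage: two well-separated gaussian blobs
pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 6.0])
labels = nmf_cluster(pts, r=2, sigma=2.0)
```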
as shown in figure [ ch2:fig1 ] , [ ch3:fig1 ] , and [ ch3:fig2 ] , while the spectral clustering can correctly find the clustering assignments for all datasets , the nmfs can only compete with the spectral clustering for the last dataset which is rather linearly separable .these results are in accord with the proof of proposition [ ch3:prop1 ] ( that states as long as vertices on the feature graph are clustered , optimizing the nmf objective leads to the feature clustering indicator matrix ) .thus , it seems that as a clustering method , the nmf is more similar to k - means clustering or support vector machine ( svm ) which also can only cluster linearly separable datasets , than to the spectral methods , even though both clustering using the nmf and the spectral methods are based on matrix decomposition techniques .accordingly , clustering performances of the nmf can probably be improved by using appropriate kernel methods as in k - means clustering and svm .the experiments are conducted to evaluate the performances of the nmf as a clustering method .all algorithms are developed in gnu octave under linux platform using a notebook with 1.86 ghz intel processor and 2 gb ram .reuters-21578 document corpus , the standard dataset for testing learning algorithms and other text - based processing methods , is used for this purpose .this dataset contains 21578 documents ( divided into 22 files with each file contains 1000 documents and the last file contains 578 documents ) with 135 topics created manually with each document is assigned to one or more topics based on its content .the dataset is available in sgml and xml format , we use the xml version .we use all but the 18 file because this file is invalid both in its sgml and xml version .we use only documents that belong to exclusively one class ( we use `` classes '' for refeering to the original grouping , and `` clusters '' for referring to groups resulted from the clustering algorithms ) .further , we remove the common english stop words , stem the remaining words using porter stemmer , and then remove words that belong to only one document . and also , we normalize the term - by document matrix by : where as suggested by xu et al .we form test datasets by combining top 2 , 4 , 6 , 8 , 10 , and 12 classes from the corpus .table [ ch2:table3 ] summarizes the statistics of these test datasets , where # doc , # word , % nnz , max , and min refer to the number of document , the number of word , percentage of nonzero entry , maximum cluster size , and minimum cluster size respectively . and table [ ch2:table4 ] gives the sizes ( # doc ) of these top 12 classes ..statistics of the test datasets . [ cols="<,>,>,>,>,>",options="header " , ] [ ch3:table13 ]we have presented a theoretical framework for supporting clustering aspect of the nmf without setting the kkt multipliers to zeros . thus the stationary point used in proving this aspectis guaranteed to be on the nonnegative orthant which is the feasible region of the nmf .our theoretical work implies a limitation of the nmf as a clustering method in which it can not be used in clustering linearly inseparable datasets .so , the nmf as a clustering method is more resembling k - means clustering or svm than the spectral clustering , even though both the nmf and the spectral methods utilize matrix decomposition techniques . 
as the clustering capabilities of k - means and svm usually can be improved by using the kernel methods , probably the same approach can also be employed in the nmf .we will address this issue in our future researches . clustering capability of nmfjk is comparable to the svd in reuters datasets with nmfjk tends to be better for small # cluster and the svd for big # cluster .but unfortunately , nmfls which is the standard nmf algorithm can not outperform the svd .these results imply clustering aspect of the nmf is algorithm - dependent , a fact that seems to be overlooked in the nmf researches .lsi aspect of the nmf seems to be comparable to the svd in its power for solving synonymy and polysemy problems for datasets with clear semantic structures that allowed these problems to be revealed . in real datasets , however , the nmf generally can not outperform the svd .but an interesting fact comes into sight ; in some cases , the nmf can outperform the svd , even though when the computations are repeated and averaged over the number of trials , these advantages vanish . because the nmf can offer different results depending on the algorithms , the initializations , the objectives , and the problems , improving lsi capability of the nmf is possible .we will address this problem in our future researches .h. kim and h. park , `` sparse non - negative matrix factorizations via alternating non - negativity constrained least squares for microarray data analysis , '' bioinformatics , vol .23(12 ) , pp . 1495 - 502 , 2007 . c. ding , t. li , and w. peng , `` on the equivalence between non - negative matrix factorization and probabilistic latent semantic indexing , '' computational statistics & data analysis , vol .52(8 ) pp . 3913 - 27 , 2008 . h. kim and h. park , `` nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method , '' siam .j. matrix anal . & appl .30(2 ) , pp . 713 - 30 , 2008 .r. albright , j. cox , d. duling , a. langville , and c. meyer , `` algorithms , initializations , and convergence for the nonnegative matrix factorization , '' ncsu technical report math 81706 , north carolina state university , 2006 . | this paper provides a theoretical support for clustering aspect of the nonnegative matrix factorization ( nmf ) . by utilizing the karush - kuhn - tucker optimality conditions , we show that nmf objective is equivalent to graph clustering objective , so clustering aspect of the nmf has a solid justification . different from previous approaches which usually discard the nonnegativity constraints , our approach guarantees the stationary point being used in deriving the equivalence is located on the feasible region in the nonnegative orthant . additionally , since clustering capability of a matrix decomposition technique can sometimes imply its latent semantic indexing ( lsi ) aspect , we will also evaluate lsi aspect of the nmf by showing its capability in solving the synonymy and polysemy problems in synthetic datasets . and more extensive evaluation will be conducted by comparing lsi performances of the nmf and the singular value decomposition ( svd)the standard lsi method using some standard datasets . bound - constrained optimization , clustering method , nonnegative matrix factorization , karush - kuhn - tucker optimality conditions , latent semantic indexing . 15a23 , 68r10 . |
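as a concrete reference point for the lsi comparison summarized above, the following sketch performs query retrieval with a rank-k truncated svd of a toy term-by-document matrix; the matrix, the chosen rank and the cosine ranking are illustrative, and an analogous nmf variant would use the nonnegative factors in place of the singular vectors.

```python
import numpy as np

def lsi_rank(A, q, k=2):
    """Rank documents (columns of term-by-document matrix A) against query q
    in the rank-k latent space of the truncated SVD A ~ U_k S_k V_k^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T          # Vk rows: documents in latent space
    q_hat = (q @ Uk) / sk                              # fold the query into the latent space
    sims = Vk @ q_hat / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(q_hat) + 1e-12)
    return np.argsort(-sims)                           # best-matching documents first

# toy example: 5 terms x 4 documents, query containing only term 0
A = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)
print(lsi_rank(A, q=np.array([1., 0., 0., 0., 0.]), k=2))
```

note that document 1, which does not contain the query term, can still be ranked above document 3 because it shares latent structure with document 0; this is the synonymy effect discussed above.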
minority game ( mg ) a game proposed by challet and zhang under the inspiration of the el farol bar problem is a simple model showing how selfish players cooperate with each other in the absence of direct communication .it succinctly captures the self - organizing global cooperative behavior which is ubiquitously found in many social and economic systems . in mg , inductive reasoning players have to choose one out of two choices independently in each turn . based only on certain commonly available global information , each player has to decide one s choice by means of his / her current best working strategy or mental model . those who end up in the minority side ( i. e. , the choice with the least number of players ) win .although its rules are remarkably simple , mg exhibits very rich self - organized collective behavior . moreover ,the dynamics of mg can be explained by the so - called crowd - anticrowd theory which stated that fluctuations arisen in the mg is resulted from the interaction between crowds of like - minded agents and their anti - correlated partners .however , in the real world , people usually do not only consider the global information when they make decisions .in fact , they may also consult the opinion of their neighbors before making decisions .for example , it is not uncommon for people to consider both the recommendation of their peers ( local information ) and the stock price ( global information ) in deciding which stock to buy from the market .hence , it is instructive for us to incorporate the local information into the mg model to gain more insights on this kind of social and economic systems . in the past few years, many researchers have studied a few variations of mg with local information .however , players can only make use of either the global or local information in each turn in these models .in contrast , people often make their decisions according to both the global information and their local information in many social and economic phenomenon . on the other hand , quan _ et al ._ introduced the local information in the evolutionary minority game ( emg ) .since we want to focus on how are players affected by their local information in a _ non - evolutionary _ game , we shall not consider the emg with local information here . with the above consideration in mind, we would like to propose a model of mg where players use both the local and global information in an unbiased manner for making decision . in sectionii , we introduce a new model called the networked minority game ( nmg ) .it is a modified mg model in which all players can make use of not only the global information but also the local information from their neighbors that are disseminated through a network .results of numerical simulations are presented and discussed in section iii .lastly , we conclude by giving a brief summary of our work in section iv .in this section , we would like to show how to construct the networked minority game ( nmg ) model . in this repeated game, there are heterogeneous inductive reasoning players whose aim is to maximize one s own profit in the game . 
in every turn, each player has to choose one out of alternatives with label .the global minority choice , denoted as at time , is simply the least popular choice amongst all players in that turn .( note that the global minority choice is the least popular choice that are chosen by a non - zero number of players ; and it is chosen randomly amongst the choices with the least non - zero number of players in case of a tie . ) the players picked the global minority choice gain one unit of wealth while all the other lose one . unlike in mg ,not only the global information is delivered to each player as one s external information in our model . nevertheless , the global information given to players are the same in these two models . in nmg, we call it the global history which is simply the -ary string of the global minority choice of the last turns . the global history can only take on different states .we label these states by an index and denote the global history by the index . besides global information, local information is also distributed via a network to all players in nmg such that individual player receive one s local information from a different source . in this game , we arrange all the players with label on a ring ( see fig . [fig : f1 ] ) where the local information of a player is based on the choices of this player and his / her nearest neighbors on the ring .specifically , the local information given to the player is the so - called local history which is simply the -ary string of the local minority choice of this player of the last turns .( note that we do not consider the case of since it is equivalent to mg . ) here , the local minority choice of the player at time step , , refers to the least popular choice amongst this player and his / her nearest neighbor on the ring at this time step .( in other words , is the least popular choice amongst , and player . ) however , unlike the global minority choice , the local minority choice could be an alternative that nobody chooses . in case of a tie , chosen randomly amongst the choices with the least number of players .the local history can only take on different states .we again label these states by an index and denote the local history of the player by the index . by the way , it is easy to extend nmg to the case where players are connected on a different topology .for example , we can arrange players on a dynamically random chain in which each player is connected to two randomly chosen players and all the connections between players change at every time step . in brief , each player is given a -ary string of length storing both the global and local history to decide his / her choice .players can only interact indirectly with each other through the global history and their local history .however , how does each player make use of such external information to decide one s choice in the nmg ?he / she does so by employing strategies to predict the next minority choice according to both the global and local information where a strategy is a map sending individual combination of global and local history ( , ) to the choice . a strategy be represented by a vector where is the minority choice predicted by the strategy for the input . in our model , each player picks randomly drawn strategies from the strategy space before the game commences .( we will discuss about the strategy space in depth later on . ) just like in mg , strategies in nmg are not evolving , i. e. , players are not allowed to revise their own strategies during the game . 
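to make the local information concrete, the following sketch computes the local minority choice of every player on the ring from the vector of current choices; it assumes the 2-choice case, a neighbourhood of k players on each side, and random tie-breaking, which are illustrative simplifications of the model described above.

```python
import numpy as np

def local_minority(choices, k=1, rng=None):
    """Local minority choice of each player on a ring.

    choices : (N,) array of 0/1 choices of the N players in this turn.
    k       : number of nearest neighbours on each side of a player.
    Returns an (N,) array with every player's local minority choice;
    a choice picked by nobody in the neighbourhood can win, and ties
    are broken at random.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(choices)
    out = np.empty(N, dtype=int)
    for i in range(N):
        idx = [(i + j) % N for j in range(-k, k + 1)]   # player i and its 2k neighbours
        ones = int(np.sum(choices[idx]))
        zeros = len(idx) - ones
        if ones < zeros:
            out[i] = 1
        elif zeros < ones:
            out[i] = 0
        else:
            out[i] = rng.integers(2)                    # tie -> random choice
    return out
```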
at each time step , each player uses his / her own best working strategy to guess the next global minority choice . buthow does a player decide which strategy is the best ?players use the virtual score , which is simply the hypothetical profit for using a single strategy throughout the game , to evaluate the performance of a strategy .the strategy with the highest virtual score is considered as the best one . sinceinductive reasoning players do not know whether a strategy is good or not before the game commences , we can not restrict players to have good strategies " only . as a result , all strategies in our model must be unbiased to the input , i. e. the global and local history .thus all the strategies employed in nmg can be picked from the full strategy space which is constituted by all the possible strategies for the input .( it is obvious that these strategies have no bias for any input . )there is a total of distinct strategies in the full strategy space for our model .nevertheless , it is very probable that a large number of strategies exist in this full strategy space even .consequently , we would like to pick strategies from the reduced strategy space to enhance computational feasibility .the reduced strategy space is only composed of strategies which are significantly different from each other .thus it characterizes the diversity of strategies in the full strategy space and we can use it instead of the full strategy space without altering the properties and dynamics of the game .how can we construct the reduced strategy space for nmg ? to answer this question , let us look at the reduced strategy space of mg first . for mg ,the reduced strategy space consists of mutually uncorrelated and anti - correlated strategies only .challet and zhang showed that the maximal reduced strategy space of the original 2-choice mg , denoted as for a 2-choice game with length of global history , is composed of pairs of mutually anti - correlated strategies where any two strategies from different anti - correlated strategy pairs are uncorrelated with each other .therefore , there is a total of different strategies in .( note that there are other smaller reduced strategy spaces consisting of less number of strategies for mg . )it can be shown that , in general , the maximal reduced strategy space for -choice mg is composed of ensembles of mutually anti - correlated strategies where any two strategies from different anti - correlated strategy ensembles are uncorrelated with each other .moreover , consists of distinct strategies . for nmg, it seems reasonable for us to define the reduced strategy space to be the maximal reduced strategy space of mg .however , we find that the cooperative behavior of the players in nmg using and the full strategy space are greatly different from each other .indeed , such a discrepancy is due to the bias of the strategies of to some of the input ( the global and local history ) in our model .hence , we should not use to substitute the full strategy space in nmg .in fact , we must define the reduced strategy space in a different way . 
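the virtual-score bookkeeping described above is easy to prototype. the sketch below implements it for the ordinary (global-history-only) 2-choice minority game so that it stays short; in the networked game the history index would simply be replaced by the combined global-and-local history of each player. parameter values and variable names are illustrative.

```python
import numpy as np

def play_minority_game(N=101, m=3, s=2, T=2000, seed=0):
    """Ordinary 2-choice minority game with virtual-score strategy selection."""
    rng = np.random.default_rng(seed)
    P = 2 ** m                                         # number of global-history states
    strat = rng.integers(0, 2, size=(N, s, P))         # each player's s random strategy tables
    score = np.zeros((N, s))                           # virtual scores of the strategies
    hist = rng.integers(0, P)                          # current global-history index
    attendance = np.empty(T, dtype=int)
    for t in range(T):
        # every player follows his / her currently best-scoring strategy
        best = np.argmax(score + 1e-9 * rng.random((N, s)), axis=1)
        acts = strat[np.arange(N), best, hist]
        A = int(acts.sum())                            # attendance of choice "1"
        attendance[t] = A
        minority = 1 if A < N - A else 0               # N odd, so no global tie
        # reward every strategy that predicted the minority choice, penalize the rest
        score += np.where(strat[:, :, hist] == minority, 1, -1)
        hist = ((hist << 1) | minority) % P            # append the outcome to the history
    return attendance

N = 101
att = play_minority_game(N=N)
print("variance of attendance per player:", att.var() / N)
```

for players making purely random choices the same quantity would be close to the coin-toss value of 0.25 for a 2-choice game.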
for nmg ,a strategy is unbiased to the input if it satisfies the following conditions : ) simply looking at the predictions of this strategy for a _ given _ global history does not give us any information about its predictions for any other global history .simply looking at the predictions of this strategy for a _ given _ local history does not give us any information about its predictions for any other local history .accordingly , we define the reduced strategy space of nmg as follows : where the player uses the so - called segment strategy to predict the global minority choice for the global history if in case of and respectively for the local history if in case of . table [ tab : t1 ] illustrates how a strategy in gives the prediction of the next minority choice .it is easy to show that the number of distinct strategies in is either or depending on the ratio of .note that we should not define to be given by just one of the above expressions over all range of .otherwise , there is a large redundancy of strategies in since many strategies are very similar for such case .( however , it does not matter if we define by the second expression when . ) indeed , we have verified that the cooperative behavior of the players in nmg using and the full strategy space agree well with each other ; i. e. , can successfully characterize the diversity of strategy in the full strategy space . ' '' '' global history & local history & prediction + ' '' '' ( 0,0 ) & ( 0 ) & 1 + ' '' '' ( 0,1 ) & ( 0 ) & 1 + ' '' '' ( 1,0 ) & ( 0 ) & 1 + ' '' '' ( 1,1 ) & ( 0 ) & 1 + ' '' '' ( 0,0 ) & ( 1 ) & 0 + ' '' '' ( 0,1 ) & ( 1 ) & 0 + ' '' '' ( 1,0 ) & ( 1 ) & 1 + ' '' '' ( 1,1 ) & ( 1 ) & 1 + to investigate how well players cooperate with each other in our game , a quantity of interest is the attendance of an alternative which is the number of players choosing the alternative at time . for players gaining the maximum profit ,the expectation value of the attendance of any alternative should be equal to .accordingly , the variance of the attendance represents the loss of players in the game .( here , denotes the average over time . ) hence , we would like to study and as a function of the complexity of the system for our model . which parameter can be used as a measure of the complexity of the system ?it is the so - called control parameter which is the ratio of the strategy space size to the number of strategies at play . for nmg, the control parameter is equal to either for or for . 
on the other hand, we also want to compare the variance of the attendance with the prediction by the crowd - anticrowd theory in order to investigate the crowding effect in our model .since the strategies used in nmg ( which are picked from ) are neither anti - correlated nor uncorrelated , we can not only consider the interactions of the anti - correlated strategies in the crowd - anticrowd calculation of the variance just like in mg and the multichoice mg .in fact , each strategy used in nmg is a set of segment strategies from either or whereas different segment strategies are used for different or ( see eq .( [ e : rss ] ) ) .thus we can apply the crowd - anticrowd theory to estimate the variance simply by counting the crowd - anticrowd cancellation of the anti - correlated segment strategies .accordingly , the crowd - anticrowd prediction of the variance in nmg is given as follows : ^ 2 \right\ } \right\rangle & \mbox{if , } \vspace*{1 mm } \\\left\langle \displaystyle \frac{1}{n_c } \sum_{\mathcal{s}_l \in v_{n_c}(m_l ) } \sum_{a \in \mathcal{s}_l } \left\ { \frac{1}{n_c^2 } \left [ \sum_{b \in \mathcal{s}_l \backslash \{a\ } } ( n_{a } - n_{b } ) \right]^2 \right\ } \right\rangle & \mbox{otherwise , } \end{array } \right .\label{e : cac_var}\ ] ] where and denotes the mutually anti - correlated strategy ensembles in and respectively , and is the number of players making decision according to the segment strategy .we should aware that the variance of attendance for different alternatives must equal when averaged over time and initial choice of strategies since there is no bias for any alternative in our game .in all the simulations reported in this paper , each set of data was recorded from independent runs . in each run, we run 10000 steps starting from initialization before making any measurements .we have checked that it is already enough for the system to attain equilibrium in all the cases reported here .then we took the average values on 15000 steps after the equilibration . to be computational feasible, we choose the case where , and . in this section ,we compare the performance of players in nmg with that in mg .since the system exhibits peculiar properties in nmg with , we delay the discussion about this case to section [ mg0 ] .let us begin by investigating the properties of the mean attendance as a function of the control parameter in nmg .obviously , the mean attendance in nmg is similar to mg for all values of .that is to say , it always fluctuates around such that players can maximize their global profit .next we evaluate the performance of the players in nmg by studying the variance of the attendance per player versus the control parameter as shown in fig .[ fig : f2 ] .( note that the variance of the two choices must be the same by symmetry for a 2-choice game ; and all the variance mentioned in this section are divided by the number of player for objective comparison . )we find that the variance in nmg is always much smaller than that in mg for small no matter what is the value of . in other words , players perform much better in nmg than in mg through the introduction of their own local information when is small . 
when the reduced strategy space size is relatively small comparing with the number of strategies at play , the fluctuation of the attendance is large in mg due to the overcrowding effect of player s strategies .under the overcrowding effect , players tend to choose the same alternative for the same global history since they use some very similar strategies . however , in nmg , players can choose different alternatives even they use the same strategy since they consider both their local information and the global information in deciding their choice .such phenomenon will dilute the overcrowding effects of the strategies in nmg and so players can perform much better than in mg when is small .when the control parameter increases , the overcrowding effect will be suppressed in both nmg and mg . in mg , the maximal cooperation amongst the players can be achieved subsequently when the number of strategies at play is approximately equal to the reduced strategy space size . however , in nmg , players need to cooperate with each other through both the global information and their local information .these mixed types of information obtained from different sources makes the cooperation amongst all players becomes much more difficult for all value of .so the variance in nmg is larger than in mg around the critical point and it tends to the coin toss value ( which is the variance resulting fromplayers making random choices throughout the game ) when increases further .[ fig : f2 ] shows the variance of attendance per player as a function of the control parameter in nmg for different ratio of to .when , the variance becomes larger as increases regardless of the value of the control parameter .moreover , the variance tends to the coin - toss value for over all range of .furthermore , when , the performance of players becomes better if more local information is available no matter what is the value of . to account for this phenomenon, we should consider the structure of strategies in the reduced strategy space .when , each strategy is composed of more segment strategies belonging to a smaller strategy space as increases . therefore , players are more likely to use similar strategies and the overcrowding effect of strategies will dominate which results in large fluctuations of the attendance .similarly , when , the segment strategies are drawn from and thus the variance becomes larger as increase due to the overcrowding effect of strategies . in particular , when , each strategy is composed of the largest number of segment strategies drawn from the smallest strategy space ( or ) and so the variance is the largest for such case .when , only local information is available to each player . from numerical simulation ,the variance in nmg with is found to be smaller than in mg and nmg with when the control parameter is small and approaches zero .it implies that the cooperation amongst players in nmg with is much better than in mg and nmg with for small .in fact , player s cooperation in nmg with no global information is resulted from their local interaction . 
to investigate the local interaction ,we calculate the correlation function for the local minority choices of two neighboring players as follows : ^ 2 \rangle - \langle \omega_l^{i}(t ) \rangle^2},\ ] ] where is the local minority choice of the player at time .we reveal that the correlation function is equal to infinity for many neighboring players when .that is to say , many players always choose the same alternative as their nearest neighbors throughout the game .in such case , we found that the local histories of these players remain the same such that their local minority choices are completely frozen .in fact , the decision of a frozen player is dominated by the components of segment strategies corresponding to or whenever . for instance , suppose the components of segment strategies corresponding to equal for all strategies of two nearest neighbors .once their local histories are , their own local minority will then be in this turn and their local histories will be again at the next time step .it is a fixed point of the dynamics leading to the dimension reduction of the reduced strategy space .if similar configurations arise along the ring , the choices of some players can be determined before each turn because their actions are frozen .the freezing effect of players will minimize the fluctuations of attendance since it is only contributed by few non - frozen players .indeed , it is the negligence of global information that allows the strong local interactions amongst players who are connected on a ring . on the contrary, players can only weakly interact with each other locally if they are placed on a dynamically random chain in which each player is connected to two randomly chosen players with all the connections between players change at every time step .hence , whenever , the freezing effect of players disappears and the variance tends to the coin - toss value for the case of the dynamically random chain as shown in fig .[ fig : f3 ] .on the other hand , numerical results show that the probability for the freezing of players decreases exponentially when increases for .it is because each strategy is composed of more number of segment strategies and the disturbance from the global information becomes more significant when more global information is available .so the variance becomes larger as decreases for . in order to investigate the crowding effect in nmg, we compare the numerical results of the variance with the predictions by the crowd - anticrowd theory .we find a large discrepancy between them whenever or although their trends are consistent with each other for all value of .according to the crowd - anticrowd theory , the variance of attendance for a given realization of the quenched disorder ( i. e. the initial configuration of the system ) is given by where denotes the sum over the strategies in the corresponding reduced strategy space , is the action of strategy to the history at time ( corresponds to the two alternatives ) and is the number of players using strategy at time . in mg, should be equal to zero and thus is dropped .( for mg , the variance in fact plays the role of the effective hamiltonian of the spin - glass - like system while the strategies can be interpreted as the quenched disorder ; thus we can state that the four - point correlation function is dropped in the effective hamiltonian of the system for mg . 
)such expectation is based on the fact that the number of players using an uncorrelated strategy pair are , on average of the history , independent from the response of the strategy pair to the history in mg . however , from numerical simulation , we find that is equal to a large negative number for nmg with . in other words ,the uncorrelated segment strategy pairs are no longer independent with each other . through the local interaction, players tend to use the uncorrelated segment strategy pairs whenever the pairs choose the opposite choice .it is a favorable solution of the dynamics because the uncorrelated segment strategies can now contribute to the crowd - anticrowd cancellation which will in turn lead to the further reduction of the fluctuation of attendance .in fact , the crowd - anticrowd cancellation by the uncorrelated strategies is only possible in nmg since players have the freedom to choose which uncorrelated segment strategies to be used on the basis of the local information .we can correct the semi - analytical results of the crowd - anticrowd theory by counting the contribution of both the anti - correlated and uncorrelated segment strategies to the crowd - anticrowd cancellation .the corrected semi - analytical results is found to match with the numerical one as shown in fig .[ fig : f4 ] .so we conclude that the discrepancy is mainly due to the crowd - anticrowd cancellation by the uncorrelated strategies for . when , the discrepancy of the predictions of the crowd - anticrowd theory from the numerical results is due to the freezing of players .the ergodicity of the local histories of frozen players is broken down under the freezing effects .in other words , some possible states of the local histories will never be visited for frozen players since the effective dimension of the strategy space is reduced .of course , this will violate the ergodicity assumption in the crowd - anticrowd theory .so the results predicted by the crowd - anticrowd theory show a large discrepancy from the numerical results when .in summary , we find that nmg exhibits a remarkably different behavior from mg . in nmg with non - zero global information , selfish players on the ring can cooperate with each other to reduce their loss ; moreover , their cooperation is much better than in mg by using the local information that are disseminated through the ring except when the number of strategies at play is approximately equal to the strategy space size .such phenomenon is believed to be due to the dilution of the crowding effect of players which is resulted from their local interactions on the ring . in nmg with no global information , many players on the ring are found to be frozen where their action and also their local minority choice remain the same throughout the game .we reveal that the freezing occurs when their decisions are dominated by the components of segment strategies corresponding to or .such domination arises because of the strong local interaction on the ring due to the absence of the global information . on the other hand, we find that the predictions of the crowd - anticrowd theory deviate very much from the numerical results for nmg . 
such discrepancy is found to be due to the crowd - anticrowd cancellation contributed by the uncorrelated strategies in nmg whereas it is impossible for the uncorrelated strategies to contribute to the crowd - anticrowd cancellation in mg .in fact , all the above arguments should still be valid when players are connected on a network with different topology in nmg provided that each player always obtains the local information from the same neighbors ; and thus we believe that the cooperative behavior of players are similar in all these cases .finally , we would like to point out that the order parameter is also worth to be studied for nmg . from this study, we may learn whether there is a phase transition from a symmetric phase to an asymmetric phase as the complexity of the system increases just like that in mg .99 d. challet and y. c. zhang , physica a * 246 * , 407 ( 1997 ) .y. c. zhang , europhys .news * 29 * , 51 ( 1998 ) .w. b. arthur , amer .papers and proc . *84 * , 406 ( 1994 ) .d. challet and y. c. zhang , physica a * 256 * , 514 ( 1998 ) .r. savit , r. manuca and r. riolo , phys .lett . * 82 * , 2203 ( 1999 ) .n. f. johnson , s. jarvis , r. jonson , p. cheung , y. r. kwong and p. m. hui , physica a * 258 * , 230 ( 1998 ) .m. hart , p. jefferies , n. f. johnson and p. m. hui , physica a * 298 * , 537 ( 2001 ) .m. hart , p. jefferies , n. f. johnson and p. m. hui , eur .j. b * 20 * , 547 ( 2001 ) .m. l. hart and n. f. johnson , cond - mat/0212088 .m. paczuski , k. e. bassler and a. corral , phys .lett . * 84 * , 3185 ( 2000 ) .t. kalinowski , h .- j .schulz and m. briese , physica a * 277 * , 502 ( 2000 ). s. moelbert and p. de los rios , physica a * 303 * , 217 ( 2002 ) .e. burgos , h. ceva and r. p.j. perazzo , cond - mat/0212635 .quan , b .- h .wang , p. m. hui and x .- s .luo , physica a * 321 * , 300 ( 2003 ) . f. k. chow and h. f. chau , physica a * 319 * , 601 ( 2003 ) .f. chau and f. k. chow , physica a * 312 * , 277 ( 2002 ) .f. k. chow and h. f. chau , cond - mat/0210608 .d. challet and m. marsili , phys .e * 60 * , r6271 ( 1999 ) .d. challet , m. marsili and r. zecchina , phys .lett . * 84 * , 1824 ( 2000 ) . | to study the interplay between global market choice and local peer pressure , we construct a minority - game - like econophysical model . in this so - called networked minority game model , every selfish player uses both the historical minority choice of the population and the historical choice of one s neighbors in an unbiased manner to make decision . results of numerical simulation show that the level of cooperation in the networked minority game differs remarkably from the original minority game as well as the prediction of the crowd - anticrowd theory . we argue that the deviation from the crowd - anticrowd theory is due to the negligence of the effect of a four point correlation function in the effective hamiltonian of the system . |
[ section1 ] unlike traditional imaging systems , hyperspectral imaging ( hsi ) sensors , acquire a scene with several millions of pixels in up to hundreds of contiguous wavelengths .such high resolution spatio - spectral hyperspectral data , i.e. , three - dimensional ( 3d ) datacube organized in the spatial and spectral domain , has an extremely large data size and enormous redundancy , which makes compressive sensing ( cs ) a promising solution for hyperspectral data acquisition . to date , most existing designs for cs - based hyperspectral imagers can be grouped into frame - based acquisition in the spatial direction and pixel - based acquisition in the spectral direction .while a lot of reconstruction approaches for these two acquisition schemes have been proposed , most existing algorithms can only take advantage of the spatial and spectral information of hyperspectral data from the aspect of sparsity ( or joint - sparsity ) . because the foundation of these algorithms is built on conventional cs , which reconstructs the signals by solving a convex programming and proceeds without exploiting additional information ( aside from sparsity or compressibility ) .for hyperspectral data , the spatial and spectral correlations , which not only reflect in the correlation between the sparse structure of the data ( i.e. , structured sparsity ) , but also in the correlation between the amplitudes of the data , can be used to provide helpful prior information in the reconstruction processed and assist on increasing the compression ratios . in this paper, the structured sparsity and the amplitude correlations are considered jointly by assuming that spatially and spectrally correlated data satisfies simultaneous low - rank and joint - sparse ( l&s ) structure . using a structured l&s factorization, we propose an iterative approximate message passing ( amp ) algorithm , , in order to enable joint reconstruction of the data with the practical compression ratio that is substantially higher than the state - of - the - art .specifically , in section [ section2 ] , we introduce the structured factorization representation of the l&s model . in section [ section3 ] ,we propose a novel amp - based approach , called l&s - approximate message passing ( l&s - amp ) , that decouples the global inference problem into two sub - problems .one sub - problem considers the linear inverse problem of recovering the signal matrix from its compressed measurements .another sub - problem exploits the l&s structure of the signal matrix . then a recently proposed turbo amp " framework is used to enable messages to pass between these two phases efficiently .section [ section4 ] presents simulation results with real hyperspectral data that support the potential of the approach to considerably reduce the reconstruction error . in section [ section5 ] , we conclude the paper .[ section2 ] in this section , we first present the problem for compressive hyperspectral imaging . then , we propose a structured l&s factorization model for the signal matrix , which will be later exploited to acquire any hsi with very few measurements , via a novel joint reconstruction approach .owing to the inherent 3d structure present in the hyperspectral datacube and the two - dimensional nature of optical sensing hardware , cs - based hyperspectral imagers generally capture a group of linear measurements across either the 2d spatial extent of the scene for a spectral band or the spectral extent for a spatial ( pixel ) location at a time , i.e. 
, ] , is the measurement output vector , and is an additive noise vector with unknown variance .[ section2.3 ] as mentioned in the introduction , while the original hyperspectral data can be reconstructed by using conventional cs recovery algorithms , it is possible to achieve a much better recovery performance by applying the l&s model to further exploit the structural dependencies between the values and locations of the coefficients of the sparse signal vectors .the main reason that we consider as a l&s matrix is two - fold .first , images from different spectral bands enjoy similar natural image statistics , and hence can be joint - sparse in a wavelet / dct basis ; second , a limited number of unique materials in a scenes implies that spectral signatures across pixels can be stacked to form a matrix that is often low - rank . to precisely achieve the benefits of the l&s model and reconstruct the original hyperspectral data from a bayesian point of view , here we propose an accurate probabilistic model by performing a structured l&s factorization for as the diagonal matrix is the sparsity pattern matrix of the signals with the support indicates .we refer to as the sparsity level of .\in\mathbb{r}^{n\times r} ] are obtained from the low - rank matrix factorization of , which is the amplitude matrix of . for a joint - sparse matrix and an arbitrary matrix , this factorization implies that is a simultaneous low - rank ( ) and joint - sparse matrix with rank , where all sparse signal vectors share a common support with sparsity level .assuming independent entries for , , and , the separable probability density functions ( pdfs ) of and become where both and are assumed to be i.i.d .gaussian with unknown mean and variance .in particular , we assume follow i.i.d .gaussian distribution with zero mean and unit variance , i.e. , , to avoid ambiguity and the unnecessary model parameters update . as treated as i.i.d .bernoulli random variables with , the sparse coefficients , , become i.i.d .bernoulli - gaussian ( bg ) , i.e. , the marginal pdf where is the dirac delta function .furthermore , due to the assumption of adaptive gaussian noise in ( [ cs ] ) , the likelihood function of is known and separable , i.e. , the measurement . , , , and .,title="fig:",width=264 ] +[ section3 ] with the problem formulation in ( [ cs ] ) and ( [ signalmodel ] ) , our proposed method is to maximize the posterior joint distribution , i.e. , where denotes equality up to a normalizing constant scale factor .this posterior distribution can be represented with a factor graph shown in fig .[ fig_l&samp ] , where circles denote random variables and squares denote posterior factors based on belief propagation .each factor node represents the conditional probability distribution between all variable nodes it connected . vertical planes ( parallel to and axes ) exploit the linear measurement structure ( detailed in fig .[ fig_mgamp ] ) , while the remaining part of fig .[ fig_l&samp ] further exploits the l&s structure . to bypass the intractable inference problem of marginalizing ( [ pjd ] ), we propose to solve an alternative problem that consists of two sub - problems that mainly require local information to complete their tasks .correspondingly , our proposed algorithm is divided into two phases : i ) the _ multiple generalized approximate message passing _( m - gamp ) phase ; ii ) the _ low - rankness and sparsity pattern decoding _ ( l&spd ) phase . 
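to make the l&s signal model concrete, the sketch below draws a matrix with exactly the structured factorization described above (a common support of size K and a rank-r amplitude matrix) and takes noisy compressive measurements of each column. for brevity a single common measurement matrix is used, whereas the paper also considers a different random matrix per pixel; all dimensions, the rank, the sparsity level and the snr are illustrative values, and the m-gamp / l&spd reconstruction itself is not reproduced here.

```python
import numpy as np

def make_ls_data(N=128, T=64, r=4, K=20, M=48, snr_db=25.0, seed=0):
    """Draw X = diag(s) B C (joint-sparse rows, rank r) and measure Y = Phi X + W."""
    rng = np.random.default_rng(seed)
    s = np.zeros(N)
    s[rng.choice(N, size=K, replace=False)] = 1.0      # common support of size K
    B = rng.normal(size=(N, r))                        # low-rank amplitude factors
    C = rng.normal(size=(r, T))
    X = s[:, None] * (B @ C)                           # simultaneously low-rank and joint-sparse
    Phi = rng.normal(size=(M, N)) / np.sqrt(M)         # common random measurement matrix
    Y = Phi @ X
    sigma = np.linalg.norm(Y) / np.sqrt(Y.size) * 10 ** (-snr_db / 20)
    Y += sigma * rng.normal(size=Y.shape)              # additive gaussian noise
    return Phi, Y, X

Phi, Y, X = make_ls_data()
print("rank of X:", np.linalg.matrix_rank(X),
      " nonzero rows:", int((np.abs(X).sum(axis=1) > 0).sum()))
```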
owing to this ,an efficient turbo amp " iterative framework is used , that iteratively updates one of the phases beliefs , and passes the beliefs to another phase , and vice versa , repeating until both phases converge . , , , and .,title="fig:",width=238 ] + [ subsection3_1 ] in each frame of the m - gamp phase, we apply the _ generalized approximate message passing _ ( gamp ) approach in parallel for the linear inference problem : estimate the vector from the observation , as shown in fig .[ fig_mgamp ] . specifically , the gamp computes the approximated posteriors on as , denotes a message passed from node to the adjacent node in the factor graph .the parameters , and are obtained after the gamp iteration converges .for the prior distribution of , i.e. , used in ( [ xpost ] ) , we can assume bg prior pdf where and are the active - coefficient mean and the active - coefficient variance , respectively , of the variable .it is worth mentioning that the prior parameters and , are only initialized to agnostic values at the beginning of the l&s - amp algorithm ( e.g. , ) , then iteratively updated according to the message passed from the l&spd phase .this process will be detailed in next subsection .then the minimum - mean - squared error ( mmse ) estimation of is facilitated by the following prior - dependent integrals [ subsection3_2 ] in the l&spd phase , to exploit the l&s structure , we employ the recently proposed _bilinear generalized approximate message passing _( big - amp ) approach to a variant of the pca problem : estimate the matrices and from an observation \in\mathbb{r}^{n\times t}$ ] which is the posterior estimation of their product obtained form the m - gamp phase in ( [ xmarginal ] ) . in particular , the big - amp obtains the approximately gaussian posterior messages as \ ] ] where the parameters and are obtained after the big - amp iteration converges .the prior distribution of , i.e. , used in ( [ xpost1 ] ) , comes from the posterior message of given the observation in the m - gamp phase .the prior distribution of used in ( [ xpost ] ) comes from the posterior message of given the matrix factorization in the l&spd phase . to enable effective implementation of turbo amp " iteration , given the construction of the factor graph in fig .[ fig_l&samp ] , the sum - product algorithm ( spa ) implies that , . \vspace{-4pt}\]]comparing ( [ xprior_spa ] ) and ( [ xprior_spa1 ] ) with ( [ xpost ] ) and ( [ xpost1 ] ) , respectively , we have thus , the parameters computed during the final iteration of the m - gamp phase , are treated as the prior parameters of in the l&spd phase .conversely , the parameters and computed during the final iteration of the l&spd phase are in turn used as the prior parameters of in the m - gamp phase in ( [ xmarginal ] ) .in addition , to further exploiting the joint - sparsity of , we use the local support estimate instead of the common sparsity rate in ( [ xmarginal ] ) .then , by applying the spa in the m - gamp phase , we get the posterior local support probability [ subsection3_3 ] beginning at the initial inter - phase iteration index , , the l&s - amp algorithm first performs the m - gamp phase with the initial prior parameters in ( [ xmarginal ] ). then the converged outgoing messages are treated as prior parameters in the l&spd phase . 
then the converged messages and obtained from the l&spd phase , along with the updated beliefs in ( [ lambdain ] ) , are used for the m - gamp phase at inter - phase iteration .this procedure continues until either a stopping condition or a maximum number of allowable iterations is reached .then we obtain the posterior mean estimates computed in ( [ xestimate ] ) .furthermore , we tune our prior and likelihood parameters using expectation - maximization , , and estimate the rank using a rank selection strategy based on the penalized log - likelihood maximization in .in addition , we recommend initializing l&s - amp using , and .[ section4 ] in this section , we present real data results to compare the performance of the proposed l&s - amp algorithm with prior state - of - art ppxa , ra - ormp , sa - music , and t - msbl algorithms .we evaluate the performance of the algorithms on two real hyperspectral datasets : 1 ) an urban dataset acquired over the university of houston , with 144 spectral bands , pixels , and a spatial resolution of .2 ) an agricultural dataset acquired over the salinas valley in california .the dataset has a spatial resolution of and consists of spectral bands with each band corresponding to an image with pixels .we assume that the l&s signal matrix is obtained using pixel - based acquisition , so that denotes the number of pixels and denotes the number of spectral band .the dct matrix is used as the sparsifying matrix , and gaussian noise is added to achieve snr db .it is worth noting that , for the sake of comparion , different random gaussian measurement matrices are used .also note that t - msbl , ra - ormp , and sa - music are derived only for the common measurement matrix case . for the recovery of the urban dataset ( left ) and agriculture - oriented dataset ( right).,title="fig:",width=302 ] + fig .[ cnmse ] plots the column - averaged normalized mse ( cnmse ) versus the compressive ratio on the two real datasets .the cnmse is defined as , where is an estimate of . from the figure , we observe that the proposed algorithm outperforms all the other algorithms in terms of cnmse , e.g. , in fig . [ cnmse].(b ), we note that l&s - amp achieves nearly 3db reconstruction gain than the other algorithms at .in addition , a plus - minus sign ( ) is used ( i.e. , l&s - amp ) to denote the case of using random measurement matrices , which are easy to implement in dmd , and can significantly reduce the burden of storage . some visual results of the recovered hyperspectral images by using different algorithms are presented in fig .[ visual ] . as expected ,our proposed algorithm preserves more fine details and much sharper edges , and shows much clearer and better visual results than the other competing methods .is fixed to 0.243 , and other simulation parameters remain unchanged . the whole scene is partitioned into a sequence of sub - scenes to enable parallel processing . ,title="fig:",width=317 ] +[ section5 ] in this paper , we studied joint cs reconstruction of spatially and spectrally correlated hyperspectral data acquired , assuming that the hyperspectral signal matrix satisfies the joint - sparse model with a lower rank than the sparsity level , i.e. , the l&s model .we proposed an amp - based algorithm for recovering the signal matrix with the l&s model while exploiting the structured sparsity and the amplitude correlation of the data .the numerical results were presented to confirm the performance advantage of our algorithm . 1 a. plaza , j. a. benediktsson , j. 
boardman , j. brazile , l. bruzzone , g. camps - valls , j. chanussot , m. fauvel , p. gamba , j. gualtieri , m. marconcini , j. c. tilton , and g. trianni , recent advances in techniques for hyperspectral image processing , " _ remote sens . environment , _ vol . 113 , no . 1 , pp .110122 , sep . 2009 .j. m. bioucas - dias , a. plaza , g. camps - valls , p. scheunders , n. m. nasrabadi , and j. chanussot , hyperspectral remote sensing data analysis and future challenges , " _ ieee geoscience and remote sens . mag ., _ vol . 1 , no. 2 , pp . 636 , jun . 2013 .e. j. cands , j. romberg , and t. tao , robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , " _ ieee trans .theory , _ vol .489509 , feb .m. f. duarte , m. a. davenport , d. takhar , j. n. laska , t. sun , k. e. kelly , and r. g. baraniuk , single - pixel imaging via compressive sampling , " _ ieee signal process .2 , pp : 8391 , mar .2008 .m. golbabaee and p. vandergheynst , hyperspectral image compressed sensing via low - rank and joint - sparse matrix recovery , " in _ proc .conf . on acoustics , speech , andsignal proces .( icassp ) , _ pp : 27412744 , kyoto , japan , mar .2012 .r. m. willett , m. f. duarte , m. a. davenport , and r. g. baraniuk , sparsity and structure in hyperspectral imaging : sensing , reconstruction , and target detection , " _ ieee signal process . mag .1 , pp : 116126 , jan . 2014 . z. zhang and b. d. rao , sparse signal recovery with temporally correlated source vectors using sparse bayesian learning , " _ ieee j. select .topics signal process ., _ vol . 5 , no .5 , pp . 912926 , sep . | this paper considers a compressive sensing ( cs ) approach for hyperspectral data acquisition , which results in a practical compression ratio substantially higher than the state - of - the - art . applying simultaneous low - rank and joint - sparse ( l&s ) model to the hyperspectral data , we propose a novel algorithm to joint reconstruction of hyperspectral data based on loopy belief propagation that enables the exploitation of both structured sparsity and amplitude correlations in the data . experimental results with real hyperspectral datasets demonstrate that the proposed algorithm outperforms the state - of - the - art cs - based solutions with substantial reductions in reconstruction error . compressive hyperspectral imaging , low - rank and joint - sparse , compressive sensing , approximate message passing |
ever since he wrote his large 1907 review of special relativity for the _ jahrbuch der radioaktivitt und elektronik _, einstein reflected on how to extend the principle of relativity to non - inertial motions .his key insight was that such an extension is indeed possible , provided gravitational fields are included in the description .in fact , the last chapter ( v ) of , which comprises four ( 17 - 20 ) out of twenty sections , is devoted to this intimate relation between acceleration and gravitation .the heuristic principle einstein used was his `` quivalenzhypothese '' ( hypothesis of equivalence ) or `` quivalenzprinzip '' ( principle of equivalence ) , which says this : changing the description of a system from an inertial to a non - inertial reference frame is equivalent to not changing the frame at all but adding a _ special _ gravitational field .this principle is _ heuristic _ in the sense that it allows to deduce the extension of physical laws , the forms of which are assumed to be known in the absence of gravitational fields , to the presence of at least those special gravitational fields that can be `` created '' by mere changes of reference frames .the idea behind this was , of course , to postulate that the general features found in this fashion remain valid in _ all _ gravitational fields . in the 1907 review einstein used this strategy to find out about the influence gravitational fields have on clocks and general electromagnetic processes .what he did not attempt back in 1907 was to find an appropriate law for the gravitational field that could replace the poisson equation of newtonian gravity .this he first attempted in his two `` prague papers '' from 1912 for static fields .the purpose of my contribution here is to point out that the field equation einstein arrived at in the second of these papers is not merely of historical interest .after 1907 einstein turned away from gravity research for a while , which he resumed in 1911 with a paper , also from prague , in which he used the `` quivalenzhypothese '' to deduce the equality between gravitational and inertial mass , the gravitational redshift , and the deflection of light by the gravitational field of massive bodies .as is well known , the latter resulted in half the amount that was later correctly predicted by gr . in the next gravity paper , the first in 1912 , entitled _`` lichtgeschwindigkeit und statik des gravitationsfeldes '' _ , einstein pushed further the consequences of his heuristics and began his search for a sufficiently simple differential equation for static gravitational fields .the strategy was to , first , guess the equation from the form of the special fields `` created '' by non inertial reference frames and , second , generalise it to those gravitational fields sourced by real matter .note that the gravitational acceleration was to be assumed to be a gradient field ( curl free ) so that the sought - after field equation was for a scalar field , the gravitational potential .the essential idea in the first 1912 paper is to identify the gravitational potential with , the local velocity of light . has the wrong physical dimension , namely that of a velocity , whereas the a proper gravitational potential should have the dimension of a velocity - squared . 
]einstein s heuristics indicated clearly that special relativity had to be abandoned , in contrast to the attempts by max abraham ( 1875 - 1922 ) , who published a rival theory that was superficially based on poincar invariant equations ( but violated special relativity in abandoning the condition that the four - velocities of particles had constant minkowski square ) . in passing i remark that einstein s reply to abraham , which is his last paper from prague before his return to zrich , contains next to his anticipation of the essential physical hypotheses on which a future theory of gravity could be based ( here i refer to ji bik s contribution to this volume ) , also a concise and very illuminating account of the physical meaning and limitation of the special principle of relativity ,the essence of which was totally missed by abraham . back to einsteins first 1912 paper , the equation he came up with was where is the `` universal gravitational constant '' and is the mass density .the mathematical difference between ( [ eq : firstpragueequation ] ) and the poisson equation in newtonian gravity is that ( [ eq : firstpragueequation ] ) is homogeneous ( even linear ) in the potential .this means that the source strength of a mass density is weighted by the gravitational potential at its location .this implies a kind of `` red - shift '' for the active gravitational mass which in turn results in the existence of geometric upper bounds for the latter , as we will discuss in detail below .homogeneity was einstein s central requirement , which he justified from the interpretation of the gravitational potential as the local velocity of light , which is only determined up to constant rescalings induced from rescalings of the timescale . already in a footnote referring to equation ( [ eq : firstpragueequation ] )einstein points out that it can not be quite correct , as he is to explain in detail in a follow - up paper .this second paper of 1912 is the one i actually wish to focus on in my contribution here .it appeared in the same issue of the _ annalen der physik _ as the previous one , under the title _`` zur theorie des statischen gravitationsfeldes '' _ ( on the theory of the static gravitational field ) . in iteinstein once more investigates how the gravitational field influences electromagnetic and thermodynamic processes according to what he now continues to call the _ quivalenzprinzip _ , and derives from it the equality of inertial and gravitational mass .after that he returns to the equation for the static gravitational field and considers the gravitational force - density , acting on ponderable matter of mass density , which is given by ( einstein writes instead of our ) einstein observes that the space integral of does not necessarily vanish on account of ( [ eq : firstpragueequation ] ) , in violation of the principle that _ actio _ equals _ reactio_. 
terrible consequences , like self - acceleration , have to be envisaged .he then comes up with the following non - linear but still homogeneous modification of ( [ eq : firstpragueequation ] ) ( again einstein writes instead of ) : in the rest of this paper we will show how to arrive at this equation from a different direction and discuss some of its interesting properties as well as its relation to the description of static gravitational fields in gr .the following considerations are based on .we start from ordinary newtonian gravity , where the gravitational field is described by a scalar function whose physical dimension is that of a velocity - squared .it obeys the force per unit volume that the gravitational field exerts onto a distribution of matter with density is this we apply to the force that the gravitational field exerts onto its own source during a real - time process of redistribution .this we envisage as actively transporting each mass element along the flow line of a vector field .to first order , the change that suffers in time is given by where and is the lie derivative with respect to .we assume the support to be compact . in general , this redistribution costs energy .the work we have to invest for redistribution is , to first order , just given by where we used ( [ eq : densitychange ] ) in the last step and where we did not write out the lebesgue measure to which all integrals refer .note that in order to obtain ( [ eq : investedworkgeneral ] ) we did not make use the field equation .equation ( [ eq : investedworkgeneral ] ) is generally valid whenever the force - density relates to the potential and the mass density as in ( [ eq : newtonianforcedensity ] ) .now we make use of the field equation ( [ eq : newtonianfieldeq ] ) .we assume the redistribution - process to be adiabatic , that is , we assume the instantaneous validity of the field equation at each point in time throughout the process .this implies hence , using ( [ eq : investedworkgeneral ] ) , the work invested in the process of redistribution is ( to first order ) if the infinitely dispersed state of matter is assigned the energy - value zero , then the expression in curly brackets is the total work invested in bringing the infinitely dispersed state to that described by the distribution .this work must be stored somewhere as energy . like in electro - statics and -dynamics ,we take a further logical step and assume this energy to be spatially distributed in the field according to the integrand .this leads to the following expression for the energy density of the static gravitational field all this is familiar from newtonian gravity .but now we go beyond newtonian gravity and require the validity of the following * principle .* _ all energies , including that of the gravitational field itself , shall gravitate according to . _this principle implies that if we invest an amount of work to a system its ( active ) gravitational mass will increase by .now , the ( active ) gravitational mass is defined by the flux of the gravitational field to spatial infinity ( i.e. 
through spatial spheres as their radii tend to infinity ) : hence , making use of the generally valid equation ( [ eq : investedworkgeneral ] ) , the principle that takes the form this functional equation relates and , over and above the restriction imposed on their relation by the field equation .however , the latter may - and generally will - be inconsistent with this additional equation .for example , the newtonian field equation ( [ eq : newtonianfieldeq ] ) is easily seen to manifestly violate ( [ eq : theprinciple ] ) , for the right - hand side then becomes just the integral over , which always vanishes on account of ( [ eq : densitychange ] ) ( or the obvious remark that the redistribution clearly does not change the total mass ) , whereas the left hand side will generally be non - zero . the task must therefore be to find field equation(s ) consistent with ( [ eq : theprinciple ] ) .our main result in that direction is that the unique generalisation of ( [ eq : newtonianfieldeq ] ) which satisfies ( [ eq : theprinciple ] ) is just ( [ eq : secondpragueequation ] ) , i.e. the field equation from einstein s second 1912 paper .let us see how this comes about .a first guess for a consistent modification of ( [ eq : newtonianfieldeq ] ) is to simply add to the source : but this can not be the final answer because this change of the field equation also brings about a change in the expression for the self - energy of the gravitational field .that is , the term in the bracket on the right - hand side is not the total energy according to _ this _ equation , but according to the original equation ( [ eq : newtonianfieldeq ] ) . in other words :equation ( [ eq : improvedng - firstiteration ] ) still lacks _ _ self-__consistency .this can be corrected for by iterating this procedure , i.e. , determining the field s energy density according to ( [ eq : improvedng - firstiteration ] ) and correcting the right - hand side of ( [ eq : improvedng - firstiteration ] ) accordingly .again we have changed the equation , and this goes on ad infinitum .but the procedure converges to a unique field equation , similarly to the convergence of the `` noether - procedure '' that leads from the poincar invariant pauli - fierz theory of spin-2 mass-0 fields in flat minkowski space to gr . in our toy modelthe convergence of this procedure is not difficult to see .we start from the definition ( [ eq : defgravmass ] ) and calculate its variation assuming the validity of ( [ eq : improvedng - firstiteration ] ) . 
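Before following that variation calculation through, a small numerical aside may help: the iteration just described (solve for the field, compute the field's energy density, feed it back divided by c squared as an additional source, and repeat) converges quickly for a uniform-density ball in spherical symmetry. The sketch below assumes the usual Newtonian field-energy density, minus the squared field strength over 8*pi*G, and uses arbitrary units and a simple fixed-point scheme; it is an illustration, not the paper's calculation.

```python
import numpy as np

G, c2 = 1.0, 25.0                 # units with G = 1; c^2 chosen so the feedback is visible
R, rho0 = 1.0, 1.0                # uniform ball of radius R and density rho0
r = np.linspace(1e-6, 20.0, 4000)
dr = r[1] - r[0]
rho = np.where(r <= R, rho0, 0.0)

def field_of(source):
    """Field strength g(r) = G*M(<r)/r^2 for a spherically symmetric source density."""
    M = np.cumsum(4.0 * np.pi * r**2 * source) * dr
    return G * M / r**2, M[-1]

extra = np.zeros_like(r)          # field-energy contribution to the source; zero to start
for it in range(8):
    g, M_active = field_of(rho + extra)
    u_field = -(g**2) / (8.0 * np.pi * G)   # Newtonian field-energy density (negative)
    extra = u_field / c2                    # ...which must itself act as a source
    print(f"iteration {it}: active gravitational mass = {M_active:.5f}")

print(f"bare mass of the ball: {rho0 * 4.0 / 3.0 * np.pi * R**3:.5f}")
```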
from what we said above we know already that this not yet going to satisfy ( [ eq : theprinciple ] ) .but we will see that from this calculation we can read off the right redefinitions .we start by varying ( [ eq : defgravmass ] ) : we replace with the variation of the right - hand side of ( [ eq : improvedng - firstiteration ] ) .partial integration of the non - liner part gives us a surface term whose integrand is and hence vanishes .the remaining equation is playing the same trick ( of replacing with the variation of the right - hand side of ( [ eq : improvedng - firstiteration ] ) and partial integration , so as to collect all derivatives on ) again and again , we arrive after steps at as is bounded for a regular matter distribution , and the spatial integral over is just , the last term tends to zero for .hence this _ is _ of the desired form ( [ eq : theprinciple ] ) required by the principle , provided we redefine the gravitational potential to be rather than , where saying that rather than is the right gravitational potential means that the force density is not given by ( [ eq : newtonianforcedensity ] ) , but rather by as we have made use of equation ( [ eq : improvedng - firstiteration ] ) in order to derive ( [ eq : laststep ] ) , we must make sure to keep _ that _ equation , just re - expressed in terms of .this leads to \,,\ ] ] which is precisely einsteins improved `` prague equation '' ( [ eq : secondpragueequation ] ) with .note from ( [ eq : newpotential ] ) that the asymptotic condition translates to .note also that for the -parts of and coincide , so that in the expressions ( [ eq : defgravmass ] ) for we may just replace with : the principle now takes the form ( [ eq : theprinciple ] ) with replaced by .it is straightforward to show by direct calculation that ( [ eq : theprinciple ] ) is indeed a consequence of ( [ eq : selfconsistenteq ] ) , as it must be .it also follows from ( [ eq : selfconsistenteq ] ) that the force density ( [ eq : newforcedensity ] ) is the divergence of a symmetric tensor : [ eq : divsymmtensor ] where \right\}\,.\ ] ] this implies the validity of the principle that actio equals reactio that einstein demanded ._ this _ was einstein s rationale for letting ( [ eq : secondpragueequation ] ) replace ( [ eq : firstpragueequation ] ) .finally we mention that ( [ eq : selfconsistenteq ] ) may be linearised if written in terms of the square - root of : one gets this helps in finding explicit solutions to ( [ eq : selfconsistenteq ] ) .note that is dimensionless .in this section we discuss some properties of spherically symmetric solutions to ( [ eq : selfconsistenteqlinear ] ) for spherically symmetric mass distributions of compact support . in the followingwe will simply refer to the object described by such a mass distribution as `` star '' . in terms of equation ( [ eq : selfconsistenteqlinear ] )is equivalent to the support of is a closed ball of radius , called the star s radius . for we shall assume ( weak energy condition ) .we seek solutions which correspond to everywhere positive and regular and hence everywhere positive and regular .in particular and must be finite. for equation ( [ eq : selfconsistenteqlinear - chi ] ) implies , the solution to which is where denotes the gravitational radius comes in because of ( [ eq : defgravmassnew ] ) , which fixes one of the two integration constants , the other being fixed by .let denote the solution in the interior of the star .continuity and differentiability at gives and .we observe that . 
for suppose , then ( [ eq : selfconsistenteqlinear - chi ] ) and the weak energy condition imply . butthis implies that for ] .this shows that for infinitely dispersed matter , where and hence , we have , as expected , and that for infinite compression . as the gained energy at stage is , we can at most gain .finally i wish to briefly comment on the relation of equation ( [ eq : secondpragueequation ] ) or ( [ eq : selfconsistenteq ] ) to gr .since einstein s 1912 theory was only meant to be valid for static situations , i will restrict attention to static spacetimes .hence i assume the existence of a timelike and hypersurface orthogonal killing field .my signature convention shall be `` mostly plus '' , i.e. .we choose adapted coordinates , , where the level sets of are the integral manifolds of the foliation defined by and .we can then write the metric in a form in which the coefficients do not depend on ( called `` time '' ) , clearly . from now on , all symbols with hats on refer to the spatial geometry , like the spatial metric .the -component of the geodesic equation is equivalent to , where an overdot refers to the derivative with respect to an affine parameter .this equation allows us to eliminate the affine parameter in favour of in the spatial components of the geodesic equation . if we set ) which we need and to which we return below . ]they read \,,\ ] ] where the are the christoffel coefficients for , and .this should be compared with ( [ eq : newforcedensity ] ) together with newton s second law , which give .as we did not attempt to include special relativistic effects in connection with high velocities , we should consistently neglect terms in ( [ eq : geodesiceqspatial ] ) .this results in dropping the rightmost term .the rest has the pseudo - newtonian form in arbitrary ( not just inertial ) spatial coordinates . a non - zero spatial curvature would , of course , be a new feature not yet considered .the curvature and ricci tensors for the metric ( [ eq : staticmetric ] ) are readily computed , most easily by using cartan s structure equations : here is the unit timelike vector characterising the static reference frame , is the levi - civita covariant derivative with respect to , and is the corresponding laplacian . using this in einstein s equations for pressureless ( we neglect the pressure since it enters multiplied with ) dust at rest and of mass - density in the static frame , i.e. we get we note that , apart from the space curvature , ( [ eq : einsteinsequationshere - a ] ) is almost but not quite identical to ( [ eq : selfconsistenteqlinear ] ) .they differ by a factor of 2 ! rewriting ( [ eq : einsteinsequationshere - a ] ) in terms of according to ( [ eq : phipsirelation ] ) , we get \,.\ ] ] this differs from ( [ eq : selfconsistenteq ] ) by the same factor of ( i.e. , ) .note that we can not simply remove this factor by rescaling and , as the equations are homogeneous in these fields .note also that the overall scale of is fixed by ( [ eq : geodesiceqspatial ] ) : it is the gradient of , and not a multiple thereof , which gives the acceleration .but then there is another factor of 2 in difference to our earlier discussion : if the metric ( [ eq : staticmetric ] ) is to approach the minkowski metric far away from the source , then should tend to one and hence should asymptotically approach according to ( [ eq : phipsirelation ] ) . in ( [ eq : selfconsistenteq ] ) , however , should asymptotically approach , i.e. twice that value . 
this additional factor of 2 ensures that both theories have the same newtonian limit . indeed ,if we expand the gravitational potential of an isolated object in a power series in , this implies that the linear terms of both theories coincide .however , the quadratic terms in gr are twice as large as in our previous theory based on ( [ eq : newforcedensity ] ) and ( [ eq : selfconsistenteq ] ) .this is not quite unexpected if we take into account that in gr we also have the space curvature that will modify the fields and geodesics in post newtonian approximations .we note that the spatial einstein equations ( [ eq : einsteinsequationshere - b ] ) prevent space from being flat .for example , taking their trace and using ( [ eq : einsteinsequationshere - a ] ) shows that the scalar curvature of space is , in fact , proportional to the mass density .finally we show that the total gravitational mass in gr is just given by the same formula ( [ eq : defgravmassnew ] ) , where is now that used here in the gr context . to see this we recall that for spatially asymptotically flatspacetimes the overall mass ( measured at spatial infinity ) is given by the adm - mass .moreover , for spatially asymptotically flat spacetimes which are stationary and satisfy einstein s equations with sources of spatially compact support , the adm mass is given by the komar integral ( this is , e.g. , proven in theorem 4.13 of ) .hence we have here , and is the corresponding 1-form .the star , , denotes the hodge - duality map . using ( [ eq : phipsirelation ] ) and asymptotic flatnessit is now straightforward to show that the right hand side of ( [ eq : komarmass ] ) can indeed be written in the form of the middle term in ( [ eq : defgravmassnew ] ) .this term only depends on at infinity , i.e. the newtonian limit , and hence gives a value independent of the factor-2 discrepancy discussed above . in this sensewe may say that the active gravitational mass defined earlier corresponds to in the gr context .* acknowledgements .* i sincerely thank the organisers and in particular ji bik for inviting me to the most stimulating and beautiful conference _ relativity and gravitation 100 years after einstein in prague . _ | i reconsider einstein s 1912 `` prague - theory '' of static gravity based on a scalar field obeying a non - linear field equation . i point out that this equation follows from the self - consistent implementation of the principle that _ all _ energies source the gravitational field according to . this makes it an interesting toy - model for the `` flat - space approach '' to general relativity ( gr ) , as pioneered by kraichnan and later feynman . solutions modelling stars show features familiar from gr , e.g. , buchdahl - like inequalities . the relation to full gr is also discussed . this lends this toy theory also some pedagogical significance . this paper is based on a talk delivered at the conference _ relativity and gravitation 100 years after einstein in prague _ , held in prague 25.-29 . june 2012 . |
the growth of ice crystals and aggregate snowflakes in clouds is a key process both for the development of precipitation ( jiusto and weickmann 1973 ) , and in terms of the effect such clouds have on climate ( houghton 2001 ) . in this work ,we use radar observations of deep cirrus to study the growth of ice particles as they sediment through the cloud .vertically - pointing measurements of radar reflectivity and doppler velocity were made using the 35 ghz ( 8.6 mm ) ` copernicus ' radar at the chilbolton observatory in southern england . at this wavelength ,the overwhelming majority of cirrus - sized particles are within the rayleigh regime where the backscattered intensity is proportional to the square of particle mass : where is the density of solid ice and is the number density of particles with mass between and .the dielectric factor contains the information about the shape and dielectric strength of the particles : for spherical ice particles and the permittivity of ice at millimetre wavelengths is approximately 3.15 ( jiang and wu 2004 ) .the rayleigh scattering approximation at 35 ghz is accurate to within 10% for particles with a maximum dimension of 1 mm or less ( westbrook _ et al _ 2006 ) .the doppler velocity is , where is the -weighted average terminal velocity of the particles and is the vertical air motion .we use these measurements to estimate a characteristic particle fall time , which we define as the time for which the ` average ' particle ( with terminal velocity ) has been falling .note that the doppler velocity is weighted by the reflectivity making it sensitive to the larger ice particles , and so our average fall time will also be weighted toward these large particles .taking the cloud top height to be the altitude at which there is no longer a detectable radar return , we calculate the fall time associated with height as : given this new measure , we are in a position to investigate the evolution of the ice particles , by studying the variation of reflectivity with increasing fall time .the advantage of this method , as opposed to simply studying as a function of height , is that represents the physical time for which the average ice particle has been falling to reach a given height , allowing us to relate our results to theoretical models of ice particle growth .note that we have implicitly assumed that the cloud is in a steady state , such that the properties of the ice particles at height do not change significantly over the length of time it takes a particle to fall from cloud top to cloud base ( which is between 45 minutes and 2 hours for the cases shown here ) .essentially this means that the cloud does not evolve significantly on this time scale and is advecting as a rigid body across the radar beam .we therefore apply our technique only to non - precipitating , well developed ice clouds where there is there is low wind shear .our case study is a cirrus cloud observed over chilbolton on the of may 2004 .the temperature at cloud top ( as forecast by the met office mesoscale model , cullen 1993 ) was approximately , and the cloud base was close to ; the average wind shear over the depth of the cloud was approximately .measurements of reflectivity and doppler velocity were made and the time series of these observations is shown in figure [ cloud ] .the radar gate length is 30 m ( illingworth _ et al _ 2006 ) .the values of and are averages over periods of 30 seconds : in figure [ cloud]c we also show the standard deviation of the 1-s average doppler velocity 
over each 30-s period , to indicate the small - scale variability in .this measure allows the level of turbulence in the cloud to be assessed ( bouniol _ et al _ 2003 ) .figure [ zt ] shows four representative vertical profiles sampled from different portions of the cloud , indicated by the dashed lines on figure [ cloud ] .ten consecutive 30-s profiles were averaged over a period of minutes in order to smooth out the variability caused by fall streaks in the data .the highest detectable cloud pixel ( corresponding to dbz ) from the profile is taken as a measure of cloud top .the fall time at each height bin is calculated from the doppler velocity profile as per equation [ avtime ] , and we plot as a function of . from figure[ zt ] we see that reflectivity increases rapidly with fall time ( note the logarithmic dbz units ) , which we interpret as rapid growth of the ice particles .this could potentially be occurring through a number of possible mechanisms : deposition of water vapour ; aggregation via differential sedimentation of the ice particles ; or collisions with supercooled drops ( riming ) . in section 4we show that it is likely that aggregation is the dominant growth mechanism .the increase in appears to be exponential to a good approximation , and occurs for between 2500 and 5000 seconds in the profiles shown here .the slopes on the log scale vary between approximately and , presumably depending on how much ice is being produced at cloud top . after this time there is a sharp turn over in the curves , and we attribute this to evaporation of the particles near cloud base .such evaporation often results in increased air turbulence for which the particles themselves act as tracers , resulting in large variability in the doppler velocity . in the earlier profiles ( 07:09 and 07:38 utc ) this was not evident ; however , in the later profiles ( 08:06 and 08:30 utc ) the higher ice water content and time - integrated evaporative cooling triggered convective overturning and turbulence , and this is reflected in our observations ( figures [ cloud ] and [ zt ] ) , which show a sudden increase in at approximately the same time as the turn over in . exponential growth has also been observed in a number of other cloud data sets , and four more example profiles from well developed non - precipitating ice clouds during april and may 2004 are shown in figure [ otherclouds ] .this is an interesting feature of the data , and a robust one in the face of errors in : if is exponential , then even if we have underestimated the cloud top somewhat ( on account of the limited sensitivity of the radar ) , this will merely correspond to an offset in the fall time , and the exponential shape of is still preserved .it is interesting to note that the transition from growth to evaporation is not always sharp as it is for the may profiles : we speculate that this may be the result of aggregation continuing to some extent within the evaporation layer .here we show how the reflectivity is related to the average particle size . 
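The display equation defining the fall time did not survive extraction; the sketch below implements its natural reading, namely that the characteristic fall time at height z is the integral of dz'/v_D from cloud top down to z evaluated over the radar gates, and then fits the d(dBZ)/dt slope over the growth layer. The profile is synthetic and all numbers are illustrative.

```python
import numpy as np

def fall_time(height_m, v_doppler_ms, z_top_m):
    """Characteristic fall time t(z) = integral from z to cloud top of dz' / v_D(z')."""
    order = np.argsort(height_m)[::-1]                 # sort gates from cloud top downward
    z, v = np.asarray(height_m)[order], np.asarray(v_doppler_ms)[order]
    keep = z <= z_top_m
    z, v = z[keep], v[keep]
    dz = -np.diff(np.concatenate(([z_top_m], z)))      # layer thicknesses (m)
    return z, np.cumsum(dz / v)                        # seconds since leaving cloud top

# synthetic 30-m gates: dBZ grows linearly with fall time, then turns over near cloud base
z = np.arange(5000.0, 9000.0, 30.0)
vD = 0.3 + 0.7 * (9000.0 - z) / 4000.0                 # fall speed increasing toward cloud base
zz, t = fall_time(z, vD, z_top_m=9000.0)
dbz = -25.0 + 0.005 * t
dbz[t > 4500.0] -= 0.02 * (t[t > 4500.0] - 4500.0)     # crude evaporation layer
dbz += np.random.default_rng(0).normal(0.0, 0.5, t.size)

growth = t <= 4500.0                                   # restrict the fit to the growth layer
slope, intercept = np.polyfit(t[growth], dbz[growth], 1)
print(f"cloud-base fall time ~ {t[-1]:.0f} s, fitted growth rate ~ {slope:.4f} dB per second")
```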
scaling or `normalised ' forms for the size distributions of both liquid and ice particles have been proposed in a number of recent articles ( rain : testud _ et al _ 2001 , illingworth and blackman 2002 , lee _ et al _ 2005 ; ice : field and heymsfield 2003 , westbrook _et al _ 2004a , b , delano _ et al _ 2005 ) .the essence of these rescaling schemes is that the underlying shape of the distribution is the same throughout the vertical profile , but is rescaled as a function of the ( increasing ) average particle mass as the particles grow : where we have normalised by the ice water content iwc .the universal function is dimensionless .equation [ dseqn ] indicates that a single average particle mass is sufficient to characterise the evolution of the particle size distribution ( relative to the iwc or some other moment of the distribution ) , and this is key to our analysis .an example of such a distribution is that assumed in the uk met office s unified model ( wilson and ballard 1999 ) .mass and diameter are assumed to be in a power law relationship , with an exponential distribution for particle diameter : where .a single bulk prognostic variable is used for the ice particle mixing ratio and is parameterised to decrease with increasing temperature to mimic particle growth .the parameter is calculated from the predicted iwc and , and is interpreted as a reciprocal average diameter ( eg .et al _ 1995 ) . within the framework ( [ dseqn ] ) above, this distribution corresponds to : ^{-1}x^{(1-b)/b}\exp\left({-x^{1/b}}\right),\ ] ] where , and .irrespective of what form is assumed for , a scaling relationship between different moments of the distribution may be found .the moment of the mass distribution is given by : note that is a dimensionless constant .similarly , the radar reflectivity ( [ zeqn ] ) is given by : combining these two equations we may relate to an arbitrary moment of the distribution : at this point we make a crucial assumption : that there is some moment of the distribution which is approximately constant through the vertical profile .in the case where aggregation is the dominant growth mechanism with a fixed production of ice mass at cloud top , one would expect the mass flux density of ice to be constant .mitchell ( 1996 ) indicated that a power law for ice particle fall speeds is a good approximation : , so for pure aggregation . 
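Before turning to the other conserved-moment cases, a quick numerical check of the scaling just stated: for a rescaling family of mass spectra with one moment held fixed, the reflectivity grows as a power of the mean particle mass. The exponential shape function and the choice k = 1.4 below are illustrative assumptions; the code verifies the exponent against its own model rather than against the paper's data.

```python
import numpy as np
from scipy.integrate import quad

# Rescaling family of mass spectra: N(m) = (IWC / mu^2) * phi(m / mu), phi(x) = exp(-x).
phi = lambda x: np.exp(-x)

def moment(k, iwc, mu):
    """M_k = integral m^k N(m) dm; substituting x = m/mu gives iwc * mu**(k-1) * c_k."""
    c_k, _ = quad(lambda x: x**k * phi(x), 0.0, np.inf)
    return iwc * mu**(k - 1) * c_k

k_conserved = 1.4                      # e.g. mass flux density with v proportional to m**0.4
mus = np.logspace(-1.0, 2.0, 8)        # mean particle masses, arbitrary units
Z = []
for mu in mus:
    iwc = 1.0 / moment(k_conserved, 1.0, mu)   # choose IWC so that M_k is the same for every mu
    Z.append(moment(2.0, iwc, mu))             # Rayleigh reflectivity ~ second moment of mass
slope = np.polyfit(np.log(mus), np.log(Z), 1)[0]
print(f"d(log Z)/d(log mean mass) = {slope:.3f}   (2 - k = {2.0 - k_conserved:.1f})")
```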
similarly , where diffusional growth or riming is dominant , the total number flux of particles would be roughly constant and would be the conserved moment .if this assumption holds then the bracketted expression in equation [ scale ] is fixed through the vertical profile , and .given our observations of exponential and the predicted power law between and above , we conclude that the average particle mass is growing exponentially with fall time .we offer a possible explaination for the exponential growth of ice particles described above .aircraft observations have indicated that aggregation is often the dominant growth mechanism for particles larger than a few hundred microns in cirrus clouds ( field and heymsfield 2003 ) , and it is these large particles which dominate the radar reflectivity .recently , westbrook _et al _ ( 2004a , b ) modelled ice particle aggregation by considering a rate of close approach between pairs of ice particles with masses and : where and are the associated fall speed and maximum dimension .particles were picked according to the rate above , and traced along possible trajectories to accurately sample the collision geometries of the non - spherical ice particles .the fall speeds were prescribed in the vein of mitchell ( 1996 ) : where the adjustable parameter determines the drag regime ( inertial flow ; viscous flow ) .one of the key results from these simulations was that the aggregates produced by the model had a power law relationship between mass and maximum dimension , where the exponent is determined purely by the drag regime : for .this relation is also backed up by a theoretical argument based on a feedback between the aggregate geometry and collision rate ( westbrook _ et al _ 2004b ) . for large snowflakes , and , in good agreement with aircraft observations ( eg . , brown and francis 1995 ; , heymsfield _ et al _ 2002 ) .in this study we are interested in the average ice particle growth rate , which is determined through the scaling of the collision kernel ( [ kernel ] ) . given the above relationship between and , and equations [ kernel ] and [ mitch ] , we see that if one doubles the masses of the aggregating particles : where .this parameter controls the scaling of the particle growth rates and as such controls the growth of the average particle mass .van dongen and ernst ( 1985 ) have shown that the coagulation equation ( pruppacher and klett 1997 ) has solutions with the same scaling form as ( [ dseqn ] ) , and predicts that the average particle mass grows according to the differential equation : where is a constant . given our prediction of from the aggregation model :the prediction from aggregation theory is that average particle mass grows exponentially with fall time , in agreement with our observations .we note that the van dongen and ernst analysis is for cases where total mass is conserved : however given the observed scaling behaviour ( [ dseqn ] ) and a power law relationship between mass and fall speed , the case where mass flux density is conserved should yield the same result .the growth of particles by diffusion of water vapour may also be described by a similar equation to ( [ dmdt ] ) . however in that case and , where is the ` capacitance ' per unit diameter , is the supersaturation with respect to ice , and the terms and depend on temperature and pressure ( pruppacher and klett 1997 ) . 
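To make the contrast concrete, the sketch below integrates a mean-mass growth law of the form d(mu)/dt = C * mu**lam for lam = 1 (the aggregation case argued for above, which gives exponential growth) and for a sub-linear exponent standing in for diffusional growth. The constants are arbitrary; only the qualitative behaviour, constant versus falling mass ratios over equal time intervals, is the point.

```python
import numpy as np

def mean_mass(lam, C, mu0, times, dt=1.0):
    """Euler integration of d(mu)/dt = C * mu**lam; returns mu at the requested times (s)."""
    mu, t, out = mu0, 0.0, []
    for t_target in times:
        while t < t_target:
            mu += dt * C * mu**lam
            t += dt
        out.append(mu)
    return np.array(out)

times = np.array([1000.0, 2000.0, 3000.0, 4000.0])
agg = mean_mass(lam=1.0,       C=1.5e-3, mu0=1e-9, times=times)   # aggregation-like (homogeneity 1)
dep = mean_mass(lam=2.0 / 3.0, C=3.0e-6, mu0=1e-9, times=times)   # sub-linear stand-in for deposition

print("aggregation: mass ratios over equal intervals", np.round(agg[1:] / agg[:-1], 2))
print("deposition : mass ratios over equal intervals", np.round(dep[1:] / dep[:-1], 2))
```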
for a given set of conditions , the growth by depositionwould be expected to increase slower with particle size than for aggregation , taking a power law form . in real cloudsthese conditions do not stay constant , and there is a correlation between increasing particle size and increased temperature and supersaturation , which could lead to a faster growth rate .however , it would take a considerable conspiracy between these variables to obtain a constant exponential growth throughout such an extensive region of the cloud as is observed in our radar data .it also seems extremely unlikely that this correlation would be the same for all five cirrus cases shown in figures 2 and 3 .we note that there is a region of sub - exponential growth close to cloud top ( small ) in some of the profiles in figure 3 : we suggest that it is in this region , where the particles are small and falling slowly , that diffusional growth dominates .it seems very unlikely that riming dominated the ice particle growth : a large number of supercooled drops throughout the depth of the cloud would be required for this to be the case .given the cold temperatures in the cloud ( between and as discussed in section 2 ) , it is very unlikely that supercooled drops would persist on long enough time scales and in large enough quantities to dominate the growth over the 2.5 km or so for which we have observed to increase exponentially .we therefore discount deposition and riming , and assert that our observations are an indicator that aggregation is the dominant growth mechanism for the ice particles in these clouds .doppler radar measurements of cirrus cloud were used to study the evolution of the ice particles sedimenting through it .the results indicate that in the cases studied the average ice particle mass grows exponentially with fall time , in agreement with the theoretical expectation for aggregation , and we believe that this is evidence that aggregation of ice crystals is the dominant growth mechanism for large particles in deep , well developed ice clouds .vertical profiles of reflectivity in ice have been much studied in order to estimate rainrates at the ground .fabry and zawadzki ( 1995 ) observed an approximately constant d(dbz)/d gradient , and used this to rule out deposition as a growth mechanism .this may be linked to our cirrus observations ; however their results were near the melting layer , and was higher . we have compared profiles of dbz- and dbz- for our cirrus cases and find that while the dbz- profiles are straight lines with a constant gradient , the dbz- profiles have an appreciable curve to them .the fact that our analysis ` straightens ' these curved profiles is good evidence that our approach of using the doppler velocity to estimate from is an appropriate one , and that aggregation is controlling the distribution of large ice particles .the constant described in the aggregation theory above is directly related to the mass flux density , so measurements of the dbz- slope may allow the derivation of this quantity , and the data could be combined with doppler velocity measurements to estimate the ice water content . however , the sticking efficency of the ice particles ( which we assume to be constant with particle size ) is also a factor in , and this is a parameter for which there are few reliable experimental estimates . 
for warmer , ` stickier ' ice crystals at temperatures above this may be more feasible since the sticking efficiency should be close to unity .we have assumed the ice particles fall vertically . in realitythere is likely to be some horizontal shear , and this , combined with variability in ice production of the cloud - top generating cells results in visible fall streaks ( see fig .size - sorting along the streaks ( bader _ et al _ 1987 ) is a potential source of error in our analysis ; however , by averaging the reflectivity profiles over minutes of data we have been able to ameliorate it considerably .directions for future work are to make dual - wavelength radar measurements of cirrus in order to obtain a more direct estimate of particle size ( westbrook _ et al _ 2006 ) .this would help to pin down the dominant growth mechanism , allowing us to study moments other than , and analyse whether ( aggregation ) or ( deposition , riming ) is the moment conserved through the cloud .aircraft observations ( field _ et al _ 2005 ) have indicated a broadly exponential trend between and temperature - it would be valuable to combine simultaneous radar and aircraft measurements to see if the exponential growth in with is accompanied by exponential growth in and increased concentrations of aggregates .also , further studies of other cirrus cases , both at chilbolton and other radar sites , could be of interest to see how widespread the observed exponential trend is .this work was funded by the natural environment research council ( grant number ner / z/2003/00643 ) .we are grateful to the staff at the cclrc chilbolton observatory , ufam and the eu cloudnet project ( www.cloud-net.org ) , grant number evk2 - 2000 - 00065 .we are grateful to our two reviewers for their valuable comments and suggestions .bouniol , d. , a. j. illingworth and r. j. hogan ( 2003 ) , deriving turbulent kinetic energy dissipation rate within clouds using ground based 94 ghz radar , _ proc .31 ams conf . on radarmeteorology _ , seattle , 192196 .brown , p. r. a. , a. j. illingworth , a. j. heymsfield , g. m. mcfarquhar , k. a. browning and m. gosset ( 1995 ) , the role of spaceborne millimetre - wave radar in the global monitoring of ice cloud , _j. appl . met ._ , _ 34 _ 23462366 . delano , j. , a. protat , j. testud , d. bouniol , a. j. heymsfield , a. bansemer , p. r. a. brown and r. m. forbes ( 2005 ) , statistical properties of the normalised ice particle distribution , _ j. geophys ._ , _ 110 _ , d10201 . field , p. r. , r. j. hogan , p. r. a. brown , a. j. illingworth , t. w. choularton and r. j. cotton ( 2005 ) , parametrization of ice - particle size distributions for mid - latitude stratiform cloud , _ q. j. r. meteorol_ , _ 131 _ , 19972017 .heymsfield , a. j. , s. lewis , a. bansemer , j. iaquinta , l. m. miloshevich , m. kajikawa , c. twohy and m. r. poellot ( 2002 ) , a general approach for deriving the properties of cirrus and stratiform ice particles , __ , _ 60 _ , 17951808 . illingworth , a. j. and t. m. blackman ( 2002 ) , the need to represent raindrop size spectra as normalized gamma distributions for the interpretation of polarization radar observations , _ j. appl . met ._ , _ 41 _ , 286297 .testud , j. s. , s. oury , r. a. black , p. amayenc and x. k. dou ( 2001 ) , the concept of `` normalized '' distribution to describe raindrop spectra : a tool for cloud physics and remote sensing , _ j. appl . met ._ , _ 40 _ , 11181140 . 
[ figure : four ` snapshot ' vertical profiles from the cirrus case , taken at 07:09 , 07:38 , 08:06 , and 08:30 utc . each profile shown is the average of ten consecutive 30-s profiles . top row is reflectivity in dbz as a function of characteristic fall time ( points ) ; the solid line is intended to guide the eye , and indicates an exponential growth in with . bottom row is as a function of , which we use as an indicator of particle evaporation near cloud base . ] | vertically pointing doppler radar has been used to study the evolution of ice particles as they sediment through a cirrus cloud . the measured doppler fall speeds , together with radar - derived estimates for the altitude of cloud top , are used to estimate a characteristic fall time for the ` average ' ice particle . the change in radar reflectivity is studied as a function of , and is found to increase exponentially with fall time . we use the idea of dynamically scaling particle size distributions to show that this behaviour implies exponential growth of the average particle size , and argue that this exponential growth is a signature of ice crystal aggregation .
in many applications one fits a parametrized curve described by an implicit equation to experimental data , . here denotes the vector of unknown parameters to be estimated .typically , is a polynomial in and , and its coefficients are unknown parameters ( or functions of unknown parameters ) .for example , a number of recent publications are devoted to the problem of fitting quadrics , in which case is the parameter vector .the problem of fitting circles , given by equation with three parameters , also attracted attention .we consider here the problem of fitting general curves given by implicit equations with being the parameter vector .our goal is to investigate statistical properties of various fitting algorithms .we are interested in their biasedness , covariance matrices , and the cramer - rao lower bound .first , we specify our model .we denote by the true value of .let , , be some points lying on the true curve .experimentally observed data points , , are perceived as random perturbations of the true points .we use notation and , for brevity .the random vectors are assumed to be independent and have zero mean .two specific assumptions on their probability distribution can be made , see : * _ cartesian model _ :each is a two - dimensional normal vector with covariance matrix , where is the identity matrix . * _ radial model _ : where is a normal random variable , and is a unit normal vector to the curve at the point .our analysis covers both models , cartesian and radial . for simplicity , we assume that for all , but note that our results can be easily generalized to arbitrary . concerning the true points , , two assumptions are possible .many researchers consider them as fixed , but unknown , points on the true curve . in this casetheir coordinates can be treated as additional parameters of the model ( nuisance parameters ) .chan and others call this assumption a _functional model_. 
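As a concrete instance of the cartesian functional model just described, the following sketch generates noisy observations of fixed true points on a circle; the circle, the sample size, and the noise level are arbitrary choices.

```python
import numpy as np

def simulate_cartesian(n, a, b, R, sigma, arc=2.0 * np.pi, rng=None):
    """Fixed true points on a circular arc (functional model) plus isotropic Gaussian noise.

    Returns (true_points, observed_points), each of shape (n, 2)."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.linspace(0.0, arc, n, endpoint=False)          # equally spaced true points
    true = np.column_stack((a + R * np.cos(phi), b + R * np.sin(phi)))
    return true, true + sigma * rng.standard_normal((n, 2))

true_pts, data = simulate_cartesian(n=20, a=0.0, b=0.0, R=1.0, sigma=0.05,
                                    rng=np.random.default_rng(0))
print(data[:3])
```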
alternatively , one can assume that the true points are sampled from the curve according to some probability distribution on it . this assumption is referred to as a _ structural model _ . we only consider the functional model here . it is easy to verify that maximum likelihood estimation of the parameter for the functional model is given by the orthogonal least squares fit ( olsf ) , which is based on minimization of the function {\cal f}_1(\theta) = \sum_{i=1}^{n} [ d_i(\theta) ]^2 [ fmain1 ] where d_i(\theta) denotes the distance from the point ( x_i , y_i ) to the curve p(x , y ; \theta) = 0 . the olsf is the method of choice in practice , especially when one fits simple curves such as lines and circles . however , for more general curves the olsf becomes intractable , because the precise distance d_i is hard to compute . for example , when p(x , y ; \theta) is a generic quadric ( ellipse or hyperbola ) , the computation of d_i is equivalent to solving a polynomial equation of degree four , and its direct solution is known to be numerically unstable , see for more detail . then one resorts to various approximations . it is often convenient to minimize {\cal f}_2(\theta) = \sum_{i=1}^{n} [ p(x_i , y_i ; \theta) ]^2 [ fmain2 ] instead of ( [ fmain1 ] ) . this method is referred to as a ( simple ) _ algebraic fit _ ( af ) ; in this case one calls | p(x_i , y_i ; \theta) | the _ algebraic distance _ from the point ( x_i , y_i ) to the curve . the af is computationally cheaper than the olsf , but its accuracy is often unacceptable , see below . the simple af ( [ fmain2 ] ) can be generalized to a _ weighted algebraic fit _ , which is based on minimization of {\cal f}_3(\theta) = \sum_{i=1}^{n} w_i \, [ p(x_i , y_i ; \theta) ]^2 [ fmain3 ] where w_i = w(x_i , y_i ; \theta) are some weights , which may balance ( [ fmain2 ] ) and improve its performance . one way to define weights results from a linear approximation to the distance , d(x , y ; \theta) \approx | p(x , y ; \theta) | / \| \nabla_{\bf x} p(x , y ; \theta) \| , where \nabla_{\bf x} p is the gradient vector with respect to x and y , see . then one minimizes the function {\cal f}_4(\theta) = \sum_{i=1}^{n} \frac{ [ p(x_i , y_i ; \theta) ]^2 }{ \| \nabla_{\bf x} p(x_i , y_i ; \theta) \|^2 } [ fmain4 ] this method is called the _ gradient weighted algebraic fit _ ( graf ) . it is a particular case of ( [ fmain3 ] ) with w_i = \| \nabla_{\bf x} p(x_i , y_i ; \theta) \|^{-2} . the graf has been known since at least 1974 and recently became standard for polynomial curve fitting . the computational cost of graf depends on the function p , but , generally , the graf is much faster than the olsf . it is also known from practice that the accuracy of graf is almost as good as that of the olsf , and our analysis below confirms this fact . the graf is often claimed to be a _ statistically optimal _ weighted algebraic fit , and we will prove this fact as well . not much has been published on statistical properties of the olsf and algebraic fits , apart from the simplest case of fitting lines and hyperplanes . chan , berman and culpin investigated circle fitting by the olsf and the simple algebraic fit ( [ fmain2 ] ) assuming the structural model . kanatani used the cartesian functional model and considered a general curve fitting problem . he established an analogue of the cramer - rao lower bound for unbiased estimates of \theta , which we call here the kanatani - cramer - rao ( kcr ) lower bound . he also showed that the covariance matrices of the olsf and the graf attain , to the leading order in \sigma , his lower bound . we note , however , that in most cases the olsf and algebraic fits are _ biased _ , hence the kcr lower bound , as it is derived in , does not immediately apply to these methods . in this paper we extend the kcr lower bound to biased estimates , which include the olsf and all weighted algebraic fits . we prove the kcr bound for estimates satisfying the following mild assumption : * precision assumption*. for precise observations ( when {\bf x}_i = \bar{\bf x}_i for all i ) , the estimate is precise , i.e.
( |*x*_1 , , |*x*_n ) = | [ tass ] it is easy to check that the olsf and algebraic fits ( [ fmain3 ] ) satisfy this assumption .we will also show that all unbiased estimates of satisfy ( [ tass ] ) .we then prove that the graf is , indeed , a statistically efficient fit , in the sense that its covariance matrix attains , to the leading order in , the kcr lower bound . on the other hand ,rather surprisingly , we find that graf is not the only statistically efficient algebraic fit , and we describe all statistically efficient algebraic fits .finally , we show that kanatani s theory and our extension to it remain valid for the radial functional model .our conclusions are illustrated by numerical experiments on circle fitting algorithms .recall that we have adopted the functional model , in which the true points , , are fixed .this automatically makes the sample size fixed , hence , many classical concepts of statistics , such as consistency and asymptotic efficiency ( which require taking the limit ) lose their meaning .it is customary , in the studies of the functional model of the curve fitting problem , to take the limit instead of , cf. .this is , by the way , not unreasonable from the practical point of view : in many experiments , is rather small and can not be ( easily ) increased , so the limit is of little interest . on the other hand , when the accuracy of experimental observations is high ( thus , is small ) , the limit is quite appropriate .now , let be an arbitrary estimate of satisfying the precision assumption ( [ tass ] ) . in our analysis we will always assume that all the underlying functions are regular ( continuous , have finite derivatives , etc . ) , which is a standard assumption .the mean value of the estimate is e ( ) = ( * x*_1, ,*x*_n ) _i=1^n f(*x*_i ) d*x*_1d*x*_n [ et ] where is the probability density function for the random point , as specified by a particular model ( cartesian or radial ) .we now expand the estimate into a taylor series about the true point remembering ( [ tass ] ) : ( * x*_1 , , * x*_n ) = | + _ i=1^n_ i ( * x*_i - |*x*_i ) + o(^2 ) [ texpand ] where _i = _ * x*_i ( |*x*_1 , , for the gradient with respect to the variables .in other words , is a matrix of partial derivatives of the components of the function with respect to the two variables and , and this derivative is taken at the point , substituting the expansion ( [ texpand ] ) into ( [ et ] ) gives e ( ) = | + o(^2 ) [ tbias ] since .hence , the bias of the estimate is of order .it easily follows from the expansion ( [ texpand ] ) that the covariance matrix of the estimate is given by \theta_i^t + { \cal o}(\sigma^4)\ ] ] ( it is not hard to see that the cubical terms vanish because the normal random variables with zero mean also have zero third moment , see also ) .now , for the cartesian model = \sigma^2 i\ ] ] and for the radial model = \sigma^2 { \bf n}_i { \bf n}_i^t\ ] ] where is a unit normal vector to the curve at the point . then we obtain _= ^2 _ i=1^n _ i _ i _ i^t + o(^4 ) [ csig0 ] where for the cartesian model and for the radial model . +* lemma*. _ we have for each .hence , for both models , cartesian and radial , the matrix is given by the same expression : __ = ^2 _ i=1^n _ i _ i^t + o(^4 ) [ csig ] this lemma is proved in appendix. 
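Before moving on to the lower bound, the two orders just derived, a bias of order sigma squared and a spread of order sigma, can be checked by simulation for a concrete estimator. The sketch below uses the simple algebraic circle fit, which reduces to linear least squares after the change of variables B = -2a, C = -2b, D = a^2 + b^2 - R^2; the circle, sample size, and noise levels are arbitrary.

```python
import numpy as np

def fit_circle(pts):
    """Simple algebraic circle fit ( [ fmain2 ] ): the objective sum[(x-a)^2+(y-b)^2-R^2]^2
    is linear in (B, C, D) after the change of variables, so ordinary least squares solves it."""
    x, y = pts[:, 0], pts[:, 1]
    (B, C, D), *_ = np.linalg.lstsq(np.column_stack((x, y, np.ones_like(x))),
                                    -(x**2 + y**2), rcond=None)
    a, b = -B / 2.0, -C / 2.0
    return np.array([a, b, np.sqrt(a**2 + b**2 - D)])

rng = np.random.default_rng(1)
phi = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
true_pts = np.column_stack((np.cos(phi), np.sin(phi)))      # true parameters: a = b = 0, R = 1
theta_true = np.array([0.0, 0.0, 1.0])

for sigma in (0.02, 0.04, 0.08, 0.16):
    est = np.array([fit_circle(true_pts + sigma * rng.standard_normal(true_pts.shape))
                    for _ in range(5000)])
    bias = np.linalg.norm(est.mean(axis=0) - theta_true)
    spread = np.sqrt(est.var(axis=0).sum())
    print(f"sigma={sigma:.2f}   bias={bias:.1e} (expect ~sigma^2)   spread={spread:.1e} (expect ~sigma)")
```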
our next goal is now to find a lower bound for the matrix _1:= _ i=1^n _ i_i^t [ calc1 ] following , we consider perturbations of the parameter vector and the true points satisfying two constraints .first , since the true points must belong to the true curve , , we obtain , by the chain rule , _ * x * p(|*x*_i;| ) , stands for the scalar product of vectors .second , since the identity ( [ tass ] ) holds for all , we get _ i=1^n _ i |*x*_i = [ tcon2 ] by using the notation ( [ ti ] ) .now we need to find a lower bound for the matrix ( [ calc1 ] ) subject to the constraints ( [ tcon1 ] ) and ( [ tcon2 ] ) .that bound follows from a general theorem in linear algebra : + * theorem ( linear algebra)*. _ let and .suppose nonzero vectors and nonzero vectors are given , .consider matrices for , and matrix assume that the vectors span ( hence is nonsingular ) .we say that a set of matrices ( each of size ) is * proper * if _a_i w_i = r [ propera1 ] for any vectors and such that u_i^tw_i + v_i^tr = 0 [ propera2 ] for all .then for any proper set of matrices the matrix is bounded from below by in the sense that is a positive semidefinite matrix .the equality holds if and only if for all ._ + this theorem is , probably , known , but we provide a full proof in appendix , for the sake of completeness . as a direct consequence of the above theoremwe obtain the lower bound for our matrix : + * theorem ( kanatani - cramer - rao lower bound)*. _ we have , in the sense that is a positive semidefinite matrix , where _ _ ^-1 = _ i=1^n [ dmin ] in view of ( [ csig ] ) and ( [ calc1 ] ) , the above theorem says that the lower bound for the covariance matrix is , to the leading order , _ _= ^2 d _ [ rc ] the standard deviations of the components of the estimate are of order . therefore, the bias of , which is at most of order by ( [ tbias ] ) , is infinitesimally small , as , compared to the standard deviations .this means that the estimates satisfying ( [ tass ] ) are practically unbiased .the bound ( [ rc ] ) was first derived by kanatani for the cartesian functional model and strictly unbiased estimates of , i.e. satisfying . one can easily derive ( [ tass ] ) from by taking the limit , hence our results generalize those of kanatani .here we derive an explicit formula for the covariance matrix of the weighted algebraic fit ( [ fmain3 ] ) and describe the weights for which the fit is statistically efficient . for brevity ,we write .we assume that the weight function is regular , in particular has bounded derivatives with respect to , the next section will demonstrate the importance of this condition .the solution of the minimization problem ( [ fmain3 ] ) satisfies p_i^2 _ w_i + 2 w_i p_i _p_i = 0 [ weq ] observe that , so that the first sum in ( [ weq ] ) is and the second sum is .hence , to the leading order , the solution of ( [ weq ] ) can be found by discarding the first sum and solving the reduced equation w_i p_i _p_i = 0 [ weq1 ] more precisely , if and are solutions of ( [ weq ] ) and ( [ weq1 ] ) , respectively , then , , and .furthermore , the covariance matrices of and coincide , to the leading order , i.e. as . therefore , in what follows , we only deal with the solution of equation ( [ weq1 ] ) . 
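For concreteness, the bound ( [ dmin ] ) is easy to evaluate for a specific curve. The sketch below does so for the circle (x-a)^2 + (y-b)^2 - R^2 = 0, anticipating the explicit circle formula given later in the paper; reading ( [ dmin ] ) this way is believed faithful to the text, but the numerical values are only illustrative.

```python
import numpy as np

def kcr_bound_circle(true_pts, a, b, R, sigma):
    """Leading-order KCR lower bound on the covariance of (a, b, R) for a circle,
    reading ( [ dmin ] ) as
        D_min^{-1} = sum_i (grad_theta P_i)(grad_theta P_i)^T / ||grad_x P_i||^2
    at the true points; for the circle the factors of 2 cancel and each summand
    is [u_i, v_i, 1]^T [u_i, v_i, 1] with u = (x-a)/R, v = (y-b)/R."""
    u = (true_pts[:, 0] - a) / R
    v = (true_pts[:, 1] - b) / R
    J = np.column_stack((u, v, np.ones_like(u)))
    return sigma**2 * np.linalg.inv(J.T @ J)

phi = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)      # 20 true points on a full circle
pts = np.column_stack((np.cos(phi), np.sin(phi)))
print(np.round(kcr_bound_circle(pts, a=0.0, b=0.0, R=1.0, sigma=0.05), 6))
```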
to find the covariance matrix of satisfying ( [ weq1 ] ) we put and and obtain , working to the leading order , hence ^{-1 } \left [ \sum w_i ( \nabla_{\bf x } p_i)^t \ , ( \delta { \bf x}_i)\ , ( \nabla_{\theta } p_i)\right ] + { \cal o}(\sigma^2)\ ] ] the covariance matrix is then \\ & = & \sigma^2 \left [ \sum w_i ( \nabla_{\theta } p_i ) ( \nabla_{\theta } p_i)^t \right ] ^{-1 } \left [ \sum w_i^2 \|\nabla_{\bf x } p_i\|^2 ( \nabla_{\theta } p_i ) ( \nabla_{\theta } p_i)^t \right ] \\ & & \times \left [ \sum w_i ( \nabla_{\theta } p_i ) ( \nabla_{\theta } p_i)^t \right ] ^{-1 } + { \cal o}(\sigma^3)\end{aligned}\ ] ] denote by the principal factor here , i.e.^{-1 } \left [ \sum w_i^2 \|\nabla_{\bf x } p_i\|^2 ( \nabla_{\theta } p_i ) ( \nabla_{\theta } p_i)^t \right ] \, \left [ \sum w_i ( \nabla_{\theta } p_i ) ( \nabla_{\theta } p_i)^t \right ] ^{-1}\ ] ] the following theorem establishes a lower bound for : + * theorem*. _ we have , in the sense that is a positive semidefinite matrix , where is given by ( [ dmin ] ) .the equality holds if and only if for all . in other words ,an algebraic fit ( [ fmain3 ] ) is * statistically efficient * if and only if the weight function satisfies w(x , y ; ) = [ wopt ] for all triples such that . here may be an arbitrary function of . _+ the bound here is a particular case of the previous theorem .it also can be obtained directly from the linear algebra theorem if one sets , , and ^{-1 } ( \nabla_{\theta } p_i ) \ , ( \nabla_{\bf x } p_i)^t\ ] ] for . the expression ( [ wopt ] )characterizing the efficiency , follows from the last claim in the linear algebra theorem .here we illustrate our conclusions by the relatively simple problem of fitting circles . the canonical equation of a circle is ( x - a)^2 + ( y - b)^2 -r^2=0 [ circ0 ] and we need to estimate three parameters . the simple algebraic fit ( [ fmain2 ] ) takes form _ 2(a , b , r ) = _ i=1^n [ ( x_i - a)^2 + ( y_i - b)^2 -r^2]^2 [ f2 ] and the weighted algebraic fit ( [ fmain3 ] ) takes form _ 3(a , b , r ) = _ i=1^n w_i [ ( x_i - a)^2 + ( y_i - b)^2 -r^2]^2 [ f3 ] in particular , the graf becomes _4(a , b , r ) = _i=1^n [ f4 ] ( where the irrelevant constant factor of 4 in the denominator is dropped ) . in terms of ( [ dmin ] ), we have and , hence =4r^2\ ] ] therefore , _ = ( ccc u_i^2 & u_iv_i & u_i + u_iv_i & v_i^2 & v_i + u_i & v_i & n + ) ^-1 [ dmincir ] where we denote , for brevity , the above expression for was derived earlier in .now , our theorem in section [ secse ] shows that the weighted algebraic fit ( [ f3 ] ) is statistically efficient if and only if the weight function satisfies .since may be an arbitrary function , then the denominator here is irrelevant .hence , statistically efficiency is achieved whenever is simply independent of and for all lying on the circle . in particular , the graf ( [ f4 ] ) is statistically efficient because ^{-1}=r^{-2}$ ] .the simple af ( [ f2 ] ) is also statistically efficient since .we note that the graf ( [ f4 ] ) is a highly nonlinear problem , and in its exact form ( [ f4 ] ) is not used in practice . 
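One straightforward way to minimize ( [ f4 ] ) numerically is to hand its square-rooted terms to a generic least-squares solver; the sketch below does this with scipy, started from the centroid and the mean centroid distance. This is only an illustration of the objective, not the dedicated circle-fitting implementations discussed in the references.

```python
import numpy as np
from scipy.optimize import least_squares

def graf_circle(pts, theta0):
    """Gradient-weighted algebraic circle fit ( [ f4 ] ): minimize
    sum_i [(x_i-a)^2 + (y_i-b)^2 - R^2]^2 / [(x_i-a)^2 + (y_i-b)^2]."""
    x, y = pts[:, 0], pts[:, 1]

    def residuals(theta):
        a, b, R = theta
        d2 = (x - a) ** 2 + (y - b) ** 2
        return (d2 - R ** 2) / np.sqrt(d2)

    return least_squares(residuals, theta0).x

# noisy points on a circle; start the solver from the centroid and the mean centroid distance
rng = np.random.default_rng(2)
phi = np.linspace(0.0, 2.0 * np.pi, 30, endpoint=False)
pts = np.column_stack((2.0 + np.cos(phi), -1.0 + np.sin(phi))) + 0.03 * rng.standard_normal((30, 2))
cx, cy = pts.mean(axis=0)
theta0 = np.array([cx, cy, np.mean(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy))])
print("graf estimate (a, b, R):", np.round(graf_circle(pts, theta0), 4))
```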
instead, there are two modifications of graf popular among experimenters .one is due to chernov and ososkov and pratt : _4(a , b , r ) = r^-2_i=1^n [ ( x_i - a)^2 + ( y_i - b)^2 -r^2]^2 [ f4a ] ( it is based on the approximation ) , and the other due to agin and taubin : _ 4(a , b , r ) = _ i=1^n [ ( x_i - a)^2 + ( y_i - b)^2 -r^2]^2 [ f4b ] ( here one simply averages the denominator of ( [ f4 ] ) over ) .we refer the reader to for a detailed analysis of these and other circle fitting algorithms , including their numerical implementations .we have tested experimentally the efficiency of four circle fitting algorithms : the olsf ( [ fmain1 ] ) , the simple af ( [ f2 ] ) , the pratt method ( [ f4a ] ) , and the taubin method ( [ f4b ] ) .we have generated points equally spaced on a circle , added an isotropic gaussian noise with variance ( according to the cartesian model ) , and estimated the efficiency of the estimate of the center by e = [ e ] here is the true center , is its estimate , denotes averaging over many random samples , and , are the first two diagonal entries of the matrix ( [ dmincir ] ) .table 1 shows the efficiency of the above mentioned four algorithms for various values of .we see that they all perform very well , and indeed are efficient as .one might notice that the olsf slightly outperforms the other methods , and the af is the second best .[ cols=">,^,^,^,^",options="header " , ] table 3 .data are sampled along a quarter of a circle .it is interesting to test smaller circular arcs , too .figure 1 shows a color - coded diagram of the efficiency of the olsf and the af for arcs from to and variable ( we set , where is the height of the circular arc , see fig . 2 , and varies from 0 to 0.5 ) .the efficiency of the pratt and taubin is virtually identical to that of the olsf , so it is not shown here .we see that the olsf and af are efficient as ( both squares in the diagram get white at the bottom ) , but the af loses its efficiency at moderate levels of noise ( ) , while the olsf remains accurate up to after which it rather sharply breaks down . figure 1 : the efficiency of the simple olsf ( left ) and the af ( center ) .the bar on the right explains color codes .the following analysis sheds more light on the behavior of the circle fitting algorithms .when the curvature of the arc decreases , the center coordinates and the radius grow to infinity and their estimates become highly unreliable . in that case the circle equation ( [ circ0 ] ) can be converted to a more convenient algebraic form a(x^2+y^2 ) + bx + cy + d = 0 [ abcd ] with an additional constrain on the parameters : .this parametrization was used in , and analyzed in detail in .we note that the original parameters can be recovered via , , and .the new parametrization ( [ abcd ] ) is safe to use for arcs with arbitrary small curvature : the parameters remain bounded and never develop singularities , see .even as the curvature vanishes , we simply get , and the equation ( [ abcd ] ) represents a line .figure 2 : the height of an arc , , and our formula for . in terms of the new parameters ,the weighted algebraic fit ( [ fmain3 ] ) takes form _ 3(a , b , c , d ) = _i=1^n w_i [ a(x^2+y^2 ) + bx + cy + d]^2 [ ff3 ] ( under the constraint ) . 
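pratt's modification, expressed in the new parameters introduced above, is commonly implemented as minimizing sum [a(x^2+y^2) + bx + cy + d]^2 subject to pratt's normalization b^2 + c^2 - 4ad = 1, which leads to a generalized eigenvalue problem. the sketch below is our own implementation of that standard approach, not code from the paper; the closing lines recover the center and radius in the usual way (completing the square in (abcd)).

import numpy as np
from scipy.linalg import eig

def circle_fit_pratt(x, y):
    # minimize sum (A*(x^2+y^2) + B*x + C*y + D)^2 subject to B^2 + C^2 - 4*A*D = 1,
    # solved as the generalized eigenvalue problem  M phi = eta N phi
    Z = np.column_stack([x**2 + y**2, x, y, np.ones_like(x)])
    M = Z.T @ Z
    N = np.array([[ 0., 0., 0., -2.],
                  [ 0., 1., 0.,  0.],
                  [ 0., 0., 1.,  0.],
                  [-2., 0., 0.,  0.]])          # phi^T N phi = B^2 + C^2 - 4*A*D
    vals, vecs = eig(M, N)
    vals = np.real(vals)
    idx = np.argmin(np.where(vals > 0, vals, np.inf))   # smallest positive eigenvalue
    A, B, C, D = np.real(vecs[:, idx])
    a, b = -B / (2.0 * A), -C / (2.0 * A)                # A close to zero means "almost a line"
    return a, b, np.sqrt(B * B + C * C - 4.0 * A * D) / (2.0 * abs(A))

rng = np.random.default_rng(1)
t = np.linspace(0.0, np.pi / 2, 20)
x = np.cos(t) + 0.02 * rng.standard_normal(t.size)
y = np.sin(t) + 0.02 * rng.standard_normal(t.size)
print(circle_fit_pratt(x, y))   # approximately (0, 0, 1)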
converting the af ( [ f2 ] ) to the new parameters gives _2(a , b , c , d ) = _i=1^n a^-2 [ a(x^2+y^2 ) + bx + cy + d]^2 [ ff2 ] which corresponds to the weight function .the pratt method ( [ f4a ] ) turns to _ 4(a , b , c , d ) = _ i=1^n [ a(x^2+y^2 ) + bx + cy + d]^2 [ ff4 ] we now see why the af is unstable and inaccurate for arcs with small curvature : its weight function develops a singularity ( it explodes ) in the limit .recall that , in our derivation of the statistical efficiency theorem ( section 3 ) , we assumed that the weight function was regular ( had bounded derivatives ) .this assumption is clearly violated by the af ( [ ff2 ] ) . on the contrary ,the pratt fit ( [ ff4 ] ) uses a safe choice and thus behaves decently on arcs with small curvature , see next . figure 3 : the efficiency of the simple af ( left ) and the pratt method ( center ) .the bar on the right explains color codes .figure 3 shows a color - coded diagram of the efficiency of the estimate of the parameter , hence the estimation of is equivalent to that of the curvature , an important geometric parameter of the arc .] by the af ( [ ff2 ] ) versus pratt ( [ ff4 ] ) for arcs from to and the noise level , where is the height of the circular arc and varies from 0 to 0.5 . the efficiency of the olsf and the taubin method is visually indistinguishable from that of pratt ( the central square in fig .3 ) , so we did not include it here .we see that the af performs significantly worse than the pratt method for all arcs and most of the values of ( i.e. , ) .the pratt s efficiency is close 100% , its lowest point is 89% for arcs and ( the top right corner of the central square barely gets grey ) .the af s efficiency is below 10% for all and almost zero for .still , the af remains efficient as ( as the tiny white strip at the bottom of the left square proves ) , but its efficiency can be only counted on when is extremely small .our analysis demonstrates that the choice of the weights in the weighted algebraic fit ( [ fmain3 ] ) should be made according to our theorem in section 3 , and , in addition , one should avoid singularities in the domain of parameters .here we prove the theorem of linear algebra stated in section [ seckcr ] .for the sake of clarity , we divide our proof into small lemmas : * lemma 2*. _ if a set of matrices is proper , then rank .furthermore , each is given by for some vector , and the vectors satisfy where is the identity matrix .the converse is also true . __ proof_. let vectors and satisfy the requirements ( [ propera1 ] ) and ( [ propera2 ] ) of the theorem . consider the orthogonal decomposition where is perpendicular to , i.e. . then the constraint ( [ propera2 ] )can be rewritten as c_i = - [ propera3 ] for all and ( [ propera1 ] ) takes form _i=1^n c_ia_iu_i + _i=1^n a_iw_i^= r [ propera4 ] we conclude that for every vector orthogonal to , hence has a -dimensional kernel , so indeed its rank is zero or one .if we denote , we obtain .combining this with ( [ propera3])-([propera4 ] ) gives since this identity holds for any vector , the expression within parentheses is .the converse is obtained by straightforward calculations .lemma is proved . *lemma 3*. _ the sets of proper matrices make a linear variety , in the following sense .let and be two proper sets of matrices , then the set defined by is proper for every ._ _proof_. 
for each consider the matrix .using the previous lemma gives by construction , this matrix is positive semidefinite .hence , the following matrix is also positive semidefinite : by sylvester s theorem , the matrix is positive semidefinite . _proof_. assume that there is a proper set of matrices , different from , for which .denote . by lemma 3 ,the set of matrices is proper for every real . consider the variable matrix [ a_i(\gamma)]^t\\ & = & \sum_{i=1}^n a_i^{\rm o}(a_i^{\rm o})^t + \gamma\left ( \sum_{i=1}^n a_i^{\rm o}(\delta a_i)^t + \sum_{i=1}^n ( \delta a_i)(a_i^{\rm o})^t\right ) + \gamma^2\sum_{i=1}^n ( \delta a_i)(\delta a_i)^t\end{aligned}\ ] ] note that the matrix is symmetric . by lemma 5we have for all , and by lemma 6 we have .it is then easy to derive that .next , the matrix is symmetric positive semidefinite . since we assumed that , it is easy to derive that as well .therefore , for every .the theorem is proved .g. taubin , estimation of planar curves , surfaces and nonplanar space curves defined by implicit equations , with applications to edge and range image segmentation , _ ieee transactions on pattern analysis and machine intelligence _ , * 13 * , 1991 , 11151138 . | we study the problem of fitting parametrized curves to noisy data . under certain assumptions ( known as cartesian and radial functional models ) , we derive asymptotic expressions for the bias and the covariance matrix of the parameter estimates . we also extend kanatani s version of the cramer - rao lower bound , which he proved for unbiased estimates only , to more general estimates that include many popular algorithms ( most notably , the orthogonal least squares and algebraic fits ) . we then show that the gradient - weighted algebraic fit is statistically efficient and describe all other statistically efficient algebraic fits . keywords : least squares fit , curve fitting , circle fitting , algebraic fit , rao - cramer bound , efficiency , functional model . |
in this paper we present the software tool .first , we provide the motivation and a short description .we then present the natural deduction system as it is done in a popular textbook and as is it done in by looking at its formalization in isabelle .this illustrates the differences between the two approaches .we also present the semantics of first - order logic as formalized in isabelle , which was used to prove the proof system of sound .thereafter we explain how is used to construct a natural deduction proof .lastly , we compare to other natural deduction assistants and consider how could be improved .we have been teaching a bachelor logic course with logic programming for a decade using a textbook with emphasis on tableaux and resolution .we have started to use the proof assistant isabelle and refutation proofs are less preferable here .the proof system of natural deduction with the introduction and elimination rules as well as a discharge mechanism seems more suitable .the natural deduction proof system is widely known , used and studied among logicians throughout the world .however , our experience shows that many of our undergraduate computer science students struggle to understand the most difficult aspects .this also goes for other proof systems .the formal language of logic can be hard to teach our students because they do not have a strong theoretical mathematical background . instead, most of the students have a good understanding of concrete computer code in a programming language .the syntax used in isabelle is in many ways similar to a programming language , and therefore a clear and explicit formalization of first - order logic and a proof system may help the students in understanding important details .formalizations of model theory and proof theory of first - order logic are rare , for example .we present the natural deduction assistant with a formalization in the proof assistant isabelle of its proof system .it can be used directly in a browser without any further installation and is available here : http://nadea.compute.dtu.dk/ is open source software developed in typescript / javascript and stored on github .the formalization of its proof system in isabelle is available here : http://logic-tools.github.io/ once is loaded in the browser about 250 kb with the jquery core library no internet connection is required . therefore can also be stored locally .we display the natural deduction proofs in two different formats .we present the proof in an explicit code format that is equivalent to the isabelle syntax , but with a few syntactic differences to make it easier to understand for someone trying to learn isabelle . in this format , we present the proof in a style very similar to that of fitch s diagram proofs .we avoid the seemingly popular gentzen s tree style to focus less on a visually pleasing graphical representation that is presumably much more challenging to implement .we find that the following requirements constitute the key ideals for any natural deduction assistant .it should be : * easy to use .* clear and explicit in every detail of the proof .* based on a formalization that can be proved at least sound , but preferably also complete .based on this , we saw an opportunity to develop which offers help for new users , but also serves to present an approach that is relevant to the advanced users . 
in a paper considering the tools developed for teaching logic over the last decade ,the following is said about assistants ( not proof assistants like isabelle but tools for learning / teaching logic ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ assistants are characterized by a higher degree of interactivity with the user .they provide menus and dialogues to the user for interaction purposes .this kind of tool gives the students the feeling that they are being helped in building the solution .they provide error messages and hints in the guidance to the construction of the answer .many of them usually offer construction of solution in natural deduction proofs . [ ... ] they are usually free licensed and of open access . _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we think that this characterization in many ways fits . while might not bring something new to the table in the form of delicate graphical features , we emphasize the fact that it has some rather unique features such as a formalization of its proof system in isabelle .we consider natural deduction as presented in a popular textbook on logic in computer science .first , we take a look substitution , which is central to the treatment of quantifiers in natural deduction. the following definition for substitution is used in ( * ? ? 
?* top ) : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ given a variable , a term and a formula we define ] in your exercises or exams , then that is what you should do ; but any reasonable implementation of substitution used in a theorem prover would have to check whether is free for in and , if not , rename some variables with fresh ones to avoid the undesirable capture of variables ._ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ we find that this way of presenting natural deduction proof systems leaves out some important notions that the students ought to learn . in our formalization such notions and their complicationsbecome easier to explain because all side conditions of the rules are very explicitly stated .we see it as one of the major advantages of presenting this formalization to students .we now present the natural deduction rules as described in the literature , again using .the first 9 are rules for classical propositional logic and the last 4 are for first - order logic .intuitionistic logic can be obtained by omitting the rule _ pbc _( proof by contradiction , called `` boole '' later ) and adding the -elimination rule ( also known as the rule of explosion ) .the rules are as follows : side conditions to rules for quantifiers : : can not occur outside its box ( and therefore not in ) . : must be free for in . : must be free for in . 
: is a new variable which does not occur outside its box .in addition there is a special copy rule : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a final rule is required in order to allow us to conclude a box with a formula which has already appeared earlier in the proof . [ ... ] the rule ` copy ' allows us to repeat something that we know already . we need to do this in this example , because the rule requires that we end the inner box with .the copy rule entitles us to copy formulas that appeared before , unless they depend on temporary assumptions whose box has already been closed . though a little inelegant, this additional rule is a small price to pay for the freedom of being able to use premises , or any other ` visible ' formulas , more than once . 
__ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ the copy rule is not needed in our formalization due to the way it manages assumptions .as it can be seen , there are no rules for truth , negation or biimplication , but the following equivalences can be used : the symbols and are arbitrary formulas .one of the unique features of is that it comes with a formalization in isabelle of its proof system .the terms and formulas of the first - order logic language are defined as the data types and ( later abbreviated and , respectively ) .the type represents predicate and function symbols ( later abbreviated ) .@ l + + + truth , negation and biimplication are abbreviations . in the syntax of our formalization, we refer to variables by use of the de bruijn indices . that is , instead of identifying a variable by use of a name , usually , , etc ., each variable has an index that determines its scope .the use of de bruijn indices instead of named variables allows for a simple definition of substitution .furthermore , it also serves the purpose of teaching the students about de bruijn indices .note that we are not advocating that de bruijn indices replace the standard treatment of variables in general .it arguably makes complex formulas harder to read , but the pedagogical advance is that the notion of scope is exercised .provability in is defined inductively as follows : { { \textsf{ok r a}}}{{\textsf{ok ( dis p q ) a } } ~~&~~ { \textsf{ok r ( p \ # a ) } } ~~&~~ { \textsf{ok r ( q \ # a)}}}\ ] ] { { \textsf{ok q a}}}{{\textsf{ok ( exi p ) a } } & ~ { \textsf{ok q ( ( sub 0 ( fun c [ ] ) p ) \ # a ) } } & { \textsf{news c ( p\#q\#a)}}}\ ] ] { { \textsf{ok ( exi p ) a}}}{{\textsf{ok ( sub 0 t p ) a}}}\ ] ] means that the formula follows from the list of assumptions and means that is a member of . the operator is between the head and the tail of a list .checks if the identifier does not occur in the any of the formulas in the list and returns the formula where the term has been substituted for the variable with the de bruijn index . instead of writing we could also use the syntax , even in isabelle , but we prefer a more programming - like approach . 
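the concrete isabelle definitions of member, news and sub are rendered only as empty tables in this extraction. to make the de bruijn treatment of substitution concrete anyway, here is a small python transcription of one standard way to define it (terms are ('var', n) or ('fun', name, args), formulas are tagged tuples); this is our illustration and may differ in detail from the sub used by the tool.

def inc_tm(t):
    # lift a term that comes from outside a binder: every index is shifted up by one
    if t[0] == 'var':
        return ('var', t[1] + 1)
    return ('fun', t[1], [inc_tm(u) for u in t[2]])

def sub_tm(u, v, t):
    # substitute term t for index v inside term u; indices above v are decremented,
    # which is what one wants when a binder is being removed
    if u[0] == 'var':
        n = u[1]
        return u if n < v else (t if n == v else ('var', n - 1))
    return ('fun', u[1], [sub_tm(w, v, t) for w in u[2]])

def sub_fm(p, v, t):
    # substitute term t for the variable with de bruijn index v in formula p
    tag = p[0]
    if tag == 'pred':
        return ('pred', p[1], [sub_tm(u, v, t) for u in p[2]])
    if tag in ('imp', 'dis', 'con'):
        return (tag, sub_fm(p[1], v, t), sub_fm(p[2], v, t))
    if tag in ('uni', 'exi'):
        return (tag, sub_fm(p[1], v + 1, inc_tm(t)))
    return p          # falsity

# as in the existential elimination rule, "sub 0 (fun c []) p" plugs a fresh constant c
# in for the variable the eliminated quantifier used to bind (index 0 of the body),
# while a variable bound further inside stays untouched:
body = ('imp', ('pred', 'p', [('var', 0)]),
               ('exi', ('pred', 'q', [('var', 0), ('var', 1)])))
print(sub_fm(body, 0, ('fun', 'c', [])))
# ('imp', ('pred', 'p', [('fun', 'c', [])]),
#  ('exi', ('pred', 'q', [('var', 0), ('fun', 'c', [])])))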
in the types we use for function spaces .the definitions of , and are as follow : @ l + + + + + + + + + + + + + + + + + + + @ l + + + + + + + + + + + + + + + + + + + to give meaning to formulas and to prove sound we need a semantics of the first - order logic language .this semantics is defined in the formalization in isabelle , and it is thus not part of the tool itself .we present the semantics below .is the environment , i.e. a mapping of variables to elements .maps function symbols to the maps they represent .these maps are from lists of elements of the universe to elements of the universe .likewise , maps predicate symbols to the maps they represent . is a type variable that represents the universe . incan be instantiated with any type .for instance , it can be instantiated with the natural numbers , the real number or strings .@ l + + + + + + + + + + + + semantics e f g ( p ) = ( x. semantics ( % n. if n = 0 then x else e ( n 1 ) ) f g p ) + semantics e f g ( p ) = ( x. semantics ( % n. if n = 0 then x else e ( n 1 ) ) f g p ) most of the cases of should be self - explanatory , but the case is complicated .the details are not important here , but in the case for it uses the universal quantifier ( ) of isabelle s higher - order logic to consider all values of the universe. it also uses the lambda abstraction operator ( % ) to keep track of the indices of the variables .likewise , the case for uses the existential quantifier ( ) of isabelle s higher - order logic .we have proved soundness of the formalization in isabelle ( shown here as a derived rule ) : {{\textsf{semantics e f g p}}}{{\textsf{ok p [ \,]}}}\ ] ] this result makes interesting to a broader audience since it gives confidence in the formulas proved using the tool .we now describe the core features of from the perspective of the user .that is , we uncover how to use to conduct and edit a proof as well as how proofs are presented . in order to start a proof, you have to start by specifying the goal formula , that is , the formula you wish to prove . to do so, you must enable editing mode by clicking the edit button in the top menu bar .this will show the underlying proof code and you can build formulas by clicking the red symbol . alternatively , you can load a number of tests by clicking the load button . at all times , once you have fully specified the conclusion of any given rule , you can continue the proof by selecting the next rule to apply .again you can do this by clicking the the red symbol . furthermore , allows for undoing and redoing editing steps with no limits .all proofs are conducted in backward - chaining mode .that is , you must start by specifying the formula that you wish to prove .you then apply the rules inductively until you reach a proof if you can find one .the proof is finished by automatic application of the rule once the conclusion of a rule is found in the list of assumptions . to start over on a new proof, you can load the blank proof by using the load button , or you can refresh the page . 
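stepping back to the semantics defined above: the environment shift in the quantifier cases ("if n = 0 then x else e (n - 1)") is exactly what makes the de bruijn indices work. the following python sketch makes the definition executable by restricting it to a finite universe u, so that the quantifiers can be checked by enumeration; it reuses the tuple encoding of the previous sketch and is only our illustration of the isabelle definition.

def eval_tm(e, f, t):
    # e: list of universe elements, index 0 first; f: function interpretation
    if t[0] == 'var':
        return e[t[1]]
    return f(t[1], [eval_tm(e, f, u) for u in t[2]])

def semantics(U, e, f, g, p):
    tag = p[0]
    if tag == 'falsity':
        return False
    if tag == 'pred':
        return g(p[1], [eval_tm(e, f, t) for t in p[2]])
    if tag == 'con':
        return semantics(U, e, f, g, p[1]) and semantics(U, e, f, g, p[2])
    if tag == 'dis':
        return semantics(U, e, f, g, p[1]) or semantics(U, e, f, g, p[2])
    if tag == 'imp':
        return (not semantics(U, e, f, g, p[1])) or semantics(U, e, f, g, p[2])
    if tag == 'uni':       # shift the environment: the new element gets index 0
        return all(semantics(U, [x] + e, f, g, p[1]) for x in U)
    if tag == 'exi':
        return any(semantics(U, [x] + e, f, g, p[1]) for x in U)
    raise ValueError(tag)

# "every element is equal to some element", over a two-element universe
U = [0, 1]
p = ('uni', ('exi', ('pred', 'eq', [('var', 1), ('var', 0)])))
print(semantics(U, [], lambda s, xs: None, lambda s, xs: xs[0] == xs[1], p))   # True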
please note that any unsaved work will then be gone .in we present any given natural deduction proof ( or an attempt at one ) in two different types of syntax .one syntax follows the rules as defined in section [ dedrules ] and is closely related to the formalization in isabelle , but with a redefined and more simple syntax in terms of learning .the proof is not built as most often seen in the literature about natural deduction .usually , for each rule the premises are placed above its conclusion separated by a line .we instead follow the procedure of placing each premise of the rule on separate lines below its conclusion with an additional level of indentation . { p \land ( p \rightarrow q ) \rightarrow q } { \infer { q } { \infer { p \rightarrow q } { \infer[^{(1 ) } ] { p \land ( p \rightarrow q ) } { } } & \infer { p } { \infer[^{(1 ) } ] { p \land ( p \rightarrow q ) } { } } } } \ ] ]throughout the development of we have considered some of the natural deduction assistants currently available .several of the tools available share some common flaws. they can be hard to get started with , or depend on a specific platform . however , there are also many tools that each bring something useful and unique to the table .one of the most prominent is panda , described in .panda includes a lot of graphical features that make it fast for the experienced user to conduct proofs , and it helps the beginners to tread safely .another characteristic of panda is the possibility to edit proofs partially before combining them into a whole .it definitely serves well to reduce the confusion and complexity involved in conducting large proofs .however , we still believe that the way of presenting the proof can be more explicit . in , every detailis clearly stated as part of the proof code . in that sense, the students should become more aware of the side conditions to rules and how they work .another tool that deserves mention is proofweb which is open source software for teaching natural deduction .it provides interaction between some proof assistants ( coq , isabelle , lego ) and a web interface .the tool is highly advanced in its features and uses its own syntax .also , it gives the user the possibility to display the proof in different formats .however , the advanced features come at the cost of being very complex for undergraduate students and require that you learn a new syntax .it serves as a great tool for anyone familiar with natural deduction that wants to conduct complex proofs that can be verified by the system .it may , on the other hand , prove less useful for teaching natural deduction to beginners since there is no easy way to get started . in , you are free to apply any ( applicable ) rule to a given formula , and thus , beginners have the freedom to play around with the proof system in a safe way . 
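to connect the assumption-list style of the ok rules with the boxed proof of p /\ (p -> q) -> q shown above, one can write a toy checker for a small propositional fragment (assumption, implication introduction and elimination, conjunction elimination). this is only our illustration: the tool's real rules are the ok clauses formalized in isabelle, and in the tool the proof is built backwards rather than checked after the fact.

def ok(proof, assumptions):
    # returns (valid, conclusion); a proof is a tuple naming the rule it ends with
    rule = proof[0]
    if rule == 'assume':
        return (proof[1] in assumptions, proof[1])
    if rule == 'imp_i':                                   # discharge proof[1], prove proof[2]
        valid, q = ok(proof[2], [proof[1]] + assumptions)
        return (valid, ('imp', proof[1], q))
    if rule == 'imp_e':                                   # from p -> q and p conclude q
        ok1, pq = ok(proof[1], assumptions)
        ok2, p = ok(proof[2], assumptions)
        good = ok1 and ok2 and pq[0] == 'imp' and pq[1] == p
        return (good, pq[2] if good else None)
    if rule in ('con_e1', 'con_e2'):                      # from p /\ q conclude p (resp. q)
        ok1, pq = ok(proof[1], assumptions)
        good = ok1 and pq[0] == 'con'
        return (good, (pq[1] if rule == 'con_e1' else pq[2]) if good else None)
    raise ValueError(rule)

P, Q = ('pred', 'p', []), ('pred', 'q', [])
PREMISE = ('con', P, ('imp', P, Q))                       # p /\ (p -> q)
proof = ('imp_i', PREMISE,
         ('imp_e', ('con_e2', ('assume', PREMISE)),       # p -> q
                   ('con_e1', ('assume', PREMISE))))      # p
print(ok(proof, []))   # (True, ('imp', ('con', P, ('imp', P, Q)), Q))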
furthermore , the formalized soundness result for the proof system of makes it relevant for a broader audience , since this gives confidence in that the formulas proved with the tool are actually valid .in there is support for proofs in propositional logic as well as first - order logic .we would also like to extend to more complex logic languages , the most natural step being higher - order logic .this could be achieved using the cakeml approach .other branches of logic would also be interesting , and the possibilities are numerous .apart from just extending the natural deduction proof system to support other types of logic , another option is to implement other proof systems as well .because the tool has a formalization in isabelle of its proof system , we would like to provide features that allow for a direct integration with isabelle .for instance , we would like to allow for proofs to be exported to an isabelle format that could verify the correctness of the proofs .a formal verification of the implementation would require much effort , but perhaps it could be reimplemented on top of isabelle ( although probably not in typescript / javascript ) .we would like to extend with more features in order to help the user in conducting proofs and in understanding logic .for example , the tool could be extended with step - by - step execution of the auxiliary primitive recursive functions used in the side conditions of the natural deduction rules .so far only a small group of computer science students have tested , but it will be classroom tested with around 60 bachelor students in the next semester . currently the tool has no support for student assignments and automatic feedback and/or grading . the tool could be extended such that the students are evaluated and perhaps given a score based on the proofs they conduct .it is not obvious how this could best be implemented .we hope to find the resources for the development of such features but already now we think that the tool has the potential to be one of the main ways to teach logic in mathematics and computer science .we would like to thank stefan berghofer for discussions about the formalization of natural deduction in isabelle .we would also like to thank andreas halkjr from and andreas viktor hess for comments on the paper .jrgen villadsen , anders schlichtkrull and andreas viktor hess .meta - logical reasoning in higher - order logic . accepted at 29th international symposium logica , hejnice monastery , czech republic , 15 - 19 june 2015 .olivier gasquet , franois schwarzentruber and martin strecker .panda : a proof assistant in natural deduction for all .a gentzen style proof assistant for undergraduate students .lecture notes in computer science 6680 , 8592 .springer 2011 . | we present a new software tool for teaching logic based on natural deduction . its proof system is formalized in the proof assistant isabelle such that its definition is very precise . soundness of the formalization has been proved in isabelle . the tool is open source software developed in typescript / javascript and can thus be used directly in a browser without any further installation . although developed for undergraduate computer science students who are used to study and program concrete computer code in a programming language we consider the approach relevant for a broader audience and for other proof systems as well . |
tools for enclosing the image set of factorable functions are the basis for many reliable numerical computing methods including global optimization algorithms , robust and semi - infinite optimization algorithms , as well as validated integration algorithms . here , factorable functions are functions that can be represented as a finite recursive composition of atom operations from a ( finite ) library this library typically includes binary sums , binary products , and a number of univariate atom functions such as trigonometric functions , exponential functions , or logarithms . in practice ,factorable functions over a given library are represented in the form of a computational graph , which can be obtained conveniently in most object oriented programming languages by using operator overloading or source code transformation .existing methods for computing enclosures of factorable functions can be divided into three categories : traditional interval arithmetics and its variants , arithmetics using other convex sets such as ellipsoids or zonotopes , as well as non - convex set arithmetics . in the following , advantages and limitations of these existing methods are reviewed .[ [ interval - arithmetic . ] ] interval arithmetic .+ + + + + + + + + + + + + + + + + + + + interval arithmetic is one of the oldest and most basic tools for computing enclosures of the image set of a factorable function on a given compact domain . throughout this paper, we use the notation \subseteq r \ , \mid \ , a , b \in \mathbb r , \ , a \ , \leq \ , b \ , \right\}\ ] ] to denote real valued intervals .interval arithmetic proceeds by defining bounding rules for binary sums and products as well as for all univariate atom operations in the given library .for example , for two intervals \in \mathbb i ] , their sum , product , and exponential are given by + [ c , d ] & = & [ a+b , c+d ] \ ; , \\[0.16 cm ] \label{eq::intervalproduct } [ a , b]*[c , d ] & = & \left [ \ , \min \ { ac , ad , bc , bd \ } , \ , \max \ { ac , ad , bc , bd \ } \, \right ] \ ; , \\[0.16 cm ] \text{and } \quad e^{[a , b ] } & = & \left [ e^a , e^b \right ] \ ; .\end{aligned}\ ] ] the derivation of other univariate composition rules is straightforward for most commonly used univariate atom operations . additionally , in order to ensure that an interval arithmetic is compatible with the addition and multiplication with scalars , the notation with ] .similarly , denotes the scaled interval ] if .unfortunately , one of the main limitations of interval arithmetics is that the computed interval enclosures are often much wider than the exact range of the given factorable function .this overestimation effect is mainly caused by the so called _dependency problem_. the dependency problem can already be observed for very simple functions , such as , ] , but a naive application of standard interval arithmetics gives the interval bound ] , as the analytic continuation of this function has poles at , .a promising direction towards overcoming this limitation of taylor models is the ongoing research on so - called chebychev models . for functions with one or two variableschebychev models can be constructed by the software ` chebfun ` as developed by trefethen and coworkers .chebychev models for functions with more than two variables are the focus of recent research . while computing bounds on convex sets is computationally tractable , finding tight bounds of a multivariate polynomial is itself a complex task . 
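the interval rules quoted above, including (intervalproduct), are easy to make executable, and doing so also makes the dependency problem tangible; the concrete example formula is lost in this extraction, but x - x and x(1 - x) on [0, 1] are the standard illustrations. a minimal python sketch (our own), after which we return to the question of bounding multivariate polynomials:

import math

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = float(lo), float(hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):                      # rule (intervalproduct)
        c = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(c), max(c))
    def exp(self):                                 # monotone univariate rule
        return Interval(math.exp(self.lo), math.exp(self.hi))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
print(x - x)                                # [-1.0, 1.0], although the exact range is {0}
print(x * (Interval(1.0, 1.0) - x))         # [0.0, 1.0], although the exact range is [0, 0.25]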
here , one way to compute bounds on such polynomials is to use linear matrix inequalities .other heuristics for computing range bounders for multivariate polynomials can be found in .a principal goal of this paper is to develop a non - convex set arithmetic for factorable function that exploits _ global algebraic structures _ rather than attempting to borrow methods from local and numerical analysis such as variational analysis , taylor expansions , or other polynomial approximation techniques .the following paragraphs outline two global algebraic structures that are exploited by the proposed interval superposition arithmetic , namely , _ addition theorems _ and _ additive separability _ .[ [ addition - theorems . ] ] addition theorems .+ + + + + + + + + + + + + + + + + + a rational addition theorem for a univariate atom operation with domain is a formula that expresses as a rational function of and , with being a bivariate rational function .an important example is the addition theorem for the exponential function , , which holds globally on the domain .notice that the right - hand side of the addition theorem for the sine function , , is an exception in the sense that its right - hand side depends not only and but also and .however , if we allow complex arguments , this addition theorem can alternatively be regarded as a special case of the addition theorem for the exponential function such that it can be written in the form of .a similar statement holds for the cosine function .moreover , in analogy to the univariate case , a rational addition theorem for a bivariate atom operation with domain , is a formula of the form for all with and all with . here , is a rational function with arguments .an incomplete list of examples for addition theorems , which are either in this form or in generalized versions relevant for the derivations in this paper , are collected in table [ tab::additiontheorems ] ..a list of rational addition theorems for common atom operations .the addition theorems for the sine and cosine functions can alternatively be regarded as special cases of the addition theorem for the exponential function , if complex arguments are allowed.[tab::additiontheorems ] [ cols="^,^,^ " , ] the goal of this section is to compare the performance of first order interval superposition models versus taylor models on wider domains .let denote a non - convex factorable function of the form on the two - dimensional domain \times [ 0 , \overline x_2 ] \subseteq \mathbb r^2 ] , i.e. , for and .the upper right plot in figure [ fig::result1 ] shows the overestimation of five different enclosure methods for bounding for as a function of the domain parameter ] .similarly , the lower right plot in figure [ fig::result1 ] depicts the corresponding results for , again as a function of ] the overestimation of the first order interval superposition method with yields an enclosure that is approximately times larger than the width of the exact range , i.e. 
, the relative over - approximation is approximately .this is in contrast to taylor models , which yield bounds that are more than times larger than the exact image set .the performance of taylor models of order larger than is not shown in the figure , as they perform even worse than the taylor models of order and on the analyzed , particularly large domains .here again , of course , if we would zoom in on smaller domains , we could see that increasing the taylor model order does improve the accuracy for such smaller , but this effect is well - known and therefore not analyzed further at this point .moreover , as the above example considers a function with two variables , second or other higher order interval superposition models are equivalent to exhaustive branching , i.e. , this example is not suited for making a fair comparison of such higher order methods .therefore , we refer to section [ sec::recursion ] , where first and second order interval superposition models are analyzed for a more challenging case study . in order to illustrate how the proposed interval superposition arithmetics performs for a more challenging example ,we introduce the function \frac{1}{4 } x_2 + \frac{1}{4 } x_2 x_3 \\[0.16 cm ] -x_3 ^ 2 + x_3 ^ 4 + \frac{4}{10 } x_1 x_3 + \frac{1}{10 } x_2 ^ 2 \end{array } \right ) \ ; . \label{eq::f1}\end{aligned}\ ] ] notice that is a multivariate polynomial of order . in the next stepwe define the functions recursively . of course , the function are `` simple '' polynomial functions in principle , but the actual challenge is that the order of these polynomials grows exponentially with .more precisely , is a polynomial function of order .the goal of this section is to find enclosure sets of the exact image sets on the rather large interval domain ^ 3 \subseteq \mathbb r^3 ] .of course , this conclusion changes if the functions are bounded on smaller interval domains , where taylor models tend to yield more accurate enclosures . 
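to make the notion of an interval superposition model concrete (the data structure and read-off only, not the composition rules of this paper): a first-order model stores an interval for every pair (variable, branch), and the enclosure at a point is the minkowski sum of the components selected by the branches the point falls into. the sketch below builds such a model for a separable toy function, where the construction is immediate; the crude sample-based bounding of the univariate pieces is for illustration only and is not rigorous.

import math

class SuperpositionModel:
    # A[i][k] is an interval; the domain of variable i is split into K equal branches
    # and the enclosure at a point x is the sum over i of A[i][branch of x_i]
    def __init__(self, lo, hi, K):
        self.lo, self.hi, self.K = lo, hi, K
        self.A = [[(0.0, 0.0) for _ in range(K)] for _ in lo]
    def branch(self, i, xi):
        h = (self.hi[i] - self.lo[i]) / self.K
        return min(int((xi - self.lo[i]) / h), self.K - 1)
    def enclosure_at(self, x):
        rows = [self.A[i][self.branch(i, xi)] for i, xi in enumerate(x)]
        return (sum(r[0] for r in rows), sum(r[1] for r in rows))

def crude_bounds(g, a, b, samples=200):
    # sample-based bounds of a univariate function, good enough for an illustration only
    vals = [g(a + (b - a) * j / samples) for j in range(samples + 1)]
    return (min(vals), max(vals))

# separable toy function f(x) = sin(x1) + x2**2 on [0, 3] x [-1, 1] with K = 8 branches
m = SuperpositionModel(lo=[0.0, -1.0], hi=[3.0, 1.0], K=8)
for i, g in enumerate([math.sin, lambda t: t * t]):
    h = (m.hi[i] - m.lo[i]) / m.K
    for k in range(m.K):
        m.A[i][k] = crude_bounds(g, m.lo[i] + k * h, m.lo[i] + (k + 1) * h)

print(m.enclosure_at([1.5, 0.1]))   # a narrow local enclosure of sin(1.5) + 0.01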
because the exact image sets converge linearly and because theorem [ thm::higherorderlocal ] predicts that the overestimation error of second order interval superpositions converge has the form ^{3 } \right ) } \ ; , \ ] ] with respect to the diameter of the input set , one would expect that the overestimation error converges cubically for larger input domains and then linearly , once term the linearly convergent term becomes dominant .the convergence plot of the second order interval superposition models in figure [ fig::recursion ] indicates indeed such a behavior : for the range the hausdorff distance between the second order interval superposition enclosure and the exact set converges superlinearly , although one might argue that it is hard to tell from these numerical results whether this range corresponds to a cubic convergence phase .however , for the convergence rate switches and becomes linear , since in this phase the terms with linear convergence order , , dominate recalling that these terms did not dominate early because we chose a rather large number of branches , , in this example .consequently , the numerical convergence behavior of the second order interval superposition models for can be explained by the theoretical analysis results from theorem [ thm::higherorderlocal ] .this paper has introduced an interval superposition arithmetic and illustrated the advantages of this new arithmetic compared to existing enclosure methods for factorable functions on wider domains .the construction of interval superposition models is based on derivative - free composition rules which exploit global algebraic properties such as rational addition theorems , inverse rational addition theorems , and partially separable sub - structures of the computational graph of factorable functions .the corresponding technical developments are based on dependency expansions , which can be viewed as a derivative - free , algebraic abstraction of taylor expansions .interval superposition arithmetic has polynomial run - time and storage complexity of order , which depends on the number of variables of the factorable functions and the branching accuracy .the order of this polynomial run - time bound is equal to the order of the dependency expansion .moreover , this paper has established local and global convergence estimates of the proposed arithmetic in dependence on the order of the interval superposition and in dependence on the coupling degree of the given factorable function . from a practical perspective, the main advantage of interval superposition arithmetics compared to other enclosure methods is that it yields reasonably accurate bounds of the image set of factorable functions on wider interval domains , for which existing methods often yield divergent or very conservative bounds .this advantage has been illustrated by analyzing two numerical case studies .many algorithms of current interest such as semi - infinite programming algorithms , validated integrators , as well as branch & bound methods for global optimization rely on the availability of set arithmetics for factorable functions . as interval superposition models have major advantages compared to existing set computation methods on wider domains , this novel arithmetic should be of practical relevance to everyone working on or using such algorithms . 
moreover , since interval superposition arithmetic is , on the one hand , a bounding tool , but , on the other hand , exploits the concept of coordinate aligned branching while enforcing polynomial run - time and storage requirements , this arithmetic might lead to new types of global optimization algorithms , where the branching and bounding operations are not considered as separate routines anymore . however , a deeper analysis of the corresponding interplay between the presence of global algebraic structures in factorable functions and the complexity of their associated optimization problems is beyond the scope of this paper and shall be part of future research. 99 y. abe .a statement of weierstrass on meromorphic functions which admit an algebraic addition theorem . j. math .japan 57(3):709723 , 2005 . s.a .algebraic addition theorems . adv .math . 13:2030 , 1974 .z. battles , l.n .an extension of matlab to continuous functions and operators .siam j. sci .comput . 25:17431770 , 2004 .m. berz . from taylorseries to taylor models . in nonlinear problems in accelerator physics , american institute of physics cp405 , pp.:127 , 1997 .m. berz , g. hoffsttter . computation and application of taylor polynomials with remainder bounds .comput . 4:8397 , 1998 .a. bompadre , a. mitsos , b. chachuat .convergence analysis of taylor and mccormick - taylor models .journal of global optimization 57(1):75114 , 2013 .b. buchberger , r. loos . algebraic simplification . in : computer algebra - symbolic and algebraic computation .b. buchberger , g. e. collins , r. loos ( eds . ) , pp . 1143 , 1982 .b. chachuat , b. houska , r. paulen , n. peric , j. rajyaguru , m.e .set theoretic approaches in analysis , estimation and control of nonlinear systems .ifac - papersonline volume 48(8 ) , pp:981995 , 2015 .conn , n.i.m .gould , ph.l .an introduction to the structure of large scale nonlinear optimization problems and the lancelot project . in : glowinski , r. ,lichnewsky , a. ( eds . ) computing methods in applied sciences and engineering , pp .4251 . siam , philadelphia , 1990 .conn , a.r ., gould , n.i.m . ,toint , ph.l . : improving the decomposition of partially separable functions in the context of large - scale optimization : a first approach . in : w.w .hager , d.w .hearn , p.m. pardalos ( eds . ) .large scale optimization : state of the art , pp . 8294 .kluwer academic publishers , amsterdam , 1994 .du , r.b .the cluster problem in multivariate global optimization .journal of global optimization 5(3):253265 , 1994 .eckmann , h. koch , p. wittwer . a computer - assisted proof of universality in area - preserving maps .memoirs of the ams 47:289 , 1984 .de figueiredo , j. stolfi .affine arithmetic : concepts and applications .numerical algorithms 37(1 - 4):147158 , 2004 .floudas and o. stein .the adaptative convexification algorithm : a feasible point method for semi - infinite programming .siam journal on optimization , 18(4):11871208 , 2007 .c.a . floudas .deterministic global optimization : theory , methods and applications .springer science & business media , vol .37 , 2013 .a. griewank and a. walther . .siam , 2008 .goemans and d.p .improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming ., 42:11151145 , 1995 .d. henrion , s. tarbouriech , and d. arzelier .pproximations for the radius of the intersection of ellipsoids : a survey ., 108(1):128 , 2001 .b. houska , f. logist , j. van impe , m. 
diehl .robust optimization of nonlinear dynamic systems with application to a jacketed tubular reactor .journal of process control , volume 22(6 ) , pp . 11521160 , 2012 .b. houska , m.e .villanueva , b. chachuat .stable set - valued integration of nonlinear dynamic systems using affine set parameterizations .siam journal on numerical analysis , 53(5 ) , pp:23072328 , 2015 .a. kurzhanski , i. valyi .ellipsoidal calculus for estimation and control .series in systems & control : foundations & applications , birkhuser , 1997 .moments , positive polynomials and their applications .imperial college press , 2009 .q. lin , j.g .methods for bounding the range of a polynomial .j. comput appl math 58:193199 , 1995 .m. neher , k.r .jackson , n.s .on taylor model based integration of odes .siam journal on numerical analysis 45:236262 , 2007 .a. nemirovski , c. roos , and t. terlaky . on maximization of quadratic form over intersection of ellipsoids with common center ., 86(3):463473 , 1999 .y. nesterov .semidefinite relaxation and non - convex quadratic optimization . , 12:120 , 1997 .a. neumaier .complete search in continuous global optimization and constraint satisfaction .acta numer . 13:271369 , 2004 .k. makino , m. berz .efficient control of the dependency problem based on taylor model methods .comput . 5(1):312 , 1999 .computability of global solutions to factorable nonconvex programs : part i convex underestimating problems .mathematical programing 10:147175 , 1976 .r. misener , c.a .glomiqo : global mixed - integer quadratic optimizer .journal of global optimization , 57(1):350 , 2013 .a. mitsos , p. lemonidis , p.i .global solution of bilevel programs with a nonconvex inner program .journal of global optimization 42.4:475513 , 2008 .a. mitsos , b. chachuat , p.i .mccormick - based relaxations of algorithms .siam journal on optimization 20(2):573601 , 2009 .moore . interval analysis .prentice - hall , englewood cliffs , nj , 2966 .moore , r.b .kearfott , m.j .introduction to interval analysis .siam , philadelphia , pa , 2009 .p.m. pardalos , s.a .quadratic programming with one negative eigenvalue is np - hard .journal of global optimization 1 ( 1 ) : 1522 , 1991 .j. rajyaguru , m.e .villanueva , b. houksa , b. chachuat .higher - order inclusions of factorable functions by chebyshev models .journal of global optimization , 2016 .( submitted ) h. ratschek , j. rokne .computer methods for the range of functions .series in mathematics and its applications , ellis horwood ltd , mathematics and its applications , chichester , uk , 1984 .a general purpose global optimization software package .journal of global optimization , 8(2):201205 , 1996 .a.m. sahlodin , b. chachuat .convex / concave relaxations of parametric odes using taylor models .computers and chemical engineering 35(5):844857 , 2011 .m. tawarmalani , n.v .sahinidis . a polyhedral branch - and - cut approach to global optimization .mathematical programming , 103(2):225249 , 2005 .computing numerically with functions instead of numbers .sci . 1:919 , 2007 .a. townsend , l.n .an extension of chebfun to two dimensions .siam j. sci .comput 35(6):c495c498 , 2013 .villanueva , j. rajyaguru , b. houska , b. chachuat .ellipsoidal arithmetic for multivariate systems .comput . aided chem .eng . 37:767772 , 2015 .villanueva , b. houska , b. chachuat .unified framework for the propagation of continuous - time enclosures for parametric nonlinear odes .j. of global optim 62(3 ) , pp:575613 , 2015 .villarino . algebraic addition theorems . 
retrieved february 12 , 2016 , from the arxiv database , arxiv:1212.6471 , 2013 .a. wechsung , s.d .schaber , p.i .the cluster problem revisited .journal of global optimization 58(3):429438 , 2014 .this section briefly discusses how to derive remainder bounds for the proposed first order interval superposition arithmetic .these remainder bounds are needed in the composition rule of the first order interval superposition arithmetic ( algorithm 1 ) requiring that the inequality is satisfied for all with . in the sections belowwe verify this inequality for all rows of table [ tab::bounds ] except for the remainder bounds for the univariate minus and univariate square in the first two rows in table [ tab::bounds ] , which are easy to check and left as an exercise for the reader . for the atom function we have to bound the expression \notag\end{aligned}\ ] ] for all with .in the latter equation , we follow the concept of strategy 1 ; that is , we apply the addition theorem for the exponential function , it is convenient to introduce the auxiliary variables such that & = & e^{\omega } \left [ \sum_{i=1}^n t_i + 1 - \prod_{i=1}^n ( 1+t_i ) \right ] \ ; .\end{aligned}\ ] ] the absolute value of this expression can be bounded as this motivates to choose the central points such that takes the smallest possible value , given by in summary , we have shown that this is the remainder bound from that is listed in the fourth row of table [ tab::bounds ] . the aim of this section is to find a remainder bound for the atom function on the positive domain .this is sufficient , as bounds on the domain can be found analogously .if an interval contains , the bounds are set to $ ] .the main idea is to exploit the addition theorem for the inverse function and to apply further standard manipulations for rational functions in order to simplify the result .we start with the equation & = & \frac{1}{\omega } \left ( \sum_{i=1}^n \frac{-\delta_i}{\omega+\delta_i } + \frac{\sum_{i=1}^n \delta_i}{\omega + \sum_{i=1}^n \delta_{i } } \right ) \notag \\[0.16 cm ] & = & \frac{1}{\omega } \ , \frac{1}{\omega + \sum_{i=1}^n \delta_{i } } \ , \left ( \sum_{i=1}^n \frac { \delta_i ( \delta_i - \sum_{k=1}^n \delta_k)}{\omega+\delta_i } \right ) \ ; . \notag\end{aligned}\ ] ] next , we bound the terms in the last equation separately under the assumption that ( if we do nt have , we can not ensure that is not in the interval ) , \left| \frac{\delta_i}{\omega + \delta_{i } } \right| & \leq & \max \left\ { \ , \frac{a_i - l(a_i ) } { \omega - a_i + l(a_i ) } \ , , \ ,\frac{u(a_i ) - a_i } { \omega - a_i + u(a_i ) } \ , \right\ } \ ; = \ ; s_i \ ; , \notag \\[0.16 cm ] \text{and } \quad \left| \delta_i - \sum_{k=1}^n \delta_k \right| & \leq & \mu(a ) - \omega - ( u(a_i)-a_i ) \ ; . \notag\end{aligned}\ ] ] substituting these inequalities yields this is the remainder bound that is listed in the third row of table [ tab::bounds ] .the aim of this section is to find a remainder bound for the atom function on the positive domain .we start by exploiting the inverse addition theorem for the logarithm , which leads to the equation & = & \log \left ( \frac{\prod_{i=1}^n \left ( \omega+\delta_i \right ) } { \omega^{n-1 } \left ( \omega + \sum_{i=1}^n \delta_{i } \right ) } \right ) \notag \\[0.16 cm ] & = & \log \left ( 1 + \frac{\prod_{i=1}^n \left ( \omega+\delta_i \right ) - \omega^{n-1 } \left ( \omega + \sum_{i=1}^n \delta_{i } \right ) } { \omega^{n-1 } \left ( \omega + \sum_{i=1}^n \delta_{i } \right ) } \right ) \ ; . 
\notag\end{aligned}\ ] ] next we bound the absolute value of this term by choosing the central points such that and this is the remainder bound that is listed in the fifth row of table [ tab::bounds ] .the real - valued sine and cosine functions are similar to the exponential function in the sense that an addition theorem is available and can be used for constructing remainder terms . in order to exploit this relation systematically , we use euler s formula , with .the derivation passes through the following steps . in the first step ,we derive for all the bound & = & 2 \left| \sin \left ( \pm \frac{\delta_k}{2 } \right ) \right| \notag \\[0.16 cm ] & \leq & 2 \left| \sin \left ( \left [ -\frac{u(a_k)-l(a_k)}{4},\frac{u(a_k)-l(a_k)}{4 } \right ] \right ) \right| = s_k \ ; .\end{aligned}\ ] ] here , the expression for the scalars is evaluated by using standard interval arithmetic , i.e. , we define \right ) \right| = \left\ { \begin{array}{ll } 2 \sin \left ( \frac{u(a_k)-l(a_k)}{4 } \right ) \ ; \ ; & \text{if } \ ; \ ; \frac{u(a_k)-l(a_k)}{4 } \leq \frac{\pi}{2 } \\[0.16 cm ] 2 & \text{otherwise } \end{array } \right\ } \ ; .\ ] ] the auxiliary inequalities from step 2 , in turn , can be used to establish the inequalities & \leq & \prod_{k=1}^n ( 1+s_k ) - \sum_{k=1}^n s_k - 1\end{aligned}\ ] ] as well as using an analogous argument . for the sine function , the estimate from step 3can be used to find the remainder bound & = & \left| \sum_{k=1}^n \sin \left ( \omega+\delta_k \right ) - \sin \left ( \omega + \sum_{k=1}^n \delta_{k } \right ) - ( n-1 ) \sin \left ( \omega \right ) \right| \notag \\[0.16 cm ] & = & \left| \sin(\omega ) \left ( \sum_{k=1}^n \cos \left ( \delta_k \right ) - \cos \left ( \sum_{k=1}^n \delta_{k } \right ) - ( n-1 ) \right ) + \cos(\omega ) \left ( \sum_{k=1}^n \sin \left ( \delta_k \right ) - \sin \left ( \sum_{k=1}^n \delta_{k } \right ) \right ) \right| \notag \\[0.16 cm ] & \leq & \left ( |\sin(\omega)| + |\cos(\omega)| \right ) \left ( \prod_{k=1}^n ( 1+s_k ) -\sum_{k=1}^n s_k - 1 \right ) = r_g(a ) \ ; . \notag\end{aligned}\ ] ] the corresponding bound for the atom function is given by & = & \left| \sum_{k=1}^n \cos \left ( \omega+\delta_k \right ) - \cos \left ( \omega + \sum_{k=1}^n \delta_{k } \right ) - ( n-1 ) \cos \left ( \omega \right ) \right| \notag \\[0.16 cm ] & = & \left| \cos(\omega ) \left ( \sum_{k=1}^n \cos \left ( \delta_k \right ) - \cos \left ( \sum_{k=1}^n \delta_{k } \right ) - ( n-1 ) \right ) - \sin(\omega ) \left ( \sum_{k=1}^n \sin \left ( \delta_k \right ) - \sin \left ( \sum_{k=1}^n \delta_{k } \right ) \right ) \right| \notag \\[0.16 cm ] & \leq & \left ( |\sin(\omega)| + |\cos(\omega)| \right ) \left ( \prod_{k=1}^n ( 1+s_k ) - \sum_{k=1}^n s_k - 1 \right ) = r_g(a ) \ ; .\notag\end{aligned}\ ] ] the above remainder bounds are listed in table [ tab::bounds ] . 
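for reference, the sine/cosine bound derived above is easy to transcribe: with s_k = 2 sin((u(a_k) - l(a_k))/4) when that argument is at most pi/2 and s_k = 2 otherwise, the remainder is (|sin(omega)| + |cos(omega)|)(prod_k (1 + s_k) - sum_k s_k - 1). in the python sketch below omega is simply an input, since the choice of the central points is part of the text that did not survive extraction; this transcription is ours and should be checked against table [ tab::bounds ].

import math

def sincos_remainder(intervals, omega):
    # intervals: list of (l_k, u_k) bounds of the summands a_k; transcription of r_g(a)
    s = [2.0 * math.sin((u - l) / 4.0) if (u - l) / 4.0 <= math.pi / 2.0 else 2.0
         for (l, u) in intervals]
    prod = 1.0
    for sk in s:
        prod *= 1.0 + sk
    return (abs(math.sin(omega)) + abs(math.cos(omega))) * (prod - sum(s) - 1.0)

print(sincos_remainder([(-0.1, 0.1), (-0.2, 0.2), (0.0, 0.3)], omega=0.25))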
in order to construct a remainder bound for the function on the open domain it is helpful to notice that the addition theorem for the tangent function , given by can alternatively be written in the difference form the correctness of this equation can be verified by multiplying the addition theorem for the tangent function by on both sides and by re - bracketing terms .next , a generalization of the difference formula for general sums is given by the eqaution this equation can be proven by induction .firstly , is true for , as this case reduces to .secondly , if is true for a given , we have & \overset{\eqref{eq::tangentdifference}}{= } & \tan \left ( \sum_{i=1}^{n+1 } \delta_i \right ) \tan \left ( \sum_{i=1}^{n } \delta_i \right ) \tan(\delta_{n+1 } ) + \left [ \tan \left ( \sum_{i=1}^{n } \delta_i \right ) - \sum_{i=1}^{n } \tan ( \delta_i ) \right ] \notag \\[0.16 cm ] & \overset{\eqref{eq::gentandiff}}{= } & \sum_{i=1}^{n } \tan(\delta_{i+1 } ) \tan \left ( \sum_{k=1}^i \delta_k \right)\tan \left ( \sum_{k=1}^{i+1 } \delta_k \right ) \ ; , \end{aligned}\ ] ] which completes the induction step .thus , the difference formula holds for all integers . in order to generalize the above formula further for the case ,the following algebraic manipulations are made & = & \left [ \tan \left ( \omega + \sum_{i=1}^n \delta_i \right ) - \tan(\omega ) \right ] - \sum_{i=1}^n \left [ \tan ( \omega + \delta_i ) - \tan(\omega ) \right ] \notag \\[0.16 cm ] & \overset{\eqref{eq::tangentdifference}}{= } & \tan \left ( \sum_{i=1}^n \delta_i \right ) \left [ 1 + \tan \left ( \omega + \sum_{i=1}^n \delta_i \right ) \tan(\omega ) \right ] - \sum_{i=1}^n \tan ( \delta_i ) \left [ 1 + \tan ( \omega + \delta_i ) \tan(\omega ) \right ] \notag \\[0.16 cm ] & = & \left ( \tan \left ( \sum_{i=1}^{n } \delta_i \right ) - \sum_{i=1}^{n } \tan ( \delta_i ) \right ) + \tan(\omega ) \left ( \tan \left ( \sum_{i=1}^n \delta_i \right)\tan \left ( \omega + \sum_{i=1}^n \delta_i \right ) - \sum_{i=1}^n \tan ( \delta_i ) \tan ( \omega + \delta_i ) \right)\notag \\[0.16 cm ] & \overset{\eqref{eq::gentandiff}}{= } & \rho_0(\delta ) + \tan(\omega ) \left ( \rho_0(\delta ) \tan \left ( \omega + \sum_{i=1}^n \delta_i \right ) + \sum_{i=1}^n \tan \left ( \delta_i \right ) \left [ \tan \left ( \omega + \sum_{i=1}^n \delta_i \right ) - \tan ( \omega + \delta_i ) \right ] \right ) \notag \\[0.16 cm ] & = & \rho_0(\delta ) \left[1 + \tan(\omega ) \tan \left(\omega + \sum_{i=1}^n \delta_i \right ) \right ] \notag \\[0.16 cm ] & & + \sum_{i=1}^n \tan(\omega ) \tan \left ( \delta_i \right ) \tan \left ( \sum_{k \neq i } \delta_k \right ) \left [ 1 + \tan(\omega+\delta_i)\tan \left ( \sum_{k \neq i } \delta_k \right ) \tan \left(\omega + \sum_{i=1}^n \delta_i \right ) \right ] \ ; . \notag\end{aligned}\ ] ] now , we can bound the right - hand expression by using standard interval arithmetic .this leads to a remainder bound of the form \right .\notag \\[0.16 cm ] & & \left .+ \sum_{i=1}^n \tan(\omega ) \tan \left ( s_i \right ) \tan \left ( t_i \right ) \left [ 1 + \tan ( \omega+s_i)\tan \left ( t_i \right ) \tan \left ( \omega + \sigma \right ) \right ] \right| \ ; , \notag\end{aligned}\ ] ] where we have introduced the auxiliary variables \quad \text{and } \quad \sigma = \sum_{i=1}^n s_i \ ; , \ ; \ ; \sigma = [ -\sigma , \sigma ] \ ; , \ ; \ ; t_i = [ -\sigma + s_i , \sigma - s_i ] \ ; .\ ] ] this is the remainder bound for the tangent function that is listed in the last row of table [ tab::bounds ] . 
| this paper is about a novel set based computing method , called interval superposition arithmetic , for enclosing the image set of multivariate factorable functions on a given domain . in order to represent enclosure sets , the proposed arithmetic stores a matrix of intervals . every point in the domain is associated with a sequence of interval valued components of this matrix and the superposition , i.e. , the minkowski sum , over these components is the actual enclosure of the function at this point . interval superposition arithmetic has polynomial runtime complexity with respect to the number of variables of the factorable function . it is capable of representing highly complex enclosure sets , because the number of choices for picking a sequence of components from a given interval matrix grows exponentially with respect to the size of the matrix . the composition rules that are associated with interval superposition arithmetic exploit algebraic addition theorems of atom operations as well as partially separable sub - structures of the computational graph of factorable functions . besides analyzing the accuracy and favorable convergence properties of interval superposition arithmetic , this paper illustrates the advantages of the proposed method compared to existing set arithmetics by studying numerical examples . |
the high energy frontier will soon be explored by four detector experiments recording the results of collisions of the large hadron collider (lhc) located at the european center for nuclear research (cern) in geneva, switzerland. designed to record, identify, and study the higgs boson and a wide array of potential new physics signatures, as well as b mesons and the quark-gluon plasma, these experiments ultimately hope to observe an inconsistency between nature and the standard model (sm) of high energy physics. however, since the typical production and detection probabilities for interesting processes are eleven orders of magnitude smaller than those of uninteresting sm backgrounds, these experiments must sift through forty million events per second while the lhc is colliding beams, recording roughly one in two hundred thousand events in order to accommodate bandwidth constraints. the resultant 1 to 2 petabytes (pb) of monthly data must then be processed and analyzed offline into a form which is suitable for the extraction of measurements. the unprecedented scale of the required computing resources and the complexity of the computing challenges have made computing an important element of hep. each lhc experiment has deployed its own ``computing model'' (cm), which consists of several classes (``tiers'') of facilities which together comprise a grid of resources bound together by fast links and special middleware. in the atlas experiment, the tier-0 facility at cern will perform the first-pass processing of the data. the ten national tier-1 facilities will reprocess these data with better calibrations within two months after data collection. meanwhile, the roughly thirty tier-2 facilities placed at specific universities and labs will focus on simulations and data analysis. in addition, considerable effort is directed towards the development of:
* monte carlo tools (which link theory and experiment),
* detector simulation frameworks,
* algorithmic and statistical analysis tools,
* data processing frameworks and algorithms,
* grid middleware,
* data production and management systems, and
* underlying data persistency and database infrastructure.
ultimately the performance of these software components, many of which are used by multiple experiments, determines the scale of computing resources required by hep experiments. despite the impressive scale of the lhc computing grid and the sophistication of the underlying software technologies, a dearth of computing resources will be one of the primary bottlenecks in extracting measurements from lhc data. for example, figure [ fig : cm ] shows the percentage of atlas tier 2 resources required in 2010 for fast and full simulation production as a function of the fraction of recorded data. the atlas cm expects that physics analysis activity will require roughly half of the resources at the atlas tier 2s, leaving the other half for simulation. but we see that allocating the nominal 50% of tier 2 resources to simulation limits the volume of fully simulated data to roughly 20% of the data the atlas detector will collect. this deficiency will fundamentally limit the significance of the comparisons between recorded data and theoretical predictions (via detailed detector modeling) which are necessary to make any statements about nature. therefore atlas must rely on less accurate fast simulations to produce monte carlo statistics comparable to the recorded data.
and if data needs to be resimulated, as is often the case in the first years of an experiment, cpu resources must be borrowed from analysis activity, thereby stalling the extraction of measurements. in addition, the experiment cms do not typically provide physicists with the necessary resources for the cpu-intensive activities which in the past decade have come to characterize analyses of hep data at the tevatron and the b factories. these activities include sophisticated fits, statistical analysis of large ``toy'' monte carlo models, matrix element calculations, and use of the latest discriminant techniques, such as boosted decision trees. physicists must therefore rely on leveraged resources and emerging technologies to accomplish such tasks. in tandem with these developing needs in high-energy physics, the role of computing in both business and everyday life has evolved significantly. in the early 1990s, the http protocol and html were developed in the context of allowing the hundreds of collaborating physicists in hep experiments at cern to communicate with their global group of colleagues. this development was crucial to the later transformation of the internet from an academic tool to a global medium. today, computing is seen as a commodity and a critical component of the world economy. major companies like google and amazon manage huge data centers and sell cpu cycles, disk, and bandwidth by the minute and megabyte. regular innovations in data processing, delivery, organization, and communication continuously drive significant business and social progress. figure [ fig : googletrends ] compares the google search volume for ``grid computing'' with technologies such as solid-state drives (ssds), general purpose graphics processing units (gpgpus), virtualization, and cloud computing. we see that while hep has been building large and expensive grid sites along with the necessary grid software, virtualization and cloud computing have developed a wider appeal. fortunately, efforts to take advantage of these technologies have recently begun in hep. virtualization is likely to help address problems in hep computing that can be traced back to the fact that hep experiment software generally requires specific operating systems (os) and is typically difficult to install. virtualization provides a means of providing a single file (i.e. a virtual hard drive image) which can be pre-packaged with all necessary software, including the os, and can run on any modern ``host'' machine regardless of the host hardware and os. the first benefit of virtualization is that average physicists can simply download such a file and then instantly be able to use their experiment's software on their personal computer without a great deal of expertise. the cernvm effort has been particularly successful in this area. similarly, system administrators can use virtualization to simplify the deployment of the complicated set of software packages required by grid sites. this route is particularly attractive for tier 3 sites that do not necessarily have a full-time system administrator. but perhaps the most promising application of virtualization is in the area of opportunistic computing, where idle desktop computers (for example, in university offices, classrooms, and labs) can be used to assist data simulation.
the appeal of cloud computing is the promise of on-demand access to vast computing resources hosted by companies that can provide attractive pricing due to the economy of scale. so, for example, a hep experiment could lease huge cpu resources on short notice for data simulation. it is noteworthy that cloud computing has an implicit reliance on virtualization for delivering the appropriate software environment to the cloud resource. at this point, it is not clear that such cloud computing is appropriate for hep. current cloud computing offerings such as amazon's ec2 are targeted at web hosting rather than large data processing tasks and therefore offer prohibitive pricing and performance for hep applications. what's more, the large cloud computing providers such as amazon and microsoft favor their own proprietary software solutions that lock their clients into their cloud, and they have avoided open efforts such as the ``open cloud manifesto'' by ibm. in addition, some in the industry view the cloud computing vision as an unrealistic ``myth''. nonetheless, many within the hep community are exploring the potential of cloud computing for hep. for example, nimbus now provides a means of turning amazon ec2 resources into a self-configuring cluster for hep computing. a very promising technology that is now becoming cost-effective is the ssd. because these storage devices have no mechanical or moving parts, they provide impressive input/output (i/o) rates, in particular huge gains in random read access in comparison to traditional hard drives (hds). two high-i/o hep applications that can benefit from ssds immediately come to mind. the first is ntuple data analysis, a task that is typically characterized by rapid iterations over up to terabytes of data. the second is in computing sites where hundreds of processes running on multiple cores access data stored on a single storage device. since ssds present the same interface as hds, their deployment in hep environments is rather easy. the primary limitation is then the additional cost of the ssds, which we may expect to lead to ssd/hd hybrid solutions where hds are used for long-term storage and data is prestaged to ssds for faster access. we may also note that while hep has traditionally interpreted moore's law as predicting faster processors, the industry has shifted to more cores per processor. thus we find that though many hep applications in principle lend themselves to parallelization, very little existing hep software was fundamentally designed with parallel processing on multiple cores in mind. recently, the omnipresence of dual and quad-core processors has prompted some parallelization efforts within the hep community. these efforts take advantage of the embarrassingly parallel nature of hep computations by simply running multiple instances (or threads) of the software. the only challenge then is sharing memory across these parallel processes or threads in applications that have a large memory footprint, such as simulation and reconstruction. assuming that memory cost is not a factor, the problem then becomes the processor-to-memory bandwidth, which can be saturated when a large number of cores access a large amount of memory. this bandwidth limitation constrains the performance gain when scaling to a large number of cores per processor. as a result, most hep multi-core optimization efforts are concentrated on process forking and thread safety.
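as a minimal illustration of the process-forking approach mentioned at the end of the previous paragraph, the sketch below (posix-only, with invented sizes and a toy workload) initializes a large read-only table once in the parent and then forks worker processes; because the workers only read the table, its physical memory pages remain shared through copy-on-write. this is a generic sketch and not the implementation used by any particular hep framework.

....
#include <vector>
#include <cstddef>
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // large read-only data (e.g. a geometry or field map), built once in the parent
    std::vector<double> table(50 * 1000 * 1000, 1.0);   // roughly 400 mb, purely illustrative

    const int nworkers = 4;
    for (int w = 0; w < nworkers; ++w) {
        pid_t pid = fork();
        if (pid == 0) {                      // child: pages of 'table' are shared copy-on-write
            double sum = 0.0;
            for (std::size_t i = w; i < table.size(); i += nworkers)
                sum += table[i];             // read-only access keeps the pages shared
            std::printf("worker %d done, checksum %f\n", w, sum);
            _exit(0);
        }
    }
    for (int w = 0; w < nworkers; ++w)
        wait(nullptr);                       // parent waits for all workers
    return 0;
}
....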
while central processing units (cpus) have been evolving towards more cores, graphics processing units (gpus), which were originally targeted at personal computer gaming enthusiasts, have evolved to be capable of computing traditionally performed on the cpu. these modern processors, which are now also present in many desktop and laptop computers, are known as general purpose graphics processing units (gpgpus). despite the fact that, for specific computing tasks, these gpgpus can marshal thousands of simplified processing units in parallel in order to reduce computing times by orders of magnitude, they have yet to capture the attention of hep as a whole. a very simple study of the read/write rates of root analyses illustrates the potential impact of ssds and gpgpus on the data analysis iteration rate. for this study, we consider two simple root applications: one which creates an ntuple (ttree) with random data (simple types such as bools, ints, floats, and vectors of simple types), and another that reads and histograms all quantities in these ntuples. we find that simple read/write rates stabilize with about 20 variables of each type per event (3 kb/event) and 600 events. table [ fig : ssdhd ] summarizes the results of the study. with root's data compression turned off, a single instance of each application achieves approximately a 25 mb/s read or write rate on a hard drive which provides 70 mb/s sequential reads. with compression turned on (providing a 30% file size reduction), this figure falls to 4 and 16 mb/s for reading and writing, respectively, illustrating that (de)compression is the main bottleneck in input/output bound root analyses. ideally, the gpu can be used to eliminate this bottleneck. in order to observe the benefits of the fast random access of ssds, we ran eight instances of these applications with the data stored on a single hd or ssd. we also observe that only uncompressed data writing appears to be limited by disk access, and though ssds generally provide faster rates, the improvement is significantly smaller where the i/o is limited by (de)compression.
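for concreteness, a minimal sketch of the kind of root macro used in this test is given below: it writes a ttree with a few randomly filled branches and then reads the file back, histogramming one of the variables. the branch names, the number of events, and the compression setting are illustrative choices rather than the exact configuration behind table [ fig : ssdhd ].

....
#include "TFile.h"
#include "TTree.h"
#include "TH1D.h"
#include "TRandom3.h"

void write_and_read(int nevents = 600000) {
    // write an ntuple with a few simple branches; compression can be toggled here
    TFile* fout = new TFile("ntuple.root", "RECREATE");
    fout->SetCompressionLevel(0);            // 0 = off, 1 = default compression
    TTree* tree = new TTree("t", "test ntuple");
    Int_t ivar; Float_t fvar; Bool_t bvar;
    tree->Branch("ivar", &ivar, "ivar/I");
    tree->Branch("fvar", &fvar, "fvar/F");
    tree->Branch("bvar", &bvar, "bvar/O");
    TRandom3 rng(0);
    for (int i = 0; i < nevents; ++i) {
        ivar = rng.Integer(100);
        fvar = rng.Gaus();
        bvar = (rng.Rndm() > 0.5);
        tree->Fill();
    }
    fout->Write();                           // writes the tree to disk
    fout->Close();
    delete fout;

    // read the ntuple back and histogram one of the quantities
    TH1D h("h", "fvar", 100, -5., 5.);
    h.SetDirectory(nullptr);                 // keep the histogram out of the file's ownership
    TFile* fin = TFile::Open("ntuple.root");
    TTree* t = (TTree*) fin->Get("t");
    Float_t f; t->SetBranchAddress("fvar", &f);
    for (Long64_t i = 0; i < t->GetEntries(); ++i) { t->GetEntry(i); h.Fill(f); }
    fin->Close();
    delete fin;
}
....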
[ table [ fig : ssdhd ] : read/write rates for the root test applications on hd and ssd ]

from the cell processor in the playstation 3 (ps3) to the newer-generation graphics processing units (gpus) used in desktop and laptop computers, we find the building blocks for high-performance computing (hpc) systems already present in devices we use daily. originally driven by the gaming industry, gpu architectures have recently been developed to also support general-purpose computing. these gpus offer impressive power consumption/performance and price/performance ratios. the omnipresence of these general purpose gpus has pushed industry leaders such as microsoft and apple into a race to develop strategies that take advantage of their capabilities. the raw computational horsepower of gpus is staggering. a single modern gpu provides nearly one tflops (floating-point operations per second), roughly 20 times more than a typical multi-core cpu. what's more, the trend over the past decade exhibits an exponential growth in the ratio of the computational power of gpus to cpus. while the use of computer graphics hardware for general-purpose computation has been an area of active research for many years (e.g., ), the wide deployment of gpus in the last several years has resulted in an increase in experimental research with graphics hardware. some notable examples include password cracking, artificial neural networks, solving partial differential equations (pdes), line integral convolution and lagrangian-eulerian advection, and protein folding (folding@home). in hep, the developers of the fairroot framework demonstrated a two-orders-of-magnitude acceleration of track fitting in high multiplicity environments using gpus. currently, the primary players in the gpu arena are nvidia and amd (through the purchase of ati). nearly all of nvidia's gpus, from the low-end laptop gpus to the professional tesla line, can be programmed using their compute unified device architecture (cuda), a parallel programming compiler and software environment designed for issuing and managing general purpose computations on the gpu through extensions to the standard c language. there are also standard numerical libraries for fft (fast fourier transform) and blas (basic linear algebra subroutines). ati/amd has also developed proprietary gpgpu hardware and software known as the data parallel virtual machine (dpvm). however, their focus seems to have now shifted to the open computing language (opencl), a gpu software standard originally developed by apple and then released to an open consortium which includes all the relevant companies as signatories. the primary appeal of opencl is its architecture independence, which is achieved through run-time compilation of the computing software kernels. the first publicly available implementation of opencl was released last week as part of apple's snow leopard operating system. other, less popular, gpu architectures and software include clearspeed, brookgpu, amd stream computing, and sh.
finally, a very promising upcoming product is intel's larrabee, an x86-based many-core gpu which is rumored to be released later this year. the processing model for gpus is very different from that of cpus. whereas cpus are optimized for low latency, gpus are optimized for high throughput. gpus are essentially stream processors: hardware that operates in parallel by running a single computation on many records in a stream at once. the limitation is that each parallel computation may only process independent memory elements and cannot share memory with the others. the result is that gpus are generally suitable for computations that exhibit specific characteristics. they must be compute intensive, with a large number of arithmetic operations per i/o or global memory reference. they must be amenable to data parallelism, where the same function is applied to all records of an input stream and a number of records can be processed simultaneously without waiting for results from previous records. and they must exhibit data locality, a specific type of temporal locality common in signal and media processing applications, where data is produced once, read once or twice later in the application, and never read again. hep applications that are good candidates for gpgpu acceleration include monte carlo integration for matrix element methods, discriminant training or calculation in multivariate analysis, maximum-likelihood fitting, compression/decompression during data input/output, event generation, full or fast detector simulation, event reconstruction (in particular for the high level trigger), and detector alignment. the difficulty with employing gpgpus is that existing applications cannot be simply rebuilt to run on gpgpus, but must rather treat the gpu as a co-processor. the software must explicitly adopt a data-parallel processing model where the data is broken into chunks and independently processed by algorithms which are highly constrained in both their memory access and complexity. a practical approach to gpgpu-accelerating existing applications is to rewrite computational bottlenecks so as to prepare the data on the cpu, transfer it to the gpu memory, execute the computation on the gpu, and then transfer the results back. even modest acceleration of detector simulation and reconstruction using gpgpus will have a significant impact on hep computing. for reconstruction, where thousands of tracks and hundreds of thousands of calorimeter cells must be processed, any gain directly translates to higher trigger output. for simulation, where thousands of particles must be propagated through the detector and magnetic fields, gains translate into lower computing resource requirements. the practical time-scale for the deployment of such gpgpu-accelerated strategies is the lhc upgrade. while strategies for gpgpu acceleration of tracking and calorimetry are rather straightforward, the complexity of geant4 and the fact that it was not written with parallelization in mind make simulation a much more difficult problem. taking full advantage of gpgpus in simulation will likely only be possible in the next generation software (perhaps geant5). one promising interim strategy is to employ multiple parallel geant4 threads running on the cpu (see the geant4 parallelization efforts of ) that offload specific calculations (e.g. magnetic field extrapolation) to a service that can batch-perform the calculation using the gpu.
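to make the prepare-on-cpu / execute-on-gpu / copy-back pattern described above more concrete, the sketch below restructures a toy ``magnetic field extrapolation'' into a batched, data-parallel form in plain c++. the per-element formula and all numbers are invented for illustration; the comments indicate where the device-memory transfers and the kernel launch would go in an actual gpgpu port, where the per-element function becomes the device kernel.

....
#include <vector>
#include <cstddef>

// per-element computation with no shared state: this is the part that would
// become the gpu kernel (one thread per element) in a gpgpu implementation.
inline float extrapolate_step(float z, float pt, float bfield) {
    return z + 0.01f * bfield / (pt + 1.0f);   // toy formula, purely illustrative
}

// host-side driver: pack the inputs of many tracks into contiguous arrays,
// apply the same function to every element, and collect the results.
// on a gpu the loop would be replaced by (1) copying the inputs to device
// memory, (2) launching the kernel over n elements, (3) copying the results back.
std::vector<float> batch_extrapolate(const std::vector<float>& z,
                                     const std::vector<float>& pt,
                                     float bfield) {
    std::vector<float> out(z.size());
    for (std::size_t i = 0; i < z.size(); ++i)
        out[i] = extrapolate_step(z[i], pt[i], bfield);
    return out;
}
....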
| while in the early 90s high energy physics ( hep ) lead the computing industry by establishing the http protocol and the first web - servers , the long time - scale for planning and building modern hep experiments has resulted in a generally slow adoption of emerging computing technologies which rapidly become commonplace in business and other scientific fields . i will overview some of the fundamental computing problems in hep computing and then present the current state and future potential of employing new computing technologies in addressing these problems . |
the investigation of the factorization properties of scattering amplitudes at their singularities lead to important results in quantum field theory and theoretical particle physics , such as the development of new methods for phenomenological computations .in particular , integrand reduction methods , developed for one - loop diagrams and recently extended to higher loops , use the knowledge of the analytic and algebraic structure of loop integrands in order to rewrite scattering amplitudes as linear combinations of master integrals . at one loop ,integrand - reduction methods allow to express any integrand in dimensional regularization as a sum of contributions with at most five propagators in the loop , regardless of the number of external legs of the amplitude .the numerators of these contributions are _ polynomial residues _ which have a universal parametric form that does not depend on the process .this parametric form can be written as a sum of monomials in the components of the loop momentum , multiplied by unknown process - dependent coefficients .after integration , the amplitude becomes a linear combination of known integrals .the coefficients of this linear combination can be identified with a subset of the ones which parametrize the residues .therefore , the problem of the computation of any one - loop amplitude can be reduced to the one of performing a polynomial fit of the coefficients of the residues .the fit of the unknown coefficients can be efficiently performed by evaluating the numerator of the integrand on _ multiple cuts _, i.e. on values of the loop momentum such that a subset of the loop denominators vanish .the multiple - cut conditions can be viewed as _ projectors _ which isolate the residue sitting on the cut denominators .a residue can be evaluated by putting on - shell the corresponding loop propagators and subtracting from the integrand the non - vanishing contributions coming from higher - point residues .this leads to a top - down algorithm which allows to compute any one - loop amplitude with any number of external legs . within semi - numerical computations ,the algorithm is usually implemented by sampling the integrand on several solutions of the multiple cuts and subtracting at each step of the reduction all the non - vanishing contributions coming from higher - point residues .this yields a system of equations for the coefficients of each residue .the method is suited for automation and it has been implemented in several codes , some of which are public ( e.g.cuttools and samurai ) .its usage within several automated frameworks has been particularly successful and produced highly non - trivial phenomenological results . in this paperwe present a new public c++ library called ninja , which implements an alternative integrand - reduction algorithm first proposed in ref .this is based on the systematic application of the laurent series expansion to an integrand on the multiple cuts . 
after performing a suitable laurent expansion on a multiple cut , in the asymptotic limit both the integrand and the subtraction termsexhibit the same polynomial behavior as the residue .this allows one to directly identify the coefficients of the residues ( and thus the ones of the master integrals ) with the ones of the laurent expansion of the integrand , corrected by subtractions terms which can be computed once and for all as functions of a subset of the higher - point coefficients .this leads to a diagonal system of equations for each residue and to a significant reduction of the number of subtraction terms which affect the computation of lower - point contributions .ninja takes as input the numerator cast into four different forms .the first is a procedure which evaluates it as a function of the loop momentum .the others instead compute the leading terms of properly defined laurent expansions of the numerator .since the integrand of a one - loop amplitude is a rational function of the loop momentum , the laurent expansions for an integrand can be obtained via a partial fraction decomposition .ninja implements it semi - numerically via a simplified polynomial division algorithm between the expansions of the numerator and the ones of the denominators .the coefficients of the laurent expansion are then corrected by the subtraction terms and multiplied by the master integrals .these are computed by interfacing ninja with an external library which can be specified by the user .interfaces for oneloop and looptools are already provided with the distribution .the simplified subtractions and the diagonal systems of equations make the algorithm implemented in ninja significantly simpler and lighter than the traditional one .the library has been interfaced with the one - loop package gosam and has already been used to compute nlo corrections to higgs boson production in association with a top - quark pair and a jet and several six- , seven- and eight - point amplitudes involving both massive and massless particles as external states or propagating in the loop .these applications showed that ninja has better performance and numerical stability than implementations of traditional integrand reduction algorithms .in particular , ref . provides a detailed analysis of the performance and accuracy of this library . with this paper ,we make ninja publicly available as a standalone library which can be interfaced to other packages and frameworks for one - loop computations . in order to simplify the generation of the numerator expansions and the corresponding source code needed by ninja as input , we distribute with the library a small python package called ninjanumgen which uses form-4 in order to compute the expansions and produce an optimized source code which can be used by the library . in this paper , besides the description of the implementation of the algorithm and the usage of the library, we also propose a method for the extension of the integrand reduction via laurent expansion for the computation of integrals whose rank is one unit higher than the number of loop denominators .this method is not present elsewhere in the literature and has been implemented in ninja , allowing thus to use the library for computations in non - renormalizable and effective theories as well .the paper is organized as follows . in section [ sec : laurent ] we give a review of the laurent expansion method for the integrand reduction of one - loop amplitudes . 
in section [ sec : ninja ] we discuss its semi - numerical implementation in the library ninja .the usage of the library is explained , with the help of simple examples , in section [ sec : usage ] . in section[ sec : examples ] we give a description of the examples which are distributed with the library , giving a better view of its usage and capabilities . appendix [ sec : ninjanumgen ] gives more details on the usage of the package ninjanumgen . in appendix[ sec : hr ] we present the extension of the algorithm to higher - rank integrals .appendix [ sec : intlib ] gives more information on the interface between ninja and the libraries of master integrals .since this paper is rather technical , the reader which is mostly interested in the usage of the library might want to read sections [ sec : usage ] and [ sec : examples ] first , referring to the previous sections or the appendices at a later time if needed .in this section we review the _ integrand reduction via laurent - expansion method _ for the computation of one - loop integrals .the method is based on the systematic application of the laurent series expansion on the universal integrand decomposition of one - loop amplitudes , which allows to reduce any amplitude as a linear combination of known master integrals .a generic contribution to an -point one - loop amplitude in dimensional regularization has the form in the previous equation , the integrand is a _ rational function _ of the components of the -dimensional loop momentum , with .the numerator is a process - dependent polynomial in , while the denominators are quadratic polynomials in and correspond to feynman loop propagators , the function appearing in eq .is a conventional normalization factor given by as a function of the renormalization scale and the dimension .the -dimensional loop momentum can be split into a four - dimensional part and a -dimensional part as the numerator will therefore be a polynomial in the four components of and the extra - dimensional variable .every one - loop integrand in dimensional regularization can be decomposed as sum of integrands having five or less loop denominators where the second sum on the r.h.s. runs over all the subsets of the denominator indexes containing elements .the _ residues _ appearing in eq .are irreducible polynomials , i.e.polynomials which do not contain terms proportional to the corresponding loop denominators .these residues have a universal , process - independent parametric form in terms of unknown , process - dependent coefficients . for any set of denominators with , one can build a -dimensional basis of massless momenta .the first two elements of the basis are linear combinations of two external momenta , of the sub - diagram identified by the considered set of loop denominators . more explicitly , we define with where the momenta were defined in eq . .if the sub - diagram has less than two independent external momenta , the remaining ones are substituted by arbitrary reference vectors in the definition of and .the momenta and are instead chosen to be orthogonal to the first two and can be defined using the spinor notation as }{2 } , \qquad e_4^\mu = \frac{\langle e_2\ , \gamma^\mu\ , e_1]}{2}.\ ] ] they satisfy the property . for subsets of denominators , we also define the vectors and with .we observe that the vector is orthogonal to all the external legs of the sub - diagram identified by the four denominators . 
by expanding the four dimensional part of the loop momentum in the basis , the numerator and the denominators can be written as polynomials in the coordinates , with the coordinates can be written as scalar products where .for we also consider the alternative expansion of the loop momentum with the universal parametric form of the residues in a renormalizable theory is where we understand that the unknown coefficients depend on the indexes of the residue ( e.g. ) , while the scalar products and depend on both the indexes of the residue and the loop momentum .the parametrization in eq . can be extended to effective and non - renormalizable theories where the rank of the numerator can be larger than the number of loop propagators .more details on the higher - rank case are given in appendix [ sec : hr ] .most of the terms appearing in eq .are _ spurious _ , i.e. they vanish after integration and do not contribute to the final result .the amplitude can thus be expressed as a linear combination of master integrals , corresponding to the non - spurious terms of the integrand decomposition , namely \bigg\ } \pagebreak[1 ] \nn & + \sum_{\{i_1 , i_2 , i_3\}}\bigg\ { c_{0}^ { ( i_1 i_2 i_3 ) } i_{i_1 i_2 i_3 } + c_{7}^ { ( i_1 i_2 i_3 ) } i_{i_1 i_2 i_3}[\mu^2 ] \bigg\ } \pagebreak[1 ] \nn & + \sum_{\{i_1 , i_2\}}\bigg\ { c_{0}^ { ( i_1 i_2 ) } i_{i_1 i_2 } + c_{1}^ { ( i_1 i_2 ) } i_{i_1 i_2}[(q+p_{i_1})\cdot e_2 ] \nn & \qquad + c_{2}^ { ( i_1 i_2 ) } i_{i_1 i_2}[((q+p_{i_1})\cdot e_2)^2 ] + c_{9}^ { ( i_1 i_2 ) } i_{i_1 i_2}[\mu^2 ] \bigg\ } \pagebreak[1 ] \nn & + \sum_{i_1 } c_{0}^ { ( i_1 ) } i_{i_1 } , \label{eq : integraldecomposition}\end{aligned}\ ] ] where \equiv h(\mu_r^2,d)\ , \int d^d \bar q { \alpha \over d_{i_1 } \cdots d_{i_k } } , \qquad i_{i_1 \cdots i_k } \equiv i_{i_1 \cdots i_k}[1].\ ] ] the coefficients of this linear combination can be identified with a subset of the coefficients of the parametric residues in eq . .since all the master integrals appearing in eq . are known , the problem of the computation of an arbitrary one - loop amplitude can be reduced to the problem of the determination of the non - spurious coefficients appearing in the parametrization of the residues .the coefficients appearing in the integrand decomposition can be computed by evaluating the integrand on _ multiple cuts _ ,i.e. on values of the loop momentum such that a subset of loop denominators vanish .more in detail , the coefficients of a -point residue can be determined by evaluating the integrand on the corresponding -ple cut .for these values of the loop momentum , the only non - vanishing contributions of the integrand decomposition are the ones coming from the residue in consideration and from all the higher - point residues which have as a subset of their loop denominators . within the original integrand reduction algorithm ,the coefficients are computed by sampling the numerator of the integrand on a finite subset of the on - shell solutions , subtracting all the non - vanishing contributions coming from higher - point residues , and finally solving the resulting linear system of equations .this is therefore a top - down approach , where higher - point residues are computed first , starting from , and systematically subtracted from the integrand for the evaluation of lower - point contributions .these are referred to as subtractions at the integrand level .the integrand reduction via laurent expansion method , presented in ref . 
, improves this reduction strategy by elaborating on techniques proposed in .whenever the analytic dependence of the integrand on the loop momentum is known , this approach allows to compute the coefficients of a residue by performing a laurent expansion of the integrand with respect to one of the components of the loop momentum which are not fixed by the on - shell conditions of the corresponding multiple cut . in the asymptotic limit defined by this laurent expansion ,both the integrand and the higher - point subtractions exhibit the same polynomial behavior as the residue. therefore one can directly identify the unknown coefficients with the ones of the laurent expansion of the integrand , corrected by the contributions coming from higher - point residues .hence , by choosing a suitable laurent expansion , one obtains a diagonal system of equations for the coefficients of the residues , while the subtractions of higher - point contributions can be implemented as _corrections at the coefficient level _ which replace the subtractions at the integrand level of the original algorithm .since the polynomial structure of the residues is universal and does not depend on the process , the parametric form of the coefficient - level corrections can be computed once and for all , in terms of a subset of the higher - point coefficients .more in detail , the corrections at the coefficient level are known functions of a subset of the coefficients of 3- and 2-point residues . in particular , no subtraction term coming from 4- and 5-point contributions is ever needed .this allows to skip the computation of the ( spurious ) 5-point contributions entirely , and to completely disentangle the determination of 4-point residues from the one of lower point contributions . in the following ,we address more in detail the computation of 5- , 4- , 3- , 2- , and 1-point residues , also commonly known as _ pentagons _ , _ boxes _ , _ triangles _ , _ bubbles _ and _ tadpoles _ respectively . for simplicity , we focus on renormalizable theories , where ( up to a suitable choice of gauge ) the maximum allowed rank of the integrand is equal to the number of loop denominators and the most general parametrization of the residues is the one given in eq . .ninja can also be used for the computation of integrals whose rank exceeds the number of denominators by one .the extension of the method to the higher - rank case is discussed in appendix [ sec : hr ] .[ [ point - residues ] ] 5-point residues + + + + + + + + + + + + + + + + as mentioned above , pentagon contributions are spurious . within the original integrand reduction algorithm ,their computation is needed because they appear in the subtractions at the integrand level required for the evaluation of lower - point contributions .a 5-point residue only has one coefficient , which can easily be computed by evaluating the numerator of the integrand on the corresponding 5-ple cut . within the laurent expansion approach ,the subtraction terms coming from five - point residues always vanish in the asymptotic limits we consider , therefore their computation can be skipped .for this reason , in the library ninja the computation of pentagons is disabled by default , even though it can be enabled for debugging purposes , as explained in section [ sec : runtime ] .[ [ point - residues-1 ] ] 4-point residues + + + + + + + + + + + + + + + + the coefficient of a box contribution can be determined via four - dimensional quadruple cuts .a quadruple cut in four dimensions ( i.e. 
, ) has two solutions , and .the coefficient can be expressed in terms of these solutions as given the simplicity of eq ., this is the only coefficient which ninja computes in the same way as the traditional algorithm .the coefficient can instead be determined by evaluating the integrand on -dimensional quadruple cuts in the asymptotic limit of large .a -dimensional quadruple cut has an infinite number of solutions which can be parametrized by the ( )-dimensional variable .these solutions become simpler in the considered limit , namely where the vector and the constant are fixed by the cut conditions .the coefficient is non - vanishing only if the rank of the numerator is greater or equal to the number of loop denominators . in a renormalizable theory, it can be found in the asymptotic limit as the leading term of the laurent expansion of the integrand the other coefficients of the boxes are spurious and , since they neither contribute to the final result nor to the subtraction terms , their computation can be skipped .[ [ point - residues-2 ] ] 3-point residues + + + + + + + + + + + + + + + + the coefficients of the residues of a generic triangle contribution can be determined by evaluating the integrand on the solutions of the corresponding -dimensional triple cut .these can be parametrized by the variable and the free parameter , where the vector and the constant are fixed by the cut conditions .the momentum is a linear combination of and and is therefore orthogonal to and . on these solutions ,the non - vanishing contributions to the integrand decomposition are the ones of the residue , as well as the ones of the boxes and pentagons which share the three cut denominators . however , after performing a laurent expansion for large and dropping the terms which vanish in this limit , the pentagon contributions vanish , while the box contributions are constant in but they also vanish when taking the average between the parametrizations and of eq . .more explicitly , moreover , the expansion of the integrand is given by and it has the same polynomial behavior as the expansion of the residue , by comparison of eq . , and one can directly identify the ten triangle coefficients as the corresponding terms of the expansion of the integrand , hence , with the laurent expansion method , the determination of the 3-point residues does not require any subtraction of higher - point terms .[ [ point - residues-3 ] ] 2-point residues + + + + + + + + + + + + + + + + the coefficients of a generic 2-point residue can be evaluated on the on - shell solutions of the corresponding double cut , which can be parametrized as in terms of the three free parameters , and , while the constants and are fixed by the on - shell conditions . 
after evaluating the integrand on these solutions and performing a laurent expansion for , the only non - vanishing subtraction terms come from the triangles , even though the integrand and the subtraction terms are rational functions , in the asymptotic limit they both have the same polynomial behavior as the residue , namely \\ \frac{\delta_{i_1 i_2 j}(q_+,\mu^2)}{d_j } = { } & c_{s_3,0}^{(j)}+ c_{s_3,9}^{(j)}\ , { \mu^2}+ c_{s_3,1}^{(j)}\ , { x } + c_{s_3,2}^{(j)}\ , { x^2 } - \big ( c_{s_3,5}^{(j ) } + c_{s_3,8}^{(j ) } { x}\big ) { t } + c_{s_3,6}^{(j)}\ , { t^2}+\o(1/{t } ) \nn \frac{\delta_{i_1 i_2 j}(q_-,\mu^2)}{d_j } = { } & c_{s_3,0}^{(j)}+ c_{s_3,9}^{(j)}\ , { \mu^2}+ c_{s_3,1}^{(j)}\ , { x } + c_{s_3,2}^{(j)}\ , { x^2 } - \big ( c_{s_3,3}^{(j ) } + c_{s_3,7}^{(j ) } { x}\big ) { t } + c_{s_3,4}^{(j)}\ , { t^2}+\o(1/{t } ) \label{eq : bubblessubexp } \pagebreak[1 ] \\ \delta_{i_1i_2}(q_+,\mu^2 ) = { } & c_0+c_9\ , { \mu^2}+c_1\ , ( e_1\cdot e_2)\ , { x } + c_2 \ , ( e_1\cdot e_2)^2\ , { x^2}\nn & + \big(c_5 + c_8 \ , ( e_1\cdot e_2)\ , { x}\big)\ , ( e_3\cdot e_4)\ , { t } + c_6\ , ( e_3\cdot e_4)^2\ , { t^2}+\o(1/{t } ) \nn \delta_{i_1 i_2}(q_-,\mu^2 ) = { } & c_0+c_9\ , { \mu^2}+c_1\ , ( e_1\cdot e_2)\ , { x } + c_2\ , ( e_1\cdot e_2)^2\ , { x^2 } \nn & + \big(c_3 + c_7 \ , ( e_1\cdot e_2)\ , { x}\big)\ , ( e_3\cdot e_4)\ , { t } + c_4\ , ( e_3\cdot e_4)^2\ , { t^2}+\o(1/{t } ) .\label{eq : bubbledeltaexp}\end{aligned}\ ] ] the coefficients of the expansion of the subtractions terms in eq.s are known parametric functions of the triangle coefficients .hence , the subtraction of the triangle contributions can be implemented by applying coefficient - level corrections to the terms appearing in the expansion of the integrand .more explicitly , by inserting eq.s , and in eq .one gets [ [ point - residues-4 ] ] 1-point residues + + + + + + + + + + + + + + + + the only non - spurious coefficient of a tadpole residue can be computed by evaluating the integrand on solutions of the single cut . for this purpose, one can consider 4-dimensional solutions of the form parametrized by the free variable . in the asymptotic limit ,only bubble and triangle subtraction terms are non - vanishing , similarly to the case of the 2-point residues , in this limit the integrand and the subtraction terms exhibit the same polynomial behavior as the residue , i.e. putting everything together , the coefficient of the tadpole integral can be identified with the corresponding one in the expansion of the integrand , corrected by coefficient - level subtractions from bubbles and triangles the subtraction terms and , coming from 2-point and 3-point contributions respectively , are known parametric functions of the coefficients of the corresponding higher - point residues .the c++ library ninja provides a semi - numerical implementation of the laurent expansion method described in section [ sec : laurent ] .the laurent series expansion is typically an analytic operation , but since a one - loop integrand is a rational function of the loop variables , its expansion can be obtained via a partial fraction decomposition between the numerator and the denominators .this is implemented in ninja via a simplified polynomial - division algorithm , which takes as input the coefficients of a parametric expansion of the numerator and computes the leading terms of the quotient of the polynomial division with respect to the uncut denominators . 
in this sectionwe describe the input needed for the reduction performed by ninja and we give further details about the implementation of the reduction .all the types , classes and functions provided by the ninja library are defined inside the ` ninja ` namespace . in particular , the types ` real ` and ` complex ` are aliases for ` double ` and ` std::complex < double > ` , unless the library was compiled in quadruple precision .classes for real and complex momenta are defined as ` realmomentum ` and ` complexmomentum ` respectively .they are wrappers of four - dimensional arrays of the corresponding floating - point types , which overload arithmetic and subscript operators .more in detail , an instance ` p ` of one of these classes represents a momentum according to the representation },\texttt{p[1]},\texttt{p[2]},\texttt{p[3]}\ } = \{e_p , x_p , y_p , z_p\},\ ] ] i.e. with the energy in the zeroth component , followed by the spatial components .the inputs needed from the reduction algorithm implemented in ninja are the momenta and the masses of the loop denominators defined in eq . , besides the numerator of the integrand .the latter must be cast in four different forms , one of which is optional .the c++ implementation requires the numerator to be an instance of a class inherited from the abstract class ` ninja::numerator ` .the latter defined as .... class numerator { public : virtual complex evaluate(const ninja::complexmomentum & , const ninja::complex & , int cut , const ninja::partitionint partition [ ] ) = 0 ; virtual void muexpansion(const ninja::complexmomentum [ ] , const ninja::partitionint partition [ ] , ninja::complex c [ ] ) { } virtual void t3expansion(const ninja::complexmomentum & , const ninja::complexmomentum & , const ninja::complexmomentum & , const ninja::complex & ,int mindeg , int cut , const ninja::partitionint partition [ ] , ninja::complex c [ ] ) = 0 ; virtual void t2expansion(const ninja::complexmomentum & , const ninja::complexmomentum & , const ninja::complexmomentum & , const ninja::complexmomentum & , const ninja::complex [ ] , int mindeg , int cut , const ninja::partitionint partition [ ] , ninja::complex c [ ] ) = 0 ; virtual ~numerator ( ) { } } ; .... the input parameters ` cut ` and ` partition ` are common to more methods and give information about the multiple cut where ninja is currently evaluating the numerator .although this information is not always necessary , there might be occasions where it could be useful for an efficient evaluation of the numerator .the integer ` cut ` is equal to if the numerator is being evaluated on a -ple cut , with .this parameter is not given in the method ` muexpansion ` because the latter is always evaluated on quadruple cuts ( i.e. ) .the parameter ` partition ` points to an array of integers ( namely of integer type ` ninja::partitionint ` ) , with length equal to ` cut ` , containing the indexes of the cut numerators .if the user asks to perform a global test ( see section [ sec : runtime ] ) , the numerator will also be evaluated outside the solutions of the multiple cuts , in which case the parameter ` cut ` will be set to zero . as an example , if the method ` t3expansion ` is evaluated on the 3-ple cut for the determination of the 3-point residue , we will have , }=0 ] , and }=5 \(\n\)\(t^{r}\)\(r\)\(r\)\(r\) \(\n\)\(t^{j}\mu^{2k}\) ] , which are passed as parameters to the method .the maximum value of ` mindeg ` is . 
in a renormalizable theory, this implies that one can have at most 7 non - vanishing terms in this range of .it is worth observing that the expansion in eq .can be obtained from the previous one in eq . with the substitutions the terms of the expansionmust be stored in the entries of the array pointed by ` c ` , ordered by decreasing powers of .terms with the same power of should be ordered from the lowest to the highest with respect to the lexicographical order in the variables .a pseudo - implementation will have the form .... int idx = 0 ; for ( int j= ; j>=-mindeg ; --j ) for ( int l=0 ; l<=-j ; + + l ) for ( int k=0 ; 2*k<=-j - l ; + + k ) c[idx++ ] = [ ] ; .... for every phase - space point , ninja at run - time computes the parametric solutions of the multiple cuts corresponding to each residue .the laurent expansion of the integrand on these solutions is performed via a simplified polynomial division between the expansion of the numerator and the set of the uncut denominators .the coefficients of this expansion are corrected by the coefficient - level subtractions appearing in eq . and .the non - spurious coefficients are finally multiplied by the corresponding master integrals in order to obtain the integrated result as in eq . .the coefficients of the expansions of the numerator are written on a contiguous array by the numerator methods described in section [ sec : input ] .the laurent expansion is obtained via a simplified polynomial division .the latter is performed in - place on the same array , keeping only the elements which are needed for the final result . a possible implementation for an univariate expansion , with a numerator }\ ,t^r + \texttt{num[1]}\ , t^{r-1 } + \ldots + \texttt{num[nterms-1]}\ , t^{r-\texttt{nterms}+1 } + \o(t^{r-\texttt{nterms}})\ ] ] and denominator }\ , t + \texttt{d[1]}\ , + \texttt{d[2]}\ , \frac{1}{t},\ ] ] would have the form .... void division(complex num [ ] , int nterms , complex den[3 ] ) { for ( int i=0 ; i < nterms ; + + i ) { num[i ] /= den[0 ] ; if ( i+1<nterms ) { num[i+1 ] -= den[1]*num[i ] ; if ( i+2<nterms ) num[i+2 ] -= den[2]*num[i ] ; } } } .... one can check that this routine correctly replaces the first ` nterms ` elements of the array ` num ` with the first ` nterms ` leading elements of the laurent expansion of . the actual implementation in ninja , having to deal with multivariate expansions , is significantly more involved than the ` division ` procedure presented here .nevertheless , it qualitatively follows the same algorithm .the coefficients obtained by the division are then corrected by the coefficient - level subtractions and thus identified with the corresponding coefficients of the residues , as explained in section [ sec : laurent ] .once the reduction is complete , the non - spurious coefficients are multiplied by the corresponding master integrals .ninja calls the routines implementing the master integrals through a generic interface which , as in the case of the numerator , is defined in the c++ code via an abstract class ( called ` integrallibrary ` ) .this allows one to use any integral library which can be specified at run - time .more details on the implementation of this interface are given in appendix [ sec : intlib ] .the current version of ninja already implements it for two integral libraries . 
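before turning to the two built-in integral-library interfaces, a quick check of the univariate `division` routine listed above may be useful: for num = t^2 and den = t + 1, the leading terms of the quotient are t - 1 + 1/t, and the in-place update reproduces exactly these coefficients. the snippet below only exercises the pseudo-implementation shown in the text, not the multivariate routine used internally by ninja.

....
#include <complex>
#include <iostream>
typedef std::complex<double> complex;

// the univariate routine from the text, reproduced verbatim
void division(complex num[], int nterms, complex den[3]) {
    for (int i = 0; i < nterms; ++i) {
        num[i] /= den[0];
        if (i + 1 < nterms) {
            num[i + 1] -= den[1] * num[i];
            if (i + 2 < nterms) num[i + 2] -= den[2] * num[i];
        }
    }
}

int main() {
    complex num[3] = {1.0, 0.0, 0.0};   // numerator t^2
    complex den[3] = {1.0, 1.0, 0.0};   // denominator t + 1
    division(num, 3, den);
    // expected leading terms of t^2/(t+1): 1*t - 1 + 1/t
    std::cout << num[0] << " " << num[1] << " " << num[2] << "\n";   // prints (1,0) (-1,0) (1,0)
    return 0;
}
....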
the first built - in interface , `ninja::avhoneloop ` , is a c++ wrapper of the routines of the oneloop library .this wrapper caches every computed integral allowing constant - time lookup of their values from their arguments .the caching of the integrals can significantly speed up the computation , especially for complex processes .every instance of the class ` avhoneloop ` has an independent cache of master integrals ( hence , one may safely use it in multi - threaded applications by using one instance of the class per thread ) .the second implementation of the interface , ` ninja::looptools ` , uses instead the looptools library , which already has an internal cache of computed integrals .in this section we explain how to use the library for the computation of a generic one - loop integral .ninja can be interfaced to any one - loop generator capable of providing the input it needs , and in particular to packages which can reconstruct the analytic dependence of the numerators on the loop momentum .an interface for the one - loop package gosam is already built in the library , and has been used for highly non - trivial phenomenological computations .an interface with the package formcalc is currently under development .the author is open to give his assistance in interfacing other packages as well . in this paperwe focus on the usage of ninja as a standalone library .we will show how to generate the numerator methods needed as input , starting from an analytic expression of the numerator , with the help of the python package ninjanumgen which is distributed with the library .we will then explain how to perform the reduction , and how to set the main available options .ninja can be obtained at the url ` http://ninja.hepforge.org ` .the library is distributed with its source code using the gnu build system ( also known as autotools ) .it can be compiled and installed with the shell commands ...../configure make make install ....this will typically install the library and the header files in sub - directories of ` /usr / local ` .the ` prefix ` option can be used in order to specify a different installation path . in this case, you might need to update your ` ld_library_path ` ( or ` dyld_library_path ` on mac os ) environment variable accordingly . in order to use ninja forthe production of phenomenological results , one must interface it with a library of master integrals .as already mentioned , interfaces to the oneloop and looptools libraries are provided ( see appendix [ sec : intlib ] for interfacing a different library ) .they can be enabled by passing to the ` configure ` script the options ` with - avholo[=flags ] ` and ` with - looptools[=flags ] ` .for instance , the following commands [ source , bash ] ---- ./configure--prefix= home / ninja ` and build the interface with the oneloop library , which must be already installed and linkable with the flags specified with the ` with - avholo ` option .we also specified the ` fcinclude ` variable with the flags which are needed to find fortran-90 modules when they are not installed in a default directory .given the importance of numerical precision in the calculation of scattering amplitudes , there is also the option to compile the library in quadruple precision ( ` with - quadruple ` ) , which uses the gcc libquadmath library . 
by using the ` ninja::real ` and ` ninja::complex ` floating point types , the same source code can be compiled both in double and quadruple precision , depending on how ninja has been configured .a full list of optional arguments for the ` configure ` script can be obtained with the command ` ./configurehelp ` . while most of them are common to every package distributed with the gnu build system , some are instead specific to the ninja library and they are described in table [ tab : configure ] . inmost of the cases , only the options for interfacing the integral libraries should be needed ..options and environment flags for the ` configure ` script . only the options which modify the default behavior of ninja are listed .[ cols="<,<",options="header " , ] in this example we consider six incoming photons .this is a non - trivial case where the ` setcutstop ` method of an ` amplitude ` class can make the computation more efficient when lower point cuts do not contribute to the total result .a generic six - photon diagram has an integrand of the form where we assumed the fermion running in the loop to be massless .the momenta are defined by one can work out the algebra , define the corresponding spinor products and vectors , and generate the input for ninja in the same way as for the four - photons case .one can also check that the terms proportional to in the final expression for the integral vanish upon integration .therefore , we can perform the simplification , or equivalently , in the numerator .moreover , one can exploit the knowledge that only the cut - constructible contributions of boxes and triangles contribute to the total result , hence we can ask ninja to stop the reduction at triple cuts with ....amp.setcutstop(3 ) ; .... and remove the rational part from the result with .... amp.onlycutconstructible ( ) ; .... which will make the computation more efficient ( in the example implemented here , the run - time is reduced by about 33% ) . in the file ` 6photons.cc ` we call the method ` evaluate ` on all the independent permutations of the external legs , generated at run - time with the function ` std::next_permutation ` of the c++ standard library .the results have been compared with the ones in ref.s as well as with a similar computation performed with samurai for several helicity choices . with this example, we discuss a possible strategy for the generation of the input needed by ninja which can be suited for more complex computations where an efficient evaluation of the numerator methods at run - time can be important .we consider the one - loop integral defined by the diagram depicted in figure [ fig : ttbarh ] , contributing to the 5-point helicity amplitude .[ sec : ttbarh ] .this picture has been generated using gosam.,scaledwidth=45.0% ] the analytic expression for the integrand of this example , which can be worked out from the feynman rules of the standard model , has been generated with the help of the package gosam and can be found in the form file ` ttbarh.frm ` .this already contains some abbreviations which are independent from the loop momentum of the diagram . 
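a numerator class following this strategy might cache its loop-momentum-independent quantities roughly as in the sketch below; the class name, header name, member names and the ` init ` signature are assumptions made only for illustration, since the expansion methods themselves are generated by ninjanumgen and the actual code is completed by hand as described next.

....
// illustrative sketch only: names and signatures are assumptions;
// the expansion methods themselves are written by ninjanumgen
#include <ninja/ninja.hh>

class TTbarHDiagram : public ninja::Numerator {
public:
  // called once per phase-space point, before the reduction starts
  void init(/* external momenta, helicity labels, ... */)
  {
    // spinor products, polarization vectors and every other abbreviation
    // that does not depend on the loop momentum are evaluated here and
    // stored in the private members below
    // abb1 = ...;  abb2 = ...;
  }

  // the expansion methods generated by ninjanumgen go here;
  // they read abb1, abb2, ... instead of recomputing them at every call

private:
  ninja::Complex abb1, abb2;   // cached q-independent abbreviations
};
....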
at run - time , these -independent abbreviations are computed only once per phase space point , making thus the evaluation of the numerator and its expansions more efficient .this analytic expression is processed by ninjanumgen which produces the numerator expansions .we also add to the numerator class ` ttbarhdiagram ` an ` init ` method which uses the spinor library described in section [ sec:4photons ] in order to compute the relevant spinor products and polarization vectors , as well as the abbreviations which do not depend on the loop momentum .these are stored as private data members of the class . for simplicity ,our result neglects the coupling constants and an overall color factor .even though we considered a single diagram and a specific helicity choice , this example illustrates a general strategy for the generation of an analytic numerator expression which is suited for the numerical evaluations performed by integrand - reduction algorithms like the one implemented in the library ninja .the full amplitude for this process has been computed in ref.s , while an additional jet has recently been added to the final state in ref.s where ninja has been used for the reduction of the corresponding integrands generated by gosam . in this examplewe show how ninja can be used in order to compute integrals whose rank is higher than the number of loop denominators .this simple test is similar to the example presented in ref .[ sec : usage ] , hence we will describe each step as in the previous case . we define a 5-point amplitude of rank 6 , with kinematics and integrand in terms of the reference vectors ( ) and the momenta running into the loop we follow the same steps outlined in section [ sec : usage ] . with ninjanumgenwe generate the methods for ninja . after writing the integrand in the form file ` mynumhr.frm ` we call the script with the command .... ninjanumgen mynumhr.frm --nlegs 5 --rank 6 -o mynumhr.cc .... which creates the file ` mynumhr.cc ` and a template for the header ` mynumhr.hh ` .once again , we define the vectors as public members of the numerator class ` diagram ` , by inserting .... public : ninja::complexmomentum v0,v1,v2,v3,v4,v5 ; .... in the class definition .a possible test program can be almost identical to the one we showed in section [ sec : dummyex4 ] , with obvious changes in the definition of the rank , the number of external legs and the reference vectors .this is implemented in the file ` simple_higher_rank_test.cc ` . in order to run this example ,the user must compile the library with the ` enable - higher_rank ` option , otherwise a call to the ` evaluate ` method of an ` amplitude ` object will cause a run - time error .as one can see , when ninjanumgen is used for the generation of the expansions , the higher - rank case is handled automatically without any intervention by the user .besides , the internal higher - rank routines of ninja will be automatically called whenever the rank is equal to ( where is the number of loop propagators ) , while in the public programming interface there is no difference with respect to the normal - rank case . in the last examples , we wish to illustrate the possibility of using ninja in a multi - threaded application .these examples are implemented using posix threads , which are a standard in unix - like operating systems , but adapting them to different programming interfaces for threads ( such as openmp ) should be straightforward . 
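for instance, with c++11 ` std::thread ` the per-thread setup discussed below could be arranged roughly as follows; the header names, capitalization and exact signatures used here are assumptions, and the body of the worker is only sketched.

....
// rough sketch of the per-thread pattern with std::thread instead of
// POSIX threads; header names and signatures are assumptions
#include <functional>
#include <thread>
#include <vector>
#include <ninja/ninja.hh>
#include <ninja/avholo.hh>

using namespace ninja;

const int NTHREADS = 4;

// each worker owns one amplitude object and one integral-library cache
void worker(Amplitude<RealMasses> & amp, AvhOneLoop & lib)
{
  amp.setIntegralLibrary(lib);
  // ... set masses and momenta for the phase-space points assigned to
  //     this thread and call the evaluate method as in the examples above
}

int main() {
  avh_olo.init(1.0);                      // global OneLoop initialization

  AvhOneLoop lib[NTHREADS];               // one independent cache per thread
  Amplitude<RealMasses> amp[NTHREADS];

  std::vector<std::thread> pool;
  for (int i = 0; i < NTHREADS; ++i)
    pool.emplace_back(worker, std::ref(amp[i]), std::ref(lib[i]));
  for (std::thread & t : pool) t.join();
  return 0;
}
....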
in order to implement a thread - safe application , one should avoid race conditions which might occur if different threads try to write on the same variables .in particular , one should avoid accessing global variables for writing from different threads .the only global variables used directly by ninja are those controlling the global options described in section [ sec : globaloptions ] .as explained in that section , these options are only meant to change the general behavior of the library for debugging purposes ( e.g. for checking that the provided numerator methods are correct ) , while in general the default options should not be changed during a phase - space integration , especially when performance is important . hence ,on the side of the ninja library , there should be no issue and one can safely call the ` evaluate ` method from different ` amplitude ` objects in different threads . during a call of the ` evaluate ` method on an ` amplitude ` object ,issues might however arise from global variables used by the chosen library of master integrals or the numerator methods .as for the numerator methods , all the examples distributed with ninja define a thread - safe numerator class ( more specifically , one can safely call numerator methods from different instances of the class in different threads ) .this is simply done by using data members of the class instead of global variables , making thus different instances of the same class completely independent .if the procedures implemented by libraries of master integrals are thread - safe , one can therefore use ninja in multi - threaded applications . as an example, one can use the class ` avhoneloop ` which , as explained in section [ sec : mis ] , wraps routines of the oneloop library and adds a cache of computed integrals .the cache is a non - static data member of the class .one can therefore create one instance of this class per thread and assign it accordingly to the ` amplitude ` objects to be evaluated in the same thread .as an example , with ....avh_olo.init(1 ) ; avhoneloop my_lib[n_threads ] ; amplitude < realmasses > amp[n_threads ] ; for ( int i=0 ; i < n_threads ; + + i ) amp[i].setintegrallibrary(my_lib[i ] ) ; .... we create ` n_threads ` amplitude objects whose ` evaluate ` method can be safely called in a separate thread ( in the first line , we called the ` init ` method on the global instance ` avh_olo ` defined in the library , in order to allow oneloop to perform its global initialization ) . in this way, different threads will also have an independent cache of master integrals .this strategy allows to build a multi - threaded application which uses ninja for the reduction of one - loop integrals .recent versions of looptools ( namely looptools-2.10 or later ) can also be used in threaded applications , since they have a mutex regulating writing access to the internal cache of integrals . inthe following we discuss the possibility to build a multi - threaded application with ninja and any other ( not necessarily thread - safe ) library of master integrals .indeed , even though ninja has obviously no control over possible issues arising from routines of external libraries , we offer an easy way to work around any potential problem . 
in this case, there is no general way to ensure that calling routines of the same integral library from different threads will not cause conflicts .however , one can avoid these conflicts by scheduling the calls of the external procedures in such a way that they are never evaluated at the same time from two or more threads .if the computation of the master integrals takes only a small fraction of the total run time ( which is usually the case when a cache of integrals is present ) , the effects of this on the performance will in general be reasonably small . within ninja , implementing a scheduled access on the routines used by a library of master integralsis straightforward . as explained more in detail in appendix[ sec : intlib ] , the generic interface used by ninja in order to call master integral procedures , has two methods called ` init ` and ` exit ` which are evaluated exactly once in each call of the ` evaluate ` method , immediately before the computation of the first master integral and after the computation of the last master integral respectively .therefore we can use mutexes ( such as the ones present the posix standard for threads ) in order to _ lock _ the calls to the master integrals in the ` init ` method and _ unlock _ them in the ` exit ` method .this makes sure that , between the calls of the ` init ` and ` exit ` methods , no other thread will use the master integral routines , hence avoiding any possible conflict . in order to make a library of master integrals thread - safe, we use the template class ` threadsafeintegrallibrary ` , which is included in the distribution .this automatically wraps an existing class derived from ` integrallibrary ` and adds to it a mutex that schedules the calls to the master integrals as explained above . as an example , defining a thread - safe version of a generic library ` baselibrary ` can be simply achieved with .... # include < ninja / thread_safe_integral_library.hh > using namespace ninja ; threadsafeintegrallibrary < baselibrary > my_lib ; .... which defines a new interface ` my_lib ` that can be made the default by calling .... setdefaultintegrallibrary(my_lib ) ; .... before any thread is created ( alternatively , we could call the ` setintegrallibrary ` method on each ` amplitude ` object , either outside or inside the threads ) . in the files ` thread_4photons.cc ` and ` thread_6photons.cc ` we repeat the examples of the four- and six - photons amplitudes , but this time we compute several phase - space points in parallel on different threads . as mentioned before, we do not need to implement other numerator classes , since the ones described in sections [ sec:4photons ] and [ sec : sixphotons ] can be safely used in multi - threaded applications . in the source files ,we implement both the approaches described in this section. the preprocessor will select the former if the oneloop interface has been enabled and the latter otherwise .the multi - threaded examples can be compiled with .... make thread - examples .... if at least one between the oneloop and looptools libraries was enabled during configuration and your system supports posix threads .a complete discussion on the implementation of multi - threaded applications for doing phenomenology at one - loop is beyond the purposes of this paper .moreover , a detailed assessment of possible advantages of this approach would generally depend on the generator of the integrands and the phase space integration . 
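the scheduling idea can be pictured with a small sketch: a wrapper that acquires a mutex in ` init ` and releases it in ` exit `, so that all master-integral calls of one evaluation happen while the lock is held. the sketch below only illustrates the mechanism and is not the actual ` threadsafeintegrallibrary ` shipped with the library; the ` init ` and ` exit ` signatures are assumptions.

....
// illustration of the locking mechanism only; the class distributed
// with ninja (ThreadSafeIntegralLibrary) may differ in its details
#include <mutex>
#include <ninja/ninja.hh>

template <typename BaseLibrary>
class LockedIntegralLibrary : public BaseLibrary {
public:
  // called once, just before the first master integral of an evaluation
  virtual void init(ninja::Real murs)
  {
    mutex_.lock();            // from now on this thread owns the integral routines
    BaseLibrary::init(murs);
  }
  // called once, just after the last master integral of an evaluation
  virtual void exit()
  {
    BaseLibrary::exit();
    mutex_.unlock();          // other threads may now call the integral routines
  }
private:
  static std::mutex mutex_;   // one lock shared by all wrappers of BaseLibrary
};

template <typename BaseLibrary>
std::mutex LockedIntegralLibrary<BaseLibrary>::mutex_;
....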
in these examples we showed that the methods implementing the reduction via laurent expansion in ninja can be safely used in multi - threaded programs .we presented the public c++ library ninja which implements the integrand reduction via laurent expansion method for the computation of one - loop amplitudes in quantum field theories .the main procedures of the library take as input the numerator of the integrand and some parametric expansions of the same , which can be generated with the help of the simple python package ninjanumgen included with the distribution .the expansions of the integrand on the multiple cuts are computed semi - numerically at run - time , via a simplified polynomial - division algorithm .some of the coefficients of the laurent expansions are thus identified with the ones which multiply the master integrals .the algorithm is light and proved to have good performance and numerical stability , hence it is suited for applications to complex one - loop processes , characterized by either several external legs or several mass scales .we described the usage of the library , in particular the generation of the input , the calls of the procedures for the reduction , and the interface to libraries of master integrals .this information can be used in order to interface the library with existing one - loop packages .we thus expect that ninja will be useful for future computations in high - energy physics , especially for those involving more complex processes .the author thanks all the other members of the gosam collaboration for the common development of a one - loop package which could be interfaced with ninja , and especially pierpaolo mastrolia , edoardo mirabella and giovanni ossola for innumerable discussions and exchanges .the author also thanks thomas hahn for his support with looptools and comments on the draft .this work was supported by the alexander von humboldt foundation , in the framework of the sofja kovalevskaja award project `` advanced mathematical methods for particle physics '' , endowed by the german federal ministry of education and research .the reduction procedures implemented in ninja take as input a class derived from the abstract class ` ninja::numerator ` .this must implement the required expansions in the corresponding methods .if the analytic expression of the numerator can be provided by the user , the source code for the methods can be automatically generated with the help of the simple python package ninjanumgen , which is distributed with the library and can be installed as explained in section [ sec : installation ] . the package can be used both as a script or as a module within python . in section [ sec : integrand ] we already gave a simple example of its usage as a script .as explained there , the user must create a file containing a form expression of the numerator of the integrand .the package uses form-4 in order to generate the expansions which are needed and produce a c++ source file with the definitions of the corresponding methods . if not already present , an header file with a sketch of the definition of the class will also be created. 
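such a generated header might look roughly like the skeleton below; the file actually produced by ninjanumgen will differ, and the included header name is an assumption.

....
// purely illustrative skeleton of a generated numerator header
#ifndef MYNUM_HH
#define MYNUM_HH

#include <ninja/ninja.hh>   // assumed header name

class Diagram : public ninja::Numerator {
public:
  // declarations of the expansion methods written by ninjanumgen
  // (their signatures are fixed by the ninja::Numerator interface)

  // user-added data members and methods go here, for instance the
  // external momenta or polarization vectors used in the examples above
  ninja::ComplexMomentum v1, v2, v3;
};

#endif // MYNUM_HH
....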
the user can complete it by adding data members and methods which are specific of this class .form allows one to define symbols between square brackets ( e.g.`[symbol_name ] ` ) , containing characters which otherwise would not be permitted in a declaration .ninjanumgen also allows the usage of such symbols in the expression of the numerator , and it will remove the brackets ( which would produce illegal c++ code ) when writing the final source files .this gives the user a wider range of possibilities , for instance using symbols which correspond to variable names containing underscores or data members of structures ( e.g. with ` [ structure_instance.data_member ] ` ) .we first give a few more details about the usage of the package as a script .it is invoked with the command .... ninjanumgen --nlegs nlegs file .... where ` file ` is the name of the file which contains the numerator expression and ` nlegs ` is the number of external legs of the loop , which is equal to the number of loop denominators .a description of all the allowed arguments can be obtained with the command .... ninjanumgen --help .... and the most important ones are : ` rank rank ` , ` -r rank ` : : rank of the numerator , by default it will be assumed to be equal to the number of external legs of the loop ` diagname diagname ` , ` -d diagname ` : : name of the numerator expression in the form file , by default it will be assumed to be ` diagram ` ` cdiagname cdiagname ` : : name of the numerator class in the generated c++ files , by default it will be the same as the form expression ` formexec formexec ` : : the form executable , the default is ` form ` ` qvar qvar ` : : name of the loop momentum variable defined in eq. , the default is ` q ` ` mu2var mu2var ` : : name of the loop variable defined in eq. , the default is ` mu2 ` ` output output ` , ` -o output ` : : name of the output source file , the default is ` ninjanumgen.cc ` ` header header ` : : c++ header file containing the definition of the numerator class : if the file does not exists , one will be created . by defaultit will have the same name as the output but with ` .hh ` extension .as mentioned , one can also use the package as a python module ( ` ninjanumgen ` ) .this contains a class ` diagramexpansion ` which can be used for the generation of the source code which implements the numerator methods .the input parameters of the constructor of this class roughly correspond to the arguments which can be used in the script .a detailed description can be obtained , after installation , by invoking python in interactive mode ( usually done with the command ` python ` ) and typing .... import ninjanumgen help(ninjanumgen.diagramexpansion ) .... the method ` writesource ` generates the source files . as a simple example , the source for the integrand we defined in section [ sec : integrand ] could have been generated within python with the commands .... # import the module import ninjanumgen # define the mandatory arguments for the constructor n_legs = 4 input_file = ' mynum.frm ' output_file = ' mynum.cc ' # define an instance of the class diagramexpansion mynum = ninjanumgen.diagramexpansion(input_file , output_file , n_legs , rank=4 ) # generate the source mynum.writesource ( ) .... we suggest to look at the python files in the ` examples ` directory for other basic examples .as pointed out in ref . , the laurent expansion method can be generalized to non - renormalizable and effective theories with higher - rank numerators . 
in a renormalizable theory , with a proper choice of gauge the rank can not be greater than the number of loop propagators .ninja , if configured with the ` enable - higher_rank ` option , can also be used for the computation of integrals with . herewe describe the generalization of the method to the higher - rank case , underlining the points where it differs from the renormalizable case . in eq ., we gave the most general parametrization of the residues in a renormalizable theory .in the higher - rank case with , such parametrization is generalized as follows the generalized integral decomposition is thus \pagebreak[1 ] \nn & + \sum_{\{i_1 , i_2\}}\bigg\ { c_{10}^ { ( i_1 i_2 ) } i_{i_1 i_2}[\mu^2\ , ( q+p_{i_1})\cdot e_2 ) ] + c_{13}^ { ( i_1 i_2 ) } i_{i_1 i_2}[((q+p_{i_1})\cdot e_2)^3 ] \bigg\ } \pagebreak[1 ] \nn & + \sum_{i_1 } \bigg\ { c_{14}\ , i_{i_1}[\mu^2 ] + c_{15}^ { ( i_1)}\ , i_{i_1}[((q+p_{i_1})\cdot e_3)((q+p_{i_1})\cdot e_4 ) ] \bigg\ } \label{eq : hrintegraldecomposition}\end{aligned}\ ] ] this higher - rank decomposition has been used for the computation of nlo corrections to higgs - boson production in association with two and three jets .other libraries which implement the reduction of higher - rank integrals are xsamurai , which extends the more traditional integrand reduction algorithm of samurai , and golem95 . while the extension of the laurent expansion method for the computation of higher - rank 3-point and 2-point residues is straightforward , for 4-point and 1-point residues some further observations are in order . herewe propose a generalization of the laurent expansion method which allows to efficiently compute the non - spurious coefficients of 4- and 1-point residues without spoiling the nice features of the algorithm , such as the simplified subtractions of higher - point contributions and the diagonal systems of equations .this generalization is not present elsewhere in the literature and has been implemented in the ninja library .[ [ point - residues-5 ] ] 4-point residues + + + + + + + + + + + + + + + + the coefficient can be computed exactly as in the renormalizable case . for the coefficient , one needs instead to keep also the next - to - leading term in the expansion described before , so that the -dimensional solutions of a quadruple cut , given in eq ., in the asymptotic limit become where it is worth noticing that can be obtained as the average of the two solutions of the corresponding four - dimensional quadruple cut . in this limit, the expansion of the integrand reads hence the leading term is now the spurious coefficient , but can still be obtained as the next - to - leading term .this can be implemented semi - numerically , by keeping the two leading terms of the expansion of the numerator and performing a polynomial division with respect to the two leading terms in the expansion of the uncut denominators which have the form given the very limited number of terms involved , the division can be implemented very efficiently in a small number of operations .more in detail , if ` num ` and ` den ` are arrays of length two containing the leading and next - to - leading terms in the expansion of the numerator and a denominator respectively , we can perform the division in place with the commands ....num[0 ] /= den[0 ] ; num[1 ] -= den[1]*num[0 ] ; num[1 ] /= den[0 ] ; .... 
which will have the effect of replacing the entries of ` num ` with the ones of the expansion of .we also observe that the computation and the subtraction of pentagons is not needed in the higher - rank case either .[ [ point - residues-6 ] ] 1-point residues + + + + + + + + + + + + + + + + on higher - rank 1-point residues we consider -dimensional solutions of the corresponding single cut of the form in terms of the free variables and . by taking the limit of the integrand and the subtraction terms evaluated on these solutions , we obtain an asymptotic polynomial expansion of the form one can check that the non - spurious coefficients of the tadpole are given in terms of the ones of the expansions above by in the higher - rank case , the ` muexpansion ` method of the numerator needs to compute both the leading and the next - to - leading term of the expansion in .the package ninjanumgen , takes care of this automatically when the specified rank is higher than the number of external legs of the loop .the information in the next paragraph is only needed for a custom implementation of the method without ninjanumgen .the ` muexpansion ` method in the higher - rank case should compute the two leading terms of the expansion in of the numerator , defined by with }} \(\n\)\(t^{r}\) \(\n\)\(t^{r-1}\)$ ] ; ....all the other methods should have instead the same definition described in section [ sec : input ] . as one can see from eq ., in the higher - rank case five new types of integral appear in the final decomposition .they are a 2-point integral of rank 3 , a 1-point integral of rank 2 , and three more integrals containing at the numerator which contribute to the rational part of the amplitude .ninja contains an implementation of all these higher - rank integrals in terms of lower - rank ones .this means that , should the user choose to interface a custom integral library ( see appendix [ sec : intlib ] ) , these higher - rank integrals would not be needed , although specifying an alternative implementation would still be possible .all the integrals of eq . which contribute to the rational part of the amplitudehave already been computed in ref . . with our choice of the normalization factor given in eq . , they read = { } & \frac{1}{6}\left ( \frac{s_{i_2 i_1 } + s_{i_3 i_2 } + s_{i_1 i_3}}{4 } - m_{i_1}^2 - m_{i_2}^2 - m_{i_3}^2 \right ) + \o(\epsilon)\\ i_{i_1 i_2}[\mu^2\ , ( ( q+p_{i_1 } ) \cdot v ) ] = { } & \frac{((p_{i_2}-p_{i_1 } ) \cdot v)}{12}\big ( s_{i_2 i_1 } - 2\ , m_{i_1}^2 - 4\ , m_{i_2}^2 \big ) + \o(\epsilon ) \\ i_{i_1}[\mu^2 ] = { } & \frac{m_{i_1}^4}{2 } + \o(\epsilon)\end{aligned}\ ] ] where were defined in eq .and is an arbitrary vector .the tadpole of rank 2 appearing in eq . can also be written as a function of the scalar tadpole integral as follows = { } m_{i_1}^2\ , \frac{(e_3\cdot e_4)}{4}\left ( i_{i_1 } + \frac{m_{i_1}^2}{2 } \right ) + \o(\epsilon).\ ] ] since the vector in the bubble integral of rank 3 appearing in eq .is massless , the corresponding integral is simply proportional to the form factor , = ( ( p_{i_2}-p_{i_1})\cdot e_2)^3\ , b_{111}(s_{i_2 i_1},m_{i_1}^2 , m_{i_2}^2).\ ] ] the form factor can be computed using the formulas of ref . , as a function of form factors of scalar integrals . in the special case with we use eq .( a.6.2 ) and ( a.6.3 ) of that reference . 
for the general case implement instead the following formula -m_{i_2}^2 \ , i_{i_2}-i_{i_2}[\mu^2 ] \nn & - 4 \ , i_{i_1 i_2}[\mu^2\ , ( ( q+p_{i_1})\cdot ( p_{i_2}-p_{i_1 } ) ) ]\nn & - 4 \ , m_{i_1}^2 \ ,i_{i_1 i_2}[(q+p_{i_1})\cdot ( p_{i_2}-p_{i_1})]\big ) \nn & + 4\ , ( m_{i_2}^2-m_{i_1}^2-s_{i_2 i_1 } ) \ , i_{i_1 i_2}[((q+p_{i_1})\cdot ( p_{i_2}-p_{i_1}))^2 ] \bigg\}.\end{aligned}\ ] ]ninja already implements interfaces for the oneloop and the looptools integral libraries .these libraries have been used in a large number of computations and provide very reliable results , hence they should suffice for most purposes . however , ninja has been designed considering the possibility of using any other library of master integrals .the master integrals are computed by calling virtual methods of the abstract class ` ninja::integrallibrary ` , which is defined in the header file ` ninja / integral_library.hh ` .therefore , any library of master integrals can be interfaced by implementing a class derived from ` integrallibrary ` .each method of the library corresponds to a different master integral appearing in eq . , which should be implemented for both real and complex internal masses ( and optionally for the massless case ) .an implementation of higher - rank integrals can also be provided but it is not needed , since ninja has a default implementation of them in terms of lower rank integrals .there are two further methods , namely ` init ` and ` exit ` .the former is called inside the method ` amplitude::evaluate ` just before the computation of the first needed master integral , while the latter is called after the last master integral has been computed .the method ` init ` takes as input the square of the renormalization scale to be used in the subsequent calls of the methods implementing the master integrals .it can also be used in order to perform any other initialization the library might need before computing the integrals .the ` exit ` method instead , does not need to be implemented and by default it will not perform any action . in section [ sec : threads ] we gave an example of a case where a non - trivial implementation of the ` exit ` method could be useful .- real masses virtual void getboxintegralrm(complex rslt[3 ] , real s21 , real s32 , real s43 , real s14 , real s31 , real s42 , real m1sq , real m2sq , real m3sq , real m4sq ) = 0 ; // - complex masses virtual void getboxintegralcm(complex rslt[3 ] , real s21 , real s32 , real s43 , real s14 , real s31 , real s42 , const complex & m1sq , const complex & m2sq , const complex & m3sq , const complex & m4sq ) = 0 ; .... 
and they must write the term of the result in the -th entry of the array ` rslt ` , for .the arguments are the invariants and the squared masses .similar methods need to be provided for 3-point , 2-point and 1-point master integrals , as described in detail in the comments inside the header file ` ninja / integral_library.hh ` .examples of implementation of this interface for the libraries oneloop and looptools can be found in the source code .more in detail , we define the instances ` ninja::avh_olo ` and ` ninja::loop_tools ` of the classes ` ninja::avhoneloop ` and ` ninja::looptools ` respectively , which implement the methods described above as wrappers of the corresponding routines in each integral library .the oneloop interface also implements a cache of master integrals on top of these routines .the cache is implemented similarly to a hash table , which allows constant - time look - up of each computed integral from its arguments .hence , the methods of the ` avhoneloop ` class will call the routines of the oneloop library only if a master integral is not found in the cache . the cache can be cleared with the class method ` avhoneloop::clearintegralcache ` . during a phase - space integration ,we suggest calling this method once per phase space point , especially for more complex processes .this method does not completely free the allocated memory , but keeps the buckets of the hash table available in order to store the integrals more efficiently in subsequent calls of the respective methods .if the user wishes to completely free the allocated memory , the method ` avhoneloop::freeintegralcache ` can be used , although in general ` clearintegralcache ` should be preferred .as already mentioned , every instance of ` avhoneloop ` has a cache of master integrals as data member. this can be useful for building multi - threaded applications , as discussed in the examples of section [ sec : threads ] .since looptools already has an internal cache of master integrals , the implementation of its interface is much simpler and only consists in wrapper of its routines .we implemented a ` clearintegralcache ` method in the ` looptools ` class as well , which in this case simply calls the routine which clears the cache of integrals in looptools . | we present the public c++ library ninja , which implements the integrand reduction via laurent expansion method for the computation of one - loop integrals . the algorithm is suited for applications to complex one - loop processes . |
perhaps the most basic estimation problem in statistics is the canonical problem of estimating a multivariate normal mean .based on the observation of a -dimensional multivariate normal random the problem is to find a suitable estimator of .the celebrated result of stein ( ) dethroned , the maximum likelihood and best location invariant estimator for this problem , by showing that , when , is inadmissible under quadratic loss from a decision theory point of view , an important part of the appeal of was the protection offered by its minimax property .the worst possible risk incurred by was no worse than the worst possible risk of any other estimator .stein s result implied the existence of even better estimators that offered the same minimax protection .he had begun the hunt for these better minimax estimators . in a remarkable series of follow - up papersstein proceeded to set the stage for this hunt .james and stein ( ) proposed a new closed - form minimax shrinkage estimator the now well - known james stein estimator , andshowed explicitly that its risk was less than for every value of when , that is , it uniformly dominated .the appeal of under was compelling .it offered the same guaranteed minimax protection as while also offering the possibility of doing much better .stein ( ) , though primarily concerned with improved confidence regions , described a parametric empirical bayes motivation for ( [ jsestimator ] ) , describinghow could be seen as a data - based approximation to the posterior mean the bayes rule which minimizes the average risk when .he here also proposed the positive - part james stein estimator , a dominating improvementover , and commented that `` it would be even better to use the bayes estimate with respect to a reasonable prior distribution . ''these observations served as a clear indication that the bayesian paradigm was to play a major role in the hunt for these new shrinkage estimators , opening up a new direction that was to be ultimately successful for establishing large new classes of shrinkage estimators .dominating fully bayes shrinkage estimators soon emerged .strawderman ( ) proposed , a class of bayes shrinkage estimators obtained as posterior means under priors for which strawderman explicitly showed that uniformly dominated and was proper bayes , when and or when and .this was especially interesting because any proper bayes was necessarily admissible and so could not be improved upon .then , stein ( ) showed that , the bayes estimator under the harmonic prior dominated when . a special case of when , was only formal bayes because is improper .undeterred , stein pointed out that the admissibility of followed immediately from the general conditions for the admissibility of generalized bayes estimators laid out by brown ( ) .a further key element of the story was brown s ( ) powerful result that all such generalized bayes rules ( including the proper ones of course ) constituted a complete class for the problem of estimating multivariate normal mean under quadratic loss .it was now clear that the hunt for new minimax shrinkage estimators was to focus on procedures with at least some bayesian motivation .perhaps even more impressive than the factthat dominated was the way stein proved it . making further use of the rich results in brown ( ) , the key to his proof was the fact that any posterior mean bayes estimator under a prior can be expressed as where is the marginal distribution of under .[ here is the familiar gradient . 
] at first glance it would appear that ( [ keyrep1 ] ) has little to do with the risk . however , stein noted that insertion of ( [ keyrep1 ] ) into , followed by expansion and an integration - by - parts identity , now known as one of stein s lemmas , yields the following general expression for the difference between the risks of and : \\[-8.5pt ] & & \quad = e_\mu \biggl[\|\nabla\log m_\pi(x)\|^2 - 2 \frac{\nabla^2 m_\pi(x)}{m_\pi ( x ) } \biggr ] \nonumber\\[-0.5pt ] \label{uber1b } & & \quad = e_\mu \bigl[-4\nabla^2 \sqrt{m_\pi(x)}\big/\sqrt{m_\pi ( x ) } \bigr].\end{aligned}\ ] ] ( here is the familiar laplacian . ) because the bracketed terms in ( [ uber1a ] ) and ( [ uber1b ] ) do not depend on ( they are unbiased estimators of the risk difference ) , the domination of by would follow whenever was such that these bracketed terms were nonnegative .as stein noted , this would be the case in ( [ uber1a ] ) whenever was superharmonic , , and in ( [ uber1b ] ) whenever was superharmonic , , a weaker condition .the domination of by was seen now to be attributable directly to the fact that the marginal ( [ margx ] ) under , a mixture of harmonic functions , is superharmonic when .however , such an explanation would not work for the domination of by , because the marginal ( [ margx ] ) under in ( [ pi_a ] ) is not superharmonic for any .indeed , as was shown later by fourdrinier , strawderman and wells ( ) , a superharmonic marginal can not be obtained with any proper prior .more importantly , however , they were able to establish that the domination by was attributable to the superharmonicity of under when ( and strawderman s conditions on ) .in fact , it also followed from their results that is superharmonic when and , further broadening the class of minimax improper bayes estimators .prior to the appearance of ( [ uber1a ] ) and ( [ uber1b ] ) , minimaxity proofs , though ingenious , had all been tailored to suit the specific estimators at hand .the sheer generality of this new approach was daunting in its scope . by restricting attention to priors that gave rise to marginal distributions with particular properties ,the minimax properties of the implied bayes rules would be guaranteed .[ sec : pred - emerges ] the seminal work of stein concerned the canonical problem of how to estimate based on an observation of .a more ambitious problem is how to use such an to estimate the entire probability distribution of a future from a normal distribution with this same unknown mean , the so - called predictive density of .such a predictive density offers a complete description of predictive uncertainty . to conveniently treat the possibility of different variances for and , we formulate the predictive problem as follows .suppose and are independent -dimensional multivariate normal vectors with common unknownmean but known variances and .letting denote the density of , the problem is to find an estimator of based on the observation of only .such a problem arises naturally , for example , for predicting based on the observation of which is equivalent to observing .this is exactly our formulation with and . for the evaluation of as an estimator of ,the analogue of quadratic risk for the mean estimation problem is the kullback leibler ( kl ) risk where denotes the density of , and is the familiar kl loss . 
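written out, the loss and risk just referred to are the standard ones for this problem, \[ L\bigl(\mu, \hat p(\cdot\,|\,x)\bigr) = \int p(y\,|\,\mu)\, \log \frac{p(y\,|\,\mu)}{\hat p(y\,|\,x)}\, dy , \qquad R_{KL}(\mu, \hat p) = \int p(x\,|\,\mu)\, L\bigl(\mu, \hat p(\cdot\,|\,x)\bigr)\, dx , \] where p(y\,|\,\mu) and p(x\,|\,\mu) denote the normal densities of y and x given \mu with covariances v_y i and v_x i respectively.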
for a ( possibly improper ) prior distribution on ,the average risk is minimized by the bayes rule \nonumber \\[-8pt ] \\[-8pt ] & = & \int p(y { |}\mu ) \pi(\mu{|}x)\,d\mu , \nonumber\end{aligned}\ ] ] the posterior mean of under ( aitchison , ) .it follows from ( [ eq : bayes ] ) that is a proper probability distribution over whenever the marginal density of is finite for all ( integrate w.r.t . and switch the order of integration ) .furthermore , the mean of ( when it exists ) is equal to , the bayes rule for estimating under quadratic loss , namely the posterior mean of .thus , also carries the necessary information for that estimation problem .note also that unless is a trivial point prior , such will not be of the form of for any .the range of the bayes rules here falls outside the target space of the densities which are being estimated .a tempting initial approach to this predictive density estimation problem is to use the simple plug - in estimator to estimate , the so - called estimative approach .this was the conventional wisdom until the appearance of aitchison ( ) .he showed that the plug - in estimator is uniformly dominated under by \hspace*{-17pt}\nonumber \\[-7pt ] \\[-8pt ] & = & \frac{1}{\ { 2\pi(v_x + v_y)\}^{{{p}/{2 } } } } \exp \biggl\ { -\frac{\|y - x\| ^2}{2(v_x + v_y ) } \biggr\},\hspace*{-17pt } \nonumber\end{aligned}\ ] ] the posterior mean of with respect to the uniform prior , the so - called predictive approach . in a related vein ,akaike ( ) pointed out that , by jensen s inequality , the bayes rule would dominate the random plug - in estimator when is a random draw from .strategies for averaging over were looking better than plug - in strategies .the hunt for predictive shrinkage estimators had turned to bayes procedures .distinct from , was soon shown to be the best location invariant predictive density estimator ; see murray ( ) and ng ( ) . that is best invariant and minimax also follows from the more recent general results of liang and barron ( ) , who also showed that is admissible when .the minimaxity of was also shown directly by george , liang and xu ( ) .thus , , rather than , here plays the role played by in the mean estimation context .not surprisingly , , the posterior mean under the uniform prior is identical to in that context .the parallels between the mean estimation problem and the predictive estimation problem came into sharp focus with the stunning breakthrough result of komaki ( ) .he proved that when , itself is dominated by the bayes rule ,\ ] ] under the harmonic prior in ( [ pi_h ] ) used by stein ( ) . shortly thereafterliang ( ) showedthat is dominated by the proper bayes rule under for which when , and when and or and , the same conditions that strawderman had obtained for his estimator .notethat in ( [ eq : pa ] ) is an extension of ( [ pi_a ] ) which depends on the constant .as before , is the special case of when .note that is now playing the `` straw - man '' role that was played by in the mean estimation problem .[ sec : theory ] the proofs of the domination of by in komaki ( ) and by in liang ( ) were both tailored to the specific forms of the dominating estimators .they did not make direct use of the properties of the induced marginal distributions of and . 
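for reference, the harmonic prior used in komaki's result, and earlier in the mean estimation problem, is \pi_h(\mu) \propto \|\mu\|^{-(p-2)} for p \ge 3 ; it is harmonic away from the origin, \nabla^2 \pi_h(\mu) = 0 for \mu \neq 0, and superharmonic on the whole parameter space, which is precisely the property exploited in the unifying theory described next.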
from the theory developed by brown ( ) and stein ( ) for the mean estimation problem, it was natural to ask if there was a theory analogous to ( [ keyrep1])([uber1b ] ) which would similarly unify the domination results in the predictive density estimation problem .as it turned out , just such a theory was established in george , liang and xu ( ) , the main results of which we now proceed to describe .the story begins with a representation , analogous to brown s representation in ( [ keyrep1 ] ) , that is available for posterior mean bayes rules in the predictive density estimation problem .a key element of the representation is the form of the marginal distributions for our context which we denote by for and a prior . in terms of our previous notation ( [ margx ] ) , .[ thm : pform ] the bayes rule in ( [ eq : bayes ] ) can be expressed as where is the bayes rule under given by ( [ eq : piu ] ) , is the marginal distribution of , and , where , is the marginal distribution of for independent and .lemma [ thm : pform ] shows how the form of is determined entirely by and the form of and .the essential step in its derivation is to factor the joint distribution of and into terms including a function of the sufficient statistic . inserting the representation ( [ eq : mform ] ) into the risk leads immediately to the following unbiased estimate for the risk difference between and : as one can see from ( [ eq : uber3 ] ) and the fact that , would be uniformly dominated by whenever is decreasing in . as if by magic , the sign of turned out to be directly linked to the same unbiased risk difference estimates ( [ uber1a ] ) and ( [ uber1b ] ) of stein ( ) .[ thm : rderiv ] \\[-8pt ] & & \quad \quad = e_{\mu , v } \biggl[\frac { \nabla^2 m_\pi(z ; v)}{m_\pi(z ; v ) } - \frac{1}{2 } \|\nabla\log m_\pi(z ; v)\|^2 \biggr ] \nonumber \\ \label{uber2a } & & \quad \quad = e_{\mu , v } \bigl [ 2\nabla^2 \sqrt{m_\pi(z ; v)}\big/\sqrt{m_\pi(z ; v ) } \bigr].\end{aligned}\ ] ] the proof of lemma [ thm : rderiv ] relies on brown s representation , stein s lemma , and the fact that any normal marginal distribution satisfies the well - known heat equation which has a long history in science and engineering ; for example , see steele ( ). combining ( [ eq : uber3 ] ) and lemma [ thm : rderiv ] with the fact that is minimax yields the following general conditions for the minimaxity of a predictive density estimator , conditions analogous to those obtained by stein for the minimaxity of a normal mean estimator .[ theo1 ] if is finite for all , then will be minimax if either of the following hold for all : is superharmonic . is superharmonic .although condition ( i ) implies the weaker condition ( ii ) above , it is included because of its convenience when it is available . since a superharmonic prior always yields a superharmonic for all , the following corollary is immediate .[ cor1 ] if is finite for all , then will be minimax if is superharmonic . because is superharmonic , it is immediate from corollary [ cor1 ] that is minimax . 
because is superharmonic for all ( under suitable conditions on ) , it is immediate from theorem [ theo1 ] that is minimax .it similarly follows that any of the improper superharmonic -priors of faith ( ) or any of the proper generalized -priors of fourdrinier , strawderman and wells ( ) yield minimax bayes rules .the connections between the unbiased risk difference estimates for the kl risk and quadratic risk problems ultimately yields the following identity : \\[-8pt ] & & \quad \quad= \frac{1}{2 } \int_{v_w}^{v_x } \frac{1}{v^2 } [ r_q(\mu , \hat\mu _ u ) - r_q(\mu , \hat\mu_\pi ) ] _ v \,dv , \nonumber\end{aligned}\ ] ] explaining the parallel minimax conditions in both problems .brown , george and xu ( ) used this identity to further draw out connections to establish sufficient conditions for the admissibility of bayes rules under kl loss , conditions analogous to those of brown ( ) and brown and hwang ( ) , and to show that all admissible procedures for the kl risk problems are bayes rules , a direct parallel of the complete class theorem of brown ( ) for quadratic risk .the james stein estimator in ( [ jsestimator ] ) provided an explicit example of how risk improvements for estimating are obtained by shrinking toward 0 by the adaptive multiplicative factor .similarly , under unimodal priors , posterior mean bayes rules shrink toward the center of , the mean of when it exists .( section [ sec : multshrink ] will describe how multimodal priors yield multiple shrinkage estimators . ) as we saw earlier , here plays the role both of and of the formal bayes estimator .the representation ( [ eq : mform ] ) reveals how analogously `` shrinks '' the formal bayes estimator , but not , by an adaptive multiplicative factor however , because must be a proper probability distribution ( whenever is always finite ) , it can not be the case that for all at any .thus , `` shrinkage '' here really refers to a reconcentration of the probability distribution of .furthermore , since the mean of is , this reconcentration , under unimodal priors , is toward the center of , as in the mean estimation case .consider , for example , what happens under which is symmetric and unimodal about 0 .figure [ fig1 ] illustrates how this shrinkage occurs for for various values of when . figure [ fig1 ] plots and as functions of when and .note first that is always the same symmetric shape centered at .when , shrinkage occurs by pushing the concentration of = toward 0 . as moves further from to and this shrinkage diminishes as becomes more and more similar to .as in the problem of mean estimation , the shrinkage by manifests itself in risk reduction over . to illustrate this , figure [ fig2 ] displays the risk difference ] at various obtained by which adaptively shrinks toward the closer of the two points and using equal weights .as in figure [ fig2 ] , we considered the case for .as the plot shows , maximum risk reduction occurs when is close to or , and goes to 0 as moves away from either of these points . 
at the same time , for each fixed , risk reduction by is larger for larger .it is impressive that the size of the risk reduction offered by is nearly the same as each of its single target counterparts .the cost of multiple shrinkage enhancement seems negligible , especially compared to the benefits .beyond their attractive risk properties , the james stein estimator and its positive - part counterpart are especially appealing because of their simple closed forms which are easy to compute .as shown by xu and zhou ( ) , similarly appealing simple closed - form predictive density shrinkage estimators can be obtained by the same empirical bayes considerations that motivate and .the empirical bayes motivation of , alluded to in section [ sec1 ] , simply entails replacing in ( [ conjugate - coef ] ) by , its unbiased estimate under the marginal distribution of when .the positive - part is obtained by using the truncated estimate which avoids an implicitly negative estimate of the prior variance . proceeding analogously , xu and zhou considered the bayesian predictive density estimate , \\[-8pt ] & & \hspace*{65pt}\frac{v_x}{v_x + \nu}v_y + \biggl(1-\frac{v_x}{v_x + \nu } \biggr)(v_x+v_y ) \biggr ) , \nonumber\end{aligned}\ ] ] when and are independent , and . replacing by its truncated unbiased estimate under the marginal distribution of , they obtained the empirical bayes predictive density estimate \\[-8pt ] & & \hphantom{\hat p_{p-2}(y { |}x ) \simn_p \biggl ( } v_y + \biggl(1-\frac{(p-2)v_x}{\|x\|^2 } \biggr)_+ v_x \biggr ) \nonumber\end{aligned}\ ] ] where , an appealing simple closed form .centered at , converges to the best invariant procedure as , and converges to as .thus , can be viewed as a shrinkage predictive density estimator that `` pulls '' toward , its shrinkage adaptively determined by the data . 
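the key fact behind this substitution is elementary: under the prior with variance \nu, the marginal distribution of x is normal with mean 0 and covariance (v_x + \nu) i, so \|x\|^2/(v_x + \nu) follows a chi-square distribution with p degrees of freedom and, for p \ge 3, \[ e \biggl[ \frac{(p-2)\, v_x}{\|x\|^2} \biggr] = \frac{v_x}{v_x + \nu} , \] since e[1/\chi^2_p] = 1/(p-2). thus (p-2)v_x/\|x\|^2 is exactly the unbiased estimate referred to above, and truncating it at 1, equivalently taking the positive part of 1 - (p-2)v_x/\|x\|^2, corresponds to not allowing a negative estimate of the prior variance \nu.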
to assess the kl risk properties of such empirical bayes estimators, xu and zhou considered the class of estimators of the form ( [ eq : positive_js_hp ] ) with replaced by a constant , a class of simple normal forms centered at shrinkage estimators of with data - dependent variances to incorporate estimation uncertainty .for this class , they provided general sufficient conditions on and the dimension for to dominate the best invariant predictive density and thus be minimax .going further , they also established an `` oracle '' inequality which suggests that the empirical bayes predictive density estimator is asymptotically minimax in infinite - dimensional parameter spaces and can potentially be used to construct adaptive minimax estimators .it appears that these minimax empirical bayes predictive densities may play the same role as the james stein estimator in such problems .it may be of interest to note that a particular pseudo - marginal empirical bayes construction that works fine for the mean estimation problem appears not to work for the predictive density estimation problem .for instance , the positive - part james stein estimator can be expressed as , where is the function with ( see stein , ) .we refer to as a pseudo - marginal because it is not a bona fide marginal obtained by a real prior .nonetheless , it plays the formal role of a marginal in the mean estimation problem , and can be used to generate further innovations such as minimax multiple shrinkage james stein estimators ( see george , ) .proceeding by analogy , it would seem that could be inserted into the representation ( [ eq : mform ] ) from lemma [ thm : pform ] to obtain similar results under kl loss .unfortunately , this does not yield a suitable minimax predictive estimator because is not a proper probability distribution .indeed , and varies with .what has gone wrong ? because they do not correspond to real priors , such pseudo - marginals are ultimately at odds with the probabilistic coherence of a valid bayesian approach .in contrast to the mean estimation framework , the predictive density estimation framework apparently requires stronger fidelity to the bayesian paradigm .moving into the multiple regression setting , stein ( ) considered the estimation of a -dimensional coefficient vector under suitably rescaled quadratic loss .he there established the minimaxity of the maximum likelihood estimators , and then proved its inadmissibility when , by demonstrating the existence of a dominating shrinkage estimator . 
in a similar vein , as one might expect , the theory of predictive density estimation presented in sections [ sec : pred - emerges ] and [ sec : theory ] can also be extended to the multiple regression framework .we here describe the main ideas of the development of this extension which appeared in george and xu ( ) .similar results , developed independently from a slightly different perspective , appeared at the same time in kobayashi and komaki ( ) .consider the canonical normal linear regression setup : where is a full rank , fixed , is a fixed matrix , and is a common unknown regression coefficient .the error variance is assumed to be known , and set to be without loss of generality .the problem is to find an estimator of of the predictive density , evaluating its performance by kl risk where is the kl loss between the density and its estimator the story begins with the result , analogous to aitchison s ( ) for the normal mean problem , that the plug - in estimator , where is the least squares estimate of based on , is dominated under kl risk by the posterior mean of , the bayes rule under the uniform prior \\[-8pt ] & & { } \times\exp \biggl\ { - \frac { { \mathit{rss}}_{x , y } - { \mathit{rss}}_{x}}{2 } \biggr\}. \nonumber\end{aligned}\ ] ] here , too , is minimax ( liang , ; liang and barron , ) and plays the straw - man role of the estimator to beat .the challenge was to determine which priors would lead to bayes rules which dominated , and hence would be minimax .analogously to the representation ( [ eq : mform ] ) in lemma [ thm : pform ] for the normal mean problem , the following representation for a bayes rule here , was the key to meeting this challenge .[ thm : pform : reg ] the bayes rule can be expressed as where , is the least squares estimates of based on , and based on and , and is the marginal distribution of under .the representation ( [ eq : mform : reg ] ) leads immediately to the following analogue of ( [ eq : uber3 ] ) for the kl risk difference between and : the challenge thus became that of finding conditions on to make this difference positive , a challenge made more difficult than the previous one for ( [ eq : uber3 ] ) because of the complexity of and .fortunately this could be resolved by rotating the problem as follows to obtain diagonal forms .since and are both symmetric and positive definite , there exists a full rank matrix , such that \\[-8pt ] d&= & \operatorname{diag}(d_1,\ldots , d_p ) .\nonumber\end{aligned}\ ] ] because where is nonnegative definite , it follows that ] , the risk difference ( [ eq : uber3:reg ] ) could be reexpressed as \\[-8pt ] & & { } \qquad - e_{\mu , i } \log m_{\pi_w } ( \hat{\mu}_{x } ; i ) \nonumber\\ & & \quad = h_{\mu}(v_0 ) - h_{\mu}(v_1 ) , \nonumber\end{aligned}\ ] ] where and .the minimaxity of would now follow from conditions on such that for all and .]for all ]for all ] , , and s are i.i.d . . a central problem here is to estimate or various functionals of based on observing . 
transforming the problem with an orthonormal basis , ( [ eq : x : nonpa ] )is equivalent to estimating the s in known as the gaussian sequence model .the model above is different from the ordinary multivariate normal model in two aspects : ( 1 ) the model is increasing with the sample size , and ( 2 ) under function space assumptions on , the s lie in a constrained space , for example , an ellipsoid .a large body of literature has been devoted to minimax estimation of under risk over certain function spaces ; see , for example , johnstone ( ) , efromovich ( ) , and the references therein . as opposed to the ordinary multivariate normal mean problem, exact minimax analysis is difficult for the gaussian sequence model ( [ seq : model : nonpa ] ) when a constraint on the parameters is considered .this difficulty has been overcome by first obtaining the minimax risk of a subclass of estimators of a simple form , and then showing that the overall minimax risk is asymptotically equivalent to the minimax risk of the subclass .for example , an important result from pinsker ( ) is that when the parameter space is constrained to an ellipsoid , the nonlinear minimax risk is asymptotically equivalent to the linear minimax risk , namely the minimax risk of the subclass of linear estimators of the form . for nonparametric regression , the following analogue between estimation under risk and predictive density estimation under kl risk was established in xu and liang ( ) .the prediction problem for nonparametric regression is formulated as follows .let be future observations arising at a set of dense ( ) and equally spaced locations .given , the predictive density is just a product of gaussians .the problem is to find an estimator of , where performance is measured by the averaged kl risk in this formulation , densities are estimated at the locations simultaneously by . as it turned out, the kl risk based on the simultaneous formula - tion ( [ risk : simultaneous : nonpa ] ) is the analog of the risk for estimation .indeed , under the kl risk ( [ risk : simultaneous : nonpa ] ) , the prediction problem for a nonparametric regression model can be converted to the one for a gaussian sequence model .based on this formulation of the problem , minimax analysis proceeds as in the general framework for the minimax study of function estimation used by , for example , pinsker ( ) and belitser and levit ( , ) .the linear estimators there , which play a central role in their minimax analysis , take the same form as posterior means under normal priors .analogously , predictive density estimates under the same normal priors turned out to play the corresponding role in the minimax analysis for prediction .( the same family of bayes rules arises from the empirical bayes approach in section [ sec7 ] . )thus , xu and liang ( ) were ultimately able to show that the overall minimax kl risk is asymptotically equivalent to the minimax kl risk of this subclass of bayes rules , a direct analogue of pinker s theorem for predictive density estimation in nonparametric regression .stein s ( ) discovery of the existence of shrinkage estimators that uniformly dominate the minimax maximum likelihood estimator of the mean of a multivariate normal distribution under quadratic risk when was the beginning of a major research effort to develop improved minimax shrinkage estimation . 
in subsequent papersstein guided this effort toward the bayesian paradigm by providing explicit examples of minimax empirical bayes and fully bayes rules .making use of the fundamental results of brown ( ) , he developed a general theory for establishing minimaxity based on the superharmonic properties of the marginal distributions induced by the priors .the problem of predictive density estimation of a multivariate normal distribution under kl risk has more recently seen a series of remarkably parallel developments . with a focus on bayes rules catalyzed by aitchison ( ) , komaki ( ) provided a fundamental breakthrough by demonstrating that the harmonic prior bayes rule dominated the best invariant uniform prior bayes rule .these results suggested the existence of a theory for minimax estimation based on the superharmonic properties of marginals , a theory that was then established in george , liang and xu ( ) .further developments of new minimax shrinkage predictive density estimators now abound , including , as described in this article , multiple shrinkage estimators , empirical bayes estimators , normal linear model regression estimators , and nonparametric regression estimators .examples of promising further new directions for predictive density estimation can be found in the work of komaki ( ) which included results for poisson distributions , for general location - scale models and for wishart distributions , in the work of ghosh , mergel and datta ( ) which developed estimation under alternative divergence losses , and in the work of kato ( ) which established improved minimax predictive domination for the multivariate normal distribution under kl risk when both the mean and the variance are unknown .minimax predictive density estimation is now beginning to flourish .this work was supported by nsf grants dms-07 - 32276 and dms-09 - 07070 .the authors are grateful for the helpful comments and clarifications of an anonymous referee . | in a remarkable series of papers beginning in 1956 , charles stein set the stage for the future development of minimax shrinkage estimators of a multivariate normal mean under quadratic loss . more recently , parallel developments have seen the emergence of minimax shrinkage estimators of multivariate normal predictive densities under kullback leibler risk . we here describe these parallels emphasizing the focus on bayes procedures and the derivation of the superharmonic conditions for minimaxity as well as further developments of new minimax shrinkage predictive density estimators including multiple shrinkage estimators , empirical bayes estimators , normal linear model regression estimators and nonparametric regression estimators . , . |
in many combinatorial optimization problems we seek an object composed of some elements of a finite set whose total cost is minimum .this is the case , for example , in an important class of network problems where the set of elements consists of all arcs of some network and we wish to find an object in this network such as a path , a spanning tree , or a matching whose total cost is minimum . in general , the combinatorial optimization problems can often be expressed as 0 - 1 programming problems with a linear objective function , where a binary variable is associated with each element and a set of constraints describes the set of feasible solutions . for a comprehensive description of this class of problemswe refer the reader to .the usual assumption in combinatorial optimization is that all the element costs are precisely known .however , the assumption that all the costs are known in advance is often unrealistic . in practice , before solving a problem , we only know a set of possible realizations of the element costs .this set is called a _ scenario set _ and each particular realization of the element costs within this scenario set is called a _scenario_. several methods of defining scenario sets have been proposed in the existing literature .discrete _ and _ interval _ uncertainty representations are among the most popular ( see , e.g. , ) . in the former, scenario set contains a finite number of explicitly given cost vectors . in the latter one , for each element an interval of its possible values is specified and scenario set is the cartesian product of these intervals . in the discrete uncertainty representation , each scenario can model some event that has a global influence on the element costs . on the other hand , the interval uncertainty representation is appropriate when each element cost may vary within some range independently on the values of the other costs . a modification of the interval uncertainty representation was proposed in , where the authors assumed that only a fixed and a priori given number of costs may vary .more general scenario sets which can be used in mathematical programming problems were discussed , for example , in . in this paperwe assume that no additional information , for example a probability distribution , for scenario set is provided .if scenario set contains more than one scenario , then an additional criterion is required to choose a solution . in _ robust optimization _( see , e.g. , ) we typically seek a solution minimizing the worst case behavior over all scenarios .hence the min - max and min - max regret criteria are widely applied .however , this approach to decision making is often regarded as too conservative or pessimistic ( see , e.g. , ) .in particular , the min - max criterion takes into account only one , the worst - case scenario , ignoring the information connected with the remaining scenarios .this criterion also assumes that decision makers are very risk averse , which is not always true . in this paperwe wish to investigate a class of combinatorial optimization problems with the discrete uncertainty representation .hence , a scenario set provided with the input data , contains a finite number of explicitly given cost scenarios . in order to choose a solution we propose to use the _ ordered weighted averaging _ aggregation operator ( owa for short ) introduced by yager in .the owa operator is widely applied to aggregate the criteria in multiobjective decision problems ( see , e.g. 
, ) , but it can also be applied to choose a solution under the discrete uncertainty representation .it is enough to treat the cost of a given solution under scenario as a criterion .the key elements of the owa operator are weights whose number equals the number of scenarios .the weight expresses an importance of the largest cost of a given solution .hence , the weights allow a decision maker to take his attitude towards a risk into account and use the information about all scenarios while computing a solution .the owa operator generalizes the traditional criteria used in decision making under uncertainty such as the maximum , minimum , average , median , or hurwicz criterion .so , by using owa we can generalize the min - max approach , typically used in robust optimization .let us also point out that the owa operator is a special case of _ choquet integral _, a sophisticated tool for aggregating criteria in multiobjective decision problems ( see , e.g. , ) .the choquet integral has been recently applied to some multicriteria network problems in .unfortunately , the min - max combinatorial optimization problems are almost always harder to solve than their deterministic counterparts , even when the number of scenarios equals 2 . in particular , the min - max versions of the shortest path , minimum spanning tree , minimum assignment , minimum cut , and minimum selecting items problems are np - hard even for 2 scenarios .furthermore , if the number of scenarios is a part of the input , then all these problems become strongly np - hard and hard to approximate within any constant factor .since the maximum criterion is a special case of owa , the general problem of minimizing owa is not easier .however , it is not difficult to show that some other particular cases of owa , such as the minimum or average , lead to problems whose complexity is the same as the complexity of their deterministic counterparts .it is therefore of interest to provide a characterization of the problem complexity depending on various weight distributions . in this paperwe provide the following new results . in section [ sec2scen ] , we study the case when the number of scenarios equals 2 .we give a characterization of the problem complexity depending on the weight distribution . in section [ secfptas ] ,we show some sufficient conditions for the problem to admit a fully polynomial time approximation scheme ( fptas ) , when the number of scenarios is constant .finally , in section [ secunb ] , we consider the case in which the number of scenarios is a part of the input .we discuss different types of weight distributions .we show that for nonincreasing weights ( i.e. when larger weights are assigned to larger solution costs ) and for the hurwicz criterion , the problem admits an approximation algorithm whose worst case ratio depends on the problem parameters , in particular on the number of scenarios . on the other hand, we show that if the weights are nondecreasing , or the owa criterion is median , then the problem is not at all approximable unless p = np .let be a finite set of _ elements _ and be a set of _ feasible solutions_. in the deterministic case , each element has a nonnegative _ cost _ and we seek a solution whose total cost is minimum .namely , we wish to solve the following optimization problem : this formulation encompasses a large class of combinatorial optimization problems . 
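before specializing to particular problem classes, the sketch below shows how the owa criterion described above can be evaluated for a single feasible solution under a finite scenario set: the scenario costs are sorted in nonincreasing order and combined with the weight vector, so that the first weight multiplies the largest cost. the scenario costs and the weight vectors shown for the maximum, average and hurwicz criteria are illustrative values only.

import numpy as np

def owa(costs, weights):
    # ordered weighted average of a solution's costs over K scenarios:
    # sort the costs in nonincreasing order and take the weighted sum,
    # so weights[0] multiplies the largest cost, weights[1] the second largest, ...
    ordered = np.sort(costs)[::-1]
    return float(np.dot(weights, ordered))

# costs of one feasible solution under K = 4 scenarios (illustrative values)
solution_costs = np.array([7.0, 3.0, 9.0, 4.0])

w_max     = np.array([1.0, 0.0, 0.0, 0.0])     # min-max (robust) criterion
w_average = np.full(4, 0.25)                   # average criterion
w_hurwicz = np.array([0.3, 0.0, 0.0, 0.7])     # hurwicz criterion with alpha = 0.3

print(owa(solution_costs, w_max))       # 9.0, the worst-case cost
print(owa(solution_costs, w_average))   # 5.75, the mean cost
print(owa(solution_costs, w_hurwicz))   # 0.3*9 + 0.7*3 = 4.8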
in particular , for the class of network problems is the set of arcs of a given network and contains the subsets of the arcs forming , for example , paths , spanning trees , assignments , or cuts in . in practice ,problem is often expressed as a 0 - 1 programming one , where binary variable is associated with each element , , and a system of constraints describes the set in a compact form .before we discuss the uncertain version of problem , we recall the definition of the owa operator , proposed by yager in .let be a vector of reals .let us introduce a vector such that ] ( we use ] such that .then } w_i f_{\sigma(i)}.\ ] ] the owa operator has several natural properties which easily follow from its definition ( see , e.g. ) . since it is a convex combination of it holds .it is also _ monotonic _ , i.e. if for all ] .the cost of a given solution depends on scenario and will be denoted by . in this paperwe will aggregate the costs by using the owa operator .namely , given a weight vector , let us define } w_j f(x,\pmb{c}_{\sigma(j)}),\ ] ] where is a permutation of ] , then owa is the -th largest cost and the problem is denoted as min - quant . in particular , when , then the -th largest cost is median and the problem is denoted as min - median .if for all ] , and for the remaining weights , then we get the hurwicz pessimism - optimism criterion and the problem is then denoted as min - hurwicz .our goal has been to provide general properties of min - owa , which follow only from the type of the weight distribution in the owa operator .we have not taken into account a particular structure of an underling deterministic problem .thus , the results obtained may be additionally refined if some properties of are taken into account .s. mittal and a. s. schulz . a general framework for designing approximation schemes for combinatorial optimization problems with many objectives combined into one . in a.goel , k. jansen , j. d. p. rolim , and r. rubinfeld , editors , _ approx - random _ , volume 5171 of _ lecture notes in computer science _ , pages 179192 .springer - verlag , 2008 . | in this paper a class of combinatorial optimization problems with uncertain costs is discussed . the uncertainty is modeled by specifying a discrete scenario set containing distinct cost scenarios . the ordered weighted averaging ( owa for short ) aggregation operator is applied to choose a solution . some well known criteria used in decision making under uncertainty such as the maximum , minimum , average , hurwicz and median are special cases of owa . furthermore , by using owa , the traditional robust ( min - max ) approach to combinatorial optimization problems with uncertain costs can be generalized . the computational complexity and approximability of the problem of minimizing owa for the considered class of problems are investigated and some new positive and negative results in this area are provided . these results remain valid for many basic problems , such as network or resource allocation problems . combinatorial optimization ; owa operator ; robust optimization ; computational complexity ; approximation algorithms |
in reinforcement learning , an agent learns to maximize its discounted future rewards .the structure of the environment is initially unknown , so the agent must both learn the rewards associated with various action - sequence pairs and optimize its policy .a natural approach is to tackle the subproblems separately via a critic and an actor , where the critic estimates the value of different actions and the actor maximizes rewards by following the policy gradient .policy gradient methods have proven useful in settings with high - dimensional continuous action spaces , especially when task - relevant _ policy representations _ are at hand .we tackle the problem of learning actor ( policy ) and critic representations . in the supervised setting , representation or deep learning algorithmshave recently demonstrated remarkable performance on a range of benchmark problems .however , the problem of learning features for reinforcement learning remains comparatively underdeveloped .the most dramatic recent success uses -learning over finite action spaces , and essentially build a neural network critic . here, we consider _ continuous _ action spaces , and develop an algorithm that simultaneously learns the value function and its gradient , which it then uses to find the optimal policy .this paper presents value - gradient backpropagation ( ) , a deep actor - critic algorithm for continuous action spaces with compatible function approximation .our starting point is the deterministic policy gradient and associated compatibility conditions derived in . roughly speaking , the compatibility conditions are that 1. the critic approximate the gradient of the value - function and 2 .the approximation is closely related to the gradient of the policy .see theorem [ t : compat ] for details .we identify and solve two problems with prior work on policy gradients relating to the two compatibility conditions : 1 ._ temporal difference methods do not directly estimate the gradient of the value function . _ + instead , temporal difference methods are applied to learn an approximation of the form , where estimates the value of a state , given the current policy , and estimates the _ advantage _ from deviating from the current policy .although the advantage is related to the gradient of the value function , it is not the same thing .2 . _ the representations used for compatible approximation scale badly on neural networks . _+ the second problem is that prior work has restricted to advantage functions constructed from a particular state - action representation , , that depends on the gradient of the policy .the representation is easy to handle for linear policies .however , if the policy is a neural network , then the standard state - action representation ties the critic too closely to the actor and depends on the internal structure of the actor , example [ eg : deep_advantage ] . as a result, weight updates can not be performed by backpropagation , see section [ sec : problem ] .the paper makes three novel contributions .the first two contributions relate directly to problems p1 and p2 .the third is a new task designed to test the accuracy of gradient estimates .[ [ method - to - directly - learn - the - gradient - of - the - value - function . 
] ] method to directly learn the gradient of the value function .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the first contribution is to modify temporal difference learning so that it directly estimates the gradient of the value - function .the _ gradient perturbation trick _ , lemma [ lem : gradient ] , provides a way to simultaneously estimate both the value of a function at a point and its gradient , by perturbing the function s input with uncorrelated gaussian noise .plugging in a neural network instead of a linear estimator extends the trick to the problem of learning a function and its gradient over the entire state - action space .moreover , the trick combines naturally with temporal difference methods , theorem [ thm : extension ] , and is therefore well - suited to applications in reinforcement learning . [ [ deviator - actor - critic - dac - model - with - compatible - function - approximation . ] ] deviator - actor - critic ( dac ) model with compatible function approximation .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the second contribution is to propose the deviator - actor - critic ( dac ) model , definition [ def : beh_crit ] , consisting in three coupled neural networks and value - gradient backpropagation ( ) , algorithm [ alg : qprop ] , which backpropagates three different signals to train the three networks .the main result , theorem [ thm : main ] , is that has compatible function approximation when implemented on the dac model when the neural network consists in linear and rectilinear units .the proof relies on decomposing the actor - network into individual units that are considered as actors in their own right , based on ideas in .it also suggests interesting connections to work on structural credit assignment in multiagent reinforcement learning .[ [ contextual - bandit - task - to - probe - the - accuracy - of - gradient - estimates . ] ] contextual bandit task to probe the accuracy of gradient estimates .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a third contribution , that may be of independent interest , is a new contextual bandit setting designed to probe the ability of reinforcement learning algorithms to estimate gradients .a supervised - to - contextual bandit transform was proposed in as a method for turning classification datasets into -armed contextual bandit datasets .we are interested in the _ continuous _ setting in this paper .we therefore adapt their transform with a twist .the sarcos and barrett datasets from robotics have features corresponding to the positions , velocities and accelerations of seven joints and labels corresponding to their torques .there are 7 joints in both cases , so the feature and label spaces are 21 and 7 dimensional respectively .the datasets are traditionally used as regression benchmarks labeled sarcos1 through sarcos7 where the task is to predict the torque of a single joint and similarly for barrett .we convert the two datasets into two continuous contextual bandit tasks where the reward signal is the negative distance to the correct label 7-dimensional .the algorithm is thus `` told '' that the label lies on a sphere in a 7-dimensional space .the missing information required to pin down the label s position is precisely the gradient . 
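to make the gradient perturbation trick described above concrete, the following sketch estimates the value and the gradient of a function at a single point from evaluations perturbed by uncorrelated gaussian noise, by fitting a local linear model with least squares. the test function, noise level and sample size are arbitrary illustrative choices, and the sketch deliberately ignores the temporal-difference machinery needed when the function of interest is a value function that cannot be queried directly.

import numpy as np

def estimate_value_and_gradient(f, x, sigma=0.1, n_samples=2000, rng=None):
    # fit the local linear model f(x + eps) ~ v + eps . g by least squares;
    # the intercept estimates the value and the slope estimates the gradient
    rng = rng if rng is not None else np.random.default_rng(0)
    eps = sigma * rng.normal(size=(n_samples, x.size))
    y = np.array([f(x + e) for e in eps])
    design = np.hstack([np.ones((n_samples, 1)), eps])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[0], coef[1:]

# illustrative test function with known value and gradient
f = lambda z: np.sin(z[0]) + z[1] ** 2
x0 = np.array([0.5, -1.0])
v_hat, g_hat = estimate_value_and_gradient(f, x0)
print(v_hat, g_hat)    # close to sin(0.5) + 1 and [cos(0.5), -2]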
for an algorithm to make predictions that are competitive with fully supervised methods ,it is necessary to find extremely accurate gradient estimates .[ [ experiments . ] ] experiments .+ + + + + + + + + + + + section [ sec : experiments ] evaluates the performance of on the contextual bandit problems described above and on the challenging octopus arm task .we show that is able to simultaneously solve seven nonparametric regression problems without observing any labels instead using the distance between its actions and the correct labels .it turns out that is competitive with recent _ fully supervised _ learning algorithms on the task .finally , we evaluate on the octopus arm benchmark , where it achieves the best performance reported to date .an early reinforcement learning algorithm for neural networks is .a disadvantage of is that the entire network is trained with a single scalar signal .our proposal builds on ideas introduced with deep -learning , such as replay .however , deep -learning is restricted to finite action spaces , whereas we are concerned with _continuous _ action spaces .policy gradients were introduced in and have been used extensively .the deterministic policy gradient was introduced in , which also proposed the algorithm .the relationship between and is discussed in detail in section [ sec : problem ] .an alternate approach , based on the idea of backpropagating the gradient of the value function , is developed in .unfortunately , these algorithms do not have compatible function approximation in general , so there are no guarantees on actor - critic interactions .see section [ sec : problem ] for further discussion .the analysis used to prove compatible function approximation relies on decomposing the actor neural network into a collection of agents corresponding to the units in the network .the relation between and the difference - based objective proposed for multiagent learning is discussed in section [ sec : local_actors ] .we use boldface to denote vectors , subscripts for time , and superscripts for individual units in a network .sets of parameters are capitalized ( , , ) when they refer to matrices or to the parameters of neural networks .this section recalls previous work on policy gradients . the basic idea is to simultaneously train an actor and a critic .the critic learns an estimate of the value of different policies ; the actor then follows the gradient of the value - function to find an optimal ( or locally optimal ) policy in terms of expected rewards .the environment is modeled as a markov decision process consisting of state space , action space , initial distribution on states , stationary transition distribution and reward function .policy _ is a function from states to actions .we will often add noise to policies , causing them to be stochastic . in this case , the policy is a function , where is the set of probability distributions on actions .let denote the distribution on states at time given policy and initial state at and let .let be the discounted future reward . 
define the \quad\text{and } \label{e : q } \\\text{value of a policy : } \qquad & j({{\ensuremath{\boldsymbol\mu}}_\theta } ) = \operatorname*{\mathbb e}_{{\mathbf{s}}\sim \rho^{{\ensuremath{\boldsymbol\mu } } } , { \mathbf{a}}\sim { \ensuremath{\boldsymbol\mu}}_\theta}[q^{{\ensuremath{\boldsymbol\mu}}_\theta}({\mathbf{s}},{\mathbf{a } } ) ] .\label{eq : j}\end{aligned}\ ] ] the aim is to find the policy with maximal value .a natural approach is to follow the gradient , which in the deterministic case can be computed explicitly as [ t : dpg] + under reasonable assumptions on the regularity of the markov decision process the policy gradient can be computed as .\end{aligned}\ ] ] see . since the agent does not have direct access to the value function , it must instead learn an estimate . a sufficient condition for when plugging an estimate into the policy gradient ] is _ not even _ a black - box : it can not be queried directly since it is defined as the expected discounted _ future _ reward .it is for this reason the gradient perturbation trick must be combined with temporal difference learning , see section [ sec : tdgl ] .an important insight is that the gradient of an unknown function at a specific point can be estimated by perturbing its input .for example , for small the gradient of is approximately ] , which reduces to solving the set of linear equations = ( w^j - v^j)\operatorname*{\mathbb e}[(\epsilon^j)^2]=(w^j - v^j)\cdot \sigma^2=0\qquad \text { for all }.\ ] ] the first equality holds since =0 ] consisting of only 4s .in contrast , sarcos and barrett are nontrivial benchmarks even when fully supervised .the octopus arm task is a challenging environment that is high - dimensional , sequential and highly nonlinear .[ [ desciption . ] ] desciption .+ + + + + + + + + + + the objective is to learn to hit a target with a simulated octopus arm .settings are taken from .importantly , the action - space is _ not _ simplified using `` macro - actions '' .the arm has compartments attached to a rotating base .there are state variables ( , position / velocity of nodes along the upper / lower side of the arm ; angular position / velocity of the base ) and action variables controlling the clockwise and counter - clockwise rotation of the base and three muscles per compartment . 
after each step , the agent receives a reward of , where is the change in distance between the arm and the target .the final reward is if the agent hits the target .an episode ends when the target is hit or after 300 steps .the arm initializes at eight positions relative to the target : .see appendix [ sec : octopus_details ] for more details .[ [ network - architectures.-1 ] ] network architectures .+ + + + + + + + + + + + + + + + + + + + + + we applied to an actor - network with hidden rectifiers and linear output units clipped to lie in $ ] ; and critic and deviator networks both with two hidden layers of and rectifiers , and linear output units .updates were computed via rmsprop with step rate of , moving average decay , with nesterov momentum penalty of and respectively , and discount rate of .the variance of the gaussian noise was initialized to .an explore / exploit tradeoff was implemented as follows .when the arm hit the target in more than 300 steps , we set ; otherwise .a hard lower bound was fixed at .we implemented copdac - q on a variety of architectures ; the best results are shown ( also please see figure 3 in ) .they were obtained using a similar architecture to , with sigmoidal hidden units and sigmoidal output units for the actor .linear , rectilinear and clipped - linear output units were also tried . as for , cloning and experience replaywere used to increase stability .[ [ performance.-1 ] ] performance .+ + + + + + + + + + + + figure [ f : oct ] shows the steps - to - target and average - reward - per - step on ten training runs . converges rapidly and reliably ( within steps ) to a stable policy that uses less than 50 steps to hit the target on average ( see supplementary video for examples of the final policy in action ) . converges quicker , and to a better solution , than .the reader is strongly encouraged to compare our results with those reported in . to the best of our knowledge, achieves the best performance to date on the octopus arm task .[ [ stability . ] ] stability .+ + + + + + + + + + it is clear from the variability displayed in the figures that both the policy and the gradients learned by are more stable than .note that the higher variability exhibited by in the right - hand panel of fig .[ f : oct ] ( rewards - per - step ) is misleading .it arises because dividing by the number of steps which is lower for since it hits the target more quickly after training inflates s apparent variability .value - gradient backpropagation ( is the first deep reinforcement learning algorithm with compatible function approximation for continuous policies .it builds on the deterministic actor - critic , , developed in with two decisive modifications .first , we incorporate an explicit estimate of the value gradient into the algorithm .second , we construct a model that decouples the internal structure of the actor , critic , and deviator so that all three can be trained via backpropagation . achieves state - of - the - art performance on two contextual bandit problems where it simultaneously solves seven regression problems without observing labels .note that is competitive with recent _ fully supervised _methods that solve a _single _ regression problem at a time .further , outperforms the prior state - of - the - art on the octopus arm task , quickly converging onto policies that rapidly and fluidly hit the target .[ [ acknowledgements . 
] ] acknowledgements .+ + + + + + + + + + + + + + + + + we thank nicolas heess for sharing the settings of the octopus arm experiments in .45 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 adrian k agogino and kagan tumer .unifying temporal and structural credit assignment problems . in _ aamas _ , 2004 .adrian k agogino and kagan tumer . analyzing and visualizing multiagent rewards in dynamic and stochastic environments ._ journal of autonomous agents and multi - agent systems _ , 170 ( 2):0 320338 , 2008 .l c baird .residual algorithms : reinforcement learning with function approximation . in _ icml _, 1995 .david balduzzi . .in _ arxiv:1509.01851 _ , 2015 .david balduzzi , hastagiri vanchinathan , and joachim buhmann .kickback cuts backprop s red - tape : biologically plausible credit assignment in neural networks . in _ aaai _ , 2015 .andrew g barto , richard s sutton , and charles w anderson .neuronlike adapative elements that can solve difficult learning control problems ._ ieee trans .systems , man , cyb _ , 130 ( 5):0 834846 , 1983 .f bastien , p lamblin , r pascanu , j bergstra , i goodfellow , a bergeron , n bouchard , and y bengio . .in _ nips workshop : deep learning and unsupervised feature learning _ , 2012 .j bergstra , o breuleux , f bastien , p lamblin , r pascanu , g desjardins , j turian , d warde - farley , and yoshua bengio .theano : a cpu and gpu math expression compiler . in _ proc .python for scientific comp .( scipy ) _ , 2010 .george e dahl , tara n sainath , and geoffrey hinton .improving deep neural networks for lvcsr using rectified linear units and dropout . in _ieee int conf on acoustics , speech and signal processing ( icassp ) _ , 2013 .christoph dann , gerhard neumann , and jan peters .policy evaluation with temporal differences : a survey and comparison ._ jmlr _ , 15:0 809883 , 2014 .marc peter deisenroth , gerhard neumann , and jan peters . ._ foundations and trends in machine learning _ , 20 ( 1 - 2):0 1142 , 2011 . miroslav dudk , dumitru erhan , john langford , and lihong li . ._ statistical science _ , 290 ( 4):0 485511 , 2014. y engel , p szab , and d volkinshtein . learning to control an octopus arm with gaussian process temporal difference methods . in _ nips _ , 2005 .michael fairbank and eduardo alonso . .ieee world conference on computational intelligence ( wcci ) _ , 2012 .michael fairbank , eduardo alonso , and daniel v prokhorov . ._ ieee trans ._ , 240 ( 12):0 20882100 , 2013 .abraham flaxman , adam kalai , and h brendan mcmahan .online convex optimization in the bandit setting : gradient descent without a gradient . in _ soda _ , 2005 .xavier glorot , antoine bordes , and yoshua bengio . eep sparse rectifier neural networks . in _ proc .14th int conference on artificial intelligence and statistics ( aistats ) _ , 2011 .carlos guestrin , michail lagoudakis , and ronald parr .coordinated reinforcement learning . in _icml _ , 2002 .roland hafner and martin riedmiller .reinforcement learning in feedback control : challenges and benchmarks from technical process control ._ machine learning _ , 84:0 137169 , 2011 .g hinton , nitish srivastava , and kevin swersky .lecture 6a : overview of minibatch gradient descent .chris holmesparker , adrian k agogino , and kagan tumer .combining reward shaping and hierarchies for scaling to large multiagent systems . _ the knowledge engineering review _, 2014 .michael i jordan and r a jacobs . .in _ nips _ , 1990 .sham kakade .a natural policy gradient . in _ nips _ , 2001. 
vijay r konda and john n tsitsiklis .actor - critic algorithms . in _ nips _ , 2000 .samory kpotufe and abdeslam boularias .gradient weights help nonparametric regressors . in _ advances in neural information processing systems ( nips ) _ , 2013 .sergey levine , chelsea finn , trevor darrell , and pieter abbeel . ._ arxiv:1504.00702 _ , 2015 .volodymyr mnih , koray kavukcuoglu , david silver , andrei a. rusu , joel veness , marc g. bellemare , alex graves , martin riedmiller , andreas k. fidjeland , georg ostrovski , stig petersen , charles beattie , amir sadik , ioannis antonoglou , helen king , dharshan kumaran , daan wierstra , shane legg , and demis hassabis .human - level control through deep reinforcement learning ._ nature _ , 5180 ( 7540):0 529533 , 02 2015 .vinod nair and geoffrey hinton .ectified linear units improve restricted boltzmann machines . in _icml _ , 2010. a s nemirovski and d b yudin ._ problem complexity and method efficiency in optimization_. wiley - interscience , 1983 .duy nguyen - tuong , jan peters , and matthias seeger . .in _ nips _ , 2008 .jan peters and stefan schaal .policy gradient methods for robotics . in _ proc .ieee / rsj int .robots syst .. daniel v prokhorov and donald c wunsch . ._ ieee trans ._ , 80 ( 5):0 9971007 , 1997 . maxim raginsky and alexander rakhlin . ._ ieee trans .inf . theory _ , 570 ( 10):0 70367056 , 2011 . david silver , guy lever , nicolas heess , thomas degris , daan wierstra , and martin riedmiller .deterministic policy gradient algorithms . in _ icml _ , 2014 .nitish srivastava , geoffrey hinton , alex krizhevsky , ilya sutskever , and ruslan salakhutdinov .dropout : a simple way to prevent neural networks from overfitting ._ jmlr _ , 15:0 19291958 , 2014 .r s sutton and a g barto ._ reinforcement learning : an introduction_. mit press , 1998 .richard sutton , david mcallester , satinder singh , and yishay mansour .policy gradient methods for reinforcement learning with function approximation . in _ nips _ , 1999 .richard sutton , hamid reza maei , doina precup , shalabh bhatnagar , david silver , csaba szepesvri , and eric wiewiora .fast gradient - descent methods for temporal - difference learning with linear function approximation . in _ icml _ , 2009 .richard sutton , csaba szepesvri , and hamid reza maei . a convergent algorithm for off - policy temporal - difference learning with linear function approximation . in _ adv in neural information processing systems ( nips )_ , 2009 .shubhendu trivedi , jialei wang , samory kpotufe , and gregory shakhnarovich . .in _ uai _ , 2014 .john tsitsiklis and benjamin van roy .an analysis of temporal - difference learning with function approximation ._ ieee trans ., 420 ( 5):0 674690 , 1997 .niklas wahlstrm , thomas b. schn , and marc peter deisenroth . ._ arxiv:1502.02251 _ , 2015 .y wang and j si . ._ ieee trans ._ , 120 ( 2):0 264276 , 2001 .ronald j williams .simple statistical gradient - following algorithms for connectionist reinforcement learning ._ machine learning _ , 8:0 229256 , 1992 .m d zeiler , m ranzato , r monga , m mao , k yang , q v le , p nguyen , a senior , v vanhoucke , j dean , and g hinton . on rectified linear units for speech processing . 
in _icassp _ , 2013 .* appendices *it is instructive to describe the weight updates under more explicitly .let , and denote the weight vector of unit , according to whether it belongs to the actor , deviator or critic network .similarly , in each case or denotes the influence of unit on the network s output layer , where the influence is vector - valued for actor and deviator networks and scalar - valued for the critic network .weight updates in the deviator - actor - critic model , where all three networks consist of rectifier units performing stochastic gradient descent , are then per algorithm [ alg : pb ] .units that are not active on a round do not update their weights that round .listing 1 summarizes technical information with respect to the physical description and task setting used in the octopus arm simulator in xml format .< constants > < frictiontangential>0.4</frictiontangential > < frictionperpendicular>1</frictionperpendicular > <pressure>10</pressure > < gravity>0.01</gravity > < surfacelevel>5</surfacelevel > < buoyancy>0.08</buoyancy >< muscleactive>0.1</muscleactive > < musclepassive>0.04</musclepassive >< musclenormalizedminlength>0.1</musclenormalizedminlength > < muscledamping>-1</muscledamping > < repulsionconstant>.01</repulsionconstant > < repulsionpower>1</repulsionpower > < repulsionthreshold>0.7</repulsionthreshold >< torquecoefficient>0.025</torquecoefficient > < /constants > | this paper proposes , a deep reinforcement learning algorithm for continuous policies with compatible function approximation . the algorithm is based on two innovations . firstly , we present a temporal - difference based method for learning the _ gradient _ of the value - function . secondly , we present the deviator - actor - critic ( dac ) model , which comprises three neural networks that estimate the value function , its gradient , and determine the actor s policy respectively . we evaluate on two challenging tasks : a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients ; and the octopus arm , a challenging reinforcement learning benchmark . is competitive with _ fully supervised methods _ on the bandit task and achieves the best performance to date on the octopus arm . policy gradient , reinforcement learning , deep learning , gradient estimation , temporal difference learning |
safety critical systems , such as aircraft , satellites , and electricity grids , often rely on sensors to measure their state and their environment .the true state of the system may not be measurable , or may be corrupted by noise , as quantified by a so - called observation process .the controller must choose actions based only on the information contained in the observation process .the nature of safety critical systems dictates a need for formal methods to accurately assess the system s ability to meet rigorous safety requirements , and also to synthesize controllers that guarantee performance to a desired level ( correct by design ) .it is therefore paramount that the controller exploit information from the observation process , to obtain theoretical safety guarantees that are as accurate as possible .reachability analysis , which determines whether a system s state remains within a given safe region and/or reaches a target set within some time horizon , has been used extensively as a tool for verification and controller synthesis for hybrid systems , , and extended to stochastic hybrid systems ( shs ) , .there has been little focus , however , on reachability analysis for partially observable shs .while there has been some work on deterministic hybrid systems with hidden modes or uncertain systems with imperfect information on a partial order , reachability analysis of a partially observable shs has been approached only theoretically , .this , along with our previous work provides the first computational results for both controller synthesis and verification of safety specifications for partially observable shs .existing computational results for reachability analysis of fully observable shs are also limited . the safety problem for a discrete time shs ( dtshs ) , which considers only whether the state of the system can be controlled to remain within a safe region of the state space , can be formulated as a multiplicative cost stochastic optimal control problem , and solved in the same manner as a markov decision process ( mdp ) .unfortunately , solutions via dynamic programming require evaluation of a value function over all possible states , which is infinite when those states are continuous .discretization procedures can be employed to impose a finite number of states , as in and , which present rigorous uniform and adaptive gridding methods for verification of dtshs .similarly , approximate abstractions of the original stochastic model to an equivalent system that has the same properties are presented in , , and .even so , current applications are limited to those with only a few discrete and continuous states .the safety problem for a partially observable dtshs ( podtshs ) can similarly be formulated as a partially observable mdp ( pomdp ) .however , pomdps are plagued by dimensionality on an even greater scale than mdps .the common approach to solving pomdps is to replace the growing history of observations and actions by a sufficient statistic , often called the belief state , which , for a pomdp with an additive cost function , is the distribution of the current state conditioned on all past observations and actions .this belief state is treated as the perfectly observed true state , and mdp solution methods can then be applied . 
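for reference, the sketch below shows the standard belief update for a finite-state pomdp, in which the conditional distribution over states is propagated through the transition model and then reweighted by the likelihood of the received observation. the transition and observation matrices are arbitrary illustrative values, and the update shown is the usual additive-cost belief filter rather than the modified statistic needed for the safety problem discussed later.

import numpy as np

def belief_update(belief, action, observation, T, O):
    # one step of the finite-state POMDP belief (Bayes) filter:
    #   belief : current distribution over states, shape (S,)
    #   T[a]   : transition matrix, T[a][s, s'] = Pr(s' | s, a)
    #   O[a]   : observation matrix, O[a][s', y] = Pr(y | s', a)
    predicted = belief @ T[action]                    # predict the next-state distribution
    unnorm = predicted * O[action][:, observation]    # reweight by the observation likelihood
    return unnorm / unnorm.sum()                      # normalize to a probability vector

# two states, one action, two observations (illustrative numbers)
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.7, 0.3],
                  [0.1, 0.9]])}
b = np.array([0.5, 0.5])
b = belief_update(b, action=0, observation=1, T=T, O=O)
print(b)    # posterior over the state after observing y = 1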
however , with a continuous state space , the belief state is a function defined over an infinite domain , and it is impossible to enumerate over all such functions .therefore the study of efficient , approximate solutions to pomdps is essential .although finding the solution to a general pomdp is hard , many algorithms for approximating solutions to finite state pomdps have been developed .these mainly rely on point - based value iteration ( pbvi ) schemes that only consider a subset of the belief space to update the value function used in the dynamic program ( for a survey of pbvi algorithms , see ) . because the value function is piecewise - linear and convex ( and so equivalently represented by a finite set of vectors ) , sampling from the belief state provides a systematic way of storing a finite subset of those vectors .such methods must be tailored to continuous state pomdps because of the dimensionality of the belief state .other than discretizing the state space and solving an equivalent finite state pomdp , many existing methods for continuous state pomdps assume the belief state is gaussian ( e.g. , ) , and represent the belief state in a parameterized form which is then discretized and solved as a discrete state mdp .when the belief state can not adequately be represented using a single gaussian , a gaussian mixture may be used instead . an equivalent point - based algorithm for continuous - state pomdps using gaussian mixturesis presented in , and demonstrated on a stochastic hybrid system with hidden modes in . the safety problem for a podtshs is further complicated because the belief state is not the conditional distribution of the current state of the system , , but must also include the distribution of a binary variable that indicates whether the state of the system has remained within a safe region up to the previous time step .this , coupled with the stochastic hybrid system dynamics , makes accurately representing the belief state as a single gaussian impossible .we formulate the safety problem for a podtshs as a pomdp , and investigate representations of the belief state in either vector or gaussian mixture form through finite- and continuous - state approximations to the podtshs .these representations allow us to exploit point - based methods developed for pomdps .this paper extends our previous work in several ways .first , we validate the use of pomdp solution techniques for reachability analysis of a podtshs , by demonstrating that the value function is convex and admits a function representation related to the piecewise - linear vector representation of a finite state pomdp .second , we present a finite state approximation to the dtshs ( presented in without proofs ) that allows the belief state to take vector form under certain conditions , and show convergence for the approximation .third , we preserve the continuity in the hybrid state space through a gaussian mixture representation for the belief state , and approximate the indicator function that represents the safe region using gaussian radial basis functions . 
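the sketch below illustrates the kind of radial basis function approximation of an indicator function used in the third extension: a sum of fixed-width gaussians is fit to the indicator of a one-dimensional safe set and the integrated (1-norm) error is reported. the interval, the number of components, the bandwidth and the least-squares fitting procedure are illustrative choices, not the construction used in the paper.

import numpy as np

def gaussian(x, mean, std):
    # gaussian density with the given mean and standard deviation
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def fit_rbf_indicator(a=0.0, b=1.0, n_components=10, std=0.05):
    # least-squares fit of a sum of fixed-width gaussians to the indicator of [a, b]
    grid = np.linspace(a - 0.5, b + 0.5, 2001)
    indicator = ((grid >= a) & (grid <= b)).astype(float)
    means = np.linspace(a, b, n_components)
    design = np.stack([gaussian(grid, m, std) for m in means], axis=1)
    weights, *_ = np.linalg.lstsq(design, indicator, rcond=None)
    approx = design @ weights
    dx = grid[1] - grid[0]
    l1_error = np.sum(np.abs(indicator - approx)) * dx    # integrated (1-norm) error
    return means, weights, l1_error

_, _, err = fit_rbf_indicator()
print(f"integrated error of the indicator approximation: {err:.3f}")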
in this case , we provide an error bound as a function of the integrated error ( 1-norm in the function space ) of the indicator function approximation .our solution method converges to the true solution from below , using either the finite or continuity - preserving belief state .we demonstrate both approaches on a temperature regulation problem .the rest of the paper is organized as follows .section [ sec : background ] relates the safety problem for a podtshs to optimal control of a pomdp .section [ sec : abstraction ] justifies the use of pomdp solution techniques , and presents the finite and gaussian mixture approximations to the safety problem for a podtshs ( as well as error bounds ) .section [ sec : pbvi ] describes the use of point - based approximation techniques , through sampling of belief states and discretization of the observations .we present a numerical example in section [ sec : example ] , and concluding remarks and directions for future work in section [ sec : conc ] .all proofs can be found in the appendix .a probability space consists of a sample space , a -algebra defined over , and a probability measure that assigns probabilities to events in . for ,we presume , the borel -algebra on .the probability measure maps elements to the interval ] is a state transition function that assigns a probability measure to state given state and control for all : 5 . ] is an initial probability measure over the state space : the state evolves stochastically and is markovian ( e.g. , the state at the next time step depends only on the current state and action ) .the information available to the controller at time is ; that is , the controller can not directly observe the state .the control input at each time step is selected according to a control policy , which maps the available information at each time onto .[ def : policy ] for a pomdp , a policy for some time horizon is a sequence of functions , , such that .we consider non - randomized policies , i.e. ones that assign a single control input to each possible , which are sufficient for the problem we consider .a control policy induces a probability space over the pomdp with state space , -algebra on , and probability measure based on , , , and .the execution of a pomdp is as follows . at time , state is produced from initial distribution . at each subsequent time , an observation is produced according to , and added to the list of past observations and control inputs to produce .the control input is chosen according to , and cost is accrued .the next state is then generated according to .the goal is to minimize the expected sum of costs accrued according to function over a time horizon by optimally choosing control actions according to the policy .\ ] ] equation can be solved using dynamic programming , much like for a markov decision process .the value function , , represents the expected sum of costs accrued from time to given that has been recorded thus far , and is computed recursively backwards in time . however , since the size of vector increases with , and is difficult to store , the optimal control input and value function can instead be expressed as a function of a belief state .the belief state is a _sufficient statistic _ for the information vector because it condenses all information necessary for making optimal decisions without sacrificing optimality . 
for an additive cost function , the belief state is a function that describes the probability of being in state given all past observations and actions , ] is a stochastic transition kernel that assigns a probability measure over at time given and for all : with 5 . ] is an initial probability measure over : while we presume is finite , a continuous or hybrid input set can be approximated as a finite set when computing safety probabilities and the optimal policy .the state transition kernel comprises a discrete component that governs mode updates , and a kernel for continuous state transitions . for modeling purposes , we choose to order such that the discrete mode updates first at each time step , and the subsequent mode influences the evolution of to . the functions and are also separated into discrete and continuous components , functions , , and are borel - measurable stochastic transition kernels over , and , , and are standard probability distributions over finite state spaces .we consider specifically a switched affine system , such that the continuous state evolves according to the are independent and identically distributed gaussian random variables for all , .the matrix and function change according to the mode .we assume the discrete observations depend only on , and the observations depend linearly on , corrupted by additive gaussian noise , with independent and identically distributed for all , . kernels , and admit gaussian densities , and .we also assume admits a gaussian density .for ease of notation , we let , , and for , , and . we require that the following lipschitz properties hold , which are guaranteed for , , and , given that they are gaussian densities and is finite . we define the maximum values and .while we impose assumptions of linearity and additive gaussian noise in and to facilitate subsequent derivations , these assumptions can be relaxed in certain cases , which will be highlighted where appropriate .we use stochastic optimal control to find both a control policy that maximizes the probability of the state remaining within a safe region of the state space , as well as an estimate of that probability . for a compact borel set , terminal time , and predefined policy , the objective to optimize is ] , with the indicator function if and otherwise , as shown in : .\ ] ] the maximal safety probability and optimal policy are given by in the fully observable case , gives a dynamic programming formulation for optimizing , which returns both the maximal safety probability and optimal policy .we would like to take a similar approach to find both and .formally , we would like to solve the following problem .[ probstate ] for a podtshs with a safe set and time horizon , we wish to 1 . compute the maximal probability of remaining within for time steps .2 . 
compute the optimal policy given by .if the maximal probability and optimal policy can not be computed exactly ( which is quite likely ) , an approximation that produces a suboptimal policy and lower bound on the maximal safety probability is desired .we exploit the pbvi method to solve problem [ probstate ] , by transforming problem [ probstate ] into an optimal control problem for a pomdp .hence we first show the safety problem for can be reduced to a dynamic program , despite a non - standard belief state .we then show that the -functions and belief states can be approximately represented in closed form and that finite collections of each may be generated and used to approximate , similar to a point - based pomdp solver .we present two approximations of problem [ probstate ] for the podtshs : the first discretizes to produce a finite state pomdp , and the second preserves continuity in by using a gaussian mixture approach , thus characterizing the podtshs by a collection of weights , means , and covariances .the multiplicative nature of the cost function for the safety problem renders the belief state for an additive cost pomdp inapplicable , and we derived a different sufficient statistic for problem [ probstate ] in .this sufficient statistic produces a modified conditional distribution of the current state that includes the probability that all past states are in the safe set .\ ] ] we define the _ information state _ as the function ( where is the space of integrable functions ) associated with the probability distribution produced by , so that for all , .note that the information state is distinct from the belief state ( e.g. the conditional distribution of the current state ) .the information state updates recursively with a bounded linear operator ( for proof see ) where is given by in comparing to , the latter integrates over the compact hybrid set , as opposed to a summation over finite set .we define a dynamic programming recursion over as \end{aligned}\ ] ] with solution .the optimal policy is , with for all ] is a discrete state transition function that assigns probabilities to elements of 5 . ] is a function that assigns probabilities to elements of at time zero we define the transition function as with , and the initial distribution on as the discrete probability space is with , the -algebra on , and the probability measure uniquely defined by , , , and a control policy , , with the set of all information states defined on at time .we further define the operator and the intermediate vector for any , , as the safety problem for is to find latexmath:[ ] for .the discrete information state represents a probability mass function over , and can be expressed as an integral over an equivalent density ( just as ) . with given by this can be verified by substituting the expression for in terms of into and using an induction argument .the value function is the maximum probability of remaining within over time steps is the safety probability for the finite state approximation converges to the true solution as grid size parameter tends to zero . 
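for intuition about the quantity being approximated, the following sketch runs the multiplicative-cost (safety) dynamic program on a small, fully observable finite-state abstraction: the value function is initialized to the indicator of the safe set and propagated backwards through the transition matrices, maximizing over the control inputs at each step. the state space, safe set and transition probabilities are illustrative stand-ins, and the sketch omits the partially observable (information-state) layer treated above.

import numpy as np

def safety_value_iteration(T, safe, horizon):
    # maximal probability of remaining in the safe set for `horizon` steps:
    #   T[u] : transition matrix for input u, T[u][z, z'] = Pr(z' | z, u)
    #   safe : boolean mask of safe states, shape (S,)
    V = safe.astype(float)                                # V_N = indicator of the safe set
    for _ in range(horizon):
        # V_k(z) = 1_K(z) * max_u sum_z' T[u][z, z'] * V_{k+1}(z')
        V = safe * np.max([T[u] @ V for u in T], axis=0)
    return V

# toy 4-state abstraction with 2 inputs (illustrative probabilities);
# state 3 is unsafe and absorbing
safe = np.array([True, True, True, False])
T = {
    0: np.array([[0.8, 0.1, 0.0, 0.1],
                 [0.1, 0.7, 0.1, 0.1],
                 [0.0, 0.2, 0.6, 0.2],
                 [0.0, 0.0, 0.0, 1.0]]),
    1: np.array([[0.6, 0.3, 0.0, 0.1],
                 [0.0, 0.9, 0.05, 0.05],
                 [0.0, 0.3, 0.6, 0.1],
                 [0.0, 0.0, 0.0, 1.0]]),
}
print(safety_value_iteration(T, safe, horizon=5))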
to show this , we first describe the error between the continuous information state and the vector approximation .[ thm : sigdiscrete ] the density defined in satisfies for all , , and given by with ] , and any function , the associated function defined in satisfies \delta^x\ ] ] for all .the constants and are equal to and from lemmas [ lem : finiteinty ] and [ lem : finiteintx ] , respectively .we now can show convergence of the approximate safety probability over the discretized state space to the true safety probability .[ thm : valuefuncerrordisc ] for any time ] , , and , with and . to show convergence of to , we define the function which utilizes the same policy tree as for a specific .this is equivalent to , except that is defined by replacing with .\ ] ] with the optimal control input associated with and indicating that is chosen according to the indices selected by for .note that the -functions are no longer guaranteed to be equal to zero outside of , and also are not guaranteed to be bounded above by one .so long as is of bounded height , however , the -functions also remain bounded , and we write .we could also adjust the weights in so that the -functions can not exceed one , although this may increase the error .the following lemma describes the relation between and .[ lem : alphgtil ] for any ] , and any , , the error between the value function given and the value function given using the gaussian mixture approximation is bounded above by with .specifically , the safety probability for podtshs over time horizon satisfies theorem [ thm : valuefuncerrorgauss ] shows that the convergence of the gaussian mixture approximation of both and the value function depends on the integrated error between the indicator function over and the rbf approximation , rather than the pointwise error . although the pointwise error may not converge to zero for a finite number of components in the rbf , the integral of the error can be small , as we will show in section [ sec : example ] .[ rem : gauss ] the linearity and additive gaussian noise assumptions on the dynamics and , as well as assumptions [ ass : gauss1 ] - [ ass : gauss3 ] , are used in the gaussian mixture approximation to ensure that the only error in the value function and information state approximations comes from the approximation of the indicator function by a gaussian mixture . 
dropping these assumptionsrequires that we approximate , , and by gaussian mixtures , which is possible but introduces additional error that we have chosen not to consider .a numerical solution of problem [ probstate ] via either a discrete or gaussian mixture approximation additionally requires sets and to be finite , whereas we have sets of infinite size because of the uncountable nature of .however , a lower bound on the safety probabilities and can still be obtained , by characterizing the error that results from using and , finite collections of -functions and information states , respectively .we again exploit point - based approximation methods described in section [ sec : back_pomdp ] .we examine the generation of subsets of the information states and -functions , and prove that each guarantees a lower bound to the safety probability of whichever approximation of section [ sec : abstraction ] we choose .in contrast to most point - based solvers , we do not assume a finite set of observations , and hence discretize the observations merely for the computation of the -functions .combining belief space sampling with discretized observations assures a lower bound to the safety probability .we characterize the error from using a sampled subset of for performing backup operations ( as in ) .presume that a finite set of information states has been generated according to one of the many methods available .we generate a finite set of -functions , one for each .the convexity of the value functions guarantees that the subset provides a lower bound on .further , we can show that the error between the approximate value function ( represented by ) and the true value function ( represented by the complete set of -functions ) depends on how densely we sample .we define an intermediate value function that generates recursively from , i.e. that introduces one point - based backup from the full set .then is written as a function of rather than .the value function is formally defined as so that is characterized by the finite set at each time step .we also define an intermediate value function that generates recursively from , i.e. that introduces one point - based backup from the full set .then is written as a function of rather than . we let denote the maximum hausdorff distance over between points in and points in with respect to the metric . in the following , we do not distinguish between the vector and gaussian mixture representations of and , because the results apply to both cases .[ lemma3 ] for any ] and , the error from using point - based value iteration versus full value iteration is bounded above by thus the error between the point - based approximation and the actual value function is directly proportional to how densely is sampled , and converges to zero as approaches .the proofs of lemma [ lemma3 ] and theorem [ thm1 ] are a straightforward extension of those appearing in , and are omitted . over the uncountably infinite space , we can not calculate for all , despite a finite set of and .we therefore compute a subset of for the finite set , to approximate as .we discretize in a similar fashion to the discretization of in section [ sec : abs_finite ] . however , since is not compact , we consider an expanded set defined so that the probability of observing a value for that is outside of is approximately zero , i.e. , .the sets are divided into disjoint subsets , .we also define , such that . 
the partition of is denoted with .the diameter of partition is , with maximum diameter .each partition has a representative element and a set .the function maps observation to its representative value ; the set - valued function maps the point to the corresponding set .the finite observation space is . for the finite state approximation , the transition function ] and , the error between and satisfies given that the discretized observations are chosen so that and with the largest lebesgue measure of sets .lemma [ lem : discobsalpha ] requires defining the representative points so that the integral of over is greater than a piecewise constant approximation integrated over , which can be satisfied by picking the points where the gaussian density is minimized within cell . without this requirement , finding at a finite set of pointsstill guarantees a lower bound to the value function for any time , and is intuitively more accurate as .lemma [ lem : discobsalpha ] leads to the following theorem regarding the error between ( based on continuous observations ) and ( based on discretized observations ) .we again use the notation to indicate that is represented by the set of -functions calculated using the discretized observations .[ thm : discobsvf ] given discretized observation process with transition function , for any time ] and , the error between and satisfies given that the observations are chosen so that and with the largest lebesgue measure of sets .[ thm : gaussobsvf ] given discretized observation process with transition function , for any time ] does not depend on the discrete state ( so ) .we consider the probability of remaining within for time steps given initial temperature distribution normally distributed with varying mean and variance .the initial mode is given as .the finite state and gaussian mixture approximations are used in a pbvi algorithm in the style of perseus .we consider a uniform grid ( constant for all , ) over the region for the finite state approximation , with representative points at the end - point of each grid cell .for example , setting gives for and , and a total of cells .the function maps to itself , and maps to the nearest in absolute value .the gaussian mixture approximation utilizes an rbf approximation of the indicator function calculated using matlab s _ gmdistribution _ function .we used a reduction process to limit the number of components of each and for the gaussian mixture approximation .similar gaussians are combined into a single component based on the 2-norm distance between functions .each mixture was limited to 30 components to reduce overall computation time without overly sacrificing accuracy .this number can easily be changed , however , depending on the importance of speed versus accuracy .both approximations employ a sequence of sampled sets and a finite set of observations to calculate the -functions for the pbvi algorithm . to generate the sets , we initialized a set of 40 states normally distributed with variance and mean randomly chosen uniformly on .each was updated according to or with chosen randomly and sampled from the corresponding ( i.e. 
) .this process was repeated times , so that for each time step we had a set of 40 sampled .the finite set of observations were produced by a uniform grid over ] .for , separate terms and apply the triangle inequality as in the proof of theorem [ thm : sigdiscrete ] .\delta^x+ n_q\left[h_q + \beta_1^xh_x^{(2 ) } + \beta_2^x\right]\delta^x\label{eq : alphatil2 } \end{aligned}\ ] ] the second term of comes from lemma [ lem : finiteinty ] and noting that represents a probability that is bounded above by one .the third term comes from lemma [ lem : finiteintx ] and the lipschitz inequality for .the term does not affect the bound , and only indicates that both and are equal to zero for . applying the induction hypothesis to gives the desired result . by construction .at any time ] , given and , we can rewrite the value function evaluated at in terms of -functions . as in the proof of theorem [ thm : valuefuncerrordisc ] , assume without loss of generality that . define such that . then \notag\\ & \hspace{10 mm } + \int_{\mathcal{y}\backslash\overline{k}}\sum_{z , z'\in k_{\delta}}\alpha_{n+1,\delta}^{*(y)}(z')\gamma(y|z')\tau_{\delta}(z'|z , u^*)\sigma_{n,\delta}(z)\,dy \notag\\ & \hspace{10 mm}- \int_{\mathcal{y}\backslash\overline{k}}\sum_{z , z'\in k_{\delta}}\alpha_{n+1,\delta}^{*(\psi_y)}(z')\gamma(y|z ' ) \tau_{\delta}(z'|z , u^*)\sigma_{n,\delta}(z)\,dy\notag\\ & \leq \int_{\overline{k}}\sum_{z , z'\in k_{\delta}}\left[\alpha_{n+1,\delta}^{*(y)}(z')\gamma(y|z')\tau_{\delta}(z'|z , u^*)\sigma_{n,\delta}(z)\,dy\right.\notag\\ & \hspace{10 mm}\left- \alpha_{n+1,\delta}^{*(\theta(y))}(z')\gamma(y|z')\tau_{\delta}(z'|z , u^*)\sigma_{n,\delta}(z)\,dy \right ] + \frac{\epsilon}{n } \label{eq : lemdiscobs2 } \end{aligned}\ ] ] note that is nonnegative , meaning that using produces a lower bound to the actual value function given by .this follows because is chosen optimally for only a subset of ( at the points ) , and for all other , is suboptimal , producing a lower value .next , we can bound from below based on how the points are defined .\,dy + \frac{\epsilon}{n}\\ & \leq n_q\overline{\lambda}h_y^{(1)}\delta^y + \frac{\epsilon}{n } \end{aligned}\ ] ] by induction . at time , since .assume for all that .then , at time , applying the induction inequality and combining terms completes the proof .m. prandini and j. hu , _stochastic reachability : theoretical foundations and numerical approximation _ ,lecture notes in control and information sciences.1em plus 0.5em minus 0.4emspringer verlag , 2006 , pp .107139 . i.mitchell and j. templeton , `` a toolbox of hamilton - jacobi solvers for analysis of nondeterministic continuous and hybrid systems , '' in _ hybrid systems : computation and control _ , 2005 , vol . 3414 , pp .480494 .r. ghaemi and d. del vecchio , `` control for safety specifications of systems with imperfect information on a partial order , '' _ ieee transactions on automatic control _59 , no . 4 , pp . 982 995 , 2014 .s. soudjani and a. abate , `` adaptive and sequential gridding procedures for the abstraction and verification of stochastic processes , '' _ siam journal on applied dynamical systems _12 , no . 2 ,pp . 921956 , 2013 .e. brunskill , l. kaelbling , t. lozano - perez , and n. roy , `` planning in partially - observable switching - mode continuous domains , '' _ annals of mathematics and artificial intelligence _ ,185216 , 2010 . 
| assuring safety in discrete time stochastic hybrid systems is particularly difficult when only noisy or incomplete observations of the state are available . we first review a formulation of the probabilistic safety problem under noisy hybrid observations as a dynamic program over an equivalent information state . two methods for approximately solving the dynamic program are presented . the first method approximates the hybrid system as an equivalent finite state markov decision process , so that the information state is a probability mass function . the second approach approximates an indicator function over the safe region using radial basis functions , to represent the information state as a gaussian mixture . in both cases , we discretize the hybrid observation process and generate a sampled set of information states , then use point - based value iteration to under - approximate the safety probability and synthesize a suboptimal control policy . we obtain error bounds and convergence results in both cases , assuming switched affine dynamics and additive gaussian noise on the continuous states and observations . we compare the performance of the finite state and gaussian mixture approaches on a simple numerical example . |
relativistic flow problems are important in many astrophysical phenomena including gamma - ray burst ( grb ) , active galactic nuclei ( agn ) , as well as microquasar and pulsar wind nebulae , among others .apparent superluminal motion is observed in many jets of extragalactic radio sources associated with agn . according to the currently accepted standard model ,this implies the jet flow velocities as large as of the speed of light .similar phenomena are also seen in microquasars such as grs 1915 + 105 and gro j1655 - 40 thus by similar arguments relativistic flows are thought to play a role . in the case of grb ,the observed non - thermal spectrum implies that the source must be optically thin , which can be used to put a limit on the minimum lorentz factor within those bursts .this argument shows that the source of grb must be in highly relativistic motion .this conclusion is further confirmed by the rapidly increasing number of grb afterglow observations by the swift satellite . to understand physical processes in those phenomena quantitatively , high resolution multi - dimensional simulations are crucial .jim wilson and collaborators pioneered the numerical solution of relativistic hydrodynamics equations .starting with these earliest papers this has typically been done in the context of general relativistic problems such as accretion onto black holes and supernovae explosions .the problem was recognized to be difficult to solve when the lorentz factor becomes large and a solution with an implicit adaptive scheme was demonstrated in one dimension .unfortunately , this approach is not generalizable to multi - dimensions .however , in the past two decades accurate solvers based on godunov s scheme have been designed that have adopted shock - capturing schemes for newtonian fluid to the relativistic fluid equations in conservation form ( for a review see * ? ? ?such schemes , called high - resolution shock - capturing ( hrsc ) methods , have been proven to be very useful in capturing strong discontinuities with a few numerical zones without serious numerical oscillations .we will discuss a number of them in section 3 .studies involving astrophysical fluid dynamics in general are benefiting tremendously from using spatial and temporal adaptive techniques. smoothed particle hydrodynamics is a classic example by being a lagrangian method .increasingly also variants of s structured adaptive mesh technique are being implemented .this is also true in relativistic hydrodynamics where certainly the work of showed that a serial amr code could solve problems even highly efficient parallel fixed grid codes would have difficulty with . in this paperwe discuss our implementation of different hydrodynamics solvers with various reconstruction schemes as well as different time integrators on top of the _ enzo _ framework previously developed for cosmology .this new code we call _ r_enzo _ _ is adaptive in time and space and is a dynamically load balanced parallel using the standard message passing interface . 
in the following webriefly summarize the equations being solved before we give details on the different solvers we have implemented .section 4 discusses the adaptive mesh refinement strategy and implementation .we then move on to describe various test problems for relevant combinations of solvers , reconstruction schemes in one , two and three dimensions , with and without amr .section 6 presents an application of our code to three - dimensional relativistic and supersonic jet propagation problem .section 7 discusses 3d grb jet simulation .we summarize our main conclusions in section 8 .the basic equations of special relativistic hydrodynamics ( srhd ) are conservation of rest mass and energy - momentum : and where is the rest mass density measured in the fluid frame , is the fluid four - velocity ( assuming the speed of light ) , is the lorentz factor , is the coordinate three - velocity , is the energy - momentum tensor of the fluid and semicolon denotes covariant derivative . for a perfect fluidthe energy - momentum tensor is where is the relativistic specific enthalpy , is the specific internal energy , is the pressure and is the spacetime metric .srhd equations can be written in the form of conservation laws where the conserved variable is given by and the fluxes are given by has shown that system ( [ 2.4 ] ) is hyperbolic for causal eos , i.e. , those satisfying where the local sound speed is defined as .\label{cs}\ ] ] the eigenvalues and left and right eigenvectors of the characteristic matrix , which are used in some of our numerical schemes , are given by .the conserved variables are related to the primitive variables by where .the system ( [ 2.4 ] ) are closed by an equation of state ( eos ) given by . for an ideal gas ,the eos is , where is the adiabatic index .we use method of lines to discretize the system ( [ 2.4 ] ) spatially , where refers to the discrete cell index in directions , respectively . , and are the fluxes at the cell interface . as discussed by ,if using a high order scheme to reconstruct flux spatially , one must also use the appropriate multi - level total variation diminishing ( tvd ) runge - kutta schemes to integrate the ode system ( [ ode ] ) .thus we implemented the second and third order tvd runge - kutta schemes coupled with amr .the second order tvd runge - kutta scheme reads , and the third order tvd runge - kutta scheme reads , where is the final value after advancing one time step from . for an explicit timeintegration scheme , the time step is constrained by the courant - friedrichs - lewy ( cfl ) condition .the time step is determined as where is a parameter called cfl number and is the local largest speed of propagation of characteristics in the direction whose explicit expression can be found in . generally speaking , there are two classes of spatially reconstruction schemes ( see e.g. leveque 2002 ) .one is reconstructing the unknown variables at the cell interfaces and then use exact or approximate riemann solver to compute the fluxes .another is direct flux reconstruction , in which we reconstruct the flux directly from fluxes at the cell centers . to explore the coupling of different schemes with amr as well as exploringwhich method is most suitable for a specific astrophysical problem , we implement several different schemes in both classes . 
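The Runge-Kutta updates and the CFL restriction described above can be sketched as follows. The spatial operator `rhs` stands for the flux-difference operator assembled from the reconstructed interface fluxes; the first-order upwind advection used here is only a placeholder so that the fragment runs, and the grid size, CFL number and initial profile are illustrative choices.

```python
import numpy as np

def rhs(u, dx, speed=1.0):
    # placeholder spatial operator: first-order upwind for du/dt + a du/dx = 0;
    # in the code proper this would be the flux difference built from the
    # reconstructed interface fluxes of the SRHD system.
    return -speed * (u - np.roll(u, 1)) / dx

def tvd_rk2_step(u, dt, dx):
    u1 = u + dt * rhs(u, dx)
    return 0.5 * u + 0.5 * (u1 + dt * rhs(u1, dx))

def tvd_rk3_step(u, dt, dx):
    u1 = u + dt * rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2, dx))

nx, cfl = 400, 0.4                      # illustrative values
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)
max_speed = 1.0                         # in SRHD the characteristic speeds are bounded by c
dt = cfl * dx / max_speed               # CFL condition
for _ in range(200):
    u = tvd_rk3_step(u, dt, dx)
print("integral of u after 200 steps:", u.sum() * dx)
```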
to reconstruct unknown variables , we have implemented piecewise linear method ( plm , van leer 1979 ) , piecewise parabolic method ( ppm , colella & woodward 1984 , mart & mller 1996 ) , the third - order convex essentially non - oscillatory scheme ( ceno , liu & osher 1998 ) .these are used to reconstruct the primitive variables since reconstructing the conserved variables can produce unphysical values in srhd . furthermore , unphysical values of three - velocities may arise during the reconstruction especially for ultrarelativistic flows .so we either use to do the reconstruction or we also reconstruct the lorentz factor and use it to renormalize the reconstructed three - velocity when they are unphysical . for direct flux reconstruction ,we have implemented plm and the third and fifth order weno scheme of .direct flux reconstruction using weno was first used to solve srhd problems by .they showed that the fifth order weno scheme works well with the third order runge - kutta time integration . in our implementation, we followed their description closely . for the plm and ceno schemes, we used a generalized minmod slope limiter . for given , , where . for reduces to the monotonized central - difference limiter of .we found that this generalized minmod slope limiter behaves much better than a traditional minmod limiter especially for strong shear flows . in our calculation , is used by default .for the ppm scheme , we used the parameters proposed by for all the test problems .for weno , we used the parameters suggested in the original paper of . in the first class of reconstruction methods ,given the reconstructed left and right primitive variables at interfaces , the flux across each interface is calculated by solving the riemann problem defined by those two states .an exact riemann solver is quite expensive in srhd .thus we have implemented several approximate riemann solvers including hll , hllc , local lax - friedrichs ( llf , kurganov & tadmor 2000 ) and the modified marquina flux .the hllc scheme is an extension of the hll solver developed by for newtonian flow which was extended to two - dimensional relativistic flows by .the improvement of hllc over hll is restoring the full wave structure by constructing the two approximate states inside the riemann fan .the two states can be found by the rankine - hugoniot conditions between those two states and the reconstructed states . with this modification , hllc indeed behaves better than other riemann solvers in some 1d ( 5.1.7 ) and 2d ( 5.2 ) test problems .but when we apply hllc to three dimensional jet simulation , we found that hllc suffers from the so called carbuncle " artifact well - known in the computational fluid dynamics literature .we have used hllc to run many other two - dimensional test problems designed to detect the carbuncle artifact and confirmed this shortcoming .we found that the hllc solver is unsuitable for many multi - dimensional problems .the discussion of these problems will be presented elsewhere . in this work, we will only apply hllc to two test problems showing that the hllc solver has less smearing at contact discontinuities than other schemes .has compared the hll scheme , the llf scheme and the modified marquina flux formula using 1d and 2d test problems .they found those three schemes give similar results for all their test problems . 
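A sketch of the generalized minmod slope limiter and of the resulting PLM interface states is given below, applied to a single primitive variable with periodic indexing for brevity. The limiter parameter, written theta in the code, interpolates between the standard minmod limiter (theta = 1) and the monotonized central-difference limiter (theta = 2); the default value used in the paper is not legible in the text above, so theta = 1.5 is only an illustrative choice.

```python
import numpy as np

def minmod(*args):
    # returns the argument of smallest magnitude if all have the same sign, else 0
    a = np.stack(args)
    all_pos = np.all(a > 0.0, axis=0)
    all_neg = np.all(a < 0.0, axis=0)
    return all_pos * a.min(axis=0) + all_neg * a.max(axis=0)

def plm_interface_states(q, theta=1.5):
    # limited slopes for the cell-centred primitive variable q (periodic for brevity)
    dq_minus = q - np.roll(q, 1)
    dq_plus = np.roll(q, -1) - q
    slope = minmod(theta * dq_minus, 0.5 * (dq_minus + dq_plus), theta * dq_plus)
    q_left = q + 0.5 * slope                      # state just left of interface i+1/2
    q_right = np.roll(q - 0.5 * slope, -1)        # state just right of interface i+1/2
    return q_left, q_right

q = np.where(np.arange(64) < 32, 1.0, 0.125)      # a step profile
qL, qR = plm_interface_states(q, theta=1.5)
```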
in our tests , we found similar results .however , the modified marquina flux formula is not as stable as hll in problems with strong transverse flows and llf is more diffusive than hll .so in the following discussion we will only show the results using hll in most of the tests if there is no difference among those three schemes . in the following discussion, we will denote a specific hydro algorithm by x - y where x is the flux formula and y is the reconstruction scheme .for example , f - weno5 denotes direct flux reconstruction using fifth order weno .we used the third order runge - kutta method for all the tests in this work .since primitive variables are needed in the reconstruction process , after every rk time step , we need to convert conserved variables to primitive variables . while conserved variables can be computed directly from primitive variables using eqs .( [ d ] ) , ( [ sj ] ) and ( [ tau ] ) , the inverse operation is not straightforward .one needs to solve a quartic equation for the ideal gas eos and a nonlinear equation for more complicated eos .iteration methods are used even for ideal gas eos , because computing the solution of a quartic is expensive .following , we have used a newton - raphson ( nr ) iteration to solve a nonlinear equation for pressure to recover primitive variables from conserved variables .typically , the nr iteration needs only to steps to converge .we have also implemented cylindrical and spherical coordinates following the description of zhang & macfadyen ( 2006 ) .this affects three parts of the code .firstly , the geometric factors are incorporated into the flux when updating the conserved variables .secondly , there will be geometric source terms .thirdly , the flux correction in amr ( 4.4 ) is modified by geometric factors .structured adaptive mesh refinement ( amr ) was developed by and to achieve high spatial and temporal resolution in regions where fixed grid resolution is insufficient . in structured amr , a subgrid will be created in regions of its parent grid needing higher resolution .the hierarchy of grids is a tree structure .each grid is evolved as a separate initial boundary value problem , while the whole grid hierarchy is evolved recursively . _r_enzo _ _ is built on top of the amr framework of _ enzo _ ._ enzo _ s implementation of amr follows closely the berger & colella paper and has been shown to be very efficient for very high dynamic range cosmological simulations ( see e.g. abel et al .the pseudocode of the main loop for the second order runge - kutta method reads , the rebuildhierarchy function called at the end of every time step is at the heart of amr .its pseudocode as implemented originally in _ enzo _ reads , when a new subgrid is created , the initial values on that grid are obtained by interpolating spatially from its parent grid . in this case , we apply the conservative second order order interpolation routine provided by _ enzo _ to conserved variables . but in this process sometimes the interpolated values can violate the constraint . if this happens , we will then use first order method for that subgrid . before the first runge - kutta step for a grid at level ( in the following discussion, we use the convention that top grid has level ) , we will need the boundary condition at time , which is derived by interpolating from its parent grid .then at the later steps of runge - kutta scheme , one needs the boundary condition at time . 
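Returning to the conversion from conserved to primitive variables discussed at the beginning of this section, a minimal one-dimensional sketch of the pressure iteration for the ideal-gas EOS is given below. It relies on the identities rho h W^2 = tau + D + p and v = S / (tau + D + p), which follow from the definitions of the conserved variables; a finite-difference Newton step and an arbitrary initial guess are used here only for brevity, and the adiabatic index is an illustrative choice.

```python
import numpy as np

GAMMA = 5.0 / 3.0   # adiabatic index (illustrative)

def prim_from_cons(D, Sx, tau, p_guess=1.0, tol=1e-12, max_iter=50):
    """Recover (rho, vx, p) from conserved (D, Sx, tau) for an ideal-gas EOS by
    iterating on the pressure; a finite-difference Newton step is used for brevity."""
    def f(p):
        v = Sx / (tau + D + p)                    # since rho*h*W^2 = tau + D + p
        W = 1.0 / np.sqrt(1.0 - v * v)
        rho = D / W
        eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
        return (GAMMA - 1.0) * rho * eps - p      # residual of the ideal-gas EOS
    p = p_guess
    for _ in range(max_iter):
        dp = max(1e-8, 1e-8 * p)
        deriv = (f(p + dp) - f(p)) / dp
        p_new = p - f(p) / deriv
        if abs(p_new - p) < tol * max(1.0, p):
            p = p_new
            break
        p = p_new
    v = Sx / (tau + D + p)
    W = 1.0 / np.sqrt(1.0 - v * v)
    return D / W, v, p

# round-trip check on a known state (rho, vx, p) = (1.0, 0.5, 0.1)
rho, vx, p = 1.0, 0.5, 0.1
W = 1.0 / np.sqrt(1.0 - vx**2)
h = 1.0 + GAMMA / (GAMMA - 1.0) * p / rho
D, Sx, tau = rho * W, rho * h * W**2 * vx, rho * h * W**2 - p - rho * W
print(prim_from_cons(D, Sx, tau))
```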
since the variables of its parent grid has already been evolved to time , which is greater than time , we can obtain the boundary conditions at time for a grid at level by interpolating both temporally and spatially from its parent grid .there are two exceptions to this procedure .first , if a cell of fine grid abuts the box boundary , then we just use the specified boundary condition for that cell .second , if a cell abuts another grid at the same level , we copy the value from that grid . because of the above mentioned problem for interpolating conserved variables , when interpolating boundary values , we apply the second order interpolation to primitive variables .since for ultrarelativistic flows spatially interpolating three - velocity can lead to unphysical values , we also interpolate the lorentz factor and then use it to renormalize the interpolated three - velocity . in the test problems discussed in section [ sec : test ] , we mainly used two general purpose refinement criteria that have been widely used in amr code .in the first one , we compute the slope where is typically density , pressure and velocities , is a small number typically taken to be .when is larger than a minimum slope , typically , a cell will be flagged for refinement . in the second one ,for every cell we compute which is the ratio of the second and first derivatives with a safety factor in the denominator .unless otherwise stated , we use .when is larger than a critical value , a cell will be flagged for refinement .typically we use . to fully exploit amr ,it is desirable to design more specific refinement criterions that are most efficient for a specific astrophysical problems .when a cell is overlayed by a finer level grid , then the coarse grid value is just the conservative average of the fine grid values .on the other hand , when a cell abuts a fine grid interface but is not itself covered by any fine grid , we will do flux correction for that cell , i.e. we will use the fine grid flux to replace the coarser grid flux in the interface abutting the fine grid.(see berger & colella 1989 for more detailed description of flux correction ) . for this purpose , note that the second order runge - kutta method can be rewritten as and the third order runge - kutta method can be rewritten as thus for example , when we do flux correction in the -direction for interface , we will use and to correct the coarser grid conserved variables for the second and third order runge - kutta method , respectively . _r_enzo _ _ uses the _ enzo _ parallel framework which uses dynamically load balancing using the message passing interface ( mpi ) library . at run time, the code will move grids among processors according to the current load of every processor to achieve a balanced distribution of computational load among processors .the computational load of a processor is defined as the total number of active cells on that processor and level .relativistic riemann problems have analytical solutions , thus they are ideal for tesing srhd codes . in the following discussion , subscript and refer to the left and right initial states , respectively .the initial discontinuity is always at . we will report the error between numerical solutions and analytical solutions using norm defined as where is the numerical solution, is the analytical solution and is the cell width . 
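For a uniform grid, the error norm just defined and the global convergence order quoted for the tests that follow amount to the following (a sketch of our own; for AMR runs the error would be accumulated over the finest available cells).

```python
import numpy as np

def l1_error(q_num, q_ana, dx):
    # L1 norm: sum over cells of |numerical - analytical|, weighted by the cell width
    return np.sum(np.abs(q_num - q_ana)) * dx

def convergence_order(err_coarse, err_fine, refinement=2.0):
    # estimated order of global convergence between two resolutions
    return np.log(err_coarse / err_fine) / np.log(refinement)

# example: halving the error when the resolution is doubled indicates first order
print(convergence_order(2.0e-3, 1.0e-3))   # ~1.0
```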
this test and the following one are fairly standard and all modern srhd codes can match the analytical solution quite well ( see martt & mller 2003 for a summary of different codes performance on those two tests ) .the initial left and right states for this problem are and , , .the gas is assumed to be ideal with an adiabatic index .the initial discontinuity gives rise to a transonic rarefaction wave propagating left , a shock wave propagating right and a contact discontinuity in between .this problem is only mildly relativistic with a post - shock velocity and shock velocity 0.83 .the results using four hydro solvers are shown in fig .[ fig : riemann1 ]. the cfl number used is .the errors are shown in fig .[ fig : l1_riemann1 ] , from which we can see the four schemes behave essentially identical for this problem .the order of global convergence rate is about 1 for all four schemes , which is consistent with the fact that there are discontinuities in the problem .the initial left and right states for this problem are and , , .the gas is assumed to be ideal with an adiabatic index .this test is more relativistic than the previous one . while the wave structure is the same , the thermodynamically relativistic initial left state gives rise to a relativistic shock propagating at a lorentz factor and a very thin dense shell behind the shock with width at .the cfl number used is .the results using four hydro algorithms are shown in fig .[ fig : riemann2 ] .the errors are shown in fig .[ fig : l1_riemann2 ] .we can see that for this problem ppm and weno have smaller errors than plm and ceno .this is due to their ability to better resolve the thin shell .the initial left and right states for this test are and , , .the gas is assumed to be ideal with an adiabatic index .this test mimics the interaction of a planar jet head with the ambient medium .the decay of the initial discontinuity gives rise to a strong reverse shock propagating to the left , a forward shock propagating to the right and a contact discontinuity in between .the results are shown in fig .[ fig : riemann3 ] .the cfl number is .the errors are shown in fig .[ fig : l1_riemann3 ] . as can be seen in figs .[ fig : riemann3 ] & [ fig : l1_riemann3 ] , for this problem ppm and weno behave better than plm and ceno : there is almost no oscillation behind the reverse shock and they capture the contact discontinuity with fewer cells .especially ppm captures the contact discontinuity with only cells and has the smallest error . for this and the following two problems, we will consider non - zero transverse velocities in the initial states .the initial state is identical to blast wave problem ii except the presence of transverse velocities .those problems were first discussed analytically by . sincethen various groups have shown that when transverse velocities are non - zero , in some cases those problems become very difficult to solve numerically unless very high spatial resolution is used .in realistic astrophysical phenomena transverse velocities are usually very important ( see e.g. aloy & rezzolla 2006 ) , thus solving those problems accurately is of great importance . as an easy first case, we will consider non - zero transverse velocity only in the low pressure region .the initial left and right states are and .the gas is assumed to be ideal with an adiabatic index .the results are shown in fig . 
[fig : transverse_c ] .the cfl number is .the errors are shown in fig .[ fig : l1_bt1 ] .we can see that all four hydro algorithms behaves similarly well , except that plm and ceno shows some small oscillations around the contact discontinuity .next , we consider non - zero transverse velocity in the high pressure region . in this case , the problem becomes more difficult to solve numerically .the initial left and right states for this problem are and .the gas is assumed to be ideal with an adiabatic index .the high pressure region is connected to the intermediate state by a rarefaction wave .since the initial normal velocity in the high pressure region is zero , the slope of the adiabat increases rapidly with transverse velocity , thus a large initial transverse velocity will lead to a small intermediate pressure and a small mass flux .the results using a uniform grid and two amr runs are shown in fig .[ fig : transverse_d ] .the hydro solver used for this figure is hll - plm .the cfl number is .the error are shown in fig .[ fig : l1_transverse_d ] .it can be seen that for the run with 400 uniform grid cells , the numerical solution is inadequate , as previously found by .this is mainly due to the poor capture of the contact discontinuity .we have tried to run this problem with various algorithms but only obtained accurate solutions by dramatically increasing the resolution .cccc 400 & 400 & 400 + 800 & 448 & 421 + 1600 & 455 & 442 + 3200 & 470 & 468 + 6400 & 501 & 473 + 12800 & 518 & 476 + 25600 & 562 & 520 now we introduce transverse velocity in both region .the initial left and right states for this problem are and .the gas is assumed to be ideal with an adiabatic index .this problem is more difficult than the previous one due to the formation of an extremely thin shell between the rarefaction wave tail and the contact discontinuity .the results with a uniform grid and two amr runs are shown in fig .[ fig : transverse_e ] .the hydro solver used for this run is hll - plm .the cfl number used is .the errors are shown in fig .[ fig : l1_transverse_e ] . table 1 shows the equivalent resolution and the actual number of cells used for this and the previous tests .it can be seen for the highest resolution calculation our code uses about four hundred times less grid cells than the corresponding uniform grid calculation .thus amr allows us to achieve very high resolution while significantly reducing the computational cost . for this testwe set up a one - dimensional riemann problem that mimics the interaction of jet with an overpressured cocoon .the initial left and right states for this problem are and .the gas is assumed to be ideal with .those values mimic the conditions of the jet - cocoon boundary in model c2 of .the result using four different riemann solver with plm on uniform grid are shown in fig .[ fig : jetcocoon400 ] .it can be seen that solutions use hll , llf and direct flux reconstruction have large positive fluctuations in the normal velocity at the rarefaction wave .it is interesting to note that only hllc does not suffer from this shortcoming which is probably due to the ability of hllc to resolve the contact discontinuity compared to the other riemann solvers in the code .if those fluctuations also happen in higher dimensional jet simulation , then one would expect that the normal velocity fluctuation seen in this test would lead to an artificially extended cocoon . 
in fig .[ fig : jet1d_amr ] the result of using hll - plm and hllc - plm with amr is shown .it can been seen that the fluctuation in the hll scheme becomes smaller with higher resolution .[ fig : l1_jet1d ] shows the error for those two schemes with different levels of refinement . to test the code in higher dimension ,we first study the two - dimensional shock tube problem suggested by and latter also used by various groups ( see e.g. zhang & macfadyen 2006 , mignone & bodo 2005a , lucas - serrano et al .this test is done in a two - dimensional cartesian box divided into four equal - area - constant states : where ne means northeast corner and so on .the grid is uniform .the gas is assumed to be ideal with an adiabatic index .we use outflow boundary conditions in all four directions and the cfl number is . the results are shown in fig .[ fig : shocktube ] for four schemes .this problem does not have analytical solutions to compare with , but comparing our result with other groups result shows good agreement .the cross in the lower left corners of ( a ) hll - plm and ( d ) f - weno is a numerical artifact due to the inability to maintain a contact discontinuity perfectly , which are absent in the results using the hllc solver ( c ) .this agreed with the result of that the hllc solver behaves better in this problem than other riemann solvers because of its ability of resolve contact discontinuities .we first study a test problem with uniform grid in cartesian coordinate that has been used by several groups to test the symmetric properties of a three - dimensional srhd code .the initial setup of this test consists of a cold spherical inflow with initially constant density and constant velocity colliding at the box center .this problem is run using three - dimensional cartesian coordinates so it allows one to evaluate the symmetry properties of the code . when gas collides at the center , a reflection shock forms . behind the shock, the kinetic energy will be converted completely into internal energy .thus the downstream velocity is zero and the specific internal energy is given by the upstream specific kinetic energy using the shock jump condition , the compression ratio and the shock velocity can be found to be in the unshocked region ( ) , the gas flow will develop a self - similar density distribution , the initial state are .we chose a small value for pressure because a grid - based code can not handle zero pressure .a cfl number is used for this problem , as other groups .we chose to use llf - plm for this problem because this turns out to be the most stable solver for this problem .[ fig : rssr_1d ] shows the one - dimensional cut though axis and diagonal direction and fig . [fig : rssr_2d ] shows a contour through plane , both at .it can be seen from those plots that our code keeps the original spherical symmetry quite well . since in a cartesian box the simple outflow boundary condition is inconsistent with the initial spherical inflow setup , we evolve this problem only to , at which point all the mass in the original box has just entered the shocked region ( see fig . [fig : rssr_1d ] ) . after that time, the evolution would be affected by the unphysical boundary condition . 
in this test, we study a spherical blast wave in three - dimensional cartesian coordinates .there is no analytical solution for this problem .thus for the sake of comparison , we set up the same problem as other groups .the center of the blast wave source is located at the corner of the box .the initial conditions are where is the distance to the center .an ideal gas with an adiabatic index of is assumed .the left boundaries at x , y , z directions are reflecting while others are outflow . we use a top grid of zones with two more levels of refinement and a refinement factor of 2 for this calculation ( equivalent resolution ) .f - plm is used for the result shown and the cfl number is . the results are given in fig .[ fig : blast3dcorner_1 ] which shows the cut along x - axis and diagonal direction . for comparison, we run a high resolution one - dimensional simulation using spherical coordinates .the three - dimensional run in cartesian coordinates agrees with the one - dimensional high resolution run .furthermore , it can be seen that the spherical symmetry of the initial condition is preserved rather well in the three - dimensional cartesian case .finally we study another blast wave problem for which the center of the blast wave source is located at the box center .this problem also does not have analytical solution but it has been studied by so our result can be compared to theirs .the initial conditions are an ideal gas eos with an adiabatic index of is used .we stop the run at , roughly the same ending time of .a top grid of zones with four levels of refinement and a refinement factor of two is used ( equivalent resolution ) .thus our resolution is roughly times of .we used hll - plm and a cfl number of for this calculation .[ fig : blast3d_1 ] plots the numerical solution for all cells centered on the highest level in the two dimensional slice at at as a function of radius from the center ( 0.5,0.5,0.5 ) .the position and amplitude of the high density shell agrees with the calculation of . andit can be seen that the spherical symmetry is preserved rather well in our code .having validated our code using various test problems , we can apply it to astrophysical problems .we will study two typical astrophysical relativistic flow problems in this work , agn jets and grb jets .both topics have been studied extensively with 2d simulations before , but very few 3d calculations have been done in both cases . consequently we will focus on 3d simulations here . 
in this section ,we study a relativistic supersonic jet in three dimensions .we set up the problem using the same parameters as model c2 of .this model has also been studied in two dimensions by and in three dimensions by .the jet parameters are , and .the jet has a classical mach number the pressure is .the parameters for the medium are , and .the eos is assumed to be ideal with .the jet is injected from the low - z boundary centered at with radius .outflow boundary conditions are used at other part of the boundary .figure [ fig : jet ] shows the result at for the three runs : hll - ppm with three levels of refinement ( hll - ppm - l3 ) , hll - plm with three levels of refinement ( hll - plm - l3 ) and hll - plm with four levels of refinement ( hll - plm - l4 ) .the top grid resolution is zones .thus the first two runs have an equivalent resolution of zones while the last one has zones .for the first two , turbulence in cocoon is not fully developed so the cocoon is still symmetric even in 3d .the hll - ppm - l3 run has slightly more turbulent cocoon due to the higher spatial reconstruction order of ppm .thus the hll - ppm - l3 jet propagates slightly slower than the hll - plm - l3 jet . on the other hand , for the hll - plm - l4 jet ,the resolution is cells per beam radius , comparable to the resolution used in the two - dimensional study by .the cocoon turbulence is much more developed in this case , as in the 2d case .consequently , the hll - plm - l4 jet propagates slower than the two lower resolution ones .furthermore , the hll - plm - l4 case does not show axisymmetry because instability quickly develops in the lateral motion and consequently lateral motion also becomes turbulent . since we found the jet - cocoon structure differs significantly at higher resolution run and consequently the jet propagation speed decreases ,we conclude that even at effective resolution of our three dimensional jet simulations the correct solution remains elusive .moreover , different solvers give disparate answers .the idea that long - soft " gamma - ray bursts are associated with the deaths of massive stars has been supported by various observations recently ( see e.g. woosley & bloom ( 2006 ) for a recent review ) .so it is of great interest to calculate how a relativistic jet propagate through and break out of a massive star .there has been extensive calculations in 2d recently .but very few 3d calculation have been reported in the literature so far .since there are significant turbulent motion and mixing in the jet - cocoon system in 2d calculation , it is important to model this process in 3d .we use the projenitor he16a of , which is a stripped - down helium core with initial mass m .[ star ] shows the density and pressure profiles for the progenitor model .the radius of the star at onset of collapse is cm . 
in our simulation , the mass inside cmis removed and replaced by a point mass of m .the jet has power erg s , initial lorentz factor , the ratio of its total energy ( excluding rest mass energy ) to its kinetic energy .this corresponds to a a jet with initial density g and pressure erg .the jet is injected parallel to the axis with an initial radius cm .those model parameters are similar to previous 2d calculations .we use a simulation box of cm in order to follow the propagation of the jet after break out .we use hll - plm as this should be the most stable and reliable scheme to carry out resolution study .the top grid resolution is .we run the simulation using both 4 and 5 refinement levels , which correspond to resolution of 5.6 and 11 cells per jet beam radius , respectively . in order to always have high resolution for the jet material , we designed a color field refinement strategy in addition to the standard refinement criterion designed for discontinuities .more specifically , we use two color fields to keep track of the injected jet material and the stellar material . those two color fields give us a fraction of jet material at every cell .then whenever a cell contains more than 0.1 percent of jet material , we flag that cell for refinement .this ensures that we also have high resolution when mixing between jet material and star material happens .the results for those two runs are show in fig .[ grbjet ] .it can been seen that the high resolution run gives qualitatively different jet dynamics .while in the low resolution run the jet break out of the star successfully , in the high resolution run the jet head bifurcate at the stellar edge .interestingly , this behavior has also been seen in 2d calculation .but since we did not get convergent behavior so far , we can not conclude at this stage whether this behavior is physical or purely numerical .however , it is safe to conclude that much higher resolution will be needed to model the jet break out in 3d .in this paper , we have described a new code that solves the special relativistic hydrodynamics equations with both spatially and temporally adaptive mesh refinement .it includes direct flux reconstruction and four approximate riemann solvers including hllc , hll , llf and modified marquina flux formula .it contains several reconstruction routines : plm , ppm , third order ceno , and third and fifth order weno schemes .a modular code structure makes it easy to include more physics modules and new algorithms . from our test problems and two astrophysical applications ,it is clear that relativistic flow problems are more difficult than the newtonian case .one key reason is that in the presence of utrarelativistic speed , nonlinear structures such as shocked shells are typically much thinner and thus requires the use of very high spatial resolution .srhd problems also become difficult to solve accurately when significant transverse velocities are present in the problem as we have shown using several one dimensional problems .one reason for this difficulty is that in srhd velocity components are coupled nonlinearly via the lorentz factor . in studying astrophysical jet problems ,we have demonstrated the need of both high resolution achievable only through amr and careful choice of hydrodynamic algorithms .in addition to validate our amr code , the most important implications of the calculations we have done is that _ in relativisitic flow simulations , resolution studies are crucial_. 
we thank greg bryan and michael norman for sharing _ enzo _ with the astrophysical communitywithout which this study could have not been carried out .we would also like to thank miguel aloy for very helpful comment on the draft , roger blandford and lukasz stawarz for helpful discussions .furthermore , we also thank ralf kehler for help with optimizing the code and slac s computing servers for maintaining an sgi altix super computer on which the reported calculations were carried out .this work was partially supported by nsf career award ast-0239709 and grant no . phy05 - 51164 from the national science foundation .w. acknowledges support by the stanford graduate fellowship and kitp graduate fellowship .w. z. has been supported by nasa through chandra postdoctoral fellowship pf4 - 50036 awarded by the _ chandra x - ray observatory _ center , and the doe program for scientific discovery through advanced computing ( scidac ) .oshea , b. w. , bryan , g. , bordner , j. , norman , m. l. , abel , t. , harkness , r. , & kritsuk , a.2004 , in `` adaptive mesh refinement - theory and applications '' , eds .t. plewa , t. linde & v. g. weirs , springer lecture notes in computational science and engineering , 2004 | astrophysical relativistic flow problems require high resolution three - dimensional numerical simulations . in this paper , we describe a new parallel three - dimensional code for simulations of special relativistic hydrodynamics ( srhd ) using both spatially and temporally structured adaptive mesh refinement ( amr ) . we used the method of lines to discretize the srhd equations spatially and a total variation diminishing ( tvd ) runge - kutta scheme for time integration . for spatial reconstruction , we have implemented piecewise linear method ( plm ) , piecewise parabolic method ( ppm ) , third order convex essentially non - oscillatory ( ceno ) and third and fifth order weighted essentially non - oscillatory ( weno ) schemes . flux is computed using either direct flux reconstruction or approximate riemann solvers including hll , modified marquina flux , local lax - friedrichs flux formulas and hllc . the amr part of the code is built on top of the cosmological eulerian amr code _ enzo_. we discuss the coupling of the amr framework with the relativistic solvers . via various test problems , we emphasize the importance of resolution studies in relativistic flow simulations because extremely high resolution is required especially when shear flows are present in the problem . we also present the results of two 3d simulations of astrophysical jets : agn jets and grb jets . resolution study of those two cases further highlights the need of high resolutions to calculate accurately relativistic flow problems . |
it is widely believed that the origin of life required the formation of sets of molecules able to collectively self - replicate , as well as of compartments able to undergo fission and proliferate , i.e._protocells _ .in particular , in order to observe a lifelike behavior it was necessary that some of the chemical reactions were coupled to the rate of proliferation of the compartments .several protocell architectures have been proposed , most of them identifying the compartment with a lipid vesicle that may spontaneously fission under suitable circumstances . on the other hand , many distinct models were proposed to describe sets of reactions involving randomly generated molecules . in many cases , although this is not in principle required , it is assumed that only _ catalyzed _reactions take place at a significant rate , therefore these sets are also termed _ catalytic reaction sets _ ( briefly , crss ) .it is worth noting that the appearance of new molecules implies the appearance of new reactions involving those new molecules , so that both the set of molecular types and the set of reactions change in time .hence , it is possible that at a certain time a set of molecules able to catalyze each other s formation emerges , and we will refer to it as an _ autocatalytic set _ ( acs ) .it can be noticed that a crs can contain one or more acss , or none . + even though some models of protocell actually describe the coupling between reaction networks and the dynamics of a lipid container , they consider only a fixed set of molecular species and reactions , hence providing an incomplete representation of this complex interplay .conversely , while there are several studies on collectively self replicating sets of molecules in a _ continuously stirred open - flow tank reactor _, cstr including our own , they provide only limited information about the behavior of a protocell .therefore , in order to develop a framework that may unify the crss and the protocell modeling approaches , it is necessary i ) to analyze the behavior of crss in a vesicle , and ii ) to investigate the coupling of the evolving chemical population with the growth of the lipid container and its fission . in this paper we propose a step towards the first goal , while deferring the second one to a further work .in particular , we here analyze the behavior of a dynamical model of crss in a simplified model of a non - growing vesicle . 
to the best of our knowledgethis is a novel approach .+ a few important remarks .let us first observe that the cstr is not an _ a - priori _ good model of a protocell for at least two reasons : in general , in protocells there is no constant inflow and protocells have semipermeable membranes , which allow the inflow / outflow of some , but not all , molecular types .on the contrary , in open flow reactors all that is contained in the inflow enters the reactor and all that is dissolved in the reactor can be washed out in the outflow .another important limit of the cstr concerns its evolvability .it has been argued that the presence of different asymptotic dynamical states and the ability to shift between them may be essential to achieve the viable evolution of the first forms of life .recent works have found that , in models of catalytic reaction networks in cstrs , generally only one of these states is found , apart from fluctuations .+ furthermore , in order to accomplish the goal of this work , we need to better specify both the model of catalytic reactions sets and that of the protocell .as far as the former is concerned , we have studied the dynamics of random sets of molecules by revisiting a model by kauffman , who proposed an interesting way to build new molecular species from the existing ones ( see section [ sec : model ] for a description ) . the original version of the model relied on purely graph - theoretical arguments , which are important , but fail to appreciate the effects of the dynamics , including noise , fluctuations and small - number effects .the dynamics has been later introduced by farmer et al . , who described the kinetics by using ordinary differential equations .however , this formalism does not account forthe chance of a species to become extinct in a finite amount of time , as it may instead well happen ( so the reaction graph may grow but never shrinks ) . in order to overcome these limitations ,bagley proposed an empirical correction by setting to zero the concentration values that happen to fall below a certain threshold . in our workswe rather use from the very beginning a stochastic approach to analyze the dynamics , the well - known gillespie algorithm , in order to deal in a rigorous way with low concentrations and with their fluctuations .+ note that the kauffman model largely relies upon randomness .in particular , every polymer in the system has a fixed probability ( that may vanish ) to catalyze any possible reaction .therefore , in different simulations the same species can catalyze different reactions leading to the formation of different _chemistries_. thus , this is exactly the language we choose : a set of tuples , where the species catalyzes the reaction , will be called chemistry , because it describes a possible artificial world .we can then simulate different chemistries and look for generic properties of the set of chemistries ; but in a different series of experiments we can also keep the chemistry fixed , and simulate various time histories . in principle , these may differ , since the discovery of a given catalyst at an early phase in a finite system might channel the following evolution in a way or another . 
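A minimal sketch of the Gillespie-style stochastic dynamics referred to above: given a fixed chemistry, i.e. a list of catalyzed reactions, the molecule counts are advanced event by event, drawing the waiting time to the next reaction from the total propensity and then the identity of the reaction that fires. The toy chemistry (a single condensation and a single cleavage between species A, B and AB), the rate constants and the initial counts are arbitrary stand-ins, not the settings of our simulator.

```python
import numpy as np

rng = np.random.default_rng(1)

species = ["A", "B", "AB"]
x = np.array([500, 500, 10])                 # molecule counts (illustrative)
reactions = [
    # (reactant stoichiometry, product stoichiometry, catalyst index, rate constant)
    (np.array([1, 1, 0]), np.array([0, 0, 1]), 2, 1e-4),   # A + B -> AB, catalyzed by AB
    (np.array([0, 0, 1]), np.array([1, 1, 0]), 0, 1e-3),   # AB -> A + B, catalyzed by A
]

def propensity(x, reaction):
    reactants, _, cat, k = reaction
    a = k * x[cat]                            # only catalyzed reactions fire
    for i, nu in enumerate(reactants):
        for m in range(nu):
            a *= x[i] - m                     # mass-action combinatorics
    return max(float(a), 0.0)

t, t_end = 0.0, 50.0
while t < t_end:
    a = np.array([propensity(x, r) for r in reactions])
    a0 = a.sum()
    if a0 == 0.0:
        break                                 # nothing can fire any more
    t += rng.exponential(1.0 / a0)            # waiting time to the next event
    j = rng.choice(len(reactions), p=a / a0)  # which reaction fires
    x = x + reactions[j][1] - reactions[j][0]
print(dict(zip(species, x.tolist())))
```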
since the number of molecules of some species may be very small ,it is not in principle legitimate to ignore this aspect , and our stochastic model is particularly well suited to analyze it , as it will be shown in section [ sec : model ] .of course , there can be conditions where all the simulations of a given chemistry converge asymptotically to the same chemical mixture . + moving now to the protocell model , note that they are usually based on lipid vesicles , i.e. approximately spherical structures with an aqueous interior and a membrane composed by a lipid bilayer , which spontaneously form when lipids are mixed with water under certain conditions .even though different protocell architectures have been proposed , we will here consider the simplest model , namely that in which all the key reactions take place in the aqueous phase _ inside _ the protocell .it would be indeed straightforward to model the coupling between some of these molecules and the growth of the protocell following an approach similar to that of our previous studies .yet , the main objective of the present work is that of studying the dynamics of crss embedded in a vesicle , so we will simplify our treatment by ignoring the growth dynamics of the protocell , and keeping its volume fixed .this implies that our study will be limited to time intervals that are short with respect to those describing the growth of the whole protocell .+ the selective character of membranes is a key ingredient of our model : we will suppose for simplicity that all ( and only ) the molecules that are shorter than a certain length can cross the membrane .the transmembrane motion of the permeable species is here supposed to be driven by the difference of their concentrations in the internal aqueous volume of the protocell and in the external aqueous environment .we will assume that transmembrane diffusion is extremely fast , so that there is always equilibrium between the concentrations of the species that can cross the membrane ; this adiabatic hypothesis could be easily relaxed in the future .furthermore , we assume that protocells are turgid , so that the constant - volume approximation implies that we will also neglect issues related to osmotic pressure .another related aspect of the model is that , since it is assumed that the permeable species are at equilibrium , while the non - permeable ones never cross the barriers , infinite concentration growth is possible ; this is obviously a nonphysical behavior , so the model validity is limited in time .all these simplifications , which will be removed in subsequent studies , are also justified by the fact that our main goal is that of studying how the dynamics of crss are affected by being embedded in a vesicle .+ this model can be used in order to investigate the behavior of the system in different conditions and to address some important questions .the first and perhaps most important one is the reason why compartments seem to be necessary for life .indeed , the very first studies on self - replicating molecules were not interested in this aspect , so the crss were supposed to exist , e.g. , in a pond or in a beaker . yet life seems to require compartments , that are ubiquitous .it is then important to understand whether there are major differences between what may happen in a protocell and what happens in the bulk phase. 
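The membrane rule described above can be sketched as follows: species up to a length cutoff permeate and are instantaneously redistributed so that internal and external concentrations coincide, while longer species never cross. The cutoff, the volumes and the counts below are illustrative; treating the external environment as finite (and conserving molecules) is one possible reading, a buffered reservoir that simply fixes the external concentration being the other.

```python
# Sketch of the fast-permeation rule: species no longer than `cutoff` monomers are
# redistributed so that internal and external concentrations are equal; longer
# species stay where they are. Values are illustrative.
V_in, V_out = 1.0, 1000.0                     # volumes in arbitrary units
cutoff = 2                                    # maximum permeable length (assumption)

n_in = {"A": 50, "AB": 30, "ABBA": 5}             # molecule counts inside
n_out = {"A": 40000, "AB": 20000, "ABBA": 0}      # and outside

for sp in n_in:
    if len(sp) <= cutoff:                     # permeable species equilibrate
        total = n_in[sp] + n_out[sp]
        n_in[sp] = round(total * V_in / (V_in + V_out))
        n_out[sp] = total - n_in[sp]
print(n_in)
```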
it would be unconvincing to postulate _ a priori _ that the internal and external environments are different .it is indeed more likely to assume that the vesicles form in a pre - existing aqueous environment , so the average internal milieu is essentially the same as the external one .then , if a membrane surrounds a portion of the fluid , what can happen that makes a difference ?+ let us first observe that protocells are small ( their typical linear dimensions ranging from to ) .if we imagine that a population of protocells exists , and they are not overcrowed , their total internal volume will typically be much smaller than the total external volume ( this is _ a fortiori _true for an isolated one ) . moreover ,every point in the interior of a protocell is not allowed to be far away from the surface of the protocell that contains it .these observations imply that the effect of surfaces will be much larger within protocells then outside them .suppose for example that the membrane hosts some catalytic activities , so that important molecules are synthesized close to its boundaries , both inside and outside , and diffuse freely .if the membrane width is much smaller than the protocell radius , then the internal and external surface areas are very close to each other , but the external volume is much larger than the external one : therefore the internal concentrations will be much higher than those in the external environment . in this case , the system behavior in the interior can be significantly different from the external one .note also that this effect may be different for different molecules : the formation of some of them might be catalyzed by the membrane , while others might be unaffected : so even the relative concentrations of different chemicals may differ in the two cases .+ indeed , there are important protocell models that are based on such an active catalytic role of the membrane . in these casesit is easy to understand what the role of the protocell is , since it provides essential catalysts and a way to keep their products closer . 
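Both effects are easy to quantify. For a spherical vesicle the surface-to-volume ratio is 3/r, so products released at the membrane accumulate inside a micron-sized compartment far more than in the essentially unbounded exterior; and the expected copy number of a species at concentration c in a volume V is simply c N_A V, which for realistic vesicle sizes and concentrations can drop to a handful of molecules or fewer (this is the content of table [tab:expnumofmols] below). A quick numerical check, with radii and concentrations chosen only as illustrations:

```python
import math

N_A = 6.022e23                        # Avogadro's number [1/mol]

for radius_um in (0.5, 0.05):         # a micron-sized vesicle and a small one
    r = radius_um * 1e-6              # radius in metres
    volume_l = (4.0 / 3.0) * math.pi * r**3 * 1e3   # m^3 -> litres
    print(f"r = {radius_um} um, surface/volume = {3.0 / r:.1e} 1/m")
    for conc_molar in (1e-3, 1e-6, 1e-9):           # mM, uM, nM
        n = conc_molar * N_A * volume_l
        print(f"   c = {conc_molar:.0e} M  ->  ~{n:.2g} molecules")
```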
but protocells might be able to give rise to an internal environment different from the bulk even if the catalytic activity is absent. the reason for this seemingly counterintuitive behavior is, once again, the smallness of the protocells. note that we are considering a case where new molecules are formed (from those that are already in the interior of the protocell plus those that can cross the semipermeable membrane). if the concentrations are not too high, it is likely that the total numbers of newly formed molecules are quite low, so that different protocells might host different groups of molecules. it might even happen that a molecular type is present in some protocells and not in others. + in order to get a feeling for this possibility, let us provide some realistic estimates of the number of molecules of different types that can be present in a protocell. let us consider typical vesicles (linear dimension about ) and small ones ( ). typical concentrations of macromolecules may be in the millimolar to nanomolar range; the expected numbers of molecules in a single protocell are therefore given in table [tab:expnumofmols]. .expected number of molecules of a given species in a given protocell; rows refer to protocell volumes, columns to concentrations. [cols="^,^,^,^,^,^",options="header",] in this paper we introduced a simplified model of a non-growing protocell and we investigated the behavior of a stochastic model of catalytic reaction networks in such an environment. to the best of our knowledge this is a novel approach. the crucial importance of the small size of the protocell has been stressed, and the effects of the fact that some chemicals can be present in low numbers have been investigated. while a broader analysis is ongoing, we have shown here that different compositions of the chemical species can be reached, in particular when some species are present in the bulk at low concentrations. we have also shown that there are two different, possibly overlapping reasons for this diversity: the random sequence of molecular events involving those species and the random differences in their initial concentrations. we have also stressed the importance of raf sets in influencing the overall dynamics. there are several ways in which this work might seed further research. the most obvious is that of relaxing the physical simplifications that have been adopted, e.g. infinitely fast diffusion, yet we do not expect this to change the major conclusions summarized above. obviously, a very interesting direction is that of considering a protocell that is able to grow and divide. the processes involved in protocell growth and replication are indeed complex and, in particular, a necessary condition for sustained existence and replication is the coupling between the rate of molecule replication and that of cell growth. we have shown elsewhere that the very existence of this coupling suffices to guarantee (under very general conditions) that, in the long run, the rate of cell division and that of duplication of the replicating molecules converge to the same value, thereby allowing sustainable growth of a population of protocells.
however, those results were achieved by supposing a fixed set of genetic memory molecules , with some possible extinction .it could be sound to extend this approach to the case where there are evolving chemical reaction sets and to verify whether synchronization occurs .an important aspect to be addressed in the case of growing vesicles is also the effect of volume growth on the concentrations of the various chemicals : a preliminary investigation can be found in .besides , we have not here explicitly considered the possibly catalytic role of membranes that , as it has been discussed in section [ sec : intro ] , might be a major cause of the difference between the intracellular environment(s ) and the bulk . in a fixed volume model this effect can be lumped in the effective reaction rates , but if we consider a growing protocell we have to take into account the differences between surfaces and volumes .this might also lead to interesting phenomena that will be analyzed in future developments . to conclude, different protocells may host different mixtures of molecular species , even if they share the same chemistry ( i.e. , they inhabit the same world ) .it might be extremely interesting to model the behavior of populations of different protocells of this kind , which may show different growth rates , but may also undergo phenomena like coalescence , exchange of material , etc . thus , further investigations will be indeed necessary to assess different generations of protocell populations and their possible evolution pathways .+ last but not least , it will be interesting to extend these studies to other protocell architectures like e.g. the los alamos bug , gard models , and others .stuart kauffman , norman packard and wim hordijk kindly shared with us their deep understanding of autocatalytic sets in several useful discussions .useful discussions with ruedi fchslin , davide de lucrezia , timoteo carletti , andrea roli and giulio caravagna are also gratefully acknowledged .the authors are also grateful to giulia begal for kindly drawing the image of fig .[ fig : protocell ] .c.d . wishes to acknowledge the project sysbionet ( 12 - 4 - 5148000 - 15 ; imp .611/12 ; cup : h41j12000060001 ; u.a .53 ) for the financial support of the work .+ the final publication is available at springer via http://dx.doi.org/10.1007/s11047-014-9445-6 + +simulations were performed with the carness simulator developed by the research group . + in the following ,the baseline setting of the system used in the simulations is reported ( for the parameters that were variated in the different experiments please refer to the text ) : alphabet : a , b , volume = , average catalysis probability = 1 catalyzed reaction for species , maximum length of the species , , , monomers and dimers do not catalyze , , , , .a filisetti , a graudenzi , r serra , m villani , d de lucrezia , and i poli . . in lenaerts t , giacobini m ,h bersini , p bourgine , m dorigo , and r doursat , editors , _ advances in artificial life , ecal 2011 proceedings of the eleventh european conference on the synthesis and simulation of living systems _ , pages 227234 .mit press , cambridge , ma , 2011 .roberto serra , timoteo carletti , irene poli , marco villani , and alessandro filisetti . .in _ in j. jost and d. helbing ( eds ) : proceedings of eccs07 : european conference on complex systems .cd - rom , paper n.68 _ , 2007 . | protocells are supposed to have played a key role in the self - organizing processes leading to the emergence of life . 
existing models either describe protocell architecture and dynamics, taking the existence of sets of collectively self-replicating molecules for granted, or describe the emergence of such sets from an ensemble of random molecules in a simple experimental setting (e.g. a closed system or a steady-state flow reactor) that does not properly describe a protocell. in this paper we present a model that goes beyond these limitations by describing the dynamics of sets of replicating molecules within a lipid vesicle. we adopt the simplest possible protocell architecture, considering a semi-permeable membrane that selects the molecular types allowed to enter or exit the protocell, and assuming that the reactions take place in the aqueous phase of the internal compartment. as a first approximation, we ignore the protocell growth and division dynamics. the behavior of catalytic reaction networks is then simulated by means of a stochastic model that accounts for the creation and the extinction of species and reactions. while this is not yet an exhaustive protocell model, it already provides clues about processes that are relevant for understanding the conditions that can enable a population of protocells to undergo evolution and selection. * keywords: * autocatalytic sets of molecules, catalytic reaction sets, origin of life, stochastic simulations, protocell |
although machine learning approaches have achieved success in many areas of natural language processing , researchers have only recently begun to investigate applying machine learning methods to discourse - level problems ( litman 1994 , andernach 1996 , reithinger & klesen 1997 , wiebe et al .1997 , dieugenio , moore , & paolucci 1997 ) .an important task in discourse understanding is to interpret an utterance s * dialogue act * , which is a concise abstraction of the speaker s intention ; figure [ ex - das ] presents a hypothetical dialogue that has been labeled with dialogue acts . recognizing dialogue acts is critical for discourse - level understanding and can also be useful for other applications , such as resolving ambiguity in speech recognition .however , computing dialogue acts is a challenging task , because often a dialogue act can not be directly inferred from a literal interpretation of an utterance . [ cols="^,^,<,^",options="header " , ] the effect of this modified filter varies dramatically , removing 23% ( 3224 ) to 72% ( 10,237 ) of the 14,231 phrases , as shown in figure [ filter - results ] .however , figure [ modified - filter ] shows that , as expected , using the filter does not cause the accuracy to decrease .in addition , it allows the system to maintain a high accuracy with fewer phrases .in particular , dcp s accuracy is significantly higher than all s accuracy when using only 5% ( 712 ) of the phrases in all .this suggests that the filter is effectively removing redundant phrases , to produce a more parsimonious set of phrases .this paper presented an investigation of various methods for selecting useful phrases .we argued that the traditional method of selecting phrases , in which a human researcher analyzes discourse and chooses general cue phrases by intuition , could miss useful phrases .to address this problem , we introduced _automatic _ methods that use a tagged training corpus to select phrases , and our experimental results demonstrated that these methods can outperform the manual approach .another advantage of automatic methods is that they can be easily transferred to another tagged corpus .our experiments also showed that the effectiveness of different methods on the dialogue act tagging task varied significantly , when using relatively small sets of phrases .the method that used our new metric , dcp , produced significantly higher accuracy scores than any of the baselines or traditional metrics that we analyzed .in addition , we hypothesized that repetitive phrases should be eliminated in order to produce a more concise set of phrases .our experimental results showed that our modified lexical filter can eliminate many redundant phrases without compromising accuracy , enabling the system to label dialogue acts effectively using only 5% of the phrases .there are a number of research areas that we would like to investigate in the future , including the following : we intend to experiment with different weightings of unsoundness and incompleteness in the dcp metric ; we believe that the simple lexical filter presented in this paper can be enhanced to improve it ; we would like to study the merits of enforcing frequency thresholds for methods that have a frequency bias ; for the semantic - clustering technique , we selected the clusters of words by hand , but it would be interesting to see how a taxonomy , such as wordnet , could be used to automate this process ; since all of the experiments in this paper were run on a single corpus , in order to show 
that these results may generalize to other tasks and domains , it would be necessary to run the experiments on different corpora .the members of the verbmobil research group at dfki in germany , including norbert reithinger , jan alexandersson , and elisabeth maier , generously granted us access to the verbmobil corpora .this work was partially supported by the nsf grant # ger-9354869 . , b. , j. moore , and m. paolucci .1997 . learning features that predict cue usage . in _ proceedings of the 35th annual meeting of the association for computational linguistics and the 8th conference of the european chapter of the association for computational linguistics_. 8087 .madrid , spain . , p. , d. byron , and j. allen .1998 . identifying discourse markers in spoken dialog . in _ applying machine learning to discourse processing :papers from the 1998 american association for artificial intelligence spring symposium_. 4451 .stanford , california . ,d. 1994 . classifying cue phrases in text and speech using machine learning . in _ proceedings of the twelfth national conference of the american association for artificial intelligence_. 806813 .seattle , washington . , l. and m. marcus .1994 . exploring the statistical derivation of transformation rule sequences forpart - of - speech tagging . in _ proceedings of the 32nd annual meeting of the association for computational linguistics_. 8695 .las cruces , new mexico . balancing act workshop . , k. , s. carberry , and k. vijay - shanker .1998a . computing dialogue acts from features with transformation - based learning . in _ applying machine learning to discourse processing : papers from the 1998 american association for artificial intelligence spring symposium_. 9097 .stanford , california . , k. , s. carberry , and k. vijay - shanker .dialogue act tagging with transformation - based learning . in _ proceedings of the 17th international conference on computational linguistics and the 36th annual meeting of the association for computational linguistics_. 11501156 .montral , qubec , canada . , j. , t. ohara , k. mckeever , and t. hrstroem - sandgren .an empirical approach to temporal reference resolution . in _ proceedings of the second conference on empirical methods in natural language processing_. 174186 .providence , rhode island . ,i. and j. pearl .comprehension - driven generation of meta - technical utterances in math tutoring . in _ proceedings of the sixth national conference of the american association for artificial intelligence_. philadelphia , pennsylvania . | we present an empirical investigation of various ways to _ automatically _ identify phrases in a tagged corpus that are useful for dialogue act tagging . we found that a new method ( which measures a phrase s deviation from an optimally - predictive phrase ) , enhanced with a lexical filtering mechanism , produces significantly better cues than manually - selected cue phrases , the exhaustive set of phrases in a training corpus , and phrases chosen by traditional metrics , like mutual information and information gain . |
some random phenomena occur at discrete times or locations , with the individual events largely identical , such as a sequence of neural action potentials .a stochastic point process is a mathematical construction which represents these events as random points in a space .fractal stochastic point processes exhibit scaling in all of the statistics considered in this paper ; fractal - rate stochastic point processes do so in some of them . in this workwe consider the simulation and estimation of fractal and fractal - rate stochastic point processes on a line , which model a variety of observed phenomena in the physical and biological sciences .this work provides an extension and generalization of an earlier paper along these lines .figure [ basics ] shows several representations that are useful in the analysis of point processes .figure [ basics](a ) demonstrates a sample function of a point process as a series of impulses occurring at specified times . since these impulses have vanishing width , they are most rigorously defined as the derivative of a well - defined counting process [ fig .[ basics](b ) ] , a monotonically increasing function of , that augments by unity when an event occurs . accordingly, the point process itself is properly written as , since it is only strictly defined within the context of an integral .the point process is completely described by the set of event times , or equivalently by the set of interevent intervals .however , the sequence of counts depicted in fig . [ basics](c ) also contains much information about the process . here the time axis is divided into equally spaced contiguous counting windows of duration sec to produce a sequence of counts , where - n[kt] ] denotes the occurrence of at least one event of the point process in the interval and is the delay time . for an ideal fractal or fractal - rate stochastic point process with a fractal exponent in the range , the coincidence rate assumes the form , \label{fractalcr}\ ] ] where is the mean rate of the process , denotes the dirac delta function , and is a constant representing the fractal onset time .a stationary , regular point process with a cr following this form for _ all _ delay times exhibits infinite mean .further , statistics of fractal data sets collected from experiments exhibit scaling only over a finite range of times and frequencies , as determined by the resolution of the experimental apparatus and the duration of the experiment . nevertheless , in much of the following we employ eq .( [ fractalcr ] ) without cutoffs since we find that the cutoffs do not significantly affect the mathematical results in many cases .we employ similar ideal forms for other second - order measures defined later in this paper for the same reasons .the coincidence rate can be directly estimated from its definition . however , in practice the cr is a noisy measure , since its definition essentially involves a double derivative .furthermore , for fspps and frspps typical of physical and biological systems , the cr exceeds its asymptotic value by only a small fraction at any large but practical value of , so that determining the fractal exponent with this small excess presents serious difficulties .therefore we do not specifically apply this measure to the lgn data , although the formal definition of coincidence rate plays a useful role in developing other , more reliable measures .the power spectral density ( psd ) is a familiar and well - established measure for continuous - time processes . 
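since several of the measures discussed below (the fano factor, the allan factor and the count-based periodogram) are computed from the sequence of counts rather than from the raw event times, a minimal sketch of that binning step is given here; the surrogate poisson event train and the window duration are placeholders for real data.

```python
# minimal sketch: forming the counting sequence z_k of fig. [basics](c)
# from a list of event times; the synthetic poisson events and the window
# duration are illustrative placeholders, not the lgn data.
import random

random.seed(0)
rate, duration = 10.0, 1000.0          # mean rate (events/s) and record length (s)
events, t = [], 0.0
while True:                            # homogeneous poisson surrogate data
    t += random.expovariate(rate)
    if t > duration:
        break
    events.append(t)

T = 1.0                                # counting-window duration (s)
n_windows = int(duration // T)
counts = [0] * n_windows
for ev in events:
    k = int(ev // T)
    if k < n_windows:
        counts[k] += 1                 # z_k = n[(k+1)T] - n[kT]

mean_rate = sum(counts) / (n_windows * T)
print(f"estimated mean rate: {mean_rate:.2f} events/s")
```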
for pointprocesses the psd and the cr introduced above form a fourier transform pair , much like the psd and the autocorrelation function do for continuous - time processes .the psd provides a measure of how the power in a process is concentrated in various frequency bands . for an ideal fractal point process, the psd assumes the form , \label{fractalpsd}\ ] ] for relevant time and frequency ranges , where is the mean rate of events and is a cutoff radian frequency . the psd of a point process can be estimated using the periodogram ( pg ) of the sequence of counts , rather than from the point process itself .this method introduces a bias at higher frequencies , since the fine time resolution information is lost as a result of the minimum count - window size . nevertheless ,since estimating the fractal exponent principally involves lower frequencies where this bias is negligible , and employing the sequence of counts permits the use of vastly more efficient fast fourier transform methods , we prefer this technique . alternate definitions of the psd for point processes ( and thus for the pg used to estimate them ) exist ; for example , a different psd may be obtained from the real - valued discrete - time sequence of the interevent intervals .however , features in this psd can not be interpreted in terms of temporal frequency .figure [ vispp ] displays the pg for the visual system lgn data calculated using the count - based approach . throughout the text of this paperwe employ radian frequency ( radians per unit time ) to simplify the analysis , while figures are plotted in common frequency ( cycles per unit time ) in accordance with common usage . for low frequencies ,the pg decays as , as expected for a fractal point process . fitting a straight line ( shown as dotted ) to the doubly logarithmic plot of the pg , over the range from 0.002 hz to 1 hz , provides an estimate .another measure of correlation over different time scales is provided by the fano factor ( ff ) , which is the variance of the number of events in a specified counting time divided by the mean number of events in that counting time .this measure is sometimes called the index of dispersion of counts . in terms of the sequence of counts illustrated in fig .[ basics](c ) , the fano factor is simply the variance of divided by the mean of , i.e. 
, -{\mbox{\rm e}}^{2}[z_{k}]}{{\mbox{\rm e}}[z_{k}]},\ ] ] where ] may be recast as the variance of the integral of the point process under study multiplied by the following function : equation ( [ haar ] ) defines a scaled wavelet function , specifically the haar wavelet .this can be generalized to any admissible wavelet ; when suitably normalized , the result is a wavelet allan factor ( waf ) .this generalization enables an increased range of fractal exponents to be estimated , at the cost of reducing the range of times over which the waf varies as .in particular , for a particular wavelet with regularity ( number of vanishing moments ) , fractal exponents can be reliably estimated .for the haar basis , , while all other wavelet bases have .thus the waf employing bases other than the haar should prove useful in connection with frspps for which ; for processes with , however , the af appears to be the best choice .for an ideal fspp or frspp with , any one of eqs .( [ fractalcr ] ) , ( [ fractalpsd ] ) , ( [ fractalff ] ) , and ( [ fractalaf ] ) implies the other three , with for larger values of ,the ff and cr do not scale , although the psd and af do ; thus eqs .( [ fractalpsd ] ) , and ( [ fractalaf ] ) imply each other for these frspps .therefore , over the range of for which the af exhibits scaling in principle , any of the these statistics could be solved for when it is known that , although best results obtain from the pg and the af . since the gd of a fractal can not exceed the euclidean dimension , which assumes a value of unity for one - dimensional point processes, fspps can not exist for ; only frspps are possible at these values of the fractal exponent .in previous work we defined an fspp and several frspps and derived a number of their statistics .we include brief summaries of these here , as well as of the clustered poisson point - process model of grneis and colleagues .we also introduce two new fractal - rate processes ( fractal lognormal noise and fractal exponential noise ) and define two new methods for generating a point processes from a rate function : integrate - and - fire and controlled variability integrate - and - fire .these prove useful in isolating the effects of fractal components from those of nonfractal noise introduced through the mechanics of generating the point process .for all of the processes considered , the scaling relation in eq .( [ scale0 ] ) holds for a number of statistical measures .the one - dimensional homogeneous poisson point process ( hpp ) is perhaps the simplest nonfractal stochastic point process .the hpp is characterized by a single constant quantity , its rate , which is the number of events expected to occur in a unit interval .a fundamental property of the hpp is that it is memoryless ; given this rate , knowledge of the entire history and future of a given realization of a hpp yields no information about the behavior of the process at the present .the hpp belongs to the class of renewal point processes ; times between events are independent and identically distributed .the hpp is the particular renewal point process for which this distribution follows a decaying exponential form .we now turn to a renewal process that is fractal : the standard fractal renewal process ( frp ) . 
in the standard frp ,the times between adjacent events are independent random variables drawn from the same fractal probability distribution .in particular , the interevent - interval probability density function decays essentially as a power law function where is the fractal exponent , and are cutoff parameters , and is a normalization constant. the frp exhibits fractal behavior over time scales lying between and .this process is fully fractal for : the power spectral density , coincidence rate , fano factor , allan factor , and even the interevent - time survivor function all exhibit scaling as in eq .( [ scale0 ] ) with the same power - law exponent or one simply related to it .further , for this process the capacity or box - counting dimension assumes the value ; since the frp is ergodic , the generalized fractal dimension becomes independent of the index , so for all , and all fractal dimensions coincide .a different ( nonrenewal ) point process results from the superposition of a number of independent frps ; however , for this combined process as becomes large , and indeed for any frspp , the interevent - time probability density function no longer scales , and the generalized dimensions no longer equal , although the pg and af ( and the ff and cr , for ) retain their scaling behavior .as the number of such processes increases , and for certain ranges of parameters , this superposition ultimately converges to the fractal gaussian - noise driven poisson process ( see also secs . [ fgn ] and [ fdspp ] ) .the standard frp described above is a point process , consisting of a set of points or marks on the time axis as shown in fig .[ frp_diag](a ) ; however , it may be recast as a real - valued process which alternates between two values , for example zero and unity .this alternating frp would then start at a value of zero ( for example ) , and then switch to a value of unity at a time corresponding to the first event in the standard frp . at the second such event in the standard frp , the alternating frp would switch back to zero , and would proceed to switch back and forth at every successive event of the standard frp .thus the alternating frp is a bernoulli process , with times between transitions given by the same interevent - interval probability density as in the standard frp , as portrayed in fig .[ frp_diag](b ) .many point processes derive from a continuous - time function which serves as the stochastically varying rate of the point process .these are known as fractal - rate stochastic processes ( frspps ) .we confine our discussion to rate processes ( and therefore frspps ) which are ergodic .perhaps the simplest means for generating a point process from a rate is the integrate - and - fire ( if ) method . in this model ,the rate function is integrated until it reaches a fixed threshold , whereupon a point event is generated and the integrator is reset to zero .thus the occurrence time of the event is implicitly obtained from the first occurrence of with such a direct conversion from the rate process to the point process , any measure applied to these point processes will return results closely related to the fractal structure of the underlying rate process . 
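a minimal sketch of this integrate-and-fire conversion is given below; the discrete-time, sinusoidally modulated rate is only a placeholder for the fractal rate processes introduced in the following subsections, and the unit threshold and one-second sampling interval are illustrative choices.

```python
# minimal integrate-and-fire sketch: the sampled rate is integrated and an
# event is emitted each time the integral crosses the threshold, after which
# the integrator is reset to zero; the demo rate below is a placeholder.
import math

def integrate_and_fire(rate_samples, dt=1.0, threshold=1.0):
    events, accumulator, t = [], 0.0, 0.0
    for lam in rate_samples:           # rate assumed constant over each sample
        remaining = dt
        while lam > 0.0 and accumulator + lam * remaining >= threshold:
            dt_cross = (threshold - accumulator) / lam   # time to reach threshold
            t += dt_cross
            remaining -= dt_cross
            events.append(t)
            accumulator = 0.0          # reset after each firing
        accumulator += lam * remaining
        t += remaining
    return events

rate = [5.0 + 2.0 * math.sin(2.0 * math.pi * k / 50.0) for k in range(200)]
print(len(integrate_and_fire(rate)), "events generated from", len(rate), "rate samples")
```

the same routine can be driven by fractal gaussian noise, fractal lognormal noise, fractal binomial noise, or fractal shot noise simply by changing the rate_samples array.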
in particular , over small frequencies , the theoretical psds of the rate process and of the resulting point process coincide .we now turn to methods for generating several different kinds of fractal rate functions .more complex methods for point - process generation , as delineated in secs .[ fjif ] and [ fdspp ] , can also be applied to these same rate functions .gaussian processes are ubiquitous in nature and are mathematically tractable .the values of these processes at any number of fixed times form a jointly gaussian random vector ; this property , the mean , and the spectrum completely define the process . for use as a rate process, we choose a stationary process with a mean equal to the expected rate of events , and a spectrum of the form of eq .( [ fractalpsd ] ) with cutoffs .fgn properly applies only for ; the range is generated by fractal brownian motion .however , in the interest of simplicity , we employ the term fractal gaussian noise for all values of .a number of methods exist for generating fgn : typically the result is a sampled version of fgn with equal spacing between samples .interpolation between these samples is usually achieved by selecting the most recently defined value . with a rate process described by fgn serving as the input to an if process , the fractal - gaussian - noise driven integrate - and - fire process ( fgnif ) results .generally , we require that the mean of the rate process be much larger than its standard deviation , so that the times when the rate is negative ( during which no events may be generated ) remain small .the fgnif has successfully been used to model the human heartbeat ( see sec . [ heartbeat ] ) .all of the rate processes considered in the following subsections converge to fgn under appropriate circumstances , and thus the point processes they yield through the if construct converge to the fgnif . a related process results from passing fgn through a memoryless exponential transform .since the exponential of a gaussian is lognormal , we call this process fractal lognormal noise ( fln ) .if denotes an fgn process with mean ] , and autocovariance function , then ] and autocovariance function \{\exp[k_x(\tau ) ] - 1\} ] , as encountered in modeling exocytosis ( for which \le 0.6 ] , the exponential operation approaches a linear transform , and the rate process becomes fgn .other nonlinear transforms may be applied to fgn , yielding other fractal rate processes ; we consider one which generates an exponentially distributed process . if and denote two independent and identically distributed fgn processes with zero mean , variance ] and autocovariance function .the rate turns out to be exponentially distributed .if scales as in eq .( [ fractalcr ] ) with an exponent in the range , then so will , but with an exponent .this process may prove useful in the study of certain kinds of thermal light .such light has an electric field which comprises two independent components , with gaussian amplitude distributions .the light intensity is the sum of the squares of the two fields , and therefore has an exponential amplitude distribution .a number of independent , identical alternating fractal renewal processes ( see sec . 
[ frp_def ] ) may be added together , yielding a binomial process with the same fractal exponent as each individual alternating frp .this binomial process can serve as a rate process for an integrate - and - fire process ; the fractal - binomial - noise driven integrate - and - fire ( fbnif ) results .it is schematized in fig .[ fbndp_dg ] .as the number of constituent processes increases , the fbn rate converges to fgn ( see sec .[ fgn ] ) with the same fractal exponent . though the hpp itself is not fractal , linearly filtered versions of it , denoted shot noise , can exhibit fractal characteristics . in particular , if the impulse response function of the filter has a decaying power - law ( fractal ) form , the result is fractal shot noise ( fsn ) . if fsn serves as the rate for an if process , the fractal - shot - noise driven integrate - and - fire process ( fsnif ) results ; fig .[ fsndp_dg ] schematically illustrates the fsndp as a two - stage process .the first stage produces a hpp with constant rate .its output becomes the input to a linear filter with a power - law decaying impulse response function where is a fractal exponent between zero and two , and are cutoff parameters , and is a normalization constant .this filter produces fractal shot noise at its output , which then becomes the time - varying rate for the last stage , an integrate - and - fire process .the resulting point process reflects the variations of the fractal - shot - noise driving process . under suitable conditions, fsn converges to fgn as provided by the central limit theorem ; the result is then the fgnif .frspps based on an integrate - and - fire substrate have only one source of randomness , which is the rate process . those based on a poisson - process substrate ( see sec .[ fdspp ] ) have a second source which depends explicitly on the rate process .we now generalize the family of frspps to include those which have a second source of randomness that may be specified independently of the first .this new process differs from the simple if process based on , by the imposition of a specific form of jitter on the interevent times . after generating the time of the event in accordance with eq .( [ ifdef ] ) , the interevent time is multiplied by a gaussian - distributed random variable with unit mean and variance .thus is replaced by where is a zero - mean , unity - variance gaussian random variable .the result is the jittered - integrate - and - fire ( jif ) family of processes .employing the fractal - rate functions delineated in sec .[ fif ] yields the fractal - gaussian - noise driven jittered - integrate - and - fire process ( fgnjif ) ( sec .[ fgn ] ) , the fractal - lognormal - noise driven jittered - integrate - and - fire process ( flnjif ) ( sec .[ fln ] ) , the fractal - exponential - noise driven jittered - integrate - and - fire process ( fenjif ) ( sec . [ fen ] ) , the fractal - binomial - noise driven jittered - integrate - and - fire process ( fbnjif ) ( sec .[ fbn ] ) , and the fractal - shot - noise driven jittered - integrate - and - fire process ( fsnjif ) ( sec . [ fsn ] ) .in these point processes , the standard deviation of the gaussian jitter is a free parameter that controls the strength of the second source of randomness .two limiting situations exist . in the limit the second source of randomness is absent and the jif processes reduce to the simple if process considered in sec .[ fif ] . 
the opposite limit , leads to a homogeneous poisson process ; none of the fractal behavior in the rate process appears in the point process , as if the rate were constant . between these two limits , as increases from zero , the fractal onset times of the resulting point processesincrease , and the fractal characteristics of the point processes are progressively lost , first at higher frequencies ( shorter times ) and subsequently at lower frequencies ( longer times ) . finally , we note that another generalization of the integrate - and - fire formalism is possible . the threshold for firing neednot remain constant , but can itself be a random variable or stochastic process .randomness imparted to the threshold can then be used to represent variability in other elements of a system , such as the amount of neurotransmitter released per exocytic event or the duration of ion channel openings .the homogeneous poisson process ( hpp ) with rate would then form a special case within this construct , generated when the rate is fixed and constant at , and is an independent , exponentially distributed unit mean random variable associated with each interevent interval . if the rate process is also random , a doubly stochastic poisson point process results instead of the hpp .we have seen that an integrate - and - fire process ( with or without jitter ) converts a fractal rate process into an frspp .a poisson process substrate may be employed instead of the if substrate , in which case the outcome is the family of doubly stochastic poisson point processes ( dspps ) .dspps display two forms of randomness : one associated with the stochastically varying rate , and the other associated with the underlying poisson nature of the process even if the rate were fixed and constant . with fractal rate processes, a fractal - rate dspp ( frdspp ) results .again , simple relations exist between measures of the rate and the point process .in particular , for the theoretical psds we have + s_{\lambda}(\omega),\ ] ] where is the psd of the point process and is the psd of the rate . as with the fractal if processes considered above , the amount of randomness introduced by the second source in fractal dspps ( the poisson transform ) may also be adjusted .however , in this case the amount introduced depends explicitly on the rate , rather than being independent of it as with the jif family of frspps .for example , for a particular rate function suppose that its integral between zero and assumes the value : then given , the number of events generated in the interval ] must assume a constant value independent of ; thus the af does not exhibit bias caused by finite data duration .finally , since the expected value of the af estimate is unbiased , the expected value of the estimate of itself is expected to have negligible bias .this is a simple and important result .estimates of the ordinary variance and fano factor , in contrast , do depend on the duration . 
in this casewe have & = & ( n-1)^{-1}\sum_{k=0}^{n-1}(z_{k}-\widehat{\rm e}[z])^2 \nonumber\\ & = & ( n-1)^{-1}\sum_{k=0}^{n-1 } \left(z_{k}-n^{-1}\sum_{l=0}^{n-1}z_{l}\right)^2 \nonumber\\ & = & ( n-1)^{-1}\left[\sum_{k=0}^{n-1}z_k^2 - 2n^{-1}\sum_{k=0}^{n-1}\sum_{l=0}^{n-1 } z_k z_l + n^{-2}\sum_{k=0}^{n-1}\sum_{l=0}^{n-1}\sum_{m=0}^{n-1 } z_l z_m\right ] \nonumber\\ & = & ( n-1)^{-1}\sum_{k=0}^{n-1}z_k^2 - n^{-1}(n-1)^{-1}\sum_{k=0}^{n-1}\sum_{l=0}^{n-1 } z_k z_l \nonumber\\ & = & n^{-1}\sum_{k=0}^{n-1}z_k^2 - n^{-1}(n-1)^{-1}\sum_k^{n-1}\sum_{l\neq k } z_k z_l.\end{aligned}\ ] ] these cross terms , which do depend on the number of samples , lead to an estimated fano factor with a confounding linear term \approx 1 + ( t / t_0)^\alpha - t\left/\left(t_0^{\alpha } l^{1-\alpha}\right)\right . \label{ffbias}\ ] ] for . the last term on the right - hand - side of eq .( [ ffbias ] ) leads to bias in estimating the fractal exponent ; for this and other reasons , we do not employ the ff in fractal - exponent estimation .computation of the periodogram ( estimated psd ) proves tractable only when obtained from the series of counts , rather than from the entire point process .this , and more importantly the finite length of the data set , introduce a bias in the estimated fractal exponent as shown below .we begin by obtaining the discrete - time fourier transform of the series of counts , where : the periodogram then becomes with an expected value & = & m^{-1 } \sum_{k=0}^{m-1 } \sum_{m=0}^{m-1 } e^{j2\pi ( k - m)n / m } { \mbox{\rm e}}\left[z_k z_m\right ] \nonumber\\ & = & m^{-1 } \sum_{k=0}^{m-1 } \sum_{m=0}^{m-1 } e^{j2\pi ( k - m)n / m } { \mbox{\rm e}}\left[\int_{s=0}^t\int_{t=0}^t\ , dn(s+kt)dn(t+mt)\right ] \nonumber\\ & = & m^{-1 } \sum_{k=0}^{m-1 } \sum_{m=0}^{m-1 } e^{j2\pi ( k - m)n / m } \int_{s=0}^t\int_{t=0}^t g_n[s - t+(k - m)t]\ , ds\ , dt \nonumber\\ & = & m^{-1 } \sum_{k=0}^{m-1 } \sum_{m=0}^{m-1 } e^{j2\pi ( k - m)n / m } \int_{u =- t}^t\int_{v=|u|}^{2t-|u| } g_n[u+(k - m)t]\ , { \frac{\displaystyle { du\ , dv}}{\displaystyle { 2 } } } \nonumber\\ & = & m^{-1 } \sum_{k=0}^{m-1 } \sum_{m=0}^{m-1 } e^{j2\pi ( k - m)n / m } \int_{u =- t}^t(t-|u|)\ , g_n[u+(k - m)t]\ , du \nonumber\\ & = & m^{-1 } \sum_{k=0}^{m-1 } \sum_{m=0}^{m-1 } e^{j2\pi ( k - m)n / m } \int_{u =- t}^t(t-|u| ) \nonumber\\ & & \qquad \times \int_{\omega=-\infty}^\infty s_n(\omega)\ , e^{j\omega [ u + ( k - m)t]}\ , { \frac{\displaystyle { d\omega}}{\displaystyle { 2 \pi}}}\ , du \nonumber\\ & = & ( 2\pi m)^{-1 } \int_{\omega=-\infty}^\infty s_n(\omega ) \left|\sum_{k=0}^{m-1 } e^{jk(2\pi n / m + \omega t)}\right|^2 \int_{u =- t}^t(t-|u|)\ , e^{j\omega u } \ , du\ , d\omega \nonumber\\ & = & ( 2\pi m)^{-1 } \int_{\omega=-\infty}^\infty s_n(\omega ) { \frac{\displaystyle { \sin^2(\pi n + m\omega t/2)}}{\displaystyle { \sin^2(\pi n / m + \omega t/2 ) } } } { \frac{\displaystyle { 4\sin^2(\omega t/2)}}{\displaystyle { \omega^2}}}\ , d\omega \nonumber\\ & = & { \frac{\displaystyle { t}}{\displaystyle { \pi m } } } \int_{-\infty}^\infty s_n(2x / t ) { \frac{\displaystyle { \sin^2(mx)\sin^2(x)}}{\displaystyle { x^2\sin^2(x + \pi n / m)}}}\ , dx.\end{aligned}\ ] ] for a general fractal stochastic point process , where the psd follows the form of eq .( [ fractalpsd ] ) , we therefore have & = & { \frac{\displaystyle { \lambda t}}{\displaystyle { \pi m } } } \int_{-\infty}^\infty \left[1 + ( \omega_0 t/2)^\alpha |x|^{-\alpha}\right ] { \frac{\displaystyle { \sin^2(mx)\sin^2(x)}}{\displaystyle { x^2\sin^2(x + \pi n / 
m)}}}\ , dx . \label{fractalpgex}\end{aligned}\ ] ] focusing on the smaller values of useful in estimating fractal exponents permits the use of two approximations in eq .( [ fractalpgex ] ) .for the values of used in this paper ( ) , the integrand of eq .( [ fractalpgex ] ) will only be significant near , yielding & \approx & { \frac{\displaystyle { \lambda t}}{\displaystyle { \pi m } } } \int_{-\infty}^\infty \left[1 + ( \omega_0 t/2)^\alpha |x|^{-\alpha}\right ] { \frac{\displaystyle { m\pi\delta(x + \pi n / m)\sin^2(x)}}{\displaystyle { x^2}}}\ , dx \nonumber\\ & = & \lambda t \left[1 + ( 2\pi n / \omega_0 m t)^{-\alpha}\right ]{ \frac{\displaystyle { \sin^2(\pi n / m)}}{\displaystyle { ( \pi n / m)^2 } } } \nonumber\\ & \approx & \lambda t \left[1 + ( 2\pi n / \omega_0 m t)^{-\alpha}\right],\end{aligned}\ ] ] which is of the same form as eq . ( [ fractalpsd ] ) . improvement of this estimation procedure appears to require numerical integration of eq .( [ fractalpgex ] ) , which proves nontrivial since the integrand exhibits oscillations with a small period .fortuitously , for the parameter values employed in this paper the integrand appears peaked near , so that not too many ( ) oscillations need be included in the calculations . indeed , numerical results employing this method , and with for which eq .( [ fractalpgex ] ) is known to assume the simple value , agree within .these results form an extension of earlier approaches which ignored the effects of imposing periodic boundary conditions on the fourier transform , and of binning the events .numerical integration of eq .( [ fractalpgex ] ) followed by a least - squares fit on a doubly logarithmic plot leads to results for the expected bias of the psd - based estimate of the fractal exponent , shown in table [ tabgau02 ] .other methods exist for compensating for finite data length in power spectral estimation , such as maximizing the entropy subject to the correlation function over the available range of times ( see , e.g. , ) .having considered various possible theoretical sources of error in simulating frspps with a desired fractal exponent , we now proceed to investigate their effects upon fractal exponent estimates based on the power spectral density and allan factor measures . to this endwe employ simulations of three of the frspps outlined above : fgn driving a deterministic integrate - and - fire process ( fgnif ) , fgn driving an if process followed by gaussian jitter of the interevent intervals ( fgnjif ) , and fgn driving a poisson process ( fgndp ) .we choose these three processes from the collection presented in sec .[ mathform ] for a number of reasons .first , the fgnif , fgnjif , and fgndp derive from a common continuous - time rate process , fractal gaussian noise ( fgn ) , and thus differences among these point processes must lie in the point - process generation mechanism rather than in the fractal rate processes .second , fgn admits fractal exponent of any value , while the fractal exponents can not exceed two for fractal binomial noise and fractal shot noise .third , of all the frspps examined in sec .[ mathform ] , those based on the fgn appear to suffer the least from the effects of cutoffs , so that expected values of the pg and the af most closely follow the pure power - law forms of eqs .( [ fractalpsd ] ) and ( [ fractalaf ] ) respectively . 
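before turning to the details of the rate-process synthesis, we sketch the af-based estimation step itself, using the standard count-based definition of the allan factor, a(T) = E[(Z_{k+1} - Z_k)^2] / (2 E[Z_k]), together with a straight-line fit on doubly logarithmic axes; the surrogate homogeneous-poisson event train, the counting times and the fitting range shown here are illustrative stand-ins for the simulated frspps analyzed in tables [tabgau01] and [tabgau02].

```python
# minimal sketch of allan-factor-based fractal-exponent estimation from a
# list of event times; the surrogate event train and the counting times are
# illustrative placeholders for fgnif / fgnjif / fgndp realizations.
import math
import random

def counts(events, T, duration):
    n = int(duration // T)
    z = [0] * n
    for ev in events:
        k = int(ev // T)
        if k < n:
            z[k] += 1
    return z

def allan_factor(events, T, duration):
    z = counts(events, T, duration)
    mean_z = sum(z) / len(z)
    diff_sq = [(z[k + 1] - z[k]) ** 2 for k in range(len(z) - 1)]
    return sum(diff_sq) / (2.0 * len(diff_sq) * mean_z)   # a(T) = <dz^2>/(2<z>)

def estimate_alpha(events, duration, counting_times):
    # least-squares slope of log a(T) versus log T
    x = [math.log(T) for T in counting_times]
    y = [math.log(allan_factor(events, T, duration)) for T in counting_times]
    n, sx, sy = len(x), sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# surrogate data: replace with event times from an fgnif / fgnjif / fgndp run
random.seed(2)
duration, t, events = 50000.0, 0.0, []
while t < duration:
    t += random.expovariate(5.0)
    events.append(t)

counting_times = [10.0 * 2 ** j for j in range(7)]        # 10 s ... 640 s
print("estimated fractal exponent:",
      round(estimate_alpha(events, duration, counting_times), 3))
```

for the homogeneous-poisson surrogate the fitted slope is close to zero, as it should be; substituting the event list of an fgnif, fgnjif or fgndp realization recovers estimates of the design exponent, subject to the bias and variance effects discussed above.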
to generate the rate function with spectral properties , we computed discrete - time approximations to fractal gaussian noise ( fgn ) by forming a conjugate - symmetric sequence k=0 1\leq k \leq m/2 ] is the desired average value of the fgn process , and is a constant determining the relative strength of the fgn .the phases of ] and ] was obtained by taking the inverse discrete fourier transform of ] of length points and kept the first samples as ] remained positive for all different values of ] was taken to represent equally spaced samples of a continuous - time process ; to simplify subsequent analysis , we chose the duration of each sample to be one sec , without loss of generality .varying the mean event production rate while keeping all other parameters fixed ( or in fixed relation to , as with the ff intercept time and psd corner frequency ) resulted in point processes which contained varying expected numbers of events .in particular , we employed values of and events per sample of ] as samples of a rate function , taken to be constant over each sample of $ ] ( 1 sec ) .this rate function is integrated until the result attains a value of unity , whereupon an output point is generated and the integrator is reset to zero . the generation of the continuous rate function from the piecewise - constant fgn does not significantly change the observed fractal structure of the resulting point process . for frequencies less than the inverse of the sampling time ( 1 hz in this example ) , the psds of the piecewise - constant and exact versions of fgn differ by a factor of ; fractal behavior depends on low frequencies ( ) where the above factor is essentially unity . since the fgnif process has just one source of randomness , namely the underlying fgn , the estimators for the fractal exponent should therefore display only the ( fractal ) behavior of the fgn itself , along with finite - size effects .bias or variance due to a second source of randomness , such as the poisson transform in the fgndp , will not be present .thus for our purposes the fgnif process serves as a benchmark for the accuracy of fractal exponent estimators , with a bias and variance depending only on finite data size effects and not on the rate .the results for the fgnif process with a rate and three different fractal exponents ( =0.2 , 0.8 and 1.5 ) are summarized for the af in table [ tabgau01 ] and for the pg in table [ tabgau02 ] . .af - based fractal exponent estimates for simulated fgnif processes with a rate , for different time ranges .the governing rate processes have theoretical fractal exponents of , , and .the estimated fractal exponents were obtained from straight - line fits on doubly logarithmic coordinates to an average curve of 100 independent simulations ( fit of avg . ) and from averages of the slopes of the individual curves ( avg . of fits ) .the deviations are the standard deviations obtained from the second averaging procedure . [cols="^,^,<,<,<",options="header " , ] for both the fgnjif and the fgndp processes , estimating fractal exponents over larger ranges of time or frequency leads to increased bias , particularly for large values of and small values of . since the results depart from those obtained with the fgnif , the additional randomness as decreases , or as increases , leads to a dilution of the power - law behavior of the af and pg , changing its onset to at larger times and smaller frequencies . 
in the limits and , the result is essentially white noise and the estimated fractal exponent will be zero .best results for the fgnif were obtained with the af - based fractal - exponent estimator using counting times in the range ; this yielded root - mean - square error results which were accurate to within for all design values of ( tables [ tabgau01 ] and [ rmse ] ) .for both the af- and pg - based estimators , performance degraded at longer times and smaller frequencies ( tables [ tabgau01 ] , [ tabgau02 ] , and [ rmse ] ) .finite - duration effects in the pg and relatively few counting windows in the af produce significant error over these scales .the rms error remained near for both estimators , over all design values of the fractal exponent , and over all time and frequency ranges examined .the fgnjif and fgndp processes impose a second source of randomness on the fractal behavior of the fgn kernel .this additional randomness is essentially uncorrelated , and thus may be modeled as the linear superposition of white noise on the original fractal signal ; this affects the shapes of both the af and the pg . as this white - noise level increases, it dilutes the fractal structure present in the fgn process , beginning at the shortest time scales and moving progressively towards longer ones .this effect reduces the range of times and frequencies over which fractal behavior appears , resulting in a decreased estimated fractal exponent .this proved most significant for larger design values of , and for the estimator based on the pg .indeed , magnitudes of the bias exceeded in one case .the af - based estimator fared significantly better , although in this case it employed counting times which were five times the inverse of the frequencies used for the pg .these results compare favorably to earlier studies using the same number of events , for which inaccuracies of or larger were obtained for a variety of fractal stochastic point processes , all of which included a second source of randomness .this study considered data sets with an average of points ; experimental data may contain far fewer points , and thus yield poorer fractal - exponent estimation statistics .in particular , finite - size effects , which lead to increased bias for the pg - based estimator and increased variance for the af - based estimator will reduce the ranges of times and frequencies over which reliable estimation can be achieved .similarly , data sets with larger numbers of points ( computer network traffic , for example ) would yield superior performance .in addition , if several independent data sets are available , known to be from identical experiments , then the af plots or the resulting estimates could be averaged , reducing the effective variance . 
in this case , the reduced bias in the af - based estimator would render it far superior to that based on the pg .finally we mention that there exist several methods for reducing the bias of the fractal exponent estimates if one has some _ a priori _ knowledge of the process of interest .such an approach however does not follow the philosophy of estimating a completely unknown signal and we therefore did not attempt to compensate for the bias by the use of such methods .we have investigated the properties of fractal and fractal - rate stochastic point processes ( fspps and frspps ) , focusing on the estimation of measures that reveal their fractal exponents .the fractal - gaussian - noise driven integrate - and - fire process ( fgnif ) is unique as a point process in that it exhibits no short - term randomness ; we have developed a generalization which includes jitter , the fgnjif , for which the short - term randomness is fully adjustable .in addition to the randomness contributed by the fractal nature of the rate , all other frspps appear to exhibit additional randomness associated with point generation , which confounds the accuracy of fractal - exponent estimation .the fgnjif proved crucial in elucidating the role that such randomness plays in the estimation of fractal exponents in other frspps .we presented analytical results for the expected biases of the pg- and af - based fractional exponent estimators due to finite data length .we followed this theoretical work with a series of simulations of frspps for three representative fractal exponents : , , and . using these simulations together with the analytical predictions , we delineated the sources of error involved in estimating fractal exponents . in particular, using the fgnjif we were able to separate those factors intrinsic to estimation over finite frspp data sets from those due to the particular form of frspp involved .we conclude that the af - based estimate proves more reliable in estimating the fractal exponent than the pg - based method , yielding rms errors of for data segments of frspps with points over all values of examined .finally , we note that wavelet generalizations of the af appear to yield comparable results , suggesting that the af estimate may be optimal in some applications .we are grateful to m. shlesinger for valuable suggestions .this work was supported by the austrian science foundation under grant no .s-70001-mat ( st , mf , hgf ) , by the office of naval research under grant no .n00014 - 92-j-1251 ( sbl , mct ) , by the whitaker foundation ( sbl ) , and by the joint services electronics program through the columbia radiation laboratory ( mct ) .equation ( [ fractalpgex ] ) yields predictions for the pg - based fractal exponent estimation bias for a general fractal - rate stochastic point process ( frspp ) .more accurate results may be obtained for particular frspps , taking into account the structure of these individual processes .we consider the case of the fgnif process , with the further specialization that the original fgn array is obtained by direct fourier synthesis with half of the array discarded , and that the analyzing periodogram subsequently doubles the number of points , yielding the same number as in the original array .we begin with a discrete - time power spectral density which is periodic with period , and with . 
for simulation purposeswe define a conjugate symmetric sequence based on the square root of : where are independent and uniformly distributed in , represents the signum function , and denotes complex conjugation .the corresponding time - domain signal may be obtained by inverse fourier transformation : we next define a subsampled sequence , also of length , by for which the corresponding sequence in the frequency domain becomes \nonumber\\ & = & 2 e^{-j\pin / m } \cos(\pi n / m ) \sum_{l=0}^{m/2 - 1 } x_l e^{-j4\pi ln / m } \nonumber\\ & = & 2 m^{-1 } e^{-j\pi n / m } \cos(\pi n / m ) \sum_{l=0}^{m/2 - 1 } e^{-j4\pi ln / m } \sum_{q=1}^{m-1 } e^{j2\pi ql / m } \tilde x_q.\end{aligned}\ ] ] finally , we compute the expected value of the periodogram of & \equiv & m^{-1 } { \mbox{\rm e}}\left[\tilde z_n \right]^2 \nonumber\\ & = & 4 m^{-3 } \cos^2(\pi n / m ) \sum_{l=0}^{m/2 - 1 } e^{-j4\pi ln / m } \sum_{k=0}^{m/2 - 1 } e^{j4\pi kn / m } \nonumber\\ & & \qquad \times \sum_{q=1}^{m-1 } e^{j2\pi ql / m }\sum_{r=1}^{m-1 } e^{-j2\pi rk / m } { \mbox{\rm e}}\left[\tilde x_q \tilde x^*_r \right ] \nonumber\\ & = & 4 m^{-2 } \cos^2(\pi n / m ) \sum_{l=0}^{m/2 - 1 } e^{-j4\pi ln / m } \sum_{k=0}^{m/2 - 1 } e^{j4\pi kn / m } \sum_{q=1}^{m-1 } e^{j2\pi ql / m } e^{-j2\pi qk / m } s_x(q ) \nonumber\\ & = & 4 m^{-2 } \cos^2(\pi n / m ) \sum_{q=1}^{m-1 } s_x(q ) \left| \sum_{l=0}^{m/2 - 1 } e^{-j2\pi ( 2n - q)l / m}\right|^2 \nonumber\\ & = & \cos^2(\pi n / m ) \left\{s_x(2n ) + 4 m^{-2 } \sum_{\stackrel{\scriptstyle q=1 } { q \mbox{\scriptsize \hspace{2pt } odd}}}^{m-1 } \csc^2\left[\pi(2n - q)/m\right ] s_x(q ) \right\ } .\label{app_psd_sum}\end{aligned}\ ] ] numerical computation based on eq .( [ app_psd_sum ] ) yields results substantially in agreement with those of eq .( [ fractalpgex ] ) .99 d. r. cox and v. isham , _ point processes _ ( chapman and hall , london , 1980 ) . s. b. lowen and m. c. teich , `` estimation and simulation of fractal stochastic point processes , '' _ fractals _ * 3 * , 183 - 210 ( 1995 ) .w. rudin , _ principles of mathematical analysis , _( mcgraw hill , new york , 1976 ) , p. 197 .m. f. shlesinger and b. j. west , `` complex fractal dimension of the bronchial tree , '' _ phys .lett . _ * 67 * , 2106 - 2108 ( 1991 ) .b. b. mandelbrot , _ the fractal geometry of nature _ ( w. h. freeman , new york , 1983 ). h. e. hurst , `` long term storage capacity of reservoirs , '' _ trans .. civil eng . _ * 116 * , 770808 ( 1951 ) .p. flandrin , `` on the spectrum of fractional brownian motions , '' _ ieee trans .theory _ * 35 * , 197199 ( 1989 ) .p. flandrin , `` wavelet analysis and synthesis of fractional brownian motion , '' _ ieee trans .theory _ * 38 * , 910917 ( 1992 ) .p. flandrin , `` time scale analyses and self - similar stochastic processes , '' _ proc .nato advanced study institute on wavelets and their applications _( il ciocco , italy , 1992 ) . g. w. wornell and a. v. oppenheim , `` estimation of fractal signals from noisy measurements using wavelets , '' _ ieee trans . sig ._ * 40 * , 611623 ( 1992 ). r. j. barton and h. v. poor , `` signal detection in fractional gaussian noise , '' _ ieee trans . inform .theory _ * 34 * , 943959 ( 1988 ) .w. h. press , `` flicker noises in astronomy and elsewhere , '' _ comm .astrophys . _ * 7 * ( no . 4 ) , 103119 ( 1978 ) .m. j. buckingham , _ noise in electronic devices and systems _( wiley - halsted , new york , 1983 ) , ch .m. b. 
weissman , `` noise and other slow , nonexponential kinetics in condensed matter , '' _ rev ._ * 60 * , 537571 ( 1988 ). j. b. bassingthwaighte , l. s. liebovitch , and b. j. west , _ fractal physiology _( american physioloical society , new york , 1994 ) . b. j. west and w. deering , `` fractal physiology for physicists : lvy statistics , '' _ phys . rep ._ * 246 * , 1100 ( 1994 ) . c. m. anderson and a. j. mandell , `` fractal time and the foundations of consciousness : vertical convergence of phenomena from ion channels to behavioral states , '' in _ fractals of brain , fractals of mind _ ( advances in consciousness research * 7 * ) , eds .e. maccormac and m. stamenov ( john benjamin , amsterdam , 1996 ) , pp .. g. pfister and h. scher , `` dispersive ( non - gaussian ) transient transport in disordered solids , '' _ adv ._ * 27 * , 747798 ( 1978 ) . j. orenstein , m. a. kastner , and v. vaninov , `` transient photoconductivity and photo - induced optical absorption in amorphous semiconductors , '' _ phil . mag .b _ * 46 * , 2362 ( 1982 ) .m. a. kastner , `` the peculiar motion of electrons in amorphous semiconductors , '' in _ physical properties of amorphous materials , _ eds .d. alser , b. b. schwartz , and m. c. steele , ( plenum , new york , 1985 ) , pp .. t. tiedje and a. rose , `` a physical interpretation of dispersive transport in disordered semiconductors , '' _ solid state commun . _, 4952 ( 1980 ) . m. f. shlesinger , `` fractal time and noise in complex systems , '' _ ann . new york acad .* 504 * , 214228 ( 1987 ) .s. b. lowen and m. c. teich , `` fractal renewal processes generate noise , '' _ phys .e _ * 47 * , 9921001 ( 1993 ) .s. b. lowen and m. c. teich , `` fractal renewal processes as a model of charge transport in amorphous semiconductors , '' _ phys .b _ * 46 * , 18161819 ( 1992 ) . v. k. bhatnagar and k. l. bhatia , `` frequency dependent electrical transport in bismuth - modified amorphous germanium sulfide semiconductors , ''_ j. non - cryst .sol . _ * 119 * , 214231 ( 1990 ) .w. tomaszewicz , `` multiple - trapping carrier transport due to modulated photogeneration , '' _ phil . mag .* 61 * , 237243 ( 1990 ) .j. m. berger and b. b. mandelbrot , `` a new model for the clustering of errors on telephone circuits , '' _ibm j. res .dev . _ * 7 * , 224236 ( 1963 ) . b. b. mandelbrot , `` self - similar error clusters in communication systems and the concept of conditional stationarity , '' _ ieee trans .comm . tech .* 13 * , 7190 ( 1965 ) .w. e. leland , m. s. taqqu , w. willinger , and d. v. wilson , `` on the self - similar nature of ethernet traffic , '' in _ proc .acm sigcomm 1993 _ , ( 1993 ) , pp. 183193 .a. erramilli and w. willinger , `` fractal properties in packet traffic measurements , '' in _ proc .petersburg reg ._ , ( st . petersburg , russia , 1993 ) , pp . 144 - 158 . b. k. ryu and h. e. meadows , `` performance analysis and traffic behavior of xphone videoconferencing application on an ethernet , '' in _ proc .3rd int .conf . comp .w. liu , ( 1994 ) , pp .w. e. leland , m. s. taqqu , w. willinger , and d. v. wilson , `` on the self - similar nature of ethernet traffic ( extended version ) , '' _ ieee / acm trans .* 2 * , 115 ( 1994 ) .j. beran , r. sherman , m. s. taqqu , and w. willinger , `` long - range dependence in variable - bit - rate video traffic , '' _ ieee trans .comm . _ * 43 * , 15661579 ( 1995 ) .b. k. ryu and s. b. lowen , `` modeling self - similar traffic with the fractal - shot - noise - driven poisson process , '' cent . 
for telecomm .res . , tech . rep.392 - 94 - 39 , ( columbia university , new york , 1994 ) .b. sakmann and e. neher , _ single - channel recording _( plenum , new york , 1983 ) .l. j. defelice and a. isaac , `` chaotic states in a random world : relationship between the nonlinear differential equations of excitability and the stochastic properties of ion channels , '' _ j. stat .phys . _ * 70 * , 339354 ( 1993 ) .p. luger , `` internal motions in proteins and gating kinetics of ionic channels , '' _ biophys .j. _ * 53 * , 877884 ( 1988 ) .g. l. millhauser , e. e. salpeter , and r. e. oswald , `` diffusion models of ion - channel gating and the origin of power - law distributions from single - channel recording , '' _ proc .( usa ) * 85 * , 15031507 ( 1988 ) .l. s. liebovitch and t. i. tth , `` using fractals to understand the opening and closing of ion channels , '' _ ann. biomed .eng . _ * 18 * , 177194 ( 1990 ) .m. c. teich , `` fractal character of the auditory neural spike train , '' _ ieee trans. biomed .eng . _ * 36 * , 150160 ( 1989 ) .s. b. lowen and m. c. teich , `` fractal renewal processes , '' _ ieee trans .theory _ * 39 * , 16691671 ( 1993 ) .s. b. lowen and m. c. teich , `` fractal auditory - nerve firing patterns may derive from fractal switching in sensory hair - cell ion channels , '' in _ noise in physical systems and fluctuations _( aip conference proceedings * 285 * ) , eds .h. handel and a. l. chung , ( american institute of physics , new york , 1993 ) , pp .. b. katz , _ nerve , muscle , and synapse _( mcgraw - hill , new york , 1966 ) .p. fatt and b. katz , `` spontaneous subthreshold activity at motor nerve endings , ''_ j. physiol ._ ( london ) * 117 * , 109128 ( 1952 ). j. del castillo and b. katz , `` quantal components of the end - plate potential , '' _ j. physiol . _ ( london ) * 124 * , 560573 ( 1954 ) .s. b. lowen , s. s. cash , m - m .poo , and m. c. teich , `` neuronal exocytosis exhibits fractal behavior , '' in _ computational neuroscience , _ ed .j. m. bower ( plenum , new york , 1997 ) , in press .s. b. lowen , s. s. cash , m - m .poo , and m. c. teich , quantal neurotransmitter secretion rate exhibits fractal behavior , " _ j. neurosci ._ , in press .t. musha , y. kosugi , g. matsumoto , and m. suzuki , `` modulation of the time relation of action potential impulses propagating along an axon , '' _ ieee trans .eng . _ * bme-28 * , 616623 ( 1981 ) .t. musha , h. takeuchi , and t. inoue , `` fluctuations in the spontaneous spike discharge intervals of a giant snail neuron , '' _ ieee trans. biomed .eng . _ * bme-30 * , 194197 ( 1983 ) .s. b. lowen and m. c. teich , `` auditory - nerve action potentials form a non - renewal point process over short as well as long time scales , '' _j. acoust .* 92 * , 803806 ( 1992 ) .m. c. teich , d. h. johnson , a. r. kumar , and r. g. turcott , `` rate fluctuations and fractional power - law noise recorded from cells in the lower auditory pathway of the cat , '' _ hear .res . _ * 46 * , 4152 ( 1990 ) .m. c. teich , r. g. turcott , and s. b. lowen , `` the fractal doubly stochastic poisson point process as a model for the cochlear neural spike train , '' in _ the mechanics and biophysics of hearing ( lecture notes in biomathematics , vol .87 ) , _ eds .p. dallos , c. d. geisler , j. w. matthews , m. a. ruggero , and c. r. steele ( springer - verlag , new york , 1990 ) , pp .354361 . n. l. powers , r. j. salvi , and s. s. 
saunders , `` discharge rate fluctuations in the auditory nerve of the chinchilla , '' in _ abstracts of the fourteenth midwinter research meeting of the association for research in otolaryngology , _ ed . d. j. lim ( association for research in otolaryngology , des moines , ia ) , abstract 411 , p. 129 .n. l. powers and r. j. salvi , `` comparison of discharge rate fluctuations in the auditory nerve of chickens and chinchillas , '' in _ abstracts of the fifteenth midwinter research meeting of the association for research in otolaryngology , _ ed .d. j. lim ( association for research in otolaryngology , des moines , ia , 1992 ) , abstract 292 , p. 101 .m. c. teich , `` fractal neuronal firing patterns , '' in _ single neuron computation , _ eds .t. mckenna , j. davis , and s. zornetzer ( academic , boston , 1992 ) , pp .a. h. kumar and d. h. johnson , `` analyzing and modeling fractal intensity point processes , '' _ j. acoust .* 93 * , 33653373 ( 1993 ) .m. c. teich and s. b. lowen , `` fractal patterns in auditory nerve - spike trains , '' _ ieee eng . med .biol . mag ._ * 13 * ( no . 2 ) , 197202 ( 1994 ) . o. e. kelly , `` analysis of long - range dependence in auditory - nerve fiber recordings , '' master s thesis , rice university ( 1994 ) . o. e. kelly , d. h. johnson , b. delgutte , and p. cariani , `` fractal noise strength in auditory - nerve fiber recordings , '' _ j. acoust. soc ._ * 99 * , 22102220 ( 1996 ). s. b. lowen and m. c. teich , `` the periodogram and allan variance reveal fractal exponents greater than unity in auditory - nerve spike trains , '' _ j. acoust .soc . am _ * 99 * , 35853591 ( 1996 ) . s. b. lowen and m. c. teich , `` refractoriness - modified fractal stochastic point processes for modeling sensory - system spike trains , '' in _ computational neuroscience , _ ed .j. m. bower ( academic , new york , 1996 ) , pp. 447 - 452 .s. b. lowen and m. c. teich , `` estimating scaling exponents in auditory - nerve spike trains using fractal models incorporating refractoriness , '' in _ diversity in auditory mechanics , _ eds .e. r lewis , g. r. long , r. f. lyon , p. m. narins , c. r. steele , and e. hecht - pointar ( world scientific , singapore , 1997 ) , pp .197 - 204 .m. c. teich , c. heneghan , s. b. lowen , t. ozaki , and e. kaplan , `` fractal character of the neural spike train in the visual system of the cat , '' _ j. opt. soc .a _ * 14 * , 529546 ( 1997 ) . j. b. troy and j. g. robson , `` steady discharges of x and y retinal ganglion cells of cat under photopic illuminance , '' _ visual neuroscience _ * 9 * , 535553 ( 1992 ) . m. c. teich , r. g. turcott , and r. m. siegel , `` temporal correlation in cat striate - cortex neural spike trains , '' _ ieee eng .biol . mag ._ * 15 * ( no . 5 ) , 7987 ( 1996 ) .r. g. turcott , p. d. r. barker , and m. c. teich , `` long - duration correlation in the sequence of action potentials in an insect visual interneuron , '' _j. statist .simul . _ * 52 * , 253271 ( 1995 ) .m. e. wise , `` spike interval distributions for neurons and random walks with drift to a fluctuating threshold , '' in _ statistical distributions in scientific work , vol .6 _ , eds . c. taillie , g. p. patil , and b. a. baldessari ( d. reidel , hingham , ma , 1981 ) , pp. 211231. m. yamamoto , h. nakahama , k. shima , t. kodama , and h. mushiake , `` markov - dependency and spectral analyses on spike - counts in mesencephalic reticular formation during sleep and attentive states , '' _ brain research _ * 366 * , 279289 ( 1986 ) .f. grneis , m. 
nakao , m. yamamoto , t. musha , and h. nakahama , `` an interpretation of fluctuations in neuronal spike trains during dream sleep , '' _ biol .cybern _ * 60 * , 161169 ( 1989 ) f. grneis , m. nakao , y. mizutani , m. yamamoto , m. meesman , and t. musha , `` further study on fluctuations observed in central single neurons during rem sleep , '' _ biol .cybern . _ * 68 * 193198 ( 1993 ) .t. kodama , h. mushiake , k. shima , h. nakahama , and m. yamamoto , `` slow fluctuations of single unit activities of hippocampal and thalamic neurons in cats . i. relation to natural sleep and alert states , '' _ brain research _ * 487 * , 2634 ( 1989 ) . m. yamamoto and h. nakahama , `` stochastic properties of spontaneous unit discharges in somatosensory cortex and mesencephalic reticular formation during sleep waking states , '' _ j. neurophysiology _ * 49 * , 11821198 ( 1983 ) .m. kobayashi and t. musha , `` fluctuations of heartbeat period , '' _ ieee trans . biomed .eng . _ * bme-29 * , 456457 ( 1982 ) .r. d. berger , s. akselrod , d. gordon , and r. j. cohen , `` an efficient algorithm for spectral analysis of heart rate variability , '' _ ieee trans .* bme-33 * , 900904 ( 1986 ) .r. g. turcott and m. c. teich , `` long - duration correlation and attractor topology of the heartbeat rate differ for healthy patients and those with heart failure , '' _ proc .spie _ * 2036 * ( chaos in biology and medicine ) , 2239 ( 1993 ) . r. g. turcott and m. c. teich , `` interevent - interval and counting statistics of the human heartbeat recorded from normal subjects and patients with heart failure , '' _ ann. biomed .* 24 * , 269293 ( 1996 ) .h. e. schepers , j. h. g. m. van beek , and j. b. bassingthwaighte , `` four methods to estimate the fractal dimension from self - affine signals , '' _ ieee eng . med .biol . mag ._ * 11 * , 5764 ( 1992 ). j. beran , `` statistical methods for data with long - range dependence , '' _ statistical science _ * 7 * , 404427 ( 1992 ) .h. g. e. hentschel and i. procaccia , `` the infinite number of generalized dimensions of fractals and strange attractors , '' _ physica d _ * 8 * , 435444 ( 1983 ) .p. grassberger , `` generalized dimensions of strange attractors , '' _ phys .lett . a _ * 97 * , 227 - 230 ( 1983 ) . j. theiler , `` estimating fractal dimension , '' _ j. opt. soc . am . a _ * 7 * , 10551073 ( 1990 ) . k. matsuo , b. e. a. saleh , and m. c. teich , `` cascaded poisson processes , '' _ j. math .* 23 * , 23532364 ( 1982 ) .s. b. lowen and m. c. teich , `` estimating the dimension of a fractal point processes , '' _ proc .spie _ * 2036 * ( chaos in biology and medicine ) , 6476 ( 1993 ) .d. w. allan , `` statistics of atomic frequency standards , '' _ proc .ieee_ * 54 * , 221 - 230 ( 1966 ) .j. a. barnes and d. w. allan , `` a statistical model of flicker noise , '' _ proc .ieee _ * 54 * , 176178 ( 1966 ) .m. c. teich , c. heneghan , s. b. lowen , and r. g. turcott , `` estimating the fractal exponent of point processes in biological systems using wavelet- and fourier - transform methods , '' in _ wavelets in medicine and biology _ , eds .a. aldroubi and m. unser ( crc press , boca raton , fl , 1996 ) , pp .383412 . c. heneghan , s. b. lowen , and m. c. teich , `` wavelet analysis for estimating the fractal properties of neural firing patterns , '' in _ computational neuroscience _j. m. bower , ( academic press , san diego , 1996 ) , pp .. s. b. 
lowen , `` refractoriness - modified doubly stochastic poisson point process , '' _ center for telecommunications research , technical report _ * 449 - 96 - 15 * ( columbia university , new york , 1996 ) .s. b. lowen and m. c. teich , `` doubly stochastic poisson point process driven by fractal shot noise , '' _ phys .a _ * 43 * , 41924215 ( 1991 ) . f. grneis and h .- j .baiter , `` more detailed explication of a number fluctuation model generating pattern , '' _ physica _ * 136a * , 432452 ( 1986 ) .f. a. haight , _ handbook of the poisson distribution _( wiley , new york , 1967 ) .b. b. mandelbrot , `` a fast fractional gaussian noise generator , '' _ water resources res . _* 7 * , 543553 ( 1971 ) .t. lundahl , w. j. ohley , s. m. kay , and r. siffert , `` fractional brownian motion : a maximum likelihood estimator and its application to image texture , '' _ ieee trans . med .imag . _ * 5 * , 152161 ( 1986 ) .m. a. stoksik , r. g. lane , and d. t. nguyen , `` accurate synthesis of fractional brownian motion using wavelets , '' _ electron .lett . _ * 30 * , 383384 ( 1994 ) .b. e. a. saleh , _ photoelectron statistics _ ( springer verlag , berlin , 1978 ) .s. b. lowen and m. c. teich , `` generalised shot noise , '' _ electron .lett . _ * 25 * , 10721074 ( 1989 ) . s. b. lowen and m. c. teich , `` fractal shot noise , '' _ phys .lett . _ * 63 * , 17551759 ( 1989 ) . s. b. lowen and m. c. teich , `` power - law shot noise , '' _ ieee trans .theory _ * 36 * , 13021318 ( 1990 ) .d. r. cox , `` some statistical methods connected with series of events , '' _ j. roy .b _ * 17 * , 129164 ( 1955 ) .f. grneis , `` a number fluctuation model generating pattern , '' _ physica _ * 123a * , 149160 ( 1984 ) . f. grneis and t. musha , `` clustering poisson process and noise , '' _ jpn .j. appl .phys . _ * 25 * , 15041509 ( 1986 ) .j. skilling , ed . , _ maximum entropy and bayesian methods _ ( kluwer , boston , 1988 ) . | fractal and fractal - rate stochastic point processes ( fspps and frspps ) provide useful models for describing a broad range of diverse phenomena , including electron transport in amorphous semiconductors , computer - network traffic , and sequences of neuronal action potentials . a particularly useful statistic of these processes is the fractal exponent , which may be estimated for any fspp or frspp by using a variety of statistical methods . simulated fspps and frspps consistently exhibit bias in this fractal exponent , however , rendering the study and analysis of these processes non - trivial . in this paper , we examine the synthesis and estimation of frspps by carrying out a systematic series of simulations for several different types of frspp over a range of design values for . the discrepancy between the desired and achieved values of is shown to arise from finite data size and from the character of the point - process generation mechanism . in the context of point - process simulation , reduction of this discrepancy requires generating data sets with either a large number of points , or with low jitter in the generation of the points . in the context of fractal data analysis , the results presented here suggest caution when interpreting fractal exponents estimated from experimental data sets |
physical theories are merely approximations to the natural world and the physical constants involved can not be known without some degree of uncertainty .properties of a model that are sensitive to small changes in the model , in particular changes in the values of the parameters , are unlikely to be observed .it can thus be reasoned that one should search for physical theories which do not change in a qualitative matter under a small change of the parameters .such theories are said to be physically _stable_. this concept of the physical stability of a theory can be given a mathematical meaning as follows .a mathematical structure is said to be mathematically stable for a class of deformations if any deformation in this class leads to an isomorphic structure .more precisely , a lie algebra is said to be stable if small perturbations in its structure constants lead to isomorphic lie algebras .the idea of mathematical stability provides insight into the validity of a physical theory or the need for a generalization of the theory .if a theory is not stable , one might choose to deform it until a stable theory is reached .such a stable theory is likely to be a generalization of wider validity compared to the original unstable theory .lie algebraic deformation theory has been historically successful .snyder in 1947 showed that the assumption that spacetime be a continuum is not required for lorentz invariance .framework however leads to a lack of translational invariance , which later in the same year , yang showed can be corrected if one allows for spacetime to be curved .yang , in the same paper also presented the complete lie algebra associated with the suggested corrections .it was mendes who in the last decade concluded that when one considers the poincare and heisenberg algebras together , the resultant poincare - heisenberg algebra is not a stable lie algebra .mendes showed however that the algebra can be stabilized , requiring two additional length scales .the stabilized algebra is the same as the algebra obtained by yang in 1947 .it was faddeev and mendes who noted that , in hindsight , stability considerations could have predicted the relativistic and quantum revolutions of the last century .chryssomalakos and okon showed that by a suitable identification of the generators , triply special relativity proposed by kowalski - glikman and smolin can be brought to a linear form and that the resulting algebra is again the same as yang s algebra .more recently , stability considerations have led to the stabilized poincare - heisenberg algebra ( spha ) as the favorite candidate for the lie algebra describing physics at the interface of gr and qm .chryssomalakos and okon showed uniqueness of the spha .incorporating gravitational effects in quantum measurement of spacetime events renders spacetime non - commutative and leads to modifications in the fundamental commutators . in 2005 , ahluwalia - khalilova showed that the fact that the heisenberg fundamental commutator , =i\hbar ] .the scalar element of the clifford algebra commutes with every element leaving us with 15 generators. 
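the commutator algebra worked out below can also be checked numerically. the sketch that follows is only an illustration, not part of the original derivation: it uses the standard 4x4 dirac matrices as a matrix representation of the sixteen-dimensional clifford algebra, with metric signature (+,-,-,-) chosen for concreteness and numpy as the only dependency; the scale factors $\omega_i$ are set to one, and $e_{\mu\mu}$ is read as zero in the bivector relation (the usual antisymmetry convention).

```python
# numerical check of the clifford commutation relations, using the dirac
# representation of the gamma matrices; purely illustrative.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
blk = lambda a, b, c, d: np.block([[a, b], [c, d]])

# e_mu realised as gamma_mu, with {e_mu, e_nu} = 2 eta_{mu nu}
g = [blk(I2, Z2, Z2, -I2)] + [blk(Z2, s, -s, Z2) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
comm = lambda a, b: a @ b - b @ a

for m in range(4):
    for n in range(4):
        assert np.allclose(g[m] @ g[n] + g[n] @ g[m], 2 * eta[m, n] * np.eye(4))

E = {(m, n): g[m] @ g[n] for m in range(4) for n in range(4) if m != n}  # e_{mu nu}
e = g[0] @ g[1] @ g[2] @ g[3]                                            # e
ee = [e @ g[m] for m in range(4)]                                        # e e_mu

def bivector_rhs(m, n, r, s):
    # 2(eta_{nu rho} e_{mu sigma} + eta_{mu sigma} e_{nu rho}
    #   - eta_{mu rho} e_{nu sigma} - eta_{nu sigma} e_{mu rho}), with e_{aa} = 0
    out = np.zeros((4, 4), dtype=complex)
    for (a, b), coef in (((m, s), eta[n, r]), ((n, r), eta[m, s]),
                         ((n, s), -eta[m, r]), ((m, r), -eta[n, s])):
        if a != b:
            out = out + 2 * coef * E[(a, b)]
    return out

for (m, n) in E:
    for (r, s) in E:
        assert np.allclose(comm(E[(m, n)], E[(r, s)]), bivector_rhs(m, n, r, s))
    assert np.allclose(comm(E[(m, n)], e), np.zeros((4, 4)))  # bivectors commute with e

for m in range(4):
    for n in range(4):
        if m != n:
            assert np.allclose(comm(g[m], g[n]), 2 * E[(m, n)])
        assert np.allclose(comm(ee[m], g[n]), 2 * eta[m, n] * e)
    assert np.allclose(comm(g[m], e), -2 * ee[m])
print("all commutation relations verified")
```

running the script raises no assertion error, which is a quick sanity check that the relations displayed next close on the fifteen generators together with the commuting scalar element.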
we will start with , , and , where , and calculate their commutators .&=&2\omega_{3}^2(\eta_{\nu\rho}e_{\mu\sigma}+\eta_{\mu\sigma}e_{\nu\rho}-\eta_{\mu\rho}e_{\nu\sigma}-\eta_{\nu\sigma}e_{\mu\rho})\\ \notag[\omega_3 e_{\mu\nu},\omega_2 ee_{\rho}]&=&2\omega_2 \omega_3(\eta_{\nu\rho}ee_{\mu}-\eta_{\mu\rho}ee_{\nu})\\ & = & 2\omega_3(\eta_{\nu\rho}(\omega_2 ee_{\mu})-\eta_{\mu\rho}(\omega_2 ee_{\nu}))\\ \notag[\omega_3 e_{\mu\nu},\omega_1 e_{\rho}]&=&2\omega_1 \omega_3(\eta_{\nu\rho}e_{\mu}-\eta_{\mu\rho}e_{\nu})\\ & = & 2\omega_2(\eta_{\nu\rho}(\omega_1 e_{\mu})-\eta_{\mu\rho}(\omega_1 e_{\nu}))\end{aligned}\ ] ] &=&2\omega_{2}^2e_{\mu\nu}\\ & = & 2\frac{\omega_{2}^2}{\omega_{3}}(\omega_3e_{\mu\nu})\\ \notag[\omega_1e_{\mu},\omega_1e_{\nu}]&=&2\omega_{1}^2e_{\mu\nu}\\ & = & 2\frac{\omega_{1}^2}{\omega_{3}}(\omega_3e_{\mu\nu})\\ \notag[\omega_2ee_{\mu } , \omega_1e_{\nu}]&=&2\omega_1\omega_2\eta_{\mu\nu}e\\ & = & 2\eta_{\mu\nu}\frac{\omega_1\omega_2}{\omega_4}(\omega_4e)\\ \notag[\omega_2ee_{\mu},\omega_4e]&=&2\omega_2\omega_4e_{\mu}\\ & = & 2\frac{\omega_2\omega_4}{\omega_1}(\omega_1e_{\mu})\\ \notag[\omega_1e_{\mu } , \omega_4e]&=&-2\omega_1\omega_4ee_{\mu}\\ & = & -2\frac{\omega_1\omega_4}{\omega_2}(\omega_2ee_{\mu})\\ \label{last}[\omega_3e_{\mu\nu},\omega_4e]&=&0\end{aligned}\ ] ] by defining the 15 operators we obtain the stabilized poincare - heisenberg algebra for the special case where is equal to zero .( we discuss this point in the next section . )we call , , and above the clifford generators of the stabilised poincare - heisenberg algebra . as in , we interpret to be the position vectors , to be the momenta and to be the rotations and boosts .in this section we will transform the clifford representation in such a way that the transformed clifford generators will generate the entire stabilized poincare - heisenberg algebra rather than just the special case where is equal to zero .the physical interpretation of this transformation will be discussed in the following section and gives rise to a new concept in physics .we start by defining by redefining as giving and therefore =2(a^2+b^2)\,e_{\mu\nu},\qquad \mu\neq\nu\end{aligned}\ ] ] what we require is ; =i\,q\,\alpha_2\,j_{\mu\nu}\ ] ] to get this we must thus define ; similarly , define as giving and =2(c^2+d^2)\,e_{\mu\nu},\qquad \mu\neq\nu\end{aligned}\ ] ] we want this to be equal to =i\,q\,\alpha_1\,j_{\mu\nu}\ ] ] which requires note that consistency in the definition for implies that the above transformed definitions for and can be written in the form &= & \left[\begin{matrix}a&b\\c&d\end{matrix}\right]\ , \left[\begin{matrix}e_\mu\\ee_\mu\end{matrix}\right]\end{aligned}\ ] ] and therefore &=&\frac{1}{ad - bc } \left[\begin{matrix}d&-b\\-c&a\end{matrix}\right]\ , \left[\begin{matrix}x_\mu\\p_\mu\end{matrix}\right]\end{aligned}\ ] ] or where is the determinant of the matrix . 
working out the commutator ] .we have &=&\left[dee_\mu+ce_\mu,\frac{2(ad - bc)}{q}\,e\right]\\ \notag&=&\frac{2(ad - bc)}{q}\left[dee_\mu+ce_\mu , e\right]\\ \notag&=&\frac{2(ad - bc)}{q}(dee_\mu e+ce_\mu e - de^2e_\mu - cee_\mu)\\ \notag&=&\frac{2(ad - bc)}{q}(de_\mu - ce_\mu e - de_\mu - cee_\mu)\\ \notag&=&\frac{4(ad - bc)}{q}(de_\mu - cee_\mu)\\ \notag&=&\frac{4\delta}{q}\left(\frac{d}{\delta}(dx_\mu - bp_\mu)-\frac{c}{\delta}(-cx_\mu+ap_\mu)\right)\\ \notag&=&\frac{4}{q}(d^2x_\mu - bdp_\mu+c^2x_\mu - acp_\mu)\\ & = & -\frac{4}{q}(ac+bd)p_\mu+\frac{4}{q}(c^2+d^2)x_\mu\end{aligned}\ ] ] for this to be equal to =\alpha_3 p_\mu-\alpha_1 x_\mu\ ] ] we must have and the consistency conditions must now be extended to read we also need to check =\alpha_2p_\mu-\alpha_3x_\mu$ ] .we have &=&\left[ae_\mu+bee_\mu,\frac{2(ad - bc)}{q}\,e\right]\\ \notag&=&\frac{2(ad - bc)}{q}(ae_\mu e+bee_\mu e - aee_\mu - be^2e_\mu)\\ \notag&=&\frac{2(ad - bc)}{q}(-2aee_\mu+2be_\mu)\\ \notag&=&\frac{4\delta}{q}(be_\mu - aee_\mu)\\ \notag&=&\frac{4\delta}{q}\left(\frac{b}{\delta}(dx_\mu - bp_\mu)-\frac{a}{\delta}(-cx_\mu+ap_\mu)\right)\\ \notag&=&\frac{4}{q}(bdx_\mu - b^2p_\mu+acx_\mu - a^2p_\mu)\\ & = & -\frac{4}{q}(a^2+b^2)p_\mu+\frac{4}{q}(ac+bd)x_\mu\end{aligned}\ ] ] so we need which hold by the consistency equations . to satisfy the consistency equation ( [ consistency ] ), we may define and and since also , we obtain comparing these results with those of ahluwalia - khalilova , we find that ) above , there are two more conversions , one between and and one between and .a comparison reveals that consistency between and requires that ; ] this in turn tells us that where . in this sectionwe have transformed the clifford generators of the representation to obtain a representation where is not equal to zero .this is going in the opposite direction to chryssomalakos and okon who begin with a representation where is not necessarily zero and then show that there always exists a representation in the - plane with equal to zero by performing a linear redefinition of the generators .the clifford algebra approach is thus consistent with .chryssomalakos and okon , , comment that physicists in particular may frown upon the idea of working with arbitrary linear combinations of momenta and positions . for this reason it is important that we interpret what the transformation in the previous section might mean physically .the transformation and is equivalent to adding some momentum to the position vector and vice versa .the magnitude of the vector is invariant .the transformation looks like a rotation in the position - momentum plane however are rotated by different angles . what is the physical interpretation of this transformation ? in a newtonian mindset we consider time and space to be disjoint and one can determine the absolute time and position of an event .switching to a relativistic mindset , we know that we can no longer treat space and time separately but that in fact we have to consider them together in what we call spacetime .it is no longer possible to determine the time and position of an event absolutely .similarly , in a newtonian mindset , we can treat position and momentum separately and thus the above transformation may seem unphysical . in a quantum mechanical frame of mind however , we can not measure position and momentum with absolute certainty . 
the more accurately we know one , the less accurately we know the other as is described by the heisenberg uncertainty relationship .this suggests that we can not just think about position or momentum without considering the other .furthermore , making a measurement of the position of some particle will in itself affect the position .the photon used to measure the particle s position will give the particle some momentum . on the quantum scaletherefore a linear combination of position and momentum does make sense and in fact treating position and momentum separately as in newtonian physics may no longer be desirable . at the interface of gr andqm one combines quantum mechanics and relativity and therefore should consider spacetime - momentum instead of spacetime alone .the authors of this paper wish to thank and acknowledge dharamvir ahluwalia - khalilova , benjamin martin and adam gillard for the many discussions . in particular we wish to thank dharamvir ahluwalia - khalilova for suggesting that at the interface of gr and qm one should consider spacetime - momentum instead of spacetime alone . c. n. yang , `` on quantized space - time , '' phys . rev .* 72 * ( 1947 ) 874 .r. vilela mendes , `` deformations , stable theories and fundamental constants , '' j. phys .a * 27 * ( 1994 ) 8091 .l. d. faddeev 1989 , `` mathematician s view on the development of physics , frontiers in physics : high technology and mathematics '' ed h. a. cerdeira and s. lundqvist ( singapore : word scientific , 1989 ) pp .238 - 46 c. chryssomalakos and e. okon , `` generalized quantum relativistic kinematics : a stability point of view , '' int .j. mod .d * 13 * ( 2004 ) 2003 [ arxiv : hep - th/0410212 ] . c. chryssomalakos and e. okon , `` linear form of 3-scale relativity algebra and the relevance of stability , '' int. j. mod .d * 13 * ( 2004 ) 1817 [ arxiv : hep - th/0407080 ] .d. v. ahluwalia - khalilova , `` a freely falling frame at the interface of gravitational and quantum realms , '' class .quant . grav .* 22 * ( 2005 ) 1433 [ arxiv : hep - th/0503141 ] .d. v. ahluwalia - khalilova , `` minimal spatio - temporal extent of events , neutrinos , and the cosmological constant problem , '' int .j. mod .d * 14 * ( 2005 ) 2151 [ arxiv : hep - th/0505124 ] .d. v. ahluwalia , `` quantum measurements , gravitation , and locality , '' phys .b * 339 * ( 1994 ) 301 [ arxiv : gr - qc/9308007 ] .a. nijenhuis and r. w. richardson 1967 `` deformations of lie algebra structures '' j. math . 89 v. v. khruschev and a. n. leznov , `` relativistically invariant lie algebras for kinematic observables in quantum space - time , '' grav .cosmol .* 9 * ( 2003 ) 159 [ arxiv : hep - th/0207082 ] . c. doran and a. lasenby 2003 `` geometric algebra for physicists '' ( cambridge university press ) p. butler and l. mcaven 1998 `` space : the anti - euclidean metric from the structure of rotations '' , in proceedings of the xii international colloquium on group theoretical methods in physics , 494498 ( international press ) g. amelino - camelia , `` proposal of a second generation of quantum - gravity - motivated lorentz - symmetry tests : sensitivity to effects suppressed quadratically by the planck scale , '' int .j. mod .d * 12 * ( 2003 ) 1633 [ arxiv : gr - qc/0305057 ] .j. kowalski - glikman and l. smolin , `` triply special relativity , '' phys .d * 70 * ( 2004 ) 065020 [ arxiv : hep - th/0406276 ] .s. judes and m. visser , `` conservation laws in doubly special relativity , '' phys .d * 68 * ( 2003 ) 045001 d. grumiller , w. 
kummer and d. v. vassilevich , `` a note on the triviality of kappa - deformations of gravity , '' ukr .j. phys .* 48 * ( 2003 ) 329 [ arxiv : hep - th/0301061 ] .r. schutzhold and w. g. unruh , `` problems of doubly special relativity with variable speed of light , '' jetp lett .* 78 * ( 2003 ) 431 [ pisma zh .* 78 * ( 2003 ) 899 ] [ arxiv : gr - qc/0308049 ] .j. christian , `` absolute being vs relative becoming , '' arxiv : gr - qc/0610049 .d. v. ahluwalia - khalilova 2006 _ private communications _ | the stabilized poincare - heisenberg algebra ( spha ) is the lie algebra of quantum relativistic kinematics generated by fifteen generators . it is obtained from imposing stability conditions after attempting to combine the lie algebras of quantum mechanics and relativity which by themselves are stable , however not when combined . in this paper we show how the sixteen dimensional clifford algebra can be used to generate the spha . the clifford algebra path to the spha avoids the traditional stability considerations , relying instead on the fact that is a semi - simple algebra and therefore stable . it is therefore conceptually easier and more straightforward to work with a clifford algebra . the clifford algebra path suggests the next evolutionary step toward a theory of physics at the interface of gr and qm might be to depart from working in space - time and instead to work in space - time - momentum . |
jakob bernoulli's _ars conjectandi_ established the field of probability theory, and founded a long and remarkable mathematical development of deducing patterns to be observed in sequences of random events. the theory of statistical inference works in the opposite direction, attempting to solve the inverse problem of deducing plausible models from a given set of observations. laplace pioneered the study of this inverse problem, and indeed he referred to his method as that of inverse probability. the likelihood function, introduced by fisher, puts this inversion front and centre, by writing the probability model as a function of the unknown parameters in the model. this simple, almost trivial, change in point of view has profoundly influenced the development of statistical theory and methods. in the early days, computing data summaries based on the likelihood function could be computationally difficult, and various _ad hoc_ simplifications were proposed and studied. by the late 1970s, however, the widespread availability of computing enabled a parallel development of widespread implementation of likelihood-based inference. the development of simulation and approximation methods that followed meant that both bayesian and non-bayesian inferences based on the likelihood function could be readily obtained. as a result, construction of the likelihood function, and various summaries derived from it, is now a nearly ubiquitous starting point for a great many application areas. this has a unifying effect on the field of applied statistics, by providing a widely accepted standard as a starting point for inference. with the explosion of data collection in recent decades, realistic probability models have continued to grow in complexity, and the calculation of the likelihood function can again be computationally very difficult. several lines of research in active development concern methods to compute approximations to the likelihood function, or inference functions with some of the properties of likelihood functions, in these very complex settings. in the following section, i will summarize the standard methods for inference based on the likelihood function, to establish notation, and then in section [ sec3 ] describe some aspects of more accurate inference, also based on the likelihood function. in section [ sec4 ], i describe some extensions of the likelihood function that have been proposed for models with complex dependence structure, with particular emphasis on composite likelihood. suppose we have a probability model for an observable random vector $y$ of the form $f(y;\theta)$, where $\theta$ is a vector of unknown parameters in the model, and $f$ is a density function with respect to a dominating measure, usually lebesgue measure or counting measure, depending on whether our observations are discrete or continuous. typical models used in applications assume that $\theta$ could potentially be any value in a set $\Theta$; sometimes $\Theta$ is infinite-dimensional, but more usually it is a subset of a euclidean space of some finite dimension $d$. the inverse problem mentioned in section [ sec1 ] is to construct inference about the value or values of $\theta$ that could plausibly have generated an observed value $y$. this is a considerable abstraction from realistic applied settings; in most scientific work such a problem will not be isolated from a series of investigations, but we can address at least some of the main issues in this setting. the likelihood function is simply $l(\theta;y)=c(y)f(y;\theta)$ ( [ likelihood ] ), defined only up to the arbitrary factor $c(y)>0$; i.e., there is an equivalence class of likelihood functions, and only relative ratios are uniquely determined.
from a mathematical point of view, ( [ likelihood ] ) is a trivial re-expression of the model; the re-ordering of the arguments is simply to emphasize in the notation that we are more interested in the $\theta$-section for fixed $y$ than in the $y$-section for fixed $\theta$. used directly with a given observation $y$, $l(\theta;y)$ provides a ranking of relative plausibility of various values of $\theta$, in light of the observed data. a form of direct inference can be obtained by plotting the likelihood function, if the parameter space is one- or two-dimensional, and several writers, including fisher, have suggested declaring values of $\theta$ in ranges determined by likelihood ratios as plausible, or implausible; for example, one suggestion is that values of $\theta$ for which the ratio $l(\theta;y)/l(\hat\theta;y)$ falls below some small fixed fraction be declared `implausible', where $\hat\theta$ is the maximum likelihood estimate of $\theta$, i.e., the value for which the likelihood function is maximized, over $\Theta$, for a given $y$. in general study of statistical theory and methods we are usually interested in properties of our statistical methods, in repeated sampling from the model $f(y;\theta_0)$, where $\theta_0$ is the notional `true' value of $\theta$ that generated the data. this requires considering the distribution of the likelihood function, or of relative ratios such as $l(\theta;y)/l(\hat\theta;y)$. to this end, some standard summary functions of the log-likelihood are defined. writing $\ell(\theta)=\ell(\theta;y)=\log l(\theta;y)$, we define the _score function_ $u(\theta)=\partial\ell(\theta)/\partial\theta$, and the observed and expected fisher information functions: $j(\theta)=-\partial^2\ell(\theta)/\partial\theta\,\partial\theta^{\mathrm t}$ and $i(\theta)={\rm e}\{j(\theta)\}$. if the components of $y$ are independent, then $\ell(\theta)$ is a sum of independent random variables, as is $u(\theta)$, and under some conditions on the model the central limit theorem for $u(\theta)$ leads to the following asymptotic results, as $n\to\infty$: $i(\theta)^{-1/2}u(\theta){\mathop{\longrightarrow}^{\mathcal{l}}}n(0,1)$ ( [ score ] ), $i(\theta)^{1/2}(\hat\theta-\theta){\mathop{\longrightarrow}^{\mathcal{l}}}n(0,1)$ ( [ mle ] ), and $w(\theta)=2\{\ell(\hat\theta)-\ell(\theta)\}{\mathop{\longrightarrow}^{\mathcal{l}}}\chi^2_d$ ( [ lrt ] ), where the normal limits are standard $d$-dimensional when $\theta$ is a vector, and where we suppress the dependence of each derived quantity on $y$ ( and on $n$ ) for notational convenience. these results hold under the model; a more precise statement would use the true value $\theta_0$ in ( [ score ] ), ( [ mle ] ) and ( [ lrt ] ) above, and the model $f(y;\theta_0)$. however, the quantities $u(\theta)$, $\hat\theta-\theta$ and $w(\theta)$, considered as functions of both $y$ and $\theta$, are approximate _pivotal quantities_, i.e., they have a known distribution, at least approximately.
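a toy numerical version of these three pivots may help fix ideas. the sketch below is not taken from the text: it simulates exponential data with rate $\theta$ (a model chosen only because everything is available in closed form) and evaluates the standard normal distribution function of each pivot as an approximate one-sided significance probability, using numpy and scipy.

```python
# sketch: score, wald and likelihood-root pivots for an exponential rate theta;
# the data, sample size and tested values are illustrative choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, theta0 = 30, 2.0
y = rng.exponential(scale=1.0 / theta0, size=n)
s = y.sum()

loglik = lambda th: n * np.log(th) - th * s      # l(theta)
score = lambda th: n / th - s                    # u(theta)
info = lambda th: n / th ** 2                    # j(theta) = i(theta) in this model
theta_hat = n / s                                # closed-form mle

def pivots(th):
    r = np.sign(theta_hat - th) * np.sqrt(2 * (loglik(theta_hat) - loglik(th)))
    return {"score": score(th) / np.sqrt(info(th)),
            "wald": (theta_hat - th) * np.sqrt(info(theta_hat)),
            "lik. root": r}

# approximate one-sided significance probabilities from each pivot
for th in (1.5, 2.0, 3.0):
    print(th, {k: round(norm.cdf(v), 3) for k, v in pivots(th).items()})
```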
for we could plot , for example , as a function of , where is the standard normal distribution function , and obtain approximate -values for testing any value of for fixed .the approach to inference based on these pivotal quantities avoids the somewhat artificial distinction between point estimation and hypothesis testing .when , an approximately standard normal pivotal quantity can be obtained from ( [ lrt ] ) as ^{1/2 } { \mathop{\longrightarrow}^{\mathcal{l}}}n(0 , 1).\ ] ] the likelihood function is also the starting point for bayesian inference ; if we model the unknown parameter as a random quantity with a postulated prior probability density function , then inference given an observed value is based on the posterior distribution , with density bayesian inference is conceptually straightforward , given a prior density , and computational methods for estimating the integral in the denominator of ( [ bayes ] ) , and associated integrals for marginal densities of components , or low - dimensional functions of , have enabled the application of bayesian inference in models of considerable complexity .two very useful methods include laplace approximation of the relevant integrals , and markov chain monte carlo simulation from the posterior .difficulties with bayesian inference include the specification of a prior density , and the meaning of probabilities for parameters of a mathematical model .one way to assess the influence of the prior is to evaluate the properties of the resulting inference under the sampling model , and under regularity conditions similar to those needed to obtain ( [ score ] ) , ( [ mle ] ) and ( [ lrt ] ) , a normal approximation to the posterior density can be derived : implying that inferences based on the posterior are asymptotically equivalent to those based on .this simple result underlines the fact that bayesian inference will in large samples give approximately correct inference under the model , and also that to distinguish between bayesian and non - bayesian approaches we need to consider the next order of approximation .if , then ( [ score])([lrt ] ) can be used to construct confidence regions , or to test simple hypotheses of the form , but in many settings can usefully be separated into a parameter of interest , and a nuisance parameter , and analogous versions of the above limiting results in this context are where is the profile log - likelihood function , is the constrained maximum likelihood estimate of the nuisance parameter when is fixed , is the dimension of , and is the fisher information function based on the profile log - likelihood function .the third result ( [ lrt2 ] ) can be used for model assessment among nested models ; for example , the exponential distribution is nested within both the gamma and weibull models , and a test based on of , say , a gamma model with unconstrained shape parameter , and one with the shape parameter set equal to 1 , is a test of fit of the exponential model to the data ; the rate parameter is the nuisance parameter .the use of the log - likelihood ratio to compare two non - nested models , for example a log - normal model to a gamma model , requires a different asymptotic theory ( , , ch .a related approach to model selection is based on the akaike information criterion , where is the dimension of .just as only differences in log - likelihoods are relevant , so are differences in : for a sequence of model fits the one with the smallest value of is preferred .the criterion was developed in the context of 
prediction in time series , but can be motivated as an estimate of the kullback - leibler divergence between a fitted model and a notional ` true ' model .the statistical properties of as a model selection criterion depend on the context ; for example for choosing among a sequence of regression models of the same form , model selection using is not consistent ( , , ch .several related versions of model selection criterion have been suggested , including modifications to , and a version motivated by bayesian arguments , where is the sample size for the model with parameters .the approximate inference suggested by the approximate pivotal quantities ( [ score2 ] ) , ( [ mle2 ] ) and ( [ lrt2 ] ) is obtained by treating the profile log - likelihood function as if it were a genuine log - likelihood function , i.e. as if the true value of were .this can be misleading , because it does not account for the fact that the nuisance parameter has been estimated .one familiar example is inference for the variance in a normal theory linear regression model ; the maximum likelihood estimate is which has expectation , where is the dimension of .although this estimator is consistent as with fixed , it can be a poor estimate for finite samples , especially if is large relative to , and the divisor is used in practice .one way to motivate this is to note that is unbiased for ; an argument that generalizes more readily is to note that the likelihood function can be expressed as where is proportional to the density of and is the marginal density of or equivalently .the unbiased estimate of maximizes the second component , which is known as the restricted likelihood , and estimators based on it often called `` reml '' estimators .higher order asymptotic theory for likelihood inference has proved to be very useful for generalizing these ideas , by refining the profile log - likelihood to take better account of the nuisance parameter , and has also provided more accurate distribution approximations to pivotal quantities .perhaps most importantly , for statistical theory , higher order asymptotic theory helps to clarify the role of the likelihood function and the prior in the calibration of bayesian inference .these three goals have turned out to be very intertwined . 
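as a concrete, if artificial, illustration of the model-selection use of these criteria: the sketch below is my own simulated example, not from the text. it fits normal-theory polynomial regressions of increasing degree to data generated from a straight line and reports aic and bic using the usual $-2\hat\ell+2k$ and $-2\hat\ell+k\log n$ conventions, with the log-likelihood profiled over the error variance.

```python
# sketch: aic and bic for a small nested sequence of normal-theory regression
# models (polynomial degree 0,1,2,3); data simulated from the degree-1 model.
import numpy as np

rng = np.random.default_rng(6)
n = 60
x = rng.uniform(-1, 1, size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

for deg in range(4):
    X = np.vander(x, deg + 1, increasing=True)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta) ** 2)
    k = deg + 2                                              # coefficients plus sigma^2
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)    # profiled over sigma^2
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + np.log(n) * k
    print(deg, round(aic, 1), round(bic, 1))
```

with this configuration both criteria typically select the degree-one model, with bic penalizing the larger models more heavily.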
to illustrate some aspects of this , consider the marginal posterior density for , where : laplace approximation to the numerator and denominator integrals leads to where is the block of the observed fisher information function corresponding to the nuisance parameter , has been computed using the partitioned form to give the second expression in ( [ laplace2 ] ) , and in the third expression when renormalized to integrate to one , this laplace approximation has relative error in independent sampling from a model that satisfies various regularity conditions similar to those needed to show the asymptotic normality of the posterior .these expressions show that an adjustment for estimation of the nuisance parameter is captured in , and this adjustment can be included in the profile log - likelihood function , as in the third expression in ( [ laplace2 ] ) , or tacked onto it , as in the second expression .the effect of the prior is isolated from this nuisance parameter adjustment effect , so , for example , if , and the priors for and are independent , then the form of the prior for given does not affect the approximation .the adjusted profile log - likelihood function is the simplest of a number of modified profile log - likelihood functions suggested in the literature for improved frequentist inference in the presence of nuisance parameters , and was suggested for general use in , after reparametrizing the model to make and orthogonal with respect to expected fisher information , i.e. , .this reparameterization makes it at least more plausible that and could be modelled as _ a priori _ independent , and also ensures that , rather than the usual .a number of related , but more precise , adjustments to the profile log - likelihood function have been developed from asymptotic expansions for frequentist inference , and take the form where ; see , for example , ( ) and .the change from to is related to the orthogonality conditions ; in ( [ mpl ] ) orthogonality of parameters is not needed , as the expression is parameterization invariant .inferential statements based on approximations from ( [ score2])([lrt2 ] ) , with or substituting for the profile log - likelihood function , are still valid and are more accurate in finite samples , as they adjust for errors due to estimation of .they are still first - order approximations , although often quite good ones .one motivation for these modified profile log - likelihood functions , and inference based on them , is that they approximate marginal or conditional likelihoods , when these exist .for example , if the model is such that then inference for can be based on the marginal likelihood for based on , and the theory outlined above applies directly .this factorization is fairly special ; more common is a factorization of the form : in that case to base our inference on the likelihood for from would require further checking that little information is lost in ignoring the second term. 
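the effect of the adjustment term can be seen in the simplest possible case. in the sketch below (an illustrative worked example, not taken from the text) the observations are $n(\lambda,\psi)$ with $\psi=\sigma^2$ the interest parameter and the mean $\lambda$ the nuisance parameter: the profile log-likelihood peaks at the divisor-$n$ estimate, while $\ell_{\mathrm p}(\psi)-\tfrac12\log j_{\lambda\lambda}(\psi,\hat\lambda_\psi)$ peaks at the divisor-$(n-1)$ estimate, the reml answer mentioned earlier.

```python
# sketch: profile vs adjusted profile log-likelihood for psi = sigma^2 with
# nuisance lambda = mu, for y_i ~ N(mu, sigma^2); illustrative data only.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
y = rng.normal(loc=10.0, scale=2.0, size=12)
n, S = len(y), np.sum((y - y.mean()) ** 2)

def lp(psi):                      # profile log-likelihood l_p(psi)
    return -0.5 * n * np.log(psi) - S / (2 * psi)

def l_adj(psi):                   # l_p(psi) - 0.5 * log j_{lambda,lambda}(psi, hat-lambda_psi)
    return lp(psi) - 0.5 * np.log(n / psi)

argmax = lambda f: minimize_scalar(lambda p: -f(p), bounds=(0.1, 100), method="bounded").x
print(argmax(lp), S / n)          # profile maximiser ~ S/n
print(argmax(l_adj), S / (n - 1)) # adjusted maximiser ~ S/(n-1)
```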
arguments like these , applied to special classes of model families , were used to derive the modified profile log - likelihood inference outlined above .a related development is the improvement of the distributional approximation to the approximate pivotal quantity ( [ root ] ) .the laplace approximation ( [ laplace2 ] ) can be used to obtain the bayesian pivotal , for scalar , where ^{1/2 } , \\\label{qb}q_b(\psi ) & = & -\ell_{\mathrm{p}}'(\psi ) j_{\mathrm{p}}^{-1/2}(\hat\psi ) \biggl\{\frac{|j_{\lambda\lambda}(\psi,\hat\lambda_\psi ) |}{|j_{\lambda\lambda}(\hat\psi,\hat\lambda)| } \biggr \}^{1/2}\frac{\pi(\hat\psi,\hat\lambda)}{\pi(\psi,\hat\lambda_\psi)}\end{aligned}\ ] ] and the approximation in ( [ rstarb ] ) is to the posterior distribution of , given , and is accurate to .there is a frequentist version of this pivotal that has the same form : where is given by ( [ root2 ] ) , but the expression for requires additional notation , and indeed an additional likelihood component .in the special case of no nuisance parameters in ( [ qf1 ] ) , we have assumed that there is a one - to - one transformation from to , and that we can write the log - likelihood function in terms of and then differentiate it with respect to , for fixed .expression ( [ qf2 ] ) is equivalent , but expresses this sample space differentiation through a data - dependent reparameterization , where the derivative with respect to is a directional derivative to be determined .the details are somewhat cumbersome , and even more so for the case of nuisance parameters , but the resulting approximate pivotal quantity is readily calculated in a wide range of models for independent observations .detailed accounts are given in , , , and ( , ch .8.6 ) ; the last emphasizes implementation in a number of practical settings , including generalized linear models , nonlinear regression with normal errors , linear regression with non - normal errors , and a number of more specialized models . from a theoretical point of view ,an important distinction between and is that the latter requires differentiation of the log - likelihood function on the sample space , whereas the former depends only on the observed log - likelihood function , along with the prior .the similarity of the two expressions suggests that it might be possible to develop prior densities for which the posterior probability bounds are guaranteed to be valid under the model , at least to a higher order of approximation than implied by ( [ bayes2 ] ) , and there is a long line of research on the development of these so - called `` matching priors '' ; see , for example , .while the asymptotic results of the last section provide very accurate inferences , they are not as straightforward to apply as the first order results , especially in models with complex dependence . they do shed light on many aspects of theory , including the precise points of difference , asymptotically , between bayesian and nonbayesian inference . andthe techniques used to derive them , saddlepoint and laplace approximations in the main , have found application in complex models in certain settings , such as the integrated nested laplace approximation of . 
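to give a feel for the gain in accuracy, the sketch below computes $r$ and $r^\star=r+r^{-1}\log(q/r)$ for the rate of an exponential sample and compares the resulting significance probabilities with the exact ones obtained from the gamma distribution of the sufficient statistic. two assumptions are flagged: the model and data are invented for the illustration, and $q$ is taken to be the wald statistic computed in the canonical parametrization (here $-\theta$), oriented to have the same sign as $r$, which is the standard choice for a scalar full exponential family.

```python
# sketch: first-order (phi(r)) and third-order (phi(r*)) approximations to the
# significance function of an exponential rate, against the exact gamma answer.
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(7)
n = 10
y = rng.exponential(scale=1.0, size=n)
s = y.sum()
theta_hat = n / s
loglik = lambda th: n * np.log(th) - th * s

def signif(th):
    r = np.sign(theta_hat - th) * np.sqrt(2 * (loglik(theta_hat) - loglik(th)))
    q = (theta_hat - th) * s / np.sqrt(n)      # canonical-scale wald statistic
    rstar = r + np.log(q / r) / r              # removable singularity at th = theta_hat
    exact = gamma.sf(s, a=n, scale=1.0 / th)   # P_theta(sum Y_i >= s_obs)
    return norm.cdf(r), norm.cdf(rstar), exact

for th in (0.6, 1.0, 1.8):
    print(th, [round(v, 4) for v in signif(th)])
```

even with such a small sample the third-order values should track the exact gamma probabilities noticeably more closely than the first-order ones; the exponential/gamma model is a standard test-bed for exactly this comparison.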
a glance at any number of papers motivated by specific applications , though , will confirm that likelihood summaries , and in particular computation of the maximum likelihood estimator , are often the inferential goal , even as the models become increasingly high - dimensional .this is perhaps a natural consequence of the emphasis on developing probability models that could plausibly generate , or at least describe , the observed responses , as the likelihood function is directly obtained from the probability model .but more than this , inference based on the likelihood function provides a standard set of tools , whose properties are generally well - known , and avoids the construction of _ ad hoc _ inferential techniques for each new application .for example , write `` the likelihood framework is an efficient way to extract information from a neural spike train we believe that greater use of the likelihood based approaches and goodness - of - fit measures can help improve the quality of neuroscience data analysis '' . a number of inference functions based on the likelihood function , or meant to have some of the key properties of the likelihood function ,have been developed in the context of particular applications or particular model families . in some cases the goal is to find ` reasonably reliable ' estimates of a parameter , along with an estimated standard error ; in other cases the goal is to use approximate pivotal quantities like those outlined in section [ sec2 ] in settings where the likelihood is difficult to compute .the goal of obtaining reliable likelihood - based inference in the presence of nuisance parameters was addressed in section [ sec3 ] . in some settings ,families of parametric models are too restrictive , and the aim is to obtain likelihood - type results for inference in semi - parametric and non - parametric settings . in many applications with longitudinal , clustered , or spatial data , the starting point is a generalized linear model with a linear predictor of the form , where and are and , respectively , matrices of predictors , and is a -vector of random effects .the marginal distribution of the responses requires integrating over the distribution of the random effects , and this is often computationally infeasible .many approximations have been suggested : one approach is to approximate the integral by laplace s method , leading to what is commonly called penalized quasi - likelihood , although this is different from the penalized versions of composite likelihood discussed below .the term quasi - likelihood in the context of generalized linear models refers to the specification of the model through the mean function and variance function only , without specifying a full joint density for the observations .this was first suggested by , and extended to longitudinal data in and later work , leading to the methodology of generalized estimating equations , or gee . compared penalized quasi - likelihood to pairwise likelihood , discussed in section [ sec4.3 ] , in simulations of multivariate probit models for binary data with random effects . 
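(as an aside before returning to that comparison: the quasi-likelihood recipe itself fits in a few lines. the sketch below is an invented illustration — overdispersed counts with only the mean $\mu_i=\exp(x_i^{\mathrm t}\beta)$ and variance $\phi\mu_i$ specified — in which the quasi-score equation is solved by iterative weighted least squares and the dispersion $\phi$ is then estimated from the pearson statistic and used to inflate the standard errors.)

```python
# sketch: quasi-likelihood for overdispersed counts; only E(y_i) = mu_i and
# var(y_i) = phi * mu_i are specified.  data are simulated (negative-binomial
# generation is just a convenient way to produce overdispersion).
import numpy as np

rng = np.random.default_rng(11)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.8])
mu_true = np.exp(X @ beta_true)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu_true))

beta = np.zeros(2)
for _ in range(25):                          # iterative weighted least squares
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu             # working response, log link
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

mu = np.exp(X @ beta)
phi = np.sum((y - mu) ** 2 / mu) / (n - X.shape[1])   # pearson dispersion estimate
cov = phi * np.linalg.inv(X.T @ (mu[:, None] * X))    # quasi-likelihood covariance
print(beta, phi, np.sqrt(np.diag(cov)))
```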
in generalpenalized quasi - likelihood led to estimates with larger bias and variance than pairwise likelihood .a different approach to generalized linear mixed models has been developed by lee and nelder ; see , for example , and , under the name of -likelihood .this addresses some of the failings of the penalized quasi - likelihood method by modelling the mean parameters and dispersion parameters separately .the -likelihood for the dispersion parameters is motivated by reml - type arguments not unrelated to the higher order asymptotic theory outlined in the previous section .there are also connections to work on prediction using likelihood methods .likelihood approaches to prediction have proved to be somewhat elusive , at least in part because the ` parameter ' to be predicted is a random variable , although bayesian approaches are straightforward as no distinction is made between parameters and random variables .composite likelihood is one approach to combining the advantages of likelihood with computational feasibility ; more precisely it is a collection of approaches .the general principle is to simplify complex dependence relationships by computing marginal or conditional distributions of some subsets of the responses , and multiplying these together to form an inference function . as an _ ad hoc _ solution it has emerged in several versions and in several contexts in the statistical literature ;an important example is the pseudo - likelihood for spatial processes proposed in ( ) . in studies of large networks ,computational complexity can be reduced by ignoring links between distant nodes , effectively treating sub - networks as independent . in gaussian process models with high - dimensional covariance matrices , assumingsparsity in the covariance matrix is effectively assuming subsets of variables are independent .the term composite likelihood was proposed in , where the theoretical properties of composite likelihood estimation were studied in some generality .we suppose a vector response of length is modelled by . given a set of events , the composite likelihood functionis defined as and the composite log - likelihood function is because each component in the sum is the log of a density function , the resulting score function has expected value , so has at least one of the properties of a genuine log - likelihood function . relatively simple and widely used examples of composite likelihoods include independence composite likelihood , pairwise composite likelihood and pairwise conditional composite likelihood where and are the marginal densities for a single component and a pair of components of the vector observation , and the density in ( [ clcond ] ) is the conditional density of one component , given the remainder .many similar types of composite likelihood can be constructed , appropriate to time series , or spatial data , or repeated measures , and so on , and the definition is usually further extended by allowing each component event to have an associated weight . indeed one of the difficulties of studying the theory of composite likelihood is the generality of the definition . 
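a compact numerical illustration of the pairwise version, again not from the text: the sketch below simulates an 8-dimensional exchangeable gaussian vector with unit variances and common correlation $\rho$, estimates $\rho$ by maximizing the sum of bivariate normal log densities over all pairs, and then computes a sandwich standard error of the kind discussed next (numpy and scipy only; the dimension, sample size and true $\rho$ are arbitrary choices).

```python
# sketch: pairwise composite likelihood for the common correlation rho of an
# exchangeable multivariate normal, plus a sandwich (godambe-type) standard
# error built from per-observation score and curvature contributions.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
m, n, rho_true = 8, 150, 0.4
Sigma = (1 - rho_true) * np.eye(m) + rho_true * np.ones((m, m))
Y = rng.multivariate_normal(np.zeros(m), Sigma, size=n)
pairs = [(r, s) for r in range(m) for s in range(r + 1, m)]

def unit_cl(rho):
    # length-n vector: each observation's pairwise log-likelihood contribution
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return sum(multivariate_normal.logpdf(Y[:, [r, s]], mean=np.zeros(2), cov=cov)
               for r, s in pairs)

rho_hat = minimize_scalar(lambda r: -unit_cl(r).sum(),
                          bounds=(-0.1, 0.9), method="bounded").x

# sensitivity H and variability J by numerical differentiation at rho_hat
eps = 1e-4
u = (unit_cl(rho_hat + eps) - unit_cl(rho_hat - eps)) / (2 * eps)
h = (unit_cl(rho_hat + eps) - 2 * unit_cl(rho_hat) + unit_cl(rho_hat - eps)) / eps ** 2
H, J = -h.mean(), u.var()
se = np.sqrt(J / (n * H ** 2))               # inverse godambe information
print(rho_hat, se)
```

when the sensitivity and variability quantities differ, as they generally do for a composite likelihood, the naive curvature-based standard error and the sandwich one above do not coincide; that discrepancy is the point of the godambe information introduced below.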
inference based on composite likelihood is constructed from analogues to the asymptotic results for genuine likelihood functions .assuming we have a sample of independent observations of , the composite score function , is used as an estimating function to obtain the maximum composite likelihood estimator , and under regularity conditions on the full model , with and fixed , we have , for example , where is the godambe information matrix , and are the variability and sensitivity matrix associated with .the analogue of ( [ lrt ] ) is where are the eigenvalues of .neither of these results is quite as convenient as the full likelihood versions , and in particular contexts it may be difficult to estimate accurately , but there are a number of practical settings where these results are much more easily implemented than the full likelihood results , and the efficiency of the methods can be quite good . a number of applied contexts are surveyed in . as just one example , developed subsequently , investigate pairwise composite likelihood for max - stable processes , developed to model extreme values recorded at a number of spatially correlated sites .although the form of the -dimensional density is known , it is not computable for , although expressions are available for the joint density at each pair of sites .composite likelihood seems to be particularly important for various types of spatial models , and many variations of it have been suggested for these settings . in some applications , particularly for timeseries , but also for space - time data , a sample of independent observations is not available , and the relevant asymptotic theory is for , where is the dimension of the single response .the asymptotic results outlined above will require some conditions on the decay of the dependence among components as the ` distance ' between them increases .asymptotic theory for pairwise likelihood is investigated in for linear time series , and in for max - stable processes in space and time .composite likelihood can also be used for model selection , with an expression analogous to , and for bayesian inference , after adjustment to accommodate result ( [ clrt ] ) ._ statistica sinica _ * 21 * , # 1 is a special issue devoted to composite likelihood , and more recent research is summarized in the report on a workshop at the banff international research station . in some applications, a flexible class of models can be constructed in which the nuisance ` parameter ' is an unknown function .the most widely - known example is the proportional hazards model of for censored survival data ; but semi - parametric regression models are also widely used , where the particular covariates of interest are modelled with a low - dimensional regression parameter , and other features expected to influence the response are modelled as ` smooth ' functions . developed inference based on a partial likelihood , which ignored the aspects of the likelihood bearing on the timing of failure events , and subsequent theory based on asymptotics for counting processes established the validity of this approach .in fact , s partial likelihood can be viewed as an example of composite likelihood as described above , although the theory for general semi - parametric models seems more natural . 
showed that partial likelihood can be viewed as a profile likelihood , maximized over the nuisance function , and discussed a class of semi - parametric models for which the profile likelihood continues to have the same asymptotic properties as the usual parametric profile likelihood ; the contributions to the discussion of their results provide further insight and references to the extensive literature on semi- and non - parametric likelihoods .there is , however , no guarantee that asymptotic theory will lead to accurate approximation for finite samples ; it would presumably have at least the same drawbacks as profile likelihood in the parametric setting .improvements via modifications to the profile likelihood , as described above in the parametric case , do not seem to be available in these more general settings .some semi - parametric models are in effect converted to high - dimensional parametric models through the use of linear combinations of basis functions ; thus the linear predictor associated with a component might be , or .the log - likelihood function for models such as these is often regularized , so that is replaced by , where is a penalty function such as or , and a tuning parameter .many of these extensions , and the asymptotic theory associated with them , are discussed in ( , ch .penalized likelihood using squared error is reviewed in ; the penalty has been suggested as a means of combining likelihood inference with variable selection ; see , for example , . penalized composite likelihoodshave been proposed for applications in spatial analysis ( , ; , ; , ) , gaussian graphical models , and clustered longitudinal data .the difference between semi - parametric likelihoods and nonparametric likelihoods is somewhat blurred ; both have an effectively infinite - dimensional parameter space , and as discussed in and the discussion , conditions on the model to ensure that likelihood - type asymptotics still hold can be quite technical .empirical likelihood is a rather different approach to non - parametric models first proposed by ; a recent discussion is .empirical likelihood assumes the existence of a finite - dimensional parameter of interest , defined as a functional of the distribution function for the data , and constructs a profile likelihood by maximizing the joint probability of the data , under the constraint that this parameter is fixed .this construction is particularly natural in survey sampling , where the parameter is often a property of the population ( , ; , ) .distribution theory for empirical likelihood more closely follows that for usual parametric likelihoods .simulation of the posterior density by markov chain monte carlo methods is widely used for bayesian inference , and there is an enormous literature on various methods and their properties . 
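as a small illustration of the regularised likelihoods described earlier in this passage , the sketch below maximises a log - likelihood minus an l1 ( lasso - type ) penalty ; the optimiser choice and the interface are illustrative , and dedicated coordinate - descent software would normally be used instead .

```python
import numpy as np
from scipy.optimize import minimize

def penalized_fit(loglik, theta0, lam):
    """Maximise loglik(theta) - lam * sum(|theta_j|), an l1-penalised
    log-likelihood.  loglik and theta0 are user-supplied; lam is the
    tuning parameter."""
    objective = lambda th: -(loglik(th) - lam * np.sum(np.abs(th)))
    # Nelder-Mead avoids differentiating the non-smooth penalty;
    # this is a sketch, not a recommendation for large problems.
    return minimize(objective, theta0, method="Nelder-Mead").x
```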
some of these methods can be adapted for use when the likelihood function itself can not be computed , but it is possible to simulate observations from the stochastic model ; many examples arise in statistical genetics .simulation methods for maximum likelihood estimation in genetics was proposed in ; more recently sequential monte carlo methods ( see , for example , ) and abc ( approximate bayesian computation ) methods ( , ; , ) are being investigated as computational tools .a reviewer of an earlier draft suggested that a great many applications , especially involving very large and/or complex datasets , take more algorithmic approaches , often using techniques designed to develop sparse solutions , such as wavelet or thresholding techniques , and that likelihood methods may not be relevant for these application areas .certainly a likelihood - based approach depends on a statistical model for the data , and for many applications under the general rubric of machine learning these may not be as important as developing fast and reliable approaches to prediction ; recommender systems are one such example .there are however many applications of ` big data ' methods where statistical models do provide some structure , and in these settings , as in the more classical application areas , likelihood methods provide a unifying basis for inference .this research was partially supported by the natural sciences and engineering research council .thanks are due to two reviewers for helpful comments on an earlier version . | i review the classical theory of likelihood based inference and consider how it is being extended and developed for use in complex models and sampling schemes . |
markov chain monte carlo ( mcmc ) algorithms are often used to generate samples distributed according to non - trivial densities in high dimensional spaces .many algorithms have been developed that allow mcmcs to produce samples from an unnormalized target density : in many applications , it is desirable or even necessary to be able to normalize the target density. i.e. , to calculate where is the support of . this integral can be computationally very costly or impossible to perform with standard techniques if the volume where the target is non - negligible occupies a very small part of the total volume of .an important area where such integration is necessary is for bayesian data analysis .bayes formula reads , for a given model , where here are the parameters of the model and the data are used to extract probabilities for possible values of .the denominator is usually expanded using the law of total probability and written in the form and goes by the names ` evidence ' , or ` marginal likelihood ' , and is the type of integral that we want to be able to calculate ( here the data are fixed and ) .an example use of is for the calculation of bayes factors in the comparison of two models : another application where the calculation of a normalization can be very important is in the parallelization of the mcmc algorithm . while the mcmc approach has very attractive features , it is often slow in its execution due to the nature of the algorithm .a goal is therefore to parallelize the computations needed to map out the target density .this looks at first sight difficult since the mcmc algorithms are by construction serial .a parallelization of the calculations can however be achieved via a partitioning of the support .i.e. , we partition into sub volumes with and we run a separate mcmc sampling for each sub volume . in order to have a final set of samples representing the target density over the full support, we need to know the relative probabilities for the different sub volumes .i.e , we need the samples in the different regions are then given weights with the number of samples from in and .a variety of techniques to calculate the evidence in bayesian calculations have been successfully developed .a summary can be found in , where a number of mcmc related techniques are reviewed , including laplace s method , harmonic mean estimation , chib s method , annealed importance sampling techniques , nested sampling and thermodynamic integration methods .we are here specifically interested in testing techniques directly applicable in an mcmc setting , and which is independent of the specific mcmc algorithm .we assume that the mcmc algorithm has been successfully run to extract samples according to the target density , and the goal is to provide an algorithm for calculating the normalization ( or evidence ) .given our requirements , only arithmetic mean estimation ( ame ) , harmonic mean estimation ( hme ) and laplace methods are directly applicable . using ame and hme methodsdirectly is known to fail in many situations , and the laplace method is only applicable if the target density is gaussian .we introduce the use of a reduced integration volume and normalization using the mcmc output to improve the ame and hme performance . after a description of the techniques, we report on numerical investigations of the different approaches using samples from the mcmc code bat . 
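the reweighting needed to recombine the sub - volume chains mentioned above can be written down in a few lines ; the weight i_i / n_i used here ( integral of the target over sub - volume i divided by the number of samples drawn there ) is the natural choice implied by the text , although the stripped formula is not quoted directly .

```python
import numpy as np

def combine_subvolume_chains(chains, sub_integrals):
    """Pool MCMC samples drawn separately on disjoint sub-volumes so that,
    together with their weights, they represent the target over the full
    support.  chains[i] is an array of samples from sub-volume i;
    sub_integrals[i] is the integral of the (unnormalised) target there."""
    samples, weights = [], []
    for chain, I_i in zip(chains, sub_integrals):
        samples.append(np.asarray(chain))
        # each of the N_i samples from sub-volume i carries weight I_i / N_i
        weights.append(np.full(len(chain), I_i / len(chain)))
    w = np.concatenate(weights)
    return np.concatenate(samples), w / w.sum()
```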
assuming the mcmc has been successfully run to extract samples according to , one of the quantities directly retrievable from the mcmc outputis an estimate of the parameter values at the global mode : is in the neighborhood of .i.e. , we know approximately where the integrand in eq .[ eq : integral ] has its maximum .we note that with a sub support of is directly estimated from the mcmc output by counting the fraction of samples falling within , ( the reason for this notation will become clear below ) .i.e. , the task of evaluating reduces to integrating the function over a well - chosen region - presumably a small region around and dividing by .this integral can be much simpler to evaluate than the integral over the full support . in the following , we use a simple hypercube for our integration region .from the mcmc samples , we can construct the marginalized distributions along each of the dimensions .we define an interval along each dimension centered at with width which is a multiple of the standard deviation ( we use the symbol to represent this factor ) .the optimum value of depends on the dimensionality of the problem as described below .another option would be to produce a covariance matrix of the for sampling using a multivariate normal distribution if desired , but this was not found necessary in the examples we have studied .the integral in the numerator in eq .[ eq : scheme ] can presumably be determined in a straightforward way since now we are focusing on a small volume with significant mass .the standard importance sampling approximation is given by where our sampling probability density is given by . is the number of samples used in the calculation .if we choose for a uniform distribution in the hypercube , then we have the well - known sample mean result with the volume of the hypercube .our estimator for is then we will use this simplest version of the estimator for our examples below . assuming unbiased gaussian distributions for and about their true values, we can estimate the uncertainty for with where the effective sample size is defined here as with the autocorrelation function at defined for our mcmc sample as in these equations , the subscript labels the component of , while the index labels the iteration in the mcmc .the uncertainty from the sample mean integration is estimated by separating the sample mean calculation of into batches and looking at the variance of these calculations : with these definitions , we are able to report both an estimate for our integral and an uncertainty . these will be compared to accurately calculated values for the chosen examples in the following sections . the hme value for can be calculated as follows :{\hat{f}(\lambda ) } & = & \int_{\omega } \frac{1}{f(\lambda ) } \cdot \frac{f(\lambda)}{i } d\lambda \\ & = & \frac{v}{i } \end{aligned}\ ] ] where is the normalised target density and is the total volume of the support .the hme estimator is then this calculation is performed directly from the mcmc output from which the samples as well as are available , and does not require an extra sample mean calculation as in the ame scheme . however , it can be unstable because of samples occurring ( or missing ) in regions where is small ( relative to other regions ) .we can improve the estimation , as originally noted in , by limiting ourselves to a small volume around the mode . using the same notation as above ,we can write where now only the samples in the restricted support are used . 
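a compact sketch of the two reduced - volume estimators just described is given below : the hypercube is centred on the sample mode with half - width c marginal standard deviations per dimension , the fraction of chain samples inside gives the mass correction , and the integral over the hypercube is obtained either from a uniform sample mean ( ame ) or from the restricted harmonic mean ( hme ) . the argument names , the use of the chain's own standard deviations and the mode estimate via the highest - density sample are assumptions consistent with , but not copied from , the text .

```python
import numpy as np

def _hypercube(logf, chain, c):
    """Hypercube centred on the sample mode, half-width c marginal std's."""
    mode = chain[np.argmax([logf(x) for x in chain])]
    half = c * chain.std(axis=0)
    return mode - half, mode + half

def ame_evidence(logf, chain, c, n_draws, rng=None):
    """Arithmetic (sample) mean estimate of the normalisation of exp(logf)."""
    rng = rng or np.random.default_rng()
    lo, hi = _hypercube(logf, chain, c)
    inside = np.all((chain >= lo) & (chain <= hi), axis=1)
    f_r = inside.mean()                        # fraction of MCMC samples in the cube
    V_r = np.prod(hi - lo)                     # cube volume
    u = rng.uniform(lo, hi, size=(n_draws, chain.shape[1]))
    I_r = V_r * np.mean(np.exp([logf(x) for x in u]))   # integral over the cube
    return I_r / f_r

def hme_evidence(logf, chain, c):
    """Harmonic mean estimate restricted to the same hypercube."""
    lo, hi = _hypercube(logf, chain, c)
    inside = np.all((chain >= lo) & (chain <= hi), axis=1)
    f_r, V_r = inside.mean(), np.prod(hi - lo)
    harm = np.mean([1.0 / np.exp(logf(x)) for x in chain[inside]])
    return V_r / (f_r * harm)
```

splitting the uniform draws into batches and taking the spread of the per - batch estimates reproduces the sample - mean uncertainty estimate described above .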
the uncertainty in the estimate is calculated by separating the mcmc samples included in our integration region into batches and looking at the variation of these estimates . in this approach, the target distribution is assumed to be represented by a ( multivariate ) gaussian distribution .the estimator for the normalization is then where the target density is evaluated at the mode returned from the mcmc and is the determinant of the covariance matrix evaluated numerically from the samples .this method is clearly only expected to work in cases where the assumption of normality is valid .we start with a simple example - the target density is the product of a number of gauss functions depending on only one parameter - to describe our testing procedures in detail .we then move on to more complicated examples in multivariate spaces , including functions with degenerate modes .all mcmc calculations were performed using the bat program , with samples from the target density taken after convergence of the mcmc algorithm .we start with the following target function : this type of function could , e.g. , be the likelihood function constructed for producing an estimate of a quantity , , given measurements , , with a sampling distribution modeled by a gaussian probability distribution of fixed width .the normalization integral for the target can be performed analytically assuming the volume of interest extends well beyond the extreme values of the . for the more general case of the product of -dimensional uncorrelated gaussian functions with known variances , the integralis given by d\vec{\mu } \\ & = & \frac{1}{(2\pi)^{d\cdot ( m-1)/2}\vert\sigma\vert^{(m-1)/2 } m^{d/2 } } \exp\left(-\sum_{j=1}^d \frac{{\rm var}[x_j]}{2\sigma_j^2 } \right ) \end{aligned}\ ] ] where is the ( diagonal ) covariance matrix .for our concrete example , we take and generate random values of from a gauss distribution of mean zero and unit standard deviation , and we find for the generated values . in evaluating the integral , we take for the support .we then use samples from the mcmc output to find an estimate for the mode of and to calculate the standard deviation for .the distribution of samples from from the mcmc are displayed in fig .[ fig : mcmc1d](left ) .the mode of the samples is found at and the standard deviation is found to be .the effective sample size for this set of samples is . the dependence of on the chosen value of is also shown in fig .[ fig : mcmc1d](right ) for 500 values of ranging from to in steps of . for a one - dimensional gaussian target density , which is what we have here, the expectation is that 68 % of mcmc samples occur within and 95 % occur within , and this is indeed what is found . as a function of the value of ( in units of the standard deviation of the distribution).,title="fig:",width=264 ] as a function of the value of ( in units of the standard deviation of the distribution).,title="fig:",width=264 ]we then perform a sample mean calculation with samples for each of the different choices of . 
for each calculation, we extract a value of as described in section [ sec : sm ] as well as an estimate of the uncertainty .the extracted values of ( divided by the true value ) are shown as a function of in fig .[ fig : evidence1d](left ) .the error bars are the estimated one standard deviation uncertainties .we observe small systematic deviations of the results for small values of resulting from the inaccurate determination of from the mcmc samples ( note that the mcmc was only run once , so that the values are correlated ) . as a function of , scaled by the true value .the error bars correspond to the estimated uncertainty .right ) the actual error ( red ) , the estimated uncertainty ( black ) and the total estimated uncertainty ( blue ) as a function of ., title="fig:",width=264 ] as a function of , scaled by the true value .the error bars correspond to the estimated uncertainty .right ) the actual error ( red ) , the estimated uncertainty ( black ) and the total estimated uncertainty ( blue ) as a function of ., title="fig:",width=264 ] to study the uncertainty estimation , we compare to at each value of .the results are shown in fig .[ fig : evidence1d](right ) . in this figure ,the red points indicate the absolute value of , the black points the estimated uncertainty coming from the sample mean calculation , , and the blue points the total estimated uncertainty , .we observe that our estimated uncertainty is accurate , and that there is a minimum of the uncertainty around .the location of the minimum clearly depends on the number of samples chosen for the mcmc and sample mean calculations , but it is important that we can accurately estimate the uncertainty . in this case , the arithmetic mean calculation is quite accurate even at large values of since we are only working in one - dimension .we now evaluate the harmonic mean estimate for as described in section [ sec : hme ] .the estimate as well as the absolute deviation from as a function of are shown in fig .[ fig : hme ] .we see for this example that the hme technique works well , and that accuracies of a fraction of 1 % are possible from the hme estimation at . 
as increased , the hme estimation worsens since , although more of the mcmc samples are included , reducing the binomial uncertainty on , imperfect sampling in the tails of the distribution plays a large role and we see the importance of limiting the range of the integration region for the hme calculation already with this simple one - dimensional example .the uncertainty is somewhat worse than what was found for the ame calculation , but probably adequate for the majority of applications .also , the calculation did not require the extra step of performing a sample mean calculation .scaled by the true value as a function of .the error bars correspond to the estimated uncertainty .right ) the actual error and the estimated uncertainty as a function of ., title="fig:",width=264 ] scaled by the true value as a function of .the error bars correspond to the estimated uncertainty .right ) the actual error and the estimated uncertainty as a function of ., title="fig:",width=264 ] as seen in fig .[ fig : mcmc1d ] , the target density is gaussian and therefore the laplace method is expected to work well .indeed , the laplace method yields an estimate within % of the true value in this example : .we now move to a target density composed of a product of ten dimensional gaussian distributions with non - diagonal covariance matrix .the target function in this case is : where is the covariance matrix , assumed to be known , and .the target function is ten - dimensional and has significant correlations among the ten parameters .the values of were chosen by generating random vectors using and the following covariance matrix and again could represent a type of situation found in a data analysis setting .the integration region for was taken as a hypercube of side length centered on .the value for can again be evaluated analytically by finding the similarity transformation that diagonalizes the covariance matrix .the expression of the integral in this case is \right ) .\ ] ] where and with a diagonal matrix .the true value of the integral for randomly generated data was evaluated using this expression and yielded .the mcmc program bat was used to sample from the target density with samples stored post - convergence ( yielding ) .the value of is given as a function of in fig .[ fig:10destimator ] . as a function of the value of ( in units of the standard deviation of the marginalized distribution ) for the product of ten - dimensional correlated gauss functions.,width=340 ]the arithmetic mean calculation was performed at each of values of as in the one - dimensional case , with samples in each ame run .the results are shown in fig .[ fig : evidence10d ] .as is seen , for values of around , the uncertainty is about 1 % .the method does not show any systematic biases for , and the estimated uncertainty is again a good estimator for the error . at small , where a small number of mcmc samples are used, the correlation between the mcmc samples produces some systematic errors in the evaluation of . 
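for completeness , the reference value for the product of correlated gaussians can be evaluated directly : integrating the product of m multivariate normal densities over the common mean gives a closed - form expression in the deviations from the sample mean . the derivation below is done independently and is consistent with the expressions quoted earlier , up to the convention used there for var[x] ; the function name and interface are mine .

```python
import numpy as np

def product_gauss_norm(x, Sigma):
    """Integral over the common mean mu of prod_i N(x_i; mu, Sigma),
    used as the reference value for the estimators above.
    x has shape (M, d); Sigma is the (known) d x d covariance matrix."""
    M, d = x.shape
    dev = x - x.mean(axis=0)
    # sum_i (x_i - xbar)' Sigma^{-1} (x_i - xbar)
    quad = np.einsum('ij,jk,ik->', dev, np.linalg.inv(Sigma), dev)
    _, logdet = np.linalg.slogdet(Sigma)
    logI = (-0.5 * d * (M - 1) * np.log(2 * np.pi)
            - 0.5 * (M - 1) * logdet
            - 0.5 * d * np.log(M)
            - 0.5 * quad)
    return np.exp(logI)
```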
as a function of , scaled by the true value .top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of .bottom left ) scaled by the true value as a function of .bottom right ) the actual error and the estimated uncertainty as a function of .the error bars in the left plots correspond to the estimated uncertainty ., title="fig:",width=264 ] as a function of , scaled by the true value .top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of .bottom left ) scaled by the true value as a function of .bottom right ) the actual error and the estimated uncertainty as a function of .the error bars in the left plots correspond to the estimated uncertainty ., title="fig:",width=264 ] as a function of , scaled by the true value .top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of .bottom left ) scaled by the true value as a function of .bottom right ) the actual error and the estimated uncertainty as a function of .the error bars in the left plots correspond to the estimated uncertainty ., title="fig:",width=264 ] as a function of , scaled by the true value .top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of .bottom left ) scaled by the true value as a function of .bottom right ) the actual error and the estimated uncertainty as a function of .the error bars in the left plots correspond to the estimated uncertainty ., title="fig:",width=264 ] the results for the hme estimator are also shown in fig . [fig : evidence10d ] .we see that accuracies of a few tens of % are achieved , but only in a narrow range . for ,the error is more than % and the hme estimate is no longer useful .also , the estimated uncertainty is too low and does not provide a reliable estimate of the true error .the hme method is clearly already running into trouble at this level of complexity .the target density is again a multivariate gaussian , and the laplace method works well , yielding .we now move beyond simple unimodal gaussian type target densities and consider a function in dimensions with degenerate modes lying on a dimensional surface of fixed radius , a gaussian shell : this function is centered at with degenerate modes along a surface of radius .the value of the function decreases away from the modal surface along a radius according to a gaussian shape with standard deviation .the integral of this function can be evaluated using spherical coordinates centered at , where is the radial coordinate in the space , so that the volume element , integrated over the angular coordinates , is with , so that we have we are left with a one - dimensional integral that can be easily calculated numerically to high precision .note that we have assumed that the integral in the region outside ( the corners in the hypercube ) is vanishingly small .this is the case for the examples considered in this article .for the three examples below , we use the following settings : radius , width and .the integration region extends from in each dimension .the parameter values result in .we use the bat code to produce mcmc samples from the target density , yielding an effective sample size . 
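the radial reduction just derived makes the reference value of the shell normalisation a one - dimensional quadrature ; a minimal sketch is given below , assuming the shell profile exp(-(|x - c| - r)^2 / (2 w^2)) without any prefactor ( the exact prefactor is not recoverable from the stripped formula , and a constant factor would only rescale the result ) .

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def shell_norm(d, r, w, rho_max):
    """Normalisation of the d-dimensional Gaussian shell
    f(x) = exp(-(|x - c| - r)**2 / (2 w**2)), integrated out to radius
    rho_max, via the one-dimensional radial integral described above."""
    # surface area of the unit (d-1)-sphere: 2 pi^{d/2} / Gamma(d/2)
    log_surface = np.log(2.0) + 0.5 * d * np.log(np.pi) - gammaln(0.5 * d)
    radial, _ = quad(lambda rho: rho ** (d - 1)
                     * np.exp(-(rho - r) ** 2 / (2.0 * w ** 2)),
                     0.0, rho_max)
    return np.exp(log_surface) * radial

# e.g. shell_norm(d=2, r=5.0, w=1.0, rho_max=25.0)  (illustrative parameter values)
```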
the sample distribution from the mcmc as well as the estimate of as a function of shown in fig .[ fig:2dshell ] .standard deviation ranges .right ) fraction of mcmc samples falling within the hypercube of side length as a function of the value of ( in units of the standard deviation of the distribution).,title="fig:",width=264 ] standard deviation ranges .right ) fraction of mcmc samples falling within the hypercube of side length as a function of the value of ( in units of the standard deviation of the distribution).,title="fig:",width=264 ] as can be seen in the figure , the mcmc has produced a reasonable sample distribution .the location of the mode from the posterior samples happens to be close to and is indicated in the figure ( note that in the figure ) .the lack of a single mode is not a problem for the ame and hme algorithms , but we no longer expect the laplace method to give sensible results .the mean values of are very close to and the standard deviation in each direction is about units .the hypercube centered at the mode found from the mcmc samples and with contains about % of the samples , and the hypercube with contains about % of the samples .we again use samples for our sample mean calculations at each of the values of .the results for are shown in the top plots in fig .[ fig : evidence2dshell ] , and we see that there is no difficulty in achieving a good result for the integral despite not having a simple mode for the target distribution .the accuracy of the calculation is good , and the uncertainty is better than 1 % for a wide range of , despite the rather small number of samples in the mcmc and ame calculations .we again find that our estimated uncertainty gives a good reproduction of the actual error .the hme evaluations are also given in figs .[ fig : evidence2dshell ] . herewe find good performance ( few % level accuracy ) up to , at which point the hme calculation starts to systematically deviate from the correct value . in this case , the estimated uncertainty does not give a reliable indication of the actual error for and in fact the uncertainty is grossly underestimated .this is a result of the missing mcmc samples at very small .the volume term in the numerator in eq . [ eq : hme ] grows as is increased , but is not properly compensated by large terms that should appear in the denominator from small values of .the inability to diagnose this behavior implies that the hme is unreliable . 
[ figure : four panels . top left ) as a function of , scaled by the true value . top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of . bottom left ) scaled by the true value as a function of . bottom right ) the actual error and the estimated uncertainty as a function of . the error bars in the left plots correspond to the estimated uncertainty . ] as expected , the laplace method does not work for the gaussian shell situation . for the two - dimensional example considered here , . in a first calculation for the ten - dimensional shell , we use the bat code to produce mcmc samples from the target density , yielding an effective sample size , and calculate the evidence . we again use samples for our sample mean calculations . the results for the ame and hme evaluations are given in fig . [ fig : evidence10dshell ] . for the arithmetic mean calculation , we see the same pattern as in the previous examples . for small values of , the uncertainty coming from the small number of mcmc samples dominates . however , sub % errors are possible for , which corresponds to . as increases , the uncertainties from the sample mean calculation dominate since we move to regions of the space that do not contain significant probability mass . the estimated uncertainty is again accurate and can be used as a guide to choose the optimal value of , as we discuss below . the hme estimate achieves few % accuracy at a somewhat smaller value of than the optimal one for the sample mean calculation . the estimated uncertainty again tends to be too small at larger and is not reliable . as expected , the laplace method does not work well and yields . as a check that these results are not due to small mcmc sample size , the calculations were redone for mcmc samples . the optimal value of changes somewhat for the sample mean calculation , but otherwise all results are basically as before . the systematic behavior of the is the same as for the smaller mcmc sample size ; no significant improvement in performance was found with the 10 times larger mcmc sample size .
[ figure : four panels . top left ) as a function of , scaled by the true value . top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of . bottom left ) scaled by the true value as a function of . bottom right ) the actual error and the estimated uncertainty as a function of . the error bars in the left plots correspond to the estimated uncertainty . ] as an extreme example , we considered a 50-dimensional gaussian shell . here the modal surface is a 49-dimensional hypersphere and . the bat code was used to initially produce mcmc samples from the target density , yielding an effective sample size . the values of increase rapidly from at to at . the standard deviations in each dimension are about units , so that approximately covers the full support defined for the function . the results for the ame and hme evaluations are given in fig . [ fig : evidence50dshell ] . the best result for the sample mean calculation gives about % accuracy , whereas the hme calculation is within % of the correct result for a small range of where starts to increase . we used samples for our sample mean calculations , although this is clearly too small a number for such a high - dimensional volume . the error from the sample mean calculation increases rapidly as we increase , and becomes completely unreliable for . for such a large volume , the vast majority of sample mean evaluations are in regions where the target density is vanishingly small , and the uncertainty grossly underestimates the true error . in the next section , we discuss a choice of settings for the sample mean calculation and redo the calculation shown here . as expected , the laplace method does not work well and yields . we again checked that these results are not due to small mcmc sample size ; the calculations were redone for mcmc samples . the optimal location of changes to smaller values for the sample mean calculation and few % level accuracy is reached . for the hme calculation , a small improvement is also observed , but otherwise all results are basically as before .
[ figure : four panels . top left ) as a function of , scaled by the true value . top right ) the actual error ( red ) , the estimated uncertainty from the sample mean calculation ( black ) and the total estimated uncertainty ( blue ) as a function of . bottom left ) scaled by the true value as a function of . bottom right ) the actual error and the estimated uncertainty as a function of . the error bars in the left plots correspond to the estimated uncertainty . ] based on the results in the previous sections , we now discuss a procedure for choosing the value of for both the sample mean and harmonic mean estimators . as was seen in our examples , the uncertainty in the calculation for the ame estimator comes from two sources : the approximately binomial fluctuations in the number of mcmc samples included in our region of interest specified by , and the uncertainty coming from the sample mean calculation . the first uncertainty can be estimated from the mcmc output , and can be used to define a value of by specifying that this source of uncertainty should contribute half of the final uncertainty . i.e. , we find the value of such that ( see eq . [ eq : uncertainty ] ) where is the target uncertainty . we will use for our discussion below except for the fifty - dimensional gaussian shell example , where we take . once we have fixed in this way , we then find the corresponding value of and use this to calculate sample mean integrals with for a batch of samples , requiring a minimum of batches . we use the variance of these calculations to determine how many batches will be needed to get the desired uncertainty ; i.e. , the results for the examples given in the previous sections using this procedure for fixing the parameters of the algorithm are given in table [ tab : summary ] . as is seen , the range of values for is relatively narrow and only grows slowly with the complexity of the target function . the number of sample mean calculations , however , depends strongly on the complexity of the problem , and is also inversely dependent on the accuracy specified and on the size of the mcmc sample .
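the selection of c just described can be written as a short routine : scan c upwards until the binomial - type relative uncertainty on the contained fraction drops to half of the target accuracy . the error formula sqrt((1 - f_r)/(f_r n_eff)) used below is a plausible reading of the stripped equation ( eq . [ eq : uncertainty ] ) rather than a quotation of it , and the grid of c values is arbitrary .

```python
import numpy as np

def choose_c(chain, logf, n_eff, eps_target, c_grid=np.linspace(0.5, 6.0, 56)):
    """Return the smallest c (and the contained fraction f_r) for which the
    binomial-type relative uncertainty on f_r is at most eps_target / 2,
    following the procedure described above."""
    mode = chain[np.argmax([logf(x) for x in chain])]
    sd = chain.std(axis=0)
    f_r = 0.0
    for c in c_grid:
        inside = np.all(np.abs(chain - mode) <= c * sd, axis=1)
        f_r = inside.mean()
        if f_r > 0:
            rel_err = np.sqrt((1.0 - f_r) / (f_r * n_eff))   # relative error on f_r
            if rel_err <= 0.5 * eps_target:
                return c, f_r
    return c_grid[-1], f_r   # fall back to the largest c considered
```

the returned c fixes the hypercube , and the sample mean integration is then run in batches until the batch variance meets the remaining error budget .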
for a given specified accuracy, is reduced as is increased , and this reduces the number of sample mean calculations necessary .we find that the ame algorithm gives a reliable estimate of the uncertainty for the examples chosen if the required number of sample mean calculations is not too large .we conclude that the ame calculation of the integral of the target density using a reduced volume around the mode of the target works well for the types of cases we have studied ..summary of the results on different target functions for the ame estimator of the normalizing integral . is the number of posterior samples from the mcmc , is the effective sample size , is the specified accuracy for the integral calculation , is the multiplier of the standard deviation along each dimension chosen by the algorithm , is the number of samplings of the function used in the sample mean calculation , is the true value of the integral , is the fractional error made in the calculation and is the estimated fractional uncertainty from the calculation . [cols="^,^,^,^,^,>,<,^,^",options="header " , ] as can be seen from the table , and as discussed earlier , the hme calculation works well for the simple target functions considered , but does not produce good results for the more complicated target functions . in particular , the estimated uncertainty does not provide a good estimate of the actual error , so that it is not possible to diagnose that the calcuclation is not performing well .we therefore do not recommend the use of the hme estimator to calculate the normalization integral for anything but the simplest low - dimensional target densities , the laplace estimation works well in cases where the target density is well approximated by a ( multivariate ) gaussian distribution .if this is known to be the case , then this approximation is easily calculated and can be used .however , it should be avoided if the shape of the target distribution is not well known .we have investigated techniques for the integration of the target density in cases where a mcmc algorithm has successfully run . we do not attempt to modify the sampling of the target density , but only to provide a post - processor for an mcmc algorithm . 
from the mcmc, we have an estimate of the global mode and also the variance of the samples marginalized along each parameter dimension .we use this information to define a hypercube centered on the global model and having side lengths proportional to the standard deviation along these directions , and then calculate the integral of the target function in the reduced volume using either an arithmetic mean or harmonic mean approach .the fraction of mcmc samples within the reduced volume was used to estimate the integral of the target density over the full volume of interest .this technique was tried on a variety of examples and also compared to a laplace estimator .the key elements of the methods studied are : * given the mcmc has been run successfully , the evaluation of the normalization of the target function can be performed using any sub support of the support of the target function ; * from the mcmc , we can find a point near the maximum of the target function , and we can perform the integration in a region which is in some ways optimal by centering the sub support on this point ; * it is possible to also calculate an estimated accuracy for the integral .our conclusions are that the arithmetic mean calculation performed in a hypercube centered on the observed mode works well and provides a technique for calculating the normalization of the target density with a reliable uncertainty estimate . on the other hand , the harmonic mean estimator only works well in situations where the range of values from the target density does not vary too widely , andthe laplace estimator is restricted for use on gaussian shaped target distributions .the authors would like to thank frederik beaujean , daniel greenwald , stephan jahn and kevin krninger for many fruitful discussions .9 see e.g. , c. robert and g. casella , ` monte carlo statistical methods ' , 2 edition , springer ( 2004 ) . h. jeffreys , ` theory of probability ' , 3 ed ., claredon press , oxford , mr0187257 ( 1961 ) .e. t. jaynes , ` probability theory : the logic of science ' , cambridge university press , cambridge , mr1992316 ( 2003 ) .d. n. vanderwerken , s. c. schmidler , ` parallel markov chain monte carlo ' , arxiv:1312.7479v1 n. friel and j. wyse , ` estimating the evidence - a review ' , stat .* 66 * ( 2012 ) 2800 .l. tierney and j. b. kadane , ` accurate approximations for posterior moments and marginal densities ' , journal of the american statistical associations , * 81 * ( 1986 ) 82 .m. a. newton and a. e. raftery , ` approximate bayesian inference with the weighted likelihood bootstrap ' , journal of the royal statistical society , series b*56 * ( 1994 ) 3 .s. chib , i. jeliazkov , ` marginal likelihood from the metropolis - hastings output ' , journal of the american statistical association * 96 * ( 2001 ) 270 .c. p. robert and d. wraith , ` computational methods for bayesian model choice ' , bayesian inference and maximum entropy methods in science and engineering : the 29 international workshop on bayesian inference and maximum entropy methods in science and engineering ( aip conference proceedings ) , vol . 1193 ( 2009 ) 251 .j. skilling , ` nested sampling for general bayesian computation ' , bayesian analysis * 1 * ( 2006 ) 833 .a. gelman and x. l. meng , ` simulating normalizing constants : from importance sampling to bridge sampling to path sampling ' , statistical science * 13 * ( 1998 ) 163 .n. friel and a. n. 
pettitt , ` marginal likelihood estimation via power posteriors ' , journal of the royal statistical society , series b*70 * ( 2008 ) 589 .a. caldwell , d. kollar , k. krninger , ` bat - the bayesian analysis toolkit ' , comput .commun . 180( 2009 ) 2197 - 2209 .r. e. kass , b. p. carlin , a. gelman , and r. neal , ` markov chain monte carlo in practice : a roundtable discussion ' , the american statistician , * 52 * ( 1998 ) 93 .a. e. gelfand and d. k. dey , ` bayesian model choice : asymptotics and exact calculations ' , journal of the royal statistical society , b * 56 * ( 1994 ) 501 - 514 . | techniques for evaluating the normalization integral of the target density for markov chain monte carlo algorithms are described and tested numerically . it is assumed that the markov chain algorithm has converged to the target distribution and produced a set of samples from the density . these are used to evaluate sample mean , harmonic mean and laplace algorithms for the calculation of the integral of the target density . a clear preference for the sample mean algorithm applied to a reduced support region is found , and guidelines are given for implementation . |
modelling of pedestrian dynamics is actual problem at present days .different approaches from the social force model ( and references therein ) based on differential equations to stochastic ca models ( and references therein ) are developed .they reproduce many collective properties including lane formation , oscillations of the direction at bottlenecks , the so - called `` faster - is - slower '' effect .these are an important and remarkable basis for pedestrian modelling .but there are still things to be done in order to reproduce individual pedestrian behavior more realistic and carefully .the model presented takes its inspiration from stochastic floor field ( ff ) ca model . herea static field is a map that pedestrian may use to orient in the space .dynamic field is used to model herding behavior in panic situations .it s known that regular situations imply that pedestrians analyze environment and choose their route more carefully ( see and reference therein ) .pedestrians keep a certain distance from other people and obstacles .the more hurried a pedestrian is and more tight crowd is this distance is smaller . we adopted a mathematical formalization of these points from .pedestrians minimize efforts to reach their destinations : feel strong aversion to taking detours or moving opposite to their desired direction .however , people normally choose the fastest rout but not the shortest .this means that opportunity to wait ( to stay at present place ) has to be realized in the model .( models ( and other ca models ) imply that people can stay at present place if there is no space to move only . )we realize this point ( people patience ) in the algorithm . as well as it s necessary to take into account that some effects are more reside for certain regions .for instance , clogging situations are more pronounced in the nearest to an exit areas .this means that spatial adaptivity of correspondent model parameters to be introduced in the model .all these changes and additions extend basis ff model towards emotional aspect and improve and make flexible decision making process . by this reason model obtained was named as _intelligent ff model_.as usual for ca models the space ( plane ) is sampled into cells ( it s an average space occupied by a pedestrian in a dense crowd ) which can either be empty or occupied by one pedestrian ( particle ) only . the von neumann neighborhood is used .it implies that each particle can move to one of four its next - neighbor cells or to stay at the present cell at each discrete time step , e.i ., .( empirically the average velocity of a pedestrian is about .so real time corresponding to one time step in the model is about .) such movement is in accordance with certain transition probabilities that are explained below .static ( ) and dynamic ( ) floor fields are introduced and discussed in .for each cell values of and are given ._ static floor field _ describes the shortest distance to an exit ( or other destination point that depends on a task ) . it does nt evolve with time and is nt changed by the presence of the particles .the value of is set inversely proportional to the distance from the cell to the exit .one can consider as a map that pedestrian can use to move to the target point , e.g. , exit ._ dynamic floor field _ is a virtual trace left by the pedestrians similar to the pheromone in chemotaxis .it is used to model a " long - ranged attractive interactions between the pedestrians , e.g. 
, herding behavior that is observed in panic situations .dynamic floor field is time dependent . in each timestep each decays with probability and diffuses with probability ] friction parameter that controls the resolution of conflicts in clogging situations . works as some kind of local pressure between the pedestrians .the higher is pedestrians are more handicapped by others trying to reach the same target cell .such situations are natural and well pronounced for nearest to exit ( destination point ) space . for other areasit s not typical but it s possible .so to realize it and make simulation of individuals realistic the coefficient is introduced ( in contrast with original ff model ) .* , ] .it takes maximal value if movement conditions in the direction are favorable . and if there is no free space to move .term proportionally decreases with the advent and approaching of some obstacles ( people , wall , etc . ) in the direction .other terms , , vary form to ( in general case ) and characterize style of people behavior. minimal value of parameter ( , or ) means that correspondent feature of behavior is nt realized and term does nt affect the probability . if all three terms are minimal then pedestrians walk free . and in this case only term determines the transition probability for each next - neighbor cell in accordance with people features : keeping apart from other people and obstacles , patience . in ff model pedestrians stay at present cell if there is no space to move only . herewe give pedestrians the opportunity to wait when preferable direction will free even if other directions are available for moving at this time .such behavior is reside to low and middle densities . to realize it transition probabilities ( [ 1 ] )do nt include a checking if cell occupied or not . in this case transition probabilities ( [ 1 ] ) can be considered as a rate of wish to go to the certain directions .possibility not to leave present cell is realized in step 3 .the idea of spatial adapted parameters is introduced here .it s clear that conditions ca nt be equal for all people involved .position of pedestrian in the space may determine some features of the behavior .thus clogging situations are natural and well pronounced for nearest to exit ( destination point ) space . for other areasit s not typical but it s possible . to realize it the coefficient for clogging parameter is introducedin order to test our model a regular evacuation process was simulated .this means that in examples presented . and we set , .there was simulated evacuation of one person ( n=1 ) from a room ( cells cells ) with one exit ( ) in the middle of a wall .recall that the space is sampled into cells of size which can either be empty or occupied by one pedestrian only .static field was calculated in accordance with . stating position is a cell in a corner near wall opposite to the exit .pedestrian moves towards the exit with . forsuch sampled space minimal value of time steps that require to leave the room starting from initial position is .different combinations of parameters and were considered .total evacuation time and trajectories were investigated .following table contains results over 500 experiments .figures [ gist ] show total evacuation time distributions for some couples of the parameters from table [ tmo ] over 500 realizations . 
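before turning to the numerical experiments , a minimal sketch of a floor - field style transition rule is given below for orientation . it implements only the basic static / dynamic field weighting of the original ff model ; the additional environment - analysis , patience and clogging terms of the intelligent model are not reproduced because their exact form ( eq . [ 1 ] ) is stripped from the text , and , as noted above , the presented model also drops the occupancy check from the probabilities themselves .

```python
import numpy as np

def ff_transition_probs(S, D, k_S, k_D):
    """Basic floor-field transition probabilities for the five target
    cells of the von Neumann neighbourhood (north, east, south, west,
    stay): weights proportional to exp(k_S * S) * exp(k_D * D),
    normalised to sum to one.  S and D hold the static and dynamic
    field values of the five cells."""
    w = np.exp(k_S * np.asarray(S, dtype=float)) \
        * np.exp(k_D * np.asarray(D, dtype=float))
    return w / w.sum()

# a pedestrian draws one of the five moves from these probabilities;
# conflicts (several pedestrians choosing the same cell) are resolved
# afterwards, e.g. with the friction parameter mentioned above
```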
and over 500 experiments.,title="fig : " ] and over 500 experiments.,title="fig : " ] + + and over 500 experiments.,title="fig : " ] and over 500 experiments.,title="fig : " ] + + and over 500 experiments.,title="fig : " ] and over 500 experiments.,title="fig : " ] + + figures [ treks ] present tracks of pedestrian ways over 500 realizations for some couples of the parameters from table [ tmo ] . and over 500 experiments.,title="fig : " ] and over 500 experiments.,title="fig : " ] + + and over 500 experiments.,title="fig : " ] and over 500 experiments.,title="fig : " ] + + and over 500 experiments.,title="fig : " ] and over 500 experiments.,title="fig : " ] + + different moving conditions are reproduced by these combinations .they vary from , to , .the former case can be interpreted as pedestrian moves in a very low visibility ( or by touch ) but he approximately knows the direction to destination point . in other wordsone can say that pedestrian does nt see , knows , and wants not so much to go to destination point ( exit ) .last one ( , ) describes situation when pedestrian sees , knows , and wants to go to destination point very much . note that if model presented corresponds to ff - model with the same other parameters . andit s clear that the pedestrian patience that was introduced in the model does nt pronounced in one pedestrian case .one can see that for small mode is very dependent on parameter .the bigger parameter is an influence of to is less pronounced .but the bigger parameter is more natural way pedestrian chooses tracks are more close to a line connecting starting point and exit ( non the less random component takes place ) . under proximity of wall decreases the probability ( [ 1 ] ) to move in this direction .thereby tracks are forced to tend to natural one .thus parameter fulfils its role to simulate environment analysis here . for this collective experimentthe space was a room ( cells cells ) with one exit ( ) .initial number of people is ( density ) .initial positions are random and people start to move towards the exit with .exit is in the middle of east wall .figures [ 300people_ks1 ] , [ 300people_ks3 ] present typical stages of evacuation process for different and . + and different .,title="fig : " ] and different .,title="fig : " ] and different .,title="fig : " ] + + + and different .,title="fig : " ] and different .,title="fig : " ] and different .,title="fig : " ] + + and different .,title="fig : " ] and different .,title="fig : " ] and different .,title="fig : " ] + + + and different .,title="fig : " ] and different .,title="fig : " ] and different .,title="fig : " ] + one can see that evacuation dynamics in case a ) differs from case b ) in both figures [ 300people_ks1 ] , [ 300people_ks3 ] .the reason of it is different parameters .if people avoid to approach to walls .and while crowd density allows pedestrians try to follow more natural way to the exit .the bigger is shape of crowd in front of exit is more diverse from the case of .( note if we have ff - model with the same other parameters . )one can notice that the closer to the exit pedestrians are circle shaped crowd in front of the exit is more unrealistic .the problem comes from computational aspect of coefficient .it s a positive that in the model obtained people avoid to approach to walls . 
butthis effect has to be less pronounced with approaching to wall ( walls as in a subsection example below ) surrounding exit .thus parameter has to be spatial adaptive .next table demonstrates numerical description of cases presented .let be frequency to choose direction over all experiment for each couple of parameters . here are north , east , sought , west , center ( stay at present place ) correspondingly , total number of movements including stayings at current position over all experiment . comparing cases 1 ) and 2 ) , 3 ) and 4 )correspondingly one can notice that greater leads to significant redistribution of flow. in the case of low says that people move as much as possible ( because people can stay at current position if all nearest cells are occupied only ) .if the opportunity to wait is realized in cases 2 ) and 4 ) has the greatest value . has the smallest value ( this direction is opposite to the exit ) , , are approximately equal because exit is in the middle of the wall .thus in cases 2 ) and 4 ) model reproduces more natural decision - making process . at the same timelet us remark that increasing makes evacuation process more directed and reduces total evacuation time ( see cases 1 ) and 3 ) , 2 ) and 4 ) correspondingly ) . for this collective experimentthe space was a room ( cells cells ) with one exit ( ) .initial ( density ) .initial positions are random and people start to move towards the exit with .spaces are presented in the figures [ 150people]a , [ 150people]b .+ + two combinations of parameters and were considered . andtotal evacuation time was investigated .table [ tmoexits ] contains results over 100 experiments .initial positions of people were the same for all experiments .\a ) b ) one can notice once again that in the case a ) parameter does nt influence on total evacuation time under such .but in other case increase of leads to significant delay of evacuation .the reason of it is `` computational '' repulsion from the walls . andas a result pedestrians do nt use corner between wall and exit , exit is not fully used , total evacuation time increases .so this is one more example that shows necessity in at least parameter adaptivity .in the paper the intelligent ff cellular automation model is presented .modifications made are to improve realism of the individual pedestrian movement simulation .the following features of people behavior are introduced : keeping apart from other people ( and obstacle ) , patience .idea of spatial adaptation of model parameters is pronounced and one method is presented ( parameter ) .model obtained saved opportunities to reproduce variety of collective effects of pedestrian movement from free walk to escape panic and took more flexibility .the simulation made showed real improvements in decision - making process in comparison with basic ff model and pointed out some problems .the following points seem to be very important for realistic pedestrian simulation and are under future investigations . 
in a case of emergencyappearing of clogging situation in front of exit often leads to appearing fallen or injured people ( or jam ) .the physical interactions in such crowd add up and cause dangerous pressures up to which `` can bend steel barriers or push down brick walls '' .fallen or injured people act as `` obstacles '' , and escape is further slowed .continuous model can reproduce pushing and physical interactions among pedestrians .ca model does nt allow to do it .parameter works in the model as some kind of local pressure between the pedestrians ( the higher is pedestrians are more handicapped by others trying to reach the same target cell ) . but by means of fallen or injured people are not simulated .our further intention is to produce method to evaluate common pressure to each pedestrian in ca model , and if pressure is over some barrier to indicate correspondent cell as new obstacle. there are at least two reasons for parameters to be adaptive .one of them is learning that is reside to people .therefor at least parameters and need to be time adaptive and spatial dependent as well .parameter needs to be spatial adaptive because of negative computational effects .so methods to adapt model parameters are under further investigation .1 burstedde , c. ; k. klauck ; a. schadschneider ; j. zittartz . 2001 .`` simulation of pedestrian dynamics using a twodimensional cellular automaton . ''_ physica a _ , no .295 , 507525 .helbing , d. 2001 .`` traffic related self - driven many - particle systems . '' _ rev .mod . phys .73 _ , no .4 . kirchner , a. ; a. schadschneider .2002.``simulation of evacuation processes using a bionics - inspired cellular automation model for pedestrian dynamics . ''_ physica a _ , no .312 , 260276 .malinetskiy , g.g . and m.e .`` an application of cellular automation for people dynamics modelling . '' _ journal of computational mathematics and mathematical physics 44 _ , no . 11 , 21082112.(rus . ) nishinari , k. ; a. kirchner ; a. namazi ; a. schadschneider . `` extended floor field ca model for evacuation dynamics . ''_ e - print cond - mat/0306262 _ yamamoto , k. , kokubo , s. , nishinari , k. 2007 .`` simulation for pedestrian dynamics by real - coded cellular automata ( rca ) . ''_ physica a _, doi:10.1016/j.physa.2007.02.040 . | a stochastic cellular automata ( ca ) model for pedestrian dynamics is presented . our goal is to simulate different types of pedestrian movement , from regular to panic . but here we emphasize regular situations which imply that pedestrians analyze environment and choose their route more carefully . and transition probabilities have to depict such effect . the potentials of floor fields and environment analysis are combined in the model obtained . people patience is included in the model . this makes simulation of pedestrians movement more realistic . some simulation results are presented and comparison with basic ff - model is made . |
various stochastic partial differential equations ( spdes ) have emerged over the last two decades in different areas of mathematical finance .a classical example is the heath - jarrow - morton interest rate model of the form where is the forward rate of tenor at time and its instantaneous volatility . in ,a similar equation has been proposed more recently to model electricity forwards .most of the spdes studied share with ( [ hjm ] ) the property that the derivatives of the solution only appear in the drift term ; in the case of ( [ hjm ] ) the volatility of the brownian driver does not depend on the solution at all .numerical methods for hyperbolic spdes of the type ( [ hjm ] ) have been studied , for example , in .this article , in contrast , considers the parabolic spde where is a standard brownian motion , and and are real - valued parameters .it is clear that the behaviour of this equation is fundamentally different from those with additive or multiplicative noise .the significance of ( [ spde ] ) for the following applications is that it describes the limiting density of a large system of exchangeable particles .specifically , if we consider the system of sdes for , with and , where are assumed i.i.d . with finite second moment , the empirical measure has a limit for ,whose density satisfies ( [ spde ] ) in a weak sense . for a derivation of this result in the more general context of quasi - linear pdessee .while the motivation in is to use a large particle system ( [ sdesys ] ) to approximate the solution to the spde ( [ spde ] ) , our view point is to use ( [ spde ] ) as an approximate model for a large particle system , and we will argue later the ( computational ) advantages of this approach in situations when the number of particles is large . as a first possible application , one may consider as the log price processes of a basket of equities , which have idiosyncratic components and share a common driver ( the `` market '' ) . if the size of the basket is large enough , the solution to the spde can be used to find the values of basket derivatives . in this paper , we study an application of a similar model to basket credit derivatives .we mention in passing that equations of the form ( [ spde ] ) arise also in stochastic filtering .to be precise , ( [ spde ] ) is the zakai equation for the distribution of a signal given observation of , see e.g. .it is interesting to note that the solution to the spde ( [ spde ] ) without boundary conditions can be written as the solution of the pde shifted by the current value of the brownian driver , in particular , if , then the intuitive interpretation of this result is that the independent brownian motions have averaged into a deterministic diffusion in the infinite particle limit , whereas the common factor , which moves all processes in parallel , shifts the whole profile ( and also adds to the diffusion , via the it term ) . in , the analysis of the large particle systemis extended to cases with absorption at the boundary ( ) , it is shown that there is still a limit measure , which may now be decomposed as where is the proportion of absorbed particles ( the `` loss function '' ) , and the density of satisfies ( [ spde ] ) in with absorbing boundary condition consider applications to basket credit derivatives . for the market pricing examples ,they consider a simplified model , where defaults are monitored only at a discrete set of dates . 
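as a concrete illustration of the particle system ( [ sdesys ] ) and of the loss function just introduced , here is a minimal euler - type monte carlo sketch . since the coefficients of ( [ sdesys ] ) are not reproduced explicitly above , the sketch assumes the canonical form dx_i = mu dt + sqrt(1 - rho) dw_i + sqrt(rho) dm with unit volatility , absorption at x = 0 and an arbitrary gaussian initial condition ; these choices are assumptions made purely for illustration .

```python
import numpy as np

def simulate_loss(n_particles=10_000, n_steps=200, T=1.0, mu=0.1, rho=0.3, seed=0):
    """one realisation of the proportion of absorbed particles (the "loss") at time T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = 2.0 + 0.5 * rng.standard_normal(n_particles)   # i.i.d. initial positions (assumed)
    alive = np.ones(n_particles, dtype=bool)
    for _ in range(n_steps):
        dM = np.sqrt(dt) * rng.standard_normal()                 # common ("market") driver
        dW = np.sqrt(dt) * rng.standard_normal(n_particles)      # idiosyncratic drivers
        x[alive] += mu * dt + np.sqrt(1.0 - rho) * dW[alive] + np.sqrt(rho) * dM
        alive &= x > 0.0                                         # absorption at the boundary
    return 1.0 - alive.mean()

# averaging functionals of this loss over realisations of the common driver is the
# computation that the spde ( [ spde ] ) with boundary condition ( [ bc ] ) is meant to
# replace when the number of particles is large.
print(simulate_loss())
```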
between these times , the default barrier is inactive and ( [ spde ] ) is solved on the real line by using ( [ pde ] ) and ( [ shift ] ) . for the initial - boundary value problem ( [ spde ] ) , ( [ bc ] ) ,however , such a semi - analytic solution strategy is no longer possible and an efficient numerical method is needed .moreover , there is a loss of regularity at the boundary in this case , such that but not , as is documented in .recent papers on the numerical solution of spdes deal with cases relevant to ours , yet structurally crucially different .a comprehensive analysis of finite difference and finite element discretisations of the stochastic heat equation with multiplicative white noise and non - linear driving term is given in and , respectively . shows a lax equivalence theorem for the sde in a hilbert space , driven by a process from a class including brownian motion , where is a suitable ( e.g. elliptic differential ) operator and a lipschitz function . in this paper, we propose a milstein finite difference discretisation for ( [ spde ] ) and analyse its stability and convergence in the mean - square sense by fourier analysis .a main consideration of this paper is the computational complexity of the proposed methods , and we will demonstrate that a multilevel approach achieves a cost for the spde simulation no larger than that of direct monte carlo sampling from a known univariate distribution , for r.m.s .accuracy , and is in that sense optimal .multilevel monte carlo path simulation , first introduced in , is an efficient technique for computing expected values of path - dependent payoffs arising from the solution of sdes .it is based on a multilevel decomposition of brownian paths , similar to a brownian bridge construction .the complexity gain can be explained by the observation that the variance of high - level corrections involving a large number of timesteps is typically small , and consequently only a relatively small number monte carlo samples is required to estimate these contributions to an acceptable accuracy .overall , for sdes , if a r.m.s .accuracy of is required , the standard monte carlo method requires operations , whereas the multilevel method based on the milstein discretisation requires operations .the first extension of the multilevel approach to spdes was for parabolic pdes with a multiplicative noise term .there have also been recent extensions to elliptic pdes with random coefficients .our approach , for a rather different parabolic spde , is similar to the previous work on sdes and spdes in that the solution is decomposed into a hierarchy with increasing resolution in both time and space. provided the variance of the multilevel corrections decreases at a sufficiently high rate as one moves to higher levels of refinement , the number of fine grid monte carlo simulations which is required is greatly reduced .indeed , the total cost is only to achieve a r.m.s. 
accuracy of compared to an cost for the standard approach which combines a finite difference discretisation of the spatial derivative terms and a milstein discretisation of the stochastic integrals .the rest of the paper is structured as follows .section [ sec : finite - difference ] outlines the finite difference scheme used , and analyses its accuracy and stability in the standard monte carlo approach .section [ sec : multilevel ] presents the modification to multilevel path simulation of functionals of the solution .numerical experiments for a cdo tranche pricing application are given in section [ sec : results ] , providing empirical support for the postulated properties of the scheme and demonstrating the computational gains achieved .section [ sec : conclusions ] discusses the benefits over standard monte carlo simulation of particle systems and outlines extensions .integrating ( [ spde ] ) over the time interval ] ( note is roughly in the centre ) , and also the initial - boundary value problem with zero dirichlet conditions on ] is .since the computational cost is proportional to this implies an overall cost which is .the aim of the multilevel monte carlo simulation is to reduce this complexity to .consider monte carlo simulations with different levels of refinement , , with being the coarsest level , ( i.e. the largest values for and ) and level being the finest level corresponding to that used by the standard monte carlo method .let denote an approximation to payoff using a numerical discretisation with parameters and .because of the linearity of the expectation operator , it is clearly true that = { \mathbb{e}}[{\widehat{p}}_0 ] + \sum_{l=1}^l { \mathbb{e}}[{\widehat{p}}_l \!-\ ! { \widehat{p}}_{l-1 } ] .\label{eq : identity}\ ] ] this expresses the expectation on the finest level as being equal to the expectation on the coarsest level plus a sum of corrections which give the difference in expectation between simulations using different numbers of timesteps . the multilevel idea is to independently estimate each of the expectations on the right - hand side in a way which minimises the overall variance for a given computational cost .let be an estimator for ] using samples .each estimator is an average of independent samples , which for is the key point here is that the quantity comes from two discrete approximations using the same brownian path .the variance of this simple estimator is = n_l^{-1 } v_l ] 2 . = \left\ { \begin{array}{ll } { \mathbb{e}}[{\widehat{p}}_0 ] , & l=0 \\[0.1 in ] { \mathbb{e}}[{\widehat{p}}_l \!-\ ! { \widehat{p}}_{l-1 } ] , & l>0 \end{array}\right .[ ml - iii ] \leq c_2\ , n_l^{-1 } 2^{-\beta\ , l } ] , ] , ] , ] as well as the variance of the standard single level estimate , while the top - right plot shows the convergence of the expectation ] ) , and apply the following interface conditions at : to maintain quadratic grid convergence in spite of the discontinuity introduced by ( [ discont ] ) , we choose the mesh such that a grid point coincides with the 0 boundary ( e.g. by setting in the above example ) , and set the numerical solution after default monitoring to 0 for grid coordinates below 0 and to its previous value at 0 , see e.g. 
.it is seen in fig .[ fig : spde_results_discr ] that convergence is very similar to the previous case .here we discuss the computational complexity of the multilevel solution of the spde compared with the alternative use of the multilevel method for solving the sdes which arise from directly simulating a large number of sdes .we have already explained that to achieve a r.m.s .accuracy of requires work when solving the spde by the standard monte carlo method , but the cost is using the multilevel method , provided conjecture [ conj ] is correct .consider now the alternative of using a finite number of particles ( firms ) , , to estimate the tranche loss in the limit of an infinite number of particles ( firms ) . in this case, empirical results suggest that there is an additional error ( see also ) , and the proof of this convergence order is the subject of current research . taking this to be the case , the optimal choice of to minimise the computational complexity to achieve an r.m.s .error of is . using the standard monte carlo method ,the optimal timestep is , and the optimal number of paths is , so the overall cost is .using the multilevel method for the sdes reduces the cost per company to , so the total cost is .this complexity information is summarised in table [ table : comparison ] .there is also a practical implementation aspect to note .the computational cost per grid point in the finite difference approximation of the spde is minimal , requiring just three floating point multiply - add operations if equation ( [ discrete ] ) is re - cast as with the coefficients computed once per timestep , for all . if we let be the cost of generating all of the gaussian random numbers for a single spde simulation , then the cost of the rest of the finite difference calculation with 20 points in ( as used on the coarsest level of our multilevel calculations ) is probably similar , giving a total cost of for each spde . on the other hand, each sde needs its own gaussian random numbers for the idiosyncratic risk , and so the cost of simulating _ each _ sde is approximately , roughly half of the cost of the spdes on the coarsest level of approximation , giving a total cost of . ' '' '' method / model & sde & spde + ' '' '' standard mc & & + ' '' '' multilevel mc & & + we have shown that stochastic finite differences combined with a multilevel simulation approach achieve optimal complexity for the computation of expected payoffs of an spde model . in the case of an absorbing boundary , the complexity estimate is a conjecture in so far it relies on the convergence order of the finite difference scheme , which does not follow from the fourier analysis of the unbounded case .the matrix stability analysis in appendix [ sec : matanal ] could form part of a rigorous analysis if a lax equivalence theorem could be proved . in the case of multiplicative white noisethis is shown in .one difficulty in the present case of an spde with stochastic drift is the loss of regularity towards the boundary , which may be accounted for by weighted sobolev norms of the solution , but even then it is not clear that convergence of the functionals of interest follows .there are several possible extensions of the present basic model as discussed in , ranging from stochastic volatility and jump - diffusion to contagion models .the methods developed in this paper should be of use there also , building for example on multilevel versions of jump - adapted discretisations for jump - diffusion sdes .bain , a. & crisan , d. 
_ fundamentals of stochastic filtering _ , springer , 2009 .barth , a. , schwab , c. , & zollinger , n. multi - level monte carlo finite element method for elliptic pdes with stochastic coefficients , _ num ._ , 119(1):123 - 161 , 2010 .benth , f.e . & koekebakker , s. stochastic modeling of financial electricity contracts , _ energy economics _ , 30(3):11161157 , 2008 .black , f. and cox , j. valuing corporate securities : some effects of bond indenture provision , _ j. finance _, 31:351367 , 1976 .bruti - liberati , n. & platen , e. _ numerical solution of stochastic differential equations with jumps in finance _ ,springer , 2010 .buckwar , e. & sickenberger , t. a comparative linear mean - square stability analysis of maruyama- and milstein - type methods , _ math . comp ._ , 81:11101127 , 2011 .bujok , k. & reisinger , c. numerical valuation of basket credit derivatives in structural jump - diffusion models , _ j. comp ._ , to appear. bush , n. , hambly , b. , haworth , h. , jin , l. , & reisinger , c. stochastic evolution equations in portfolio credit modelling , _ siam fin ._ , 2(1):627664 , 2011 .carmona , r. , fouque , j .- p . ,& vestal , d , interacting particle systems for the computation of rare credit portfolio losses , _ fin ._ , 13(4):613633 , 2009 .carter , r. & giles , m.b .sharp error estimates for discretisations of the 1d convection / diffusion equation with dirac initial data , _i m a j. num ._ , 27(2):406425 , 2007 .cliffe , k.a ., giles , m.b . , scheichl , r. , & teckentrup , a.l .multilevel monte carlo methods and applications to elliptic pdes with random coefficients , _ computing and visualization in science _, 14(1):315 , 2011 .fouque , j. , wignall b. , & zhou , x. modeling correlated defaults : first passage model under stochastic volatility ._ j. comp ._ , 11(3 ) : 4378 , 2008 .giles , m.b .multi - level monte carlo path simulation , _ operations research _, 56(3):981986 , 2008 .giles , m.b . improved multilevel monte carlo convergence using the milstein scheme , pp .343358 in _ monte carlo and quasi - monte carlo methods 2006 _ , editors keller , a. , heinrich , s. , & niederreiter , h. , springer - verlag , 2008 .glasserman , p. _ monte carlo methods in financial engineering _ , springer , 2004 .graubner , s. multi - level monte carlo methoden fr stochastische partielle differentialgleichungen .diplomarbeit , tu darmstadt , 2008 .gyngy , i. lattice approximations for stochastic quasi - linear parabolic partial differential equations driven by space - time white noise ii , _ potential anal ._ , 11:137 , 1999 .gyngy , i. & nualart , d. implicit schemes for stochastic quasi - linear parabolic partial differential equations driven by space - time white noise , _ potential anal ._ , 7:725757 , 1997 .heath , d. , jarrow , r. , & morton , a. bond pricing and the term structure of interest rates : a new methodology for contingent claims valuation , _ econometrica _ , 60(1):77105 , 1992 .higham , d.j .mean - square and asymptotic stability of the stochastic theta method , _ siam j. num ._ , 38(3):753769 , 2000 .hull , j. , predescu , m. , & white , a. the valuation of correlation - dependent credit derivatives using a structural model , _j. credit risk _, 6(3):99132 , 2005 . kloeden , p.e . &platen , e. _ numerical solution of stochastic differential equations _ , springer , 1992 .krylov , n.v .a -theory of the dirichlet problem for spdes in general smooth domains , _ probab .theory relat . fields _ , 98:389421 , 1994 .kurtz , t.g .& xiong , j. 
particle representations for a class of nonlinear spdes , _ stoch ._ , 83:103126 , 1999 .lang , a. a lax equivalence theorem for stochastic differential equations , _ j. comp ._ , 234(12):33873396 , 2010 .merton , r. , on the pricing of corporate debt : the risk structure of interest rates , _ j. fin ._ , 29:449470 , 1974 .pooley , d.m . ,vetzal , k.r . , & forsyth , p.a .remedies for non - smooth payoffs in option pricing , _ j. comp . fin ._ , 6:2540 , 2003 .richtmyer , r.d .& morton , k.w ._ difference methods for initial - value problems _ , wiley - interscience , 1967 .roth , c. difference methods for stochastic partial differential equations , _ z. angew_ , 82(1112):821830 , 2002 .roth , c. a combination of finite difference and wong - zakai methods for hyperbolic stochastic partial differential equations , _ stoch ._ , 24(1):221240 , 2006 .saito , y. & mitsui , t. stability analysis of numerical schemes for stochastic differential equations , _siam j. num ._ , 33(6):22542267 , 1996 .schnbucher , p.j ._ credit derivatives pricing models _ , wiley , 2003 .walsh , j.b .finite element methods for parabolic stochastic pdes , _ potential anal ._ , 23:143 , 2005 .xia , y. & giles , m.b .multilevel path simulation for jump - diffusion sdes , in _monte carlo and quasi - monte carlo methods 2010 _ , editors wozniakowski , h. & plaskota , l. , springer - verlag , 2012 .zhou , c. , an analysis of default correlations and multiple defaults , _ the review of financial studies _, 14:555576 , 2001 .if is the vector with elements then the finite difference equation can be expressed as a & = & i - \frac{\mu\ , k}{2h}\ , d_1 + \frac{(1\!-\!\rho)\ , k}{2h^2}\ , d_2 , \\[0.05 in ] b & = & -\ , \frac{\sqrt{\rho\ , k}}{2h}\ , d_1 , \\[0.05 in ] c & = & \frac{\rho\ , k}{2h^2}\ , d_2,\end{aligned}\ ] ] where is the identity matrix and and are the matrices corresponding to central first and second differences , which for are from the recurrence relation we get & = & { \mathbb{e}}\left [ v_n^t ( a^t + b^t\ , z_n + c^t\ , z_n^2)(a + b\ , z_n + c\ , z_n^2)\ v_n \right ] \\[0.1 in ] & = & { \mathbb{e}}\left [ v_n^t \left ( ( a\!+\!c)^t ( a\!+\!c ) + b^t b + 2\ ,c^t c \,\right ) v_n \right].\end{aligned}\ ] ] noting that is anti - symmetric and is symmetric , and that where corresponds to a central second difference with twice the usual span , ( with the end values of being chosen to correspond to and ) , and and are each entirely zero apart from one corner element , then after some lengthy algebra we get } \\ & = & { \mathbb{e}}\left [ v_n^t m v_n \right ] - \left ( e_1 + e_2 \right ) { \mathbb{e}}[(v_1^n)^2 ] - \left ( e_1 - e_2 \right ) { \mathbb{e}}[(v_{j-1}^n)^2],\end{aligned}\ ] ] where and it can be verified that the eigenvector of has elements for , and the associated eigenvalue is where are the same functions as defined in the mean - square fourier analysis . 
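the matrices above translate directly into code . the sketch below builds d_1 , d_2 , a , b and c exactly as written in this appendix and advances the discrete solution by one step of the recurrence v^{n+1} = ( a + b z_n + c z_n^2 ) v^n . homogeneous dirichlet boundary values , the grid sizes and the parameter values are placeholder assumptions , and the corner corrections carried by the matrices e_1 , e_2 above are omitted .

```python
import numpy as np

def difference_matrices(J):
    """central first (D1) and second (D2) difference matrices on J interior points."""
    D1 = np.diag(np.ones(J - 1), 1) - np.diag(np.ones(J - 1), -1)
    D2 = np.diag(np.ones(J - 1), 1) + np.diag(np.ones(J - 1), -1) - 2.0 * np.eye(J)
    return D1, D2

def milstein_matrices(J, h, k, mu, rho):
    D1, D2 = difference_matrices(J)
    A = np.eye(J) - mu * k / (2 * h) * D1 + (1 - rho) * k / (2 * h**2) * D2
    B = -np.sqrt(rho * k) / (2 * h) * D1
    C = rho * k / (2 * h**2) * D2
    return A, B, C

def milstein_step(v, A, B, C, rng):
    Z = rng.standard_normal()            # standard normal: Z_n = Delta M_n / sqrt(k)
    return (A + B * Z + C * Z**2) @ v

# tiny placeholder example on (0, 1) with J interior points and k chosen small
J, h = 20, 1.0 / 21
k = 0.4 * h**2
A, B, C = milstein_matrices(J, h, k, mu=0.1, rho=0.3)
v = np.maximum(0.0, 1.0 - 4.0 * np.abs(np.linspace(h, 1 - h, J) - 0.5))  # some initial density
v = milstein_step(v, A, B, C, np.random.default_rng(0))
```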
in addition , in the limit , , and therefore in this limit the fourier stability condition is also a sufficient condition for mean - square matrix stability .figure [ fig : conv ] shows the convergence behaviour as the computational grid is refined .level 0 has ; is reduced by factor 2 and by factor in moving to finer levels .the top - left plot shows the convergence of ] .the bottom two plots show the behaviour of , which is estimated on each grid level using the standard second difference the left plot indicates that the mean of this quantity is well behaved , but the right plot indicates a singular behaviour of its variance , with the value increasing rapidly with increased grid resolution . | in this article , we propose a milstein finite difference scheme for a stochastic partial differential equation ( spde ) describing a large particle system . we show , by means of fourier analysis , that the discretisation on an unbounded domain is convergent of first order in the timestep and second order in the spatial grid size , and that the discretisation is stable with respect to boundary data . numerical experiments clearly indicate that the same convergence order also holds for boundary - value problems . multilevel path simulation , previously used for sdes , is shown to give substantial complexity gains compared to a standard discretisation of the spde or direct simulation of the particle system . we derive complexity bounds and illustrate the results by an application to basket credit derivatives . |
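to complement the multilevel discussion of section [ sec : multilevel ] , the following schematic driver shows how the identity e[p_l] = e[p_0] + sum_l e[p_l - p_{l-1}] is typically turned into an estimator , with the coarse and fine solutions on each level driven by the same common brownian path . solve_spde and payoff are placeholders ( for instance the milstein scheme sketched in the appendix and a tranche - loss functional ) , the refinement rule ( timestep halved per level ) and the sample numbers are assumptions , and the cost - optimal choice of n_l discussed above is not implemented here .

```python
import numpy as np

def mlmc_estimate(solve_spde, payoff, L, n_samples, T=1.0, n_steps0=16, seed=0):
    """schematic multilevel estimator; solve_spde(level, dM) consumes the given increments."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(L + 1):
        n = n_steps0 * 2**level                     # assumed refinement: timestep halved
        k = T / n
        acc = 0.0
        for _ in range(n_samples[level]):
            dM = np.sqrt(k) * rng.standard_normal(n)        # common brownian increments
            p_fine = payoff(solve_spde(level, dM))
            if level > 0:
                dM_coarse = dM.reshape(-1, 2).sum(axis=1)   # same path, coarser increments
                acc += p_fine - payoff(solve_spde(level - 1, dM_coarse))
            else:
                acc += p_fine
        total += acc / n_samples[level]             # estimate of E[P_l - P_{l-1}] (or E[P_0])
    return total
```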
options are financial contracts which give the holder the right to buy ( call options ) or sell ( put options ) commodities or securities for a predetermined exercise ( or strike ) price by a certain expiration date .conventional european ( american ) options can be exercised only on ( at any time up to ) the expiration date .since the option confers on its holder a right with no obligation , it should carry a price at the time of contract .it is the classic work of black , scholes and merton which suggested a strategy for determining a fair price for the option in a risk - free environment .closed - form valuation within the black - scholes - merton equilibrium pricing theory is only possible for a small subset of financial derivatives . in the majority of casesone must appeal to numerical techniques such as monte carlo simulations , or finite difference methods and much of the effort in the field has been in developing efficient algorithms for numerically solving the black - scholes equation .an alternative direction has been the evaluation of discrete - time , discrete - state stochastic models of the market on binomial and trinomial trees .not only is this discrete approach intuitive and easily accessible to a less mathematically sophisticated audience ; but it also seems to us to be a more accurate description of market dynamics and better suited for evaluating more involved financial instruments .moreover , the few exact black - scholes results available can be recovered in the appropriate continuous - time trading limit .the main difficulty in pricing with binomial trees has been the non - monotonic numerical convergence and the dramatic increase in computational effort with increasing number of time steps .for example , the state of the art calculations involve memory storage scaling linearly ( quadratically ) with the number of time steps , , for european ( american ) options , while the computation time increases like in both cases . in this paperwe reconsider valuation on binomial trees from what we call a forward looking " prospective : we imagine acting as well - educated consumers who attempt to eliminate risk and estimate the future expected value of an option according to some reasonable dynamical model .we will regard the movement of the price on the tree as a random walk ( with statistical properties consistent with a risk - neutral world ) with walls " imposed by the nature of the option , such as the possibility of early exercise ( american options ) or the presence of barriers .the resulting mathematical formulation then has two conceptually distinct components : the first ingredient is an explicit description of the possible walls " . for example , in the case of barrier american options both the barrier and the early exercise " surface need to be specified .the second step will be to compute the probability that the price reaches particular values at every accessible point on the tree .this involves counting the number of paths reaching that point in the presence of walls " , a somewhat involved but exactly solvable combinatorics problem .once these two steps ( specifying the walls and computing the probabilities ) are accomplished the value of both european and american options , with and without barriers , can be written down explicitly . in an attempt to be pedagogical ,we will limit ourselves to the simplest put options : european , simple american and european with a straight up - and - out " barrier . 
although the calculation can be simply extended to the barrier american option that discussion merits a separate publication .as far as we know , in the case of trees explicit formulas like the ones we are proposing exist in the literature only in the simplest case of conventional european options . for the more complicated case of american options ,the main issues are best summarized in the last chapter of neil chriss book : the true difficulty in pricing american options is determining exactly what the early exercise boundary looks like .if we could know this _ a priori _ for any option ( e.g. , by some sort of formula ) , we could produce pricing formulas for american options . "below we propose a solution to this problem in the context of binomial trees .our formulation complements the earlier studies of american options in the limit of continuous - time trading which also focus on the presence of an early exercise boundary for the valuation of path - dependent instruments .the study of the continuum limit of our formulas is instructive and will be left for a future publication .to establish notation we begin by dividing the life of an option , , into time intervals of equal length , .we assume that at each discrete time ( ) the stock price moves from its initial value , , to one of two new values : either up to ( ) or down to ( ) .this process defines a tree with nodes labeled by a two dimensional vector , ( ) and characterized by a stock price , the price reached at time after up and down movements , starting from the original price .the probability of an up ( down ) movement will be denoted by ( ) ; and thus each point on the tree is also characterized by the probability , , which represents the probability associated with a single path of time steps , ( ) of which involve an increase ( decrease ) in the stock price .computing the probability of connecting the origin with point requires , in addition to the single path probability , a factor counting _ the number _ of such possible paths in the presence of a barrier and/or the possibility of early exercise .the calculation of this degeneracy factor involves the details of each financial derivative and it will be discussed in turn for each of our examples .the binomial tree model introduces three free parameters , and .two of these are usually fixed by requiring that the important statistical properties of the random process defined above , such as the mean and variance , coincide with those of the continuum black - scholes - merton theory .in particular , where is the risk - free interest rate , and the volatility , , is a measure of the variance of the stock price . we are left with one free parameter which can be chosen to simplify the theoretical analysis ; one might choose , for example , , which simplifies the tree geometry by arranging that an up motion followed by a down motion leads to no change in the stock price .this condition together with ( [ statprop1 ] ) and ( [ statprop2 ] ) implies : we stress that equations ( 1 - 5 ) are to be regarded as short - time approximations where terms higher order in were ignored . 
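for readers who want to experiment numerically with the tree defined above , the helper below computes the familiar cox - ross - rubinstein parameter choice with u d = 1 , which is what matching the mean and variance together with the condition u d = 1 fixes to leading order in the time step ; since the exact short - time expressions ( 1 - 5 ) are not reproduced above , treat these formulas as the standard textbook assumption rather than a transcription of them .

```python
import math

def crr_parameters(r, sigma, dt):
    """standard cox-ross-rubinstein choice: u*d = 1, risk-neutral up probability p_u."""
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p_u = (math.exp(r * dt) - d) / (u - d)
    return u, d, p_u

# example: a one-year option discretised into 250 time steps
u, d, p_u = crr_parameters(r=0.05, sigma=0.2, dt=1.0 / 250)
```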
with these definitions out of the way we can begin discussing the valuation of put options with strike price and expiration time .the simple european put option is a good illustration of our forward looking " approach .we are interested in all those paths on the tree which , at expiration time , reach a price , , for which the option should be exercised .that implies that ] .as already mentioned above , = \aleph _ e [ n , j ] p_u ^j ( 1-p_u ) ^{n - j} ] counts the number of paths starting at the origin and reaching the price in time steps . for the case of conventional european optionsthis is just the number of paths of time steps , with up and down movements of the price , and is thus given by the binomial coefficient , ~ = ~\left ( \begin{array}{c } n\\j \end{array } \right ) ~=~\frac{n!}{j!(n - j)!}.\ ] ] the resulting expression for the mean value of the option at maturity is then discounted to the time of contract by the risk - free interest rate factor , , to determine the current expected value of the option : this expression is not new : it was first discussed by cox and rubinstein who also showed that in the appropriate continuous trading - time limit ( ) ( [ valueev ] ) reduces to the black - scholes result .we are now ready to extend ( [ valueev ] ) into an exact formula for the mean value of an european put option with a barrier .although our approach can be used for other barrier instruments , we consider the simplest case of an up - and - out " put option which ceases to exist when some barrier price , , higher than the current stock is reached . with the choice an explicit equation for the nodes of the tree which constitute the barrier can be written down : here , ] . since the probability that any allowed path starting with the present stock price , , reaches an exercise price at maturity , , is still ( with ) the average value of the european barrier optioncan be written in a form similar to ( [ valueev ] ) : ~ p_u ^j ( 1-p_u)^{n - j } \left ( x - s_0 u^j d^{n - j}\right ) , \ ] ] where ] is given by ~=~ \left(\begin{array}{c } n\\j \end{array } \right ) ~-~\sum _ { h=0 } ^{h_m } \aleph _ { eb } ^{res}[j_b + 1 + 2h , j_b + 1+h ] \left ( \begin{array}{c }n - j_b -1 -2h\\ j - j_b -1 -h \end{array}\right ) , \ ] ] where the second term on the right - hand side represents the contribution from the unwanted paths which hit the barrier ( [ barrier ] ) before reaching an exercise point . to understand the form of the excluded contribution in ( [ excluded ] ) we first note that reaching the excluded region requires that the path hits the barrier at least once .one might think that the number of unwanted paths can then be calculated by ( i ) counting the number of paths connecting the origin to a given point on the barrier ; ( ii ) multiplying this by the number of paths connecting that point on the barrier with the exercise point this includes all paths which wander _ into _ the above - barrier region ; and finally ( iii ) summing over all points of the barrier ( [ barrier ] ) .however , a particular path reaching a given point on the barrier might have already hit any of the previous barrier points , and thus it would also be counted in the contribution in ( ii ) from all paths starting at the first barrier point reached by the particular path under consideration .thus , summing indiscriminately over barrier points would lead to overcounting unless , in ( i ) , we only include those paths which hit the barrier for the first time . 
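before turning to the restricted path counts , it may help to see the unrestricted formula ( [ valueev ] ) evaluated directly : the sum runs over the exercise region at maturity , the binomial coefficient plays the role of the unrestricted degeneracy factor , and the discount prefactor ( not reproduced above ) is taken here to be exp ( - r t ) as an assumption .

```python
import math

def european_put_binomial(S0, X, r, sigma, T, n):
    """direct evaluation of the forward-looking sum for a plain european put."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    value = 0.0
    for j in range(n + 1):
        ST = S0 * u**j * d**(n - j)
        if ST < X:                                   # exercise region at maturity
            value += math.comb(n, j) * p**j * (1 - p)**(n - j) * (X - ST)
    return math.exp(-r * T) * value                  # assumed discount factor

print(european_put_binomial(S0=100, X=100, r=0.05, sigma=0.2, T=1.0, n=500))
```

the same loop structure carries over to the up - and - out case once the binomial coefficient is replaced by the restricted path count of eq . ( [ excluded ] ) .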
in other words ,( i ) must only include paths starting from the origin which reach the particular point on the barrier without having previously visited any other barrier point .the number of such restricted paths ( reaching the point ) is what we denoted by ] , is already included as the contribution to ( [ convolutione ] ) .note that ( [ convolutione ] ) can be solved by standard laplace transform ( or -transform ) techniques .since in applying these ideas to the more complicated american options we will lose the convolution form the kernel will depend on and separately and not only through the difference , we will proceed in a more general way and stay in configuration space " until the very end .we prefer to regard ( [ convolutione ] ) as a matrix equation of the form : here and are dimensional vectors , with components ] .the nilpotent property of allows us to write down the explicit solution for ( [ matrixeqeb ] ) , ^{-1}{\bf d}_{eb}~=~ \sum _ { r=0 } ^{h_m } ( -1)^r { \bf q}_{eb } ^r { \bf d}_{eb},\ ] ] which in turn leads to the following formula for the value of the option , {h , l}\left ( \begin{array}{c } j_b + 1 + 2l\\ j_b + 1 + l\end{array}\right ) \\ & \times & p_u ^j ( 1-p_u)^{n - j } \left ( x - s_0 u^j d^{n - j}\right ) \nonumber.\end{aligned}\ ] ] the lower limit , , on the external sum in ( [ priceres ] ) excludes all paths unaffected by the presence of the barrier ; also we have explicitly indicated the and/or dependence of the various quantities involved ; and have separated out the contribution to ] , is explicitly given . to begin our calculation we will need some very general properties of the barrier .these follow from two simple characteristics of early exercise : ( i ) if the point is an early exercise point , then so are all points deeper in - the - money " , ; and ( ii ) if two adjacent points at the same time step , and , are both early exercise points so is the point .( the latter property follows from a conventional backwardation " argument which indicates that the average expected payoff at , discounted at the risk - free interest rate , is smaller than the actual payoff , thus making itself an early exercise point . )it is not hard to see that ( i ) and ( ii ) guarantee that the inner part of the early exercise region can not be reached without crossing the exb .thus , if we define to be the first time for which early exercise becomes possible and parametrize the points on the exb as ) ] .moreover , the structure of the tree ensures that ] either increases by one or remains the same .the formal expression for the price of an american option can be written down once one recognizes that once a path hits the exb the option expires and thus any point on the barrier can be reached at most once . 
as a result ,the value of the option is a sum of ( appropriately discounted ) payoffs along the barrier , weighted by the probability of reaching each point on the barrier without having visited the barrier at previous times .we can then write the expected value of an american option as : p_u ^{j_x [ i_a + h ] } ( 1-p_u ) ^{i_a + h - j_x [ i_a + h ] } \left ( x - s_0 u^{j_x [ i_a + h ] } d^{i_a + h -j_x [ i_a + h ] } \right ) , \ ] ] where denotes the number of paths reaching the exb in time steps without having previously visited any points on the barrier .one last step is the determination of , the value of the american put at every point on the tree which , in turn , will allow us to derive the equation for the exb .this is easily done by simply translating the origin in ( [ valueavfin ] ) : {h , l}\left ( \begin{array}{c } i_a + l - i\\ j_x [ i_a +l ] -j \end{array } \right ) \nonumber \\ & \times & p_u ^{j_x [ i_a + h]-j } ( 1-p_u ) ^{i_a + h - j_x [ i_a + h ] -i+j}\left ( x - s_0 u^{j_x [ i_a + h ] } d^{i_a + h -j_x [ i_a + h ] } \right ) .\end{aligned}\ ] ] together with ( [ defsurface ] ) this then leads to the rather formidable - looking equation for the barrier height ] would be ill - defined hence the choice ( [ defsurface ] ) . ] equations ( [ exb ] ) and ( [ exb2 ] ) for the boundary together with the formula for the value of the option , ( [ valueavfin ] ) , constitute an exact pricing strategy for a conventional american put .a similar formula for an american put with an up - and - out " barrier will be discussed in a future publication .it is instructive to consider equations ( [ valueavfin ] ) , ( [ exb ] ) and ( [ exb2 ] ) in the explicitly solvable case of a straight barrier .we begin with the observation that , at expiration , , ( [ exb ] ) reduces to the equation for ] by one with each backward time step we reach along the straight line , =i - n+j^* ] either increases by one or remains the same , this straight line represents a lower bound for the early exercise barrier . for this straight barrier ( [ inversion ] ) and ( [ valueavfin ] ) reduce to , ~p_u ^h \left ( 1-p_u \right ) ^{n - j^ { * } } \left ( x - s_0 u^h d^{n - j^{*}}\right ) \end{aligned}\ ] ] with we expect that the result for the true barrier should approach the straight line formula for coarse enough time steps , , ( where this is the first time of early exercise in the limit of continuous - time trading ) .we have presented a scheme for pricing options with and without barriers on binomial trees . to the best of our knowledgeours is the first explicit derivation of exact formulas treating barriers on binomial trees .it is our expectation that in the limit of continuous - time trading we should be able to recover the few exact results available in the literature , especially for american options .we also hope that our explicit formulas may provide a framework for improving the efficiency of numerical computations .the authors dedicate this paper to professor ferdinando mancini , a remarkable teacher , colleague and friend , on the occasion of his 60th birthday .we are grateful to stanko barle for reading the manuscript and bringing the work of references and to our attention .finally , we acknowledge the hospitality of the nyu physics department where most of this work was conceived . | we reconsider the valuation of barrier options by means of binomial trees from a forward looking " prospective rather than the more conventional backward induction " one used by standard approaches . 
this reformulation allows us to write closed - form expressions for the value of european and american put barrier - options on a non - dividend - paying stock . |
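as a numerical cross - check for the forward - looking expressions summarised above , the conventional backward - induction benchmark mentioned in the abstract can be sketched as follows ; besides the price it records , at each time step , the highest node index at which early exercise is optimal , i.e. the early exercise boundary that eqs . ( [ exb ] ) and ( [ exb2 ] ) characterise analytically . this is the standard textbook algorithm , given only for comparison , and the discounting convention exp ( - r dt ) is an assumption .

```python
import math

def american_put_tree(S0, X, r, sigma, T, n):
    """backward induction on a crr tree; also returns the early exercise boundary."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # node (i, j) carries price S0 * u**j * d**(i - j); start from payoffs at maturity
    value = [max(X - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    boundary = [None] * n            # boundary[i] = highest j with early exercise at step i
    for i in range(n - 1, -1, -1):
        new = []
        for j in range(i + 1):
            cont = disc * (p * value[j + 1] + (1 - p) * value[j])
            exer = max(X - S0 * u**j * d**(i - j), 0.0)
            if exer > cont and (boundary[i] is None or j > boundary[i]):
                boundary[i] = j
            new.append(max(cont, exer))
        value = new
    return value[0], boundary

price, exercise_boundary = american_put_tree(S0=100, X=100, r=0.05, sigma=0.3, T=1.0, n=200)
```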
insect development rates are applied not only in pest control management but also in forensic entomology .several species of dipteran and coleopteran families infest decaying material in order to breed offspring , and this includes the colonization of animal carcasses as well as dead bodies . in homicide investigations , determination of the age of larvae feeding on a corpse can indicate a minimal post - mortem interval ( pmi ) .this is often important in forensic case work .the general life cycle of blow flies includes four stages : egg stage , larval stage , pupal stage and imago stage . during the larval stage , three instarscan be separated : 1st , 2nd and 3rd instar , where the latter is divided due to behavioral changes in feeding and post - feeding larvae .blow flies deposit egg clutches directly on the food substrate , such as a dead body , in a position where the eggs are protected and in a moist environment . this ensures a food supply for the hatching 1st instar larvae .the first three instars each undergo a moult to reach the next developmental stage ; the stages can be distinguished by the number of respiratory slits at the posterior end of the larvae .the third instar stage lasts for longer than the first two and is divided in a feeding and a post - feeding phase .the latter is a preparation for pupation .therefore , the larvae leave the food source to find a suitable place for pupation , emptying their gut .about one third of the pre - adult development time is spent in the post - feeding larval stage . then pupation sets in and the imago develops within the pupal case till eclosion .this last stage persists for about half of the time of the total development .the larval growth rate depends on its body temperature , which is directly influenced by environmental conditions as ambient temperature and the heat generated by maggot aggregations . also , an important detail for pmi determination is that each species has its own temperature dependent growth rate . in forensic case work ,two different methods are frequently used to calculate a pmi .the first uses isomegalen or isomorphen diagrams , by which the lengths or the developmental stage of the larvae are combined as a function of time and mean ambient temperature in a single diagram . according to its originators ,this method is optimal only if the body and therefore the larvae were not undergoing fluctuating temperatures , e.g. in an enclosed environment where the temperature was nearly constant .the second method of calculating a pmi estimates the accumulated degree days or hours ( add or adh ) .adh values represent a certain number of `` energy hours '' that are necessary for the development of insect larvae .the degree day or hour concept assumes that the developmental rate is proportional to the temperature within a certain species - specific temperature range ( overview in ) .however , the relationship of temperature and development rate ( reciprocal of development time ) is typically curvilinear at high and low temperatures and linear only in between .the formula for calculating adh is given by where is the development time , is the ambient temperature , and the minimum developmental threshold temperature is a species - specific value , the so called development zero , which is the x - intercept , i.e. 
, an extrapolation of the linear approximation of the reciprocal of the developmental time .this value has no biological meaning , it is the mathematical consequence of using a linear regression analysis .one basic condition for using the adh method is that the adh value for completing a developmental stage stays constant within certain temperature thresholds .for example a developmental duration for finishing a certain stage of 14 days at 25 results in 238 add when a base temperature of 8 is assumed .a developmental duration of 19 days at 21 results in 231 add , both add - values are in the same range .we analyzed a published data - set for the development of _ lucilia sericata _( meigen 1826 ) and calculated the corresponding adh values for these data .[ f - temp_adh ] shows the calculated adh values for a base temperature of =8 ( as calculated by a linear regression analysis for the used data - set ) . in the figurewe see a new effect : for the younger and also shorter developmental phases the adh values are nearly constant over the complete range of temperatures , but for the post - feeding and the pupal stages the adh values are strongly temperature dependent .in general , the adh method seems to give good results only when the larvae of interest have been exposed to temperatures similar to those used in generating the reference value applied in the pmi calculation .moreover , the temperature range in which the development rate is actually linear is not wide enough to cover all temperatures during a typical summer in germany ( see also examples for may / june 2008 in fig .[ f - temp_profile ] ) .furthermore , neither developmental durations nor base temperatures for development have been calculated for species originating from germany .the method must therefore be used carefully .furthermore , it is highly problematic that uncertainties for temperature measurements from a crime scene can not be taken into account by either of the commonly used methods for pmi determination .it is difficult to determine the actual temperature controlling the larvae at a real crime scene .since temperature is the variable that most influences development , it is crucial to consider it as accurately as possible .the standard procedure is to use temperatures of the nearest weather station for the desired time frame and correct them by applying a regression starting from temperatures measured at the crime scene , when taking the larvae as evidence . the corrected values still contain uncertainties that can not be accounted for by the methods currently used for pmi determination .no information exists for either model about the quality of the method or the error intervals of the calculated pmis .we analyzed developmental data for _ l. 
sericata _ at different temperatures and fitted an individual exponential function for each developmental stage .data used as input to the model were published by grassberger and reiter ( 2001 ) and represent the minimal time in hours to complete each larval phase ( egg stage = stage 0 , 1st instar = stage 1 , 2nd instar = stage 2 , 3rd instar feeding = stage 3 , 3rd instar post - feeding = stage 4 and pupal stage = stage 5 ) until eclosion of the adult blow fly .the used data - set is one of the rare sets which covers a lot of temperatures and the resulting growth curve seems to represent growth behavior well ( see original paper ) .unfortunately , grassberger and reiter do not give any error values for their measurements , so we assumed an error for the developmental times of about 1 hour .these authors used 250 g of raw beef liver in plastic jars , and placed 100eggs on the food substrate .the jars were placed in a precision incubator . at each temperature regimethe procedure was repeated 10times .every 4hours , four of the most developed maggots were removed from the plastic jars , killed in boiling water , and preserved in alcohol and then their stage of development was determined .our new larval growth model is based on the data shown in fig .[ f - temp_time ] , in which the duration of each developmental stage was measured as a function of temperature .these data points were fitted with an exponential function of the form : where is the duration of one developmental stage as a function of temperature .the parameters fitted for the different stages are shown in table [ t - fitpar ] .the parameter defines how strongly the time interval depends on temperature ; the higher the parameter in table [ t - fitpar ] , the steeper is the gradient of the fitted curve . represents the minimum time interval required for finishing a certain developmental stage and provides the absolute normalization .the developmental stages of the maggots were determined every , such that time measurement errors are set to following an uniform distribution .it is assumed that the maggot body temperature is known to an accuracy of 3% in order to take into account uncertainties about differences between ambient and maggot body temperature .the parameters , and were determined by minimizing the sum of error squares . as seen in fig .[ f - temp_time ] , the exponential function accurately models the behavior during all developmental stages and will be used below . in all stagesthe developmental duration at temperatures below 24 starts to rise exponentially .[ f - temp_adh ] shows the calculated adh values corresponding to eq .( [ e - adh ] ) ( data points ) .in addition , the figure shows the function ( lines ) . is calculated by eq .( [ e - fit ] ) with the previously fitted parameters ( table [ t - fitpar ] ) .again , the functions give a reasonable description of the data .nevertheless , the model is an empirical one , based on the observations of the data points generated by grassberger and reiter ( 2001 ) . 
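the fitting step behind fig . [ f - temp_time ] can be reproduced along the following lines . since the exact exponential expression of eq . ( [ e - fit ] ) and the values of table [ t - fitpar ] are not reproduced above , the sketch assumes a generic three - parameter decaying form t ( T ) = t_min + a exp ( - b T ) and synthetic data carrying the roughly 1 hour error mentioned in the text ; both are placeholders to be replaced by the published function and the grassberger and reiter measurements .

```python
import numpy as np
from scipy.optimize import curve_fit

def stage_duration(T, t_min, a, b):
    # assumed illustrative form: b sets the steepness, t_min the minimum duration
    return t_min + a * np.exp(-b * T)

rng = np.random.default_rng(1)
T_data = np.array([15.0, 17.0, 20.0, 22.0, 25.0, 28.0, 30.0, 34.0])   # rearing temperatures
t_data = stage_duration(T_data, 18.0, 900.0, 0.25) + rng.normal(0.0, 1.0, T_data.size)
t_err = np.full_like(t_data, 1.0)                                     # ~1 h assumed error

params, cov = curve_fit(stage_duration, T_data, t_data, p0=(15.0, 500.0, 0.2),
                        sigma=t_err, absolute_sigma=True)
t_min_fit, a_fit, b_fit = params      # one such fit is performed per developmental stage
```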
for european and especially german temperatures , calculation of the total developmental duration must allow for non - linear temperature behavior in order to ensure accuracy .the basic idea underlying a new approach in pmi determination is to follow an ambient time - temperature profile backwards in time starting from the time point at which the maggots of interest were collected .the idea of backwards calculation is obviously similar to the adh method , but in the new model the important improvement is the way of calculating the larval age .the latter is calculated successively during certain time steps using the fitted functions ( introduced in fig .[ f - temp_time ] ) corresponding to the current developmental stage . in each stage the relative developmental progress is ( values 0 - 1 ) where 0 is the beginning and 1 is the finishing point of each developmental stage ; e.g. a maggot in the middle of the post - feeding stage is , at the end of the post - feeding stage it is and so forth .the developmental duration spent in each individual stage is calculated by solving the relation : where is the infinitesimal relative development .the calculation starts with the developmental stage of the maggot at the time of collection , summing the developmental progress of each stage backwards until the beginning of the egg stage is reached .the calculation for each collection stage uses .the total development time or post - mortem interval ( pmi ) is then given by for the new model a program was written in c++ using root ( http://root.cern.ch/ ) .this program includes all mentioned mathematical steps and produces the figures shown here as output . for each new pmi calculation ,the corresponding temperature profile can be inserted and individually chosen uncertainties can be included . to explore the uncertainties in the total developmental duration , a monte - carlo simulation was applied , which is commonly used for simulations in life sciences .it is a method for calculating one final uncertainty after considering all statistically independent uncertainties that influence e.g. the larval age .the mean pmi with corresponding standard variation is calculated times taking into account and varying all uncertainties described in the following .first , the developmental profiles have uncertainties due to the measurement procedure .second , the time - temperature profile from the collection scene is not known precisely and must be approximated using temperature values from nearby weather stations .the variations are introduced for each model as follows : development profile : : : the mean duration values of the temperature - time data are randomly smeared with a uniform distribution with corresponding error ; for the maggot body temperature a gaussian distribution is used .new fits with the function in eq .( [ e - fit ] ) are performed for each stage .time - temperature profile : : : deviations between the temperature profile at the collection scene and the nearest weather station are accounted for by gaussian smearing of time and temperature , with the corresponding errors and as width for each data point . 
can be inserted in the models calculation individually dependent on the differences between the temperatures at finding place and weather station .we calculated the pmi for a mock crime scene with the following parameters : the error of the measurement of the original data , the errors of the data of the weather station and , the difference of the ambient temperature and larval body temperature % for 10.000 models for a fixed collection stage progress of .the results are shown in fig .[ f - temp_profile ] _ ( upper part ) _ which is a direct output of the new program that calculates the pmi .the lower time axis defines the progress of the temperature profile forward in time , representing the time frame of interest .the temperature profile used here ( black line ) is taken from the minimum and maximum temperatures in may and june 2008 measured at cologne / bonn airport .the right end of the diagram marks a fictional time point of maggot collection and therefore the starting point for pmi calculation .the upper time axis depicts the pmi backwards in time starting from the moment of maggot collection . for each developmental stagethe pmi was calculated by following a linear interpolation between the maximum and minimum temperatures .the histograms illustrate the pmi distribution for each stage and show a clear single peak structure .the arrows on the top show the 1-standard deviation interval for each stage around the mean pmi value , and range between 0.1 and 1.2 days ( depending on the stage ) . since no data points below temperatures were measured , the functions were extrapolated to lower temperatures . as expected , the pmi and the corresponding standard deviation increase with higher developmental duration ( see arrows above histogram ) .since the exact progress within the developmental stage at collection time is most of the time also unknown , a third uncertainty is introduced : stage progress : : : the developmental stage at collection time was determined only to integer precision , so that it is assumed the exact progress is an uniformly distributed value between 0 and 1 .consequently , the starting value for the pmi calculation at time is randomly and uniformly chosen within the interval [ 0,1 ] for each model . fig .[ f - temp_profile ] _ ( lower part ) _ shows the pmi calculation for the same parameters as before , but without setting the progress of the development for each stage to a fixed value .the 1-standard deviation values increase by 0.3 to 3.3 days .the resulting uncertainty in the progress of the stage contributes about 75% to the total pmi error interval .in addition , the histograms show deviations from a clear single peak structure , e.g. for the pupal stage , implying that the pmi probabilities for 21days and 26days are nearly the same . 
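the backward calculation and the monte carlo error propagation described above can be summarised in a short sketch ( the original implementation is in c++ with root ; this is an illustrative python re - implementation ) . t_funcs is a list of per - stage duration functions t_s ( T ) , for instance fits of the kind sketched earlier ; the re - fitting of the development profiles within each monte carlo model is omitted here for brevity , so only the temperature smearing and the unknown stage progress are propagated .

```python
import numpy as np

def pmi_backwards(stage, progress, temps_backwards, dt_hours, t_funcs):
    """accumulate relative development dP = dt / t_stage(T) backwards in time."""
    remaining, pmi = progress, 0.0
    for T in temps_backwards:                        # temps_backwards[0] = collection time
        dt_left = dt_hours
        while dt_left > 0.0:
            t_s = t_funcs[stage](T)
            needed = remaining * t_s                 # time back to the start of this stage
            if needed > dt_left:
                remaining -= dt_left / t_s
                pmi += dt_left
                dt_left = 0.0
            else:
                pmi += needed
                dt_left -= needed
                if stage == 0:
                    return pmi                       # beginning of the egg stage reached
                stage, remaining = stage - 1, 1.0
    raise ValueError("temperature profile too short for the full development")

def pmi_monte_carlo(n_models, stage, temps_backwards, dt_hours, t_funcs,
                    temp_sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    temps = np.asarray(temps_backwards, float)
    pmis = [pmi_backwards(stage,
                          rng.uniform(0.0, 1.0),     # unknown progress within the stage
                          temps + rng.normal(0.0, temp_sigma, temps.size),
                          dt_hours, t_funcs)
            for _ in range(n_models)]
    return float(np.mean(pmis)), float(np.std(pmis))   # mean pmi and 1 standard deviation
```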
to use the new model , the crucial parameter is therefore the correct determination of the progress of the developmental stage of maggots collected from a corpse .the impact of correct temperature determination at the maggot collection scene is shown in fig .[ f - stage_pmi_compare ] .the data points represent the mean pmis with an error bar of 1 standard deviation as a function of collection stage for three different temperature profiles .the triangles show the pmis for the original temperature profile as measured at cologne / bonn airport .the bullets ( squares ) show the results for the same profile but subtracted ( and added ) by 2 .as expected , the pmis and the corresponding standard deviations of the lower ( higher ) temperature profile increase ( decrease ) relative to the nominal profile .these differences in temperature of 2 give rise to an effect of 15 - 30% .that implies that a miscalculation of the temperature at the crime scene of 2 will result in a miscalculation of the pmi by 15 - 30% .the later the stage , the greater the deviation from the actual pmi . in fig .[ f - pmi_rel_compare ] , pmis calculated using the corresponding mean temperature values in the temperature interval $ ] are compared with pmis from our model using the three temperature profiles introduced previously .the calculated mean temperatures were as follows ( calculated for the time frame till completion of each stage ) : stage 0 = 18 , stage 1 = 19 , stage 2 = 20 , stage 3 = 19 , stage 4 = 16 , stage 5 = 17 .the pmi values based on the temperature profile and those based on a mean temperature value agree to within about 5% for the high temperature value ( original profile + 2 ) in all stages .the deviation between mean temperature and the original temperature profile exceeds the 10% level starting at the 3rd instar feeding stage , and increases to 25% in the pupal stage .this effect becomes even larger for the low temperature profile ( original profile subtracted by 2 ) . starting from the 2nd instar stage , the deviation increases from about 10% up to about 65% for the pupal stage .this means that use of mean temperature values overestimates the influence of low temperatures and underestimates periods of high temperatures .the effect should be larger if the mean temperature during the development is lower still , e.g. in spring or fall . in general , more data points are needed for the developmental duration at low temperature ranges to provide more reliable statements .we calculated the pmi in a real case where the actual pmi was known due to a confession of the offender . at the end of august 2007the victim was killed in early morning and was found 4 days later also in the morning on a grassland .this leads to a pmi of approximately 96 hours .the victim was stabbed to death and had several wounds which would act as attractant to the blow flies .it can be assumed that blow flies started ovipositing early after death occurred .autopsy was performed directly after the corpse was recovered and several 2nd instar larvae of _l. sericata _ were collected .the largest larvae measured 6.1 mm .hourly temperature values were taken from a weather station 10 km away .the mean temperature was 16 .using grassberger and reiters isomegalen diagram for a larvae measuring 6 mm and a mean temperature of 16 results in a time interval of 3.2 days plus 30 hours ( larval development time plus egg period ) . 
in total a pmi of 107 hours is indicated .this would shift the time of oviposition to nighttime , which is a highly unlikely event .the same data can be used to calculate the adh value for _ l. sericata _ for reaching 6 mm in order to calculate the pmi not based on the mean temperature but on hourly data .as mentioned earlier , a regression analysis of the data set reveals a base temperature of 8 .the corresponding adh value is therefore 856 , based on the equation : subtracting the hourly adh values , estimated by the temperature values from the weather station and the base temperature , from the starting value of 856 results in a pmi of 101 hours . to use the new model for calculating larval age in the real case ,information about the progress of the 2nd instar larval stage was required . in the original work of grassberger and reiter ( 2001 , figure 1 )a figure is included showing the growth of the larvae and also the time points for each moult . according to this figure , the 2nd instar stage sets in after the larvae have reached a size of approximately 4 mm and ends when the larvae have reached a size of approximately 8 mm . as the largest larvae we collected measured 6 mm , we chose p=0.5 as progress for the larval stage .we included the hourly temperature profile and chose a temperature error of 1 .the result of the calculation was a pmi of 99 hours ( sd = 3 hours ) .these calculations of a pmi in a real case show that all three methods give reasonable results .furthermore , it becomes obvious that the new model is a possible alternative for the existing methods with the benefit of directly providing a standard deviation for the calculation .the new model improves the larval age calculation in specific ways .it can be used in non - linear parts of the temperature dependent development , and includes individually defined uncertainties for a temperature profile determined retrospectively from the nearest weather station . in the new model the temperature profile plus the determination of the larval stage are translated into a mean pmi as well as a standard deviation .pmi calculation using mean temperatures , however , can lead to severe deviations from the real pmi .so far , the main uncertainty arises from the fact that the developmental stage is determined only on a 1 - 6 scale ( egg , 1st instar , 2nd instar , 3rd instar feeding , 3rd instar post - feeding and pupae ) .as shown above , 75% of the uncertainties in the model depend on the exact determination of the developmental progress , and additional length values , as shown for the pmi calculation in the real case , will propably increase its accuracy leading to more accurate pmi calculations .moreover , the next step is to produce own growth data with known error values to refine the inclusion of uncertainties that are only rough estimates at the present time and to improve the till now only empirical model . 
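for comparison , the hourly adh back - calculation used in the real case above amounts to the following few lines ; whether hours below the base temperature contribute zero ( as assumed here ) or negative degree hours is a convention that should be checked against the original calculation .

```python
def pmi_from_adh(adh_target, hourly_temps_backwards, t_base=8.0):
    """subtract hourly degree-hour contributions backwards until the adh budget is used up."""
    remaining = adh_target
    for hours, T in enumerate(hourly_temps_backwards, start=1):
        remaining -= max(T - t_base, 0.0)   # only temperatures above the base contribute
        if remaining <= 0.0:
            return hours                    # pmi in hours
    return None                             # temperature record too short

# sanity check with a constant 16 deg C profile: 856 / (16 - 8) = 107 hours
print(pmi_from_adh(856.0, [16.0] * 200))
```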
nevertheless , the new pmi calculation program is suitable for use in forensic casework as a general tool for pmi determination . scientists from every country or climatic region can incorporate their own growth values for different species and ensure a high accuracy in pmi determination .

anderson g ( 2001 ) _ forensic entomology : the utility of arthropods in legal investigations _ , chapter insect succession on carrion and its relationship to determining time of death . crc press , pp . 143-175
archer m and elger m ( 2003 ) female breeding - site preferences and larval feeding strategies of carrion - breeding calliphoridae and sarcophagidae ( diptera ) : a quantitative analysis . australian journal of zoology 51:165-174
grassberger m and reiter c ( 2001 ) effect of temperature on lucilia sericata ( diptera : calliphoridae ) development with special reference to the isomegalen- and isomorphen - diagram . forensic sci int 120(1-2):32-36

| homicide investigations often depend on the determination of a minimum post - mortem interval ( pmi ) by forensic entomologists . the age of the most developed insect larvae ( mostly blow fly larvae ) gives reasonably reliable information about the minimum time a person has been dead . methods such as isomegalen diagrams or adh calculations can have problems in their reliability , so we established in this study a new growth model to calculate the larval age of _ lucilia sericata _ ( meigen 1826 ) . this is based on the actual non - linear development of the blow fly and is designed to include uncertainties , e.g. for temperature values from the crime scene . we used published data for the development of _ l. sericata _ to estimate non - linear functions describing the temperature dependent behavior of each developmental state . for the new model it is most important to determine the progress within one developmental state as correctly as possible , since this affects the accuracy of the pmi estimation by up to 75% . we found that pmi calculations based on one mean temperature value differ by up to 65% from pmis based on a 12-hourly time temperature profile . differences of 2 in the estimation of the crime scene temperature result in a deviation in pmi calculation of 15 - 30% . |
in material science , fractography is concerned with the description of fracture surfaces in solids and is routinely used to determine the cause of failure in engineering structures .various types of crack growth ( fatigue , stress corrosion , cracking ... ) produce different characteristic features on the surface , which in turn can be used to identify the failure mode and direction .fractography is one of the most used experimental techniques to recover some aspects of crack dynamics , and is thus a major tool to develop and evaluate theoretical models of crack growth behavior .fractography in two dimensional and three dimensional materials is fundamentally different .a broken surface in three dimensions is the trace of a front line singularity and unless a full dynamical measurement is available , it is not possible to reconstruct its propagation history .an extreme example is the family of systems in which a crack front is confined to move along a weak plane and where the post - mortem surface is simply a flat surface that can not reveal any information on the actual advance of the crack front .in contrast , a crack path in two dimensions is simply the trace left by the propagation of a crack tip and thus can be thought of as the world - line of a moving singularity .this means that a post - mortem fractographic analysis can fully recover the crack propagation history .this shows that the 2d problem provides a good framework with many advantages to decipher the crack tip dynamics , and is therefore our focus here . for slowly propagating cracks in homogeneous materials ,fracture surfaces are smooth and a fundamental question that arises in that context is the stability of the propagating crack with respect to a prescribed path .a stability analysis of two - dimensional cracks propagating in homogeneous materials based on linear elastic fracture mechanics ( lefm ) was performed by cotterell and rice , and yields the famous -criterion .this criterion states that if the quantity called the -stress ( see eq .( [ eq : sif ] ) below for a definition ) is positive the crack path becomes unstable , whereas if the path is stable .experimentally and theoretically , the instability predicted by the -criterion has been proven to be a necessary condition but not a sufficient one .the stability analysis of cotterell and rice being incomplete already in homogeneous media can not be expected to describe correctly crack propagation in heterogeneous media where the path of the fracture is generally rough .actually , for such materials , fracture surfaces are claimed to exhibit fractal ( or self - affine ) properties .the self - affinity of a -dimensional surface is fully characterized by exponents . since the dynamics of cracks in heterogeneous media is a rich field encompassing a wide range of physical phenomena , it is important to distinguish between three different exponents : the one describing roughness in the direction perpendicular to the crack propagation , the second one describing the roughness in the direction of the propagation ( the so called `` out - of - the - plane '' roughness , which is the subject of this paper ) and the third one describing the in - plane roughness of the crack front during its propagation through the material . in some casesthese exponents are related but generically they are independent. 
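for completeness , since the defining relation was lost in this copy : a profile h(x) is called self - affine with roughness ( hurst ) exponent \( \zeta \) when it is statistically invariant under the anisotropic rescaling

\[ x \rightarrow \lambda x \ , \qquad h \rightarrow \lambda^{\zeta} h \ , \]

which implies for the height fluctuations \( \langle [ h(x+\delta x ) - h(x ) ]^2 \rangle^{1/2} \sim ( \delta x )^{\zeta} \) ; the value \( \zeta = 1/2 \) corresponds to an uncorrelated random walk .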
there are some experimental measurements of the roughness exponent of two - dimensional or quasi two - dimensional cracks ( in the appendix some methods of measurements of the `` out - of - the - plane '' roughness exponent are described ) . for berea sandstone ( ) ; for concrete ( ) ; for paper ( , and ) ; for wood ( ) and for a system of drinking straws ( ) .as can be seen , all the values vary between and and thus suggest non - universal behaviour .in particular note that the measured exponents are all larger than , a value that corresponds to the roughness exponent of an uncorrelated random walk . in this workwe aim at a thorough study of crack propagation in 2d disordered materials .a natural question is whether the roughness of the broken surface is related to an instability mechanism of the crack tip propagation . in the followingwe will derive an equation of motion for a crack propagating in disordered medium that would allow us to study both its stability and roughness properties .the paper is organized as follows : we start by recalling the stability analysis la cotterell and rice. then we present the formulation of a stochastic model that takes into account the material heterogeneity and uses results regarding kinked cracks , and extend the -criterion to heterogeneous materials .we then specialize to the case and discuss the roughness of the resulting crack surfaces .thanks to an exact result of the model in that limit , we are able to obtain analytically the form of the power spectrum of the paths , and offer an alternative interpretation of experimental results .we conclude by discussing the implication of our result on the methodology of self - similarity analysis by suggesting a new measurement bias that has not been considered previously .the key ingredient that allows a general discussion of cracks in a brittle material is the fact that the static stress field in the vicinity of the crack tip has the following universal expansion where are polar coordinates with located at the crack tip , and are known functions describing the angular variations of the stress field components . in this expansion , ( ) and are the stress intensity factors ( sifs ) and the nonsingular -stress respectively .this singular behaviour of the stress field justifies the expectation that the crack - tip dynamics could be formulated in terms of the sifs and the -stress alone . 
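the displayed form of this expansion did not survive extraction ; the standard williams form , offered here as a plausible reconstruction consistent with the description above , reads

\[ \sigma_{ij}(r,\theta ) = \frac{k_1}{\sqrt{2\pi r}}\,f^{(1)}_{ij}(\theta ) + \frac{k_2}{\sqrt{2\pi r}}\,f^{(2)}_{ij}(\theta ) + t\,\delta_{ix}\delta_{jx } + o(\sqrt{r } ) \ , \]

where \( (r,\theta ) \) are polar coordinates centred at the crack tip , \( k_1 \) and \( k_2 \) are the mode i and mode ii stress intensity factors , \( f^{(1,2)}_{ij} \) are the known angular functions and \( t \) is the nonsingular -stress acting parallel to the crack faces .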
in a 2d material ,well - established criteria for quasi - static crack propagation are the griffith energy criterion and the principle of local symmetry ( pls ) .this is expressed by the following equations of motion \ ; , \label{eq : grif}\\ k_2 & = & 0 \qquad \qquad \qquad \qquad \qquad \qquad [ { \rm pls } ] \ ; , \label{eq : pls}\end{aligned}\ ] ] where is the lam shear coefficient and is the fracture energy .( [ eq : grif ] ) states that in order to induce crack propagation , the energy release rate must be large enough to create new crack surfaces .( [ eq : pls ] ) imposes the symmetry of the stress field in the vicinity of the crack tip , such that it is locally under a pure opening mode .therefore , the crack path is mainly selected by the pls while eq .( [ eq : grif ] ) controls the intensity of the loading necessary to allow propagation .other propagation criteria have been proposed in the literature , notably the maximum energy release rate criterion which states that the crack extends in the direction that maximizes the rate of energy release .however , the pls has been shown to be the only self - consistent one .a linear stability analysis of quasi - static two - dimensional crack propagation in homogeneous materials based on these concepts has been performed by cotterell and rice , which gave rise to the -criterion .this criterion states that for , a tensile crack propagation becomes unstable with respect to small perturbations around the straight path .otherwise the straight crack propagation is stable .experimentally , the -criterion is known to hold , at least for cases with , that is when the prediction is that cracks become unstable . however, even when the crack path can become unstable in some situations ( for example , the thermal crack problem in which the crack path exhibits an oscillatory instability ) .in addition to the -criterion , cotterell and rice predicted that for a semi - infinite straight crack experiencing a sudden local shear perturbation the subsequent crack path scales as in the stable regime , while in the unstable case .however , the result is only marginally stable ( i.e. for large s ) , thus reflecting a limited aspect of stability .this situation calls for a revision of the stability properties of slow cracks , especially in heterogeneous materials .based on these observations we propose an equation describing the propagation of a crack in a disordered medium that allows to predict its path and study its stability . our model is based on a description where all the relevant information is encoded in the sif s and in the -stress .the crack propagation criteria used are the griffith energy balance ( [ eq : grif ] ) and the principle of local symmetry ( [ eq : pls ] ) . the physical picture of a propagating crack in a disordered material in the current formulationis summarized in fig .[ fig : curved ] .it assumes that the crack tip propagates smoothly until it encounters a heterogeneity that changes locally the fracture energy and induces a local shear perturbation . as a result, the crack forms a kink at a prescribed angle depending on the local perturbation induced by the heterogeneity . in order to calculate this angleit is necessary first to introduce some results regarding kinked cracks. corresponds to location of the heterogeneity.,width=302 ] consider an elastic body containing a straight crack with a kinked curved extension of length and kink angle . 
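( before continuing with the kinked - crack analysis , note that the two propagation criteria quoted at the start of this passage came out garbled ; a standard plane - strain reconstruction , hedged rather than quoted , is

\[ \frac{1-\nu}{2\mu}\left ( k_1 ^ 2 + k_2 ^ 2 \right ) = \gamma \qquad [ { \rm griffith } ] \ , \qquad \qquad k_2 = 0 \qquad [ { \rm pls } ] \ , \]

with \( \mu \) the lamé shear coefficient , \( \nu \) the poisson ratio and \( \gamma \) the fracture energy . )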
using standard assumptions related on the scaling of the stress field in the vicinity of the crack tipit is shown that the shape of the local crack extension should be given by where is the slope of the kink and the curvature parameter quantifies the curved extension of the kink .moreover , it is shown that the static sifs at the crack tip after kinking are related to the sifs before kinking and to the -stress via \sqrt{s}+o(s)\ ; , \label{eq : exp}\ ] ] where , and are universal functions in the sense that they do not depend on the geometry of the body nor on the applied loading .they depend only on the kink angle and were computed in .note that eq .( [ eq : exp ] ) shows that in general unless a special symmetry sets it to zero .therefore , applying the principle of local symmetry means that the expansion ( [ eq : exp ] ) should vanish order by order in . in the presence of a small shear loading ( ) the extension of the initial straight crackmust therefore satisfy where the expansions of the functions , and for small angles have been used .( [ eq : kink0 ] ) fixes the kink angle that develops due to the presence of shear perturbations , while eq .( [ eq : kink1 ] ) determines the subsequent curvature of the crack path . in order to use eq .( [ eq : kink0 ] ) , one needs to know the sifs just before kinking for an arbitrary broken surface ( with ) . for pure opening loading conditions and using a perturbation analysis around a straight crack ( i.e. a crack parallel to the -axis whosetip coincides with the curved crack located at ) , it can be shown that to first order in one can write and as functionals of through where and are the stress intensity factors and the -stress component of a straight crack located at .also , under pure tensile loading one readily has note that is proportional to , as expected , but also depends on the geometry of the problem via .the superscript refers to quantities corresponding to the configuration of a centered straight crack ( i.e. one that is located at ) .in addition , we will assume , as in , that is independent of , arguing that its variation in space does not modify qualitatively the results .however , a variation in the stress intensity factor should induce a variation in the -stress . indeed , lefm insures that for the same conditions under which eq .( [ eq : k2 ] ) is valid , one has .notice that eq .( [ eq : k2star ] ) reveals an additional source of bias in the stability analysis of cotterell and rice , since the linear perturbation performed in ignores the term proportional to in the expression of . 
at this pointwe introduce the heterogeneities in our model .the source of heterogeneities can be either variations of the elastic moduli in the material , or from residual stresses that were introduced for example by welding , or during the machining of the material .since the stress field is tensorial , these heterogeneities should affect both and independently .the local fluctuations in the toughness denoted by have a finite mean .however , the local shear fluctuations , denoted by must have a vanishing average because of pls .assuming that the crack advance between heterogeneities obeys the principle of local symmetry and using eqs .( [ eq : kink0],[eq : k2],[eq : k2star ] ) and the discussion above , one concludes that the local kinking angle is determined by + o\left ( \delta k_\ell^2 , \delta k_\ell^2 \right ) \label{eq : dtheta},\end{aligned}\ ] ] where is a length - scale that depends on the geometry of the configuration .for example , it is proportional to the width of the strip in the case of a finite strip geometry - a configuration that is often adopted in experiments .the stage is set now to write the equation governing the crack path evolution .since we are dealing with linear perturbations , we will also neglect the curvature parameter introduced in eq .( [ eq : shape ] ) , and assume that the crack extension after kinking is always straight .this assumption is justified when or when the distances between successive kinking events are small ( which is equivalent to high density of heterogeneities ) .it leads to the configuration depicted in fig .[ fig : model ] , from which one can easily read the equation . in the limit of small equal intervals between successive heterogeneities , one has , then and eq . ( [ eq : dtheta ] ) leads to where the indexes have been replaced by the position through the passage to the continuum limit .( [ eq : motion ] ) reveals two noise terms that can be redefined by also , let us use the geometrical scale of the configuration as a unit length and define the constant then , eq .( [ eq : motion ] ) becomes the following dimensionless stochastic equation note that by choosing the length scale , the total extension of the crack is not given .also , can be either positive or negative depending on the sign of the -stress .the discontinuous nature of crack propagation in a disordered material imposes a detailed discrete microscopic description of the influence of heterogeneities .the resulting stochastic integro - differential equation of the crack path should be derived as the continuum limit of the discrete model .this approach is different from previous pure continuum modeling that implicitly assumes smoothness of the paths and _ one _ source of noise that is introduced _ a posteriori_. in opposite , eq .( [ eq : start ] ) shows that our approach leads to derivatives of _ two _ noise terms , one of which is multiplicative and the other is additive , without imposing them _ a priori_. the properties of the noise terms are prescribed by the original distribution of heterogeneities in the material that may exhibit long - range correlations as well as anisotropy . although such features may be important and in order to remain general , we assume short range correlations and thus model the noise terms as independent gaussian white noises note that and that enter eq .( [ eq : start ] ) are conserved random terms ( i.e. , derivatives of white noises ) modeling the fluctuations in the local toughness and the local shear respectively . 
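since the correlators themselves were stripped from the previous sentence , their presumable white - noise form ( a hedged reconstruction , with \( d_1 \) and \( d_2 \) the strengths of the toughness and shear disorder respectively ) is

\[ \langle \eta_i ( x ) \rangle = 0 \ , \qquad \langle \eta_i ( x ) \, \eta_j ( x ' ) \rangle = d_i \, \delta_{ij } \, \delta ( x - x ' ) \ , \qquad i , j \in \{ 1 , 2 \} \ . \]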
in the following, we will show that the simple scenario of uncorrelated disorder already offers a rich spectrum of results . including additional features in the disorder , such as long - range power law correlations ,could lead to richer phenomena and is left as a possible extension to the present analysis . to be consistent with the derivation of the model one needs both the noise amplitudes and to be small .however , from eq .( [ eq : start ] ) one can see that varying the amplitude is equivalent to multiplying by a constant , i.e. to fixing the overall scale of the height fluctuations , which does not influence the roughness of the curve .since the scaling properties are not affected , the value of will not be reported in the following and will be the only pertinent noise parameter . as a result be presented in arbitrary units .regarding the -stress , one expects to be of order in the framework of lefm .as a first observation , if the material is homogeneous or weakly disordered , one has and the solution of eq .( [ eq : start ] ) is simply .the addition of suitable initial conditions allows recovering the zero order solution corresponding to a centered straight crack path . eq . ( [ eq : start ] ) should be understood as resulting from a perturbation analysis of the crack trajectory around the solution in the absence of heterogeneities that is selected by the pls .this should be contrasted with the stability analysis of a straight crack in a homogeneous material with respect to other solutions that satisfy also the pls .an example of such a situation is the thermal crack problem where oscillatory crack paths exist in addition to the centered straight one and become more stable than the straight configuration at a given well defined threshold .the crack propagation there is always smooth and is very different from the stability encountered in disordered materials , which is due to the large density of heterogeneities that induces discontinuous propagation via linear segments between the heterogeneities . in order to study crack paths that result from eq .( [ eq : start ] ) , we start with a numerical integration of it .the initial condition will be always chosen to be a straight semi - infinite crack , for . in a discretized form , eq .( [ eq : start ] ) becomes for , using as the uniform distance between heterogeneities and as initial conditions . eq .( [ eq : start - dis ] ) is a discretized version of eq .( [ eq : start ] ) that corresponds to the it prescription and was chosen by the discrete manner by which eq .( [ eq : start ] ) was derived .also , the quantities are taken as independent random numbers , equally - distributed in the segment ] .thus corresponds to the density of heterogeneities and a small probes the regime of highly disordered materials .[ fig : zoom ] shows an example of a crack grown using eq .( [ eq : start - dis ] ) .the inset shows a zoom into a small part of the path , which may be suggestive of self - similar properties to the naked eye .however , before a thorough study of this aspect , a stability analysis la cotterell and rice should be performed . , and .[ fig : zoom],width=377 ] as mentioned above , the classical -criterion of cotterell and rice states that straight tensile crack propagation in homogeneous materials become unstable when . 
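the explicit update rule of eq . ( [ eq : start - dis ] ) is also lost in this copy . purely to illustrate the kind of itô - type integration described above , a python sketch might look as follows ; the drift term and all coefficients below are placeholders chosen for the illustration , not the authors' values :

import numpy as np

rng = np.random.default_rng(0)

n, delta = 20000, 0.01                  # number of heterogeneities and their uniform spacing
t_star, amp1, amp2 = 0.0, 0.2, 0.05     # t-stress-like coefficient and (illustrative) noise amplitudes

eta1 = rng.uniform(-amp1, amp1, n)      # local toughness fluctuations, zero mean
eta2 = rng.uniform(-amp2, amp2, n)      # local shear fluctuations, zero mean

h = np.zeros(n)                         # crack height profile
s = np.zeros(n)                         # local slope h'(x); straight initial crack: h = s = 0
for i in range(n - 1):
    # placeholder drift: relaxation towards the straight path plus a t-stress-like term;
    # in this toy form a positive t_star amplifies the slope, mimicking the t-criterion
    drift = -(s[i] + h[i]) + t_star * s[i]
    # ito prescription: noises are evaluated at the current point and each
    # heterogeneity gives the slope a small finite kick
    s[i + 1] = s[i] + delta * drift + eta1[i] * (s[i] + h[i]) + eta2[i]
    h[i + 1] = h[i] + delta * s[i + 1]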
we therefore simulated crack paths with various values of in order to test this criterion within our model .we first consider the case with only one shear perturbation at , after which the local toughness is and only is allowed to fluctuate .the results are presented in fig .[ fig : t - crit]a .essentially , we recover the -criterion , namely an instability occurs for ( or equivalently ) .it turns out that by adding the shear perturbations ( i.e. , ) , the same scenario is recovered ( see fig .[ fig : t - crit]b ) .one noticeable difference is that for positive values of , the divergence of the path seems to accelerate due to the presence of the shear perturbations .still , paths that do not experience shear perturbations ( and ) will not destabilize in their presence .it should be mentioned that within the approach of cotterell & rice it is not possible to follow more than one kink , as would certainly be the case in a disordered material where cracks propagate via many consecutive kinking events . in summary ,our results confirm the -criterion for homogeneous materials and extends it to disordered systems .it is shown that straight crack propagation is unstable for and stable elsewhere .moreover , in the stable case the marginal stability has been cured by the suppressing the square root behaviour predicted in .-stress and with , .( a ) , i.e. ) and ( b ) .note that for the cases , the range of the -axis is wider emphasising the exponential increase of the amplitude of the oscillations in those cases .[ fig : t - crit],title="fig:",width=340 ] -stress and with , .( a ) , i.e. ) and ( b ) . note that for the cases , the range of the -axis is wider emphasising the exponential increase of the amplitude of the oscillations in those cases .[ fig : t - crit],title="fig:",width=340 ]in view of these results , from now on we will restrict our study to stable paths - as we are interested in crack roughness .we first focus on the case , since it is simple enough to allow for definite numerical and analytical results , and at the same time contains the necessary complexity .this claim is based on scaling arguments - power counting which is also supported by a numerical study . to put it simpler , as long as the crack path is stable , the presence of the -stress does not change dramatically the shape of the crack paths . at the end of the paper, we will come back to this point and show how a non zero -stress influences the result . for the case of vanishing -stress ( in eq .( [ eq : start ] ) ) , exact results have been obtained previously . here , these results will be summarised briefly and extended .one technical difference is that in this work lengths are scaled using the width and not using the length ; as a result the dimensionless parameter defined in is set to here .essentially , our analysis is divided into two parts .the idea is to identify first the possible responses of the crack path to one shear perturbation , and only then to generalize to a superposition of many shear perturbations .qualitatively , in reaction to a single shear perturbation , a straight crack deviates from its former direction by an angle which is proportional to the strength of the perturbation and then starts to relax to its original form .interestingly , it is found that the crack path can relax in two different ways : either by decaying exponentially ( inset of fig .[ fig : eta1]a ) or by decaying exponentially while oscillating ( inset of fig .[ fig : eta1]b ) . 
this behaviour can be understood from the study of the logarithmic derivative of the crack path , , which becomes stationary during the relaxation .a fokker - planck equation is derived for , along similar lines to those described in , and the effect of the toughness fluctuations , , on average , can be reproduced by an effective deterministic evolution .this result allows to derive an effective ( coarse grained ) simple equation of motion in the presence of a single shear perturbation ( and ) , namely \label{eq : h - c } \ , \ ] ] where . comparing this with eq .( [ eq : start - dis ] ) one concludes that the averaged equation ( [ eq : h - c ] ) is obtained from the full one ( in a non - trivial way explained in ) by simply replacing the noise term with a constant , , that is always negative ( even though is equally positive and negative ) , and proportional to its variance .this nontrivial result shows that the effect of the local toughness fluctuations is not so dramatic on the shape of the crack , apart from its constant variance and of course apart from setting a relevant scale for the energy that has to be invested in making the crack grow .however , one should be careful with the interpretation of eq .( [ eq : h - c ] ) , as it describes only mean quantities , which does not imply that each realization behaves exactly the same .now , one can easily solve eq .( [ eq : h - c ] ) with the initial conditions ( i.e. one shear perturbation at the origin only ) . for get this result shows why two kinds of responses to an initial perturbation are possible . the solutions of eq .( [ eq : h - c ] ) can exhibit either an exponential decay or damped oscillations depending on the sign of .when , simply decays exponentially , while for , the hyperbolic sine becomes an oscillating function , and thus we find an oscillatory relaxation . since traditionally , noisy data are analysed in fourier space by looking for example at the power spectrum , it would be interesting to obtain an analytical expression for it as well where is the fourier component of . in figs . [fig : eta1]a-[fig : eta1]b below we compare the result of the averaged power spectrum over simulated paths ( all with the same parameters but different realizations of the noise ) for the two cases ( damped oscillations ) and ( i.e. , simple exponential decay ) . as can be seenthe theoretical curve is in very good agreement with the numerical result over many decades . and .the insets show examples of such paths , while the main figures show the power - spectra averaged over realizations of the noise .dashed curves are the corresponding theoretical curves . ( a )the case : results produced using , with initial conditions .( b ) the case : results produced using , , with initial conditions .[ fig : eta1],title="fig:",width=302 ] and .the insets show examples of such paths , while the main figures show the power - spectra averaged over realizations of the noise .dashed curves are the corresponding theoretical curves .( a ) the case : results produced using , with initial conditions .( b ) the case : results produced using , , with initial conditions .[ fig : eta1],title="fig:",width=302 ] a natural step forward is to study crack propagation in a regime where there are many shear perturbations .this of course amounts to retaining the additive noise in eq .( [ eq : start ] ) . 
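the averaged equation ( [ eq : h - c ] ) itself did not survive extraction . one linear form that is consistent with the \( \sqrt{c(c-4 ) } \) combinations appearing in eq . ( [ eq : pdf4 ] ) below , offered here only as a hedged reconstruction , is

\[ h '' ( x ) + c \, h ' ( x ) + c \, h ( x ) = 0 \ , \qquad \lambda_\pm = -\frac{c}{2 } \pm \frac{1}{2}\sqrt{c ( c - 4 ) } \ , \]

so that a single shear perturbation relaxes monotonically when \( c > 4 \) ( both roots real and negative ) and through damped oscillations when \( 0 < c < 4 \) ( complex conjugate roots ) , which matches the two relaxation scenarios described above .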
unlike the fluctuations in the local toughness ,the shear perturbations can not be modeled by a constant .this term seems crucial for creating the random patterns that are observed for fracture surfaces in nature .interestingly , varying the various parameters results in rather different patterns , as shown in fig .[ fig:0.6]a-[fig:0.8]a .moreover , when analyzed using the real space methods , such as the min - max or the rms method ( see appendix ) , one can produce various values of roughness exponent which depend on the parameters of the model .[ fig:0.6]b-[fig:0.8]b show the analysis the two crack paths and show that one can obtain , for example , values of roughness exponent and that can be found in literature . and .( b ) results of a min - max and rms analysis that yields over more than decades .( c ) the power spectrum of the crack path and a theoretical prediction for it based on the parameters of the model averaged over realizations .[ fig:0.6],width=302 ] and .( b ) results of a min - max and rms analysis that yields over more than decades .( c ) the power spectrum of the crack path and a theoretical prediction for it based on the parameters of the model averaged over realizations .[ fig:0.6],title="fig:",width=302 ] and .( b ) results of a min - max and rms analysis that yields over more than decades .( c ) the power spectrum of the crack path and a theoretical prediction for it based on the parameters of the model averaged over realizations .[ fig:0.6],title="fig:",width=302 ] and .( b ) results of a min - max and rms analysis that yields over a decade and a half .( c ) the power spectrum of the crack path and a theoretical prediction for it based on the parameters of the model averaged over realizations .[ fig:0.8],width=302 ] and .( b ) results of a min - max and rms analysis that yields over a decade and a half .( c ) the power spectrum of the crack path and a theoretical prediction for it based on the parameters of the model averaged over realizations .[ fig:0.8],title="fig:",width=302 ] and .( b ) results of a min - max and rms analysis that yields over a decade and a half .( c ) the power spectrum of the crack path and a theoretical prediction for it based on the parameters of the model averaged over realizations .[ fig:0.8],title="fig:",width=302 ] however , when looking at the power - spectrum of each crack path there is no longer a simple scaling picture , as in the min - max or rms plots . for both cracks , the power spectrum always begins with a plateau for small values of , but for larger values of there is nt an easy way to determine a slope , which seems to vary over different values .this phenomenon is traditionally interpreted as a crossover between different regimes characterized by different roughness exponents in the analysis of experimental data . in view of the analytical results which we obtained in the previous section we argue for a different and simpler scenario .the starting point is eq .( [ eq : start ] ) with , supplemented with the initial condition and . in this equation is just a nonhomogeneous term .this means that once we have a solution for the homogeneous equation , we can build a special solution that solves the nonhomogeneous part . 
recalling that and are independent random variables, we conclude that , as before , averaging over realizations of the local toughness fluctuations amounts to replacing by .this yields + \eta_2'(x ) \label{eq : h2-c } \ .\ ] ] using the initial conditions , the solution of this equation is given by where is the solution to the homogeneous problem given by the power spectrum of this solution yields the expression this expression is different from the expression given by eq .( [ eq : h - cps ] ) as there is a term in numerator , which implies a tail of in the spectrum .the coefficients in eq .( [ eq : h2-cps ] ) are determined from the fourier transform of a stationary signal after cutting out the transient regime , that is .this leads to a system with effective random initial conditions at . since is chosen in the steady regime the statistics of and are known , and an analytical expression of the power spectrum can be obtained . figs . [ fig:0.6]c-[fig:0.8]c shows that despite eq .( [ eq : h2-cps ] ) agrees very well with the simulations data for the power spectrum , one might be tempted to fit it with a power law ansatz . however , apart from a tail at large s , which yields a roughness exponent at small length scales , eq .( [ eq : h2-cps ] ) tells us that there is no self - affine behaviour of fracture surfaces at intermediate length scales , a simple crossover is taking place .the result contained in eq .( [ eq : h2-sol ] ) allows us to derive the full probability distribution function of as defined in eq .( [ eq : delh ] ) , which is becoming a popular measure for self - affinity . using eq .( [ eq : h2-sol ] ) and some simple manipulations one can rewrite as } \label{eq : delh2 } \ , .\ ] ] here , the left most point has been pushed to in order to ensure stationarity .then , the required pdf is formally given by } } \right ) } \right\rangle_{\eta_2 } \label{eq : pdf1 } \ , .\ ] ] using the fourier representation of the delta distribution , eq .( [ eq : pdf1 ] ) becomes } } } \right\rangle _ { \eta _ 2 } dq } \label{eq : pdf2 } \ , .\ ] ] since the term in the exponent is a linear combination of independent random terms one gets } } } \right\rangle = e^{-\frac{1}{2}q^2 \left\langle { \left [ { \int _ { - \infty } ^0 { h'_0 \left ( { - x } \right)\left [ { \eta _ 2 \left ( { x + \delta x } \right ) - \eta _ 2 \left ( x \right ) } \right]dx } } \right]^2 } \right\rangle } \label{eq : pdf3 } \ , , \ ] ] namely a gaussian , and one just needs to calculate its variance } } \right]^2 \right\rangle_{\eta_2 } \nonumber \\ & & = \frac{d_2}{c}\left[1 - \exp\left\ { - c\delta x { \textstyle{{4 - c\cosh \left [ { \frac{1}{2}\sqrt { c(c - 4 ) } \delta x } \right ] + \sqrt { c(c - 4 ) } \sinh \left [ { \frac{1}{2}\sqrt { c(c - 4 ) } \delta x } \right ] } \over { c - 4 } } } \right\ } \right ] \label{eq : pdf4 } \ , , \end{aligned}\ ] ] and finally ^ 2 } \label{eq : pdf5 } \ , .\ ] ] recall that for the underlying shape to be self affine the pdf must obey two properties ( see eq .( [ scaling4 ] ) ) : first , it should have the same form for all the scales .this property is explicitly obeyed by the derived pdf ( [ eq : pdf5 ] ) .the second requirement is that the rms scales as .this property is not obeyed here , since meaning that there is a slow crossover from a square - root behaviour , for small scales , to a constant , for large scales , and strictly speaking the path is not self - affine .sampled from a crack path simulated using and over time points on a semi - log scale . 
for each a pure gaussianis plotted as a guide to the eye .note that the various distributions are shifted logarithmically horizontally for visual clarity .( b ) a comparison of the moments to those expected for a gaussian distribution for . the ratio is presented and the dashed lines mark a deviation interval .[ fig:0.6pdf],title="fig:",width=302 ] sampled from a crack path simulated using and over time points on a semi - log scale . for each a pure gaussianis plotted as a guide to the eye .note that the various distributions are shifted logarithmically horizontally for visual clarity .( b ) a comparison of the moments to those expected for a gaussian distribution for . the ratio is presented and the dashed lines mark a deviation interval .[ fig:0.6pdf],title="fig:",width=302 ] in order to check these theoretical predictions , a propagating crack has been simulated over a very long interval , using the parameters presented in fig .[ fig:0.6 ] , in order to produce the pdf for various values of .as can be seen , the height distribution seems to exhibit gaussian statistics as predicted . in order to verify this more quantitatively, we compare the moments of normalized by the moment , namely to those obtained by the gaussian distribution , namely up to order .the results are presented in fig .[ fig:0.6pdf]b , and supports the gaussianity of the distributions .finally , we compare the width of distribution to the one given by eq .( [ eq : pdf4 ] ) . as can be seen in fig .[ fig:0.6pdfsigma ] , although the theoretical prediction captures the form and has the right order of magnitude , it clearly deviates from the result of the simulation .in fact , it seems that the prefactor in eq .( [ eq : pdf4 ] ) ( i.e. ) underestimates the measured one , such that by tuning this prefactor one can reproduce the right behaviour over the whole range .a possible reason for this difference is due to discretization and finite - size scaling .another reason could be the fact that we derive the pdf by first averaging over ( and thus obtaining the effective equation ( [ eq : h2-c ] ) ) , and only then averaging over , while in reality these two noisy terms fluctuate simultaneously at the same scale .since the pdf is a sensitive probe this delicate issue is pronounced . at any rate ,the simulated confirms the statement that there is a crossover from a square - root behaviour to a constant , and thus no real self affinity exists .obtained from the simulation ( with the parameters defined in fig .[ fig:0.6pdf ] ) to the one calculated in eq .( [ eq : pdf4 ] ) .[ fig:0.6pdfsigma],width=302 ] let us now study of the effect of the -stress term in eq .( [ eq : start ] ) .still , we consider only the stable regime , that is when . in real experimental situations ,one expects and most physical systems can be well described by the case .indeed our analysis and numerical results show that eq .( [ eq : start ] ) with exhibits scaling behaviour that is very close to the case . nevertheless , in order to demonstrate the impact of a non - zero -stress, a large value of is used .[ fig : rought ] shows results of crack paths grown with and the corresponding power spectrum . 
, and .( b ) rms and min - max curves , the fit is just a guide to the eye ( c ) the power spectrum of the crack path over realizations and a theoretical prediction based on the heuristic approximation given by eq .( [ eq : h3-cps ] ) .[ fig : rought],width=302 ] , and .( b ) rms and min - max curves , the fit is just a guide to the eye ( c ) the power spectrum of the crack path over realizations and a theoretical prediction based on the heuristic approximation given by eq .( [ eq : h3-cps ] ) .[ fig : rought],title="fig:",width=302 ] , and .( b ) rms and min - max curves , the fit is just a guide to the eye ( c ) the power spectrum of the crack path over realizations and a theoretical prediction based on the heuristic approximation given by eq .( [ eq : h3-cps ] ) .[ fig : rought],title="fig:",width=302 ] there are no analytical results for .however , one expects a simplification in the spirit of the previous sections , i.e that averaging over would yield an effective langevin equation of the form + \eta_2'(x ) \label{eq : h3-c } \ , \ ] ] where and are renormalized deterministic prefactors , that would in general depend on and .the spectrum of the solution averaged over would then be given by by comparing with the numerical results we find evidence indicating that one needs to take , where is defined similarly to the case . using this modified form ,the agreement with the simulation is good ( see fig . [fig : rought]c ) . note that in the expression ( [ eq : h3-c ] ) the term related to is not dominant for small s neither for large s which means that it does not modify the shape of the spectrum in a dramatic way .this is consistent with the power - counting argument mentioned before .the probability distribution functions for height increments can be easily computed numerically .[ fig : pdft ] reports the same details as for the case .[ fig : pdft]a-[fig : pdft]b show deviations from gaussianity at large scales meaning that the pdf of the height differences are not self - similar .since the first requirement for self - affinity is violated , it is clear that an analysis of the scaling of the variance is biased , and the results it yields should be taken prudently .[ fig : pdft]c reports such an analysis , i.e. , and interestingly shows that the deviations from gaussianity are sufficient to produce similar artifacts to those seen in the previous sections , namely a seemingly power law behaviour for small values of .we verified that the power spectrum of the crack paths does not manifest the same bias , but rather reproduces a behaviour at the tail as expected from eq .( [ eq : h3-cps ] ) ( implying ) . anyway , the fact that the crack paths becomes flat at the large scales is independent of the method used . , and on a semi - log scale . for each plot in addition a pure gaussian as a guide to the eye .note that we shifted the various distributions logarithmically for visual clarity .( b ) a comparison of the moments to those expected for a gaussian distribution for , revealing some deviations from gaussianity .the dashed lines marks a deviation interval .( c ) the rms obtained from the simulation on a log - log scale .the first part of the curve that seems to follow a power law is fitted and an exponent close to is obtained .[ fig : pdft],title="fig:",width=302 ] , and on a semi - log scale . 
so far we have discussed the stability of crack paths in heterogeneous media and the possible shapes they can take . it turned out that in the stable regime ( ) one can get many possible types of patterns depending on the parameters of the model , including random patterns that seem to resemble self - similar shapes . actually , the analysis whose results are summarized in figs . [ fig:0.6]b-[fig:0.8]b supports this observation and suggests that not only can one produce self - similar shapes but also a large family of such shapes that seem to span a wide range of different roughness exponents . however , the analysis of the corresponding power spectra of these crack paths yields a different conclusion . a power spectrum of the form ( [ eq : h2-cps ] ) or ( [ eq : h3-cps ] ) means that , strictly speaking , the shape is not self - affine . to be more precise , it implies the existence of a self - affine structure at small scales , described by a roughness of ( deduced from the large - behaviour of the spectrum , i.e. ) , which is superimposed on a decaying function or on damped oscillations at larger scales , so that the spectrum crosses over to a flat behaviour for small s ( possibly with a peak at some particular as in fig . [ fig : eta1]b ) . in this context , an attempt to fit a straight line to the power spectrum for intermediate values of might yield a seemingly reasonable fit for one or two decades but is certainly unjustified . so how does one settle the difference between the results that come from the real - space and the fourier - space approaches ? the answer to this question is not restricted to crack surfaces and is related to a general discussion of the reliability of self - affine measurements . a starting point is the work of schmittbuhl _ et al . _ , which reviews various methods to extract the roughness exponent from measured or simulated profiles .
in , the authors compare between various methods with respect to different artifacts that can appear in the data , such as misorientation or signal amplification .interestingly , they came up with a useful sensitivity assessment of each method with respect to the biases .however , they did not discuss the case when a self - affine structure is imposed on an oscillating background , or more generally on a bias which is not translation invariant .for this case , we claim that the real - space methods ( such as min - max and rms ) are highly vulnerable , while the power - spectrum is , naturally , very robust .an extreme example is given below in fig .[ fig : sin ] , where a pure sinusoidal path , which is clearly not a self - affine profile , is analyzed using the min - max and the rms methods .the resulting curves misleadingly reveal decades of self - affine behaviour ( actually , an arbitrary number of decades can be devised easily ) , with a roughness exponent . in contrast , a fourier analysis of this profile gives essentially a delta function localized at the wavelength of the oscillations .this artifact has not been discussed in nor in other reviews of existing methods for measuring the roughness exponent such as .decades with a roughness exponent .[ fig : sin],width=302 ] back to our model , this extreme example clearly favors fourier based analysis over real - space approaches .moreover , it can also explain why by varying the parameters of the model and by using a real - space analysis , we could obtain a whole range of s between ( which is the roughness exponent of a simple random walk ) and .this possibility is of course excluded when looking at the power spectrum where we have clear predictions for the its form .finally , let us discuss the approach of examining self - affinity through an analysis of the whole pdf of height differences . in principle , this method allows to check in a very precise way the self - affinity of crack paths and yields the roughness exponent . however , this method is not easy to implement since it needs a large amount of data that is typically much more than the amount of data that a usual experiment can yield . in our simulationswe could use relatively long cracks of pixels with a reasonable effort .even with this large amount of raw data it was not easy to get rid of artifacts in the pdf such as over estimates of the roughness exponent and detection of deviations from gaussianity , while the power spectrum could perform better already with smaller samples .this means that while in principle the pdf method is superior to other ones , when discussing real samples which are always of a limited precision and resolution , other methods such as the analysis of the power spectrum ( resulting only from a -point statistics ) are usually better .in this paper , we studied the stability and roughness of slow cracks in 2d disordered materials with respect to the disorder .our approach relies on a solid ground in both mechanics and statistical physics . after proposing an equation of motion of a crack tip in a 2d disordered material ,we first generalize the well known -criterion predicted to disordered materials and then describe the roughening of crack paths . using this equation of motion, we observe numerically various possible patterns , including oscillating , decaying and rough paths .we analyze the rough cracks using commonly used techniques . 
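the sinusoid artifact discussed above is easy to reproduce numerically ; the following python sketch ( ours , not the authors' analysis code ) applies a min - max analysis and a fourier transform to a pure sine wave :

import numpy as np

x = np.linspace(0.0, 200.0 * np.pi, 200_000)    # 100 oscillation periods
h = np.sin(x)                                   # pure sinusoidal "path", clearly not self-affine

# min-max (variable bandwidth) analysis: average height range within windows of size w
window_sizes = np.unique(np.logspace(1, 4, 30).astype(int))
widths = []
for w in window_sizes:
    n_win = len(h) // w
    segments = h[: n_win * w].reshape(n_win, w)
    widths.append(np.mean(segments.max(axis=1) - segments.min(axis=1)))

# for windows short compared to the period the range grows roughly linearly with w,
# so a naive power-law fit returns an apparent roughness exponent close to 1
zeta_apparent = np.polyfit(np.log(window_sizes[:12]), np.log(widths[:12]), 1)[0]
print(f"apparent roughness exponent from min-max: {zeta_apparent:.2f}")

# the power spectrum, in contrast, is essentially a single peak at the oscillation
# frequency and rules out self-affinity at a glance
spectrum = np.abs(np.fft.rfft(h)) ** 2
print("dominant fourier mode index:", int(np.argmax(spectrum[1:]) + 1))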
by using real - space methods we are able to obtain a whole range of roughness exponents , while the power - spectrum of the paths does not support these findings .thanks to an exact result we are able to predict the power spectrum analytically for ( eq .( [ eq : h2-cps ] ) ) , and approximately for ( eq .( [ eq : h3-cps ] ) ) .these analytical results suggest that the shapes are not self - affine , but rather flat objects on the large scale and random - walk like objects ( with ) at small scales .we conclude that in such situations real - space methods are very vulnerable ( as they mix the scales and can yield seemingly self - affinity with almost any between and ) , while the fourier - space approach is much more appropriate and thus preferred .the last point is not only relevant to the analysis of cracks , but applies to analysis of self - affine shapes where an oscillating background is likely to exist .we hope that this will contribute to the general discussion of reliability of self - affine measurements . from an experimental point of view , it could be interesting to use expressions like eqs .( [ eq : h2-cps ] ) and ( [ eq : h3-cps ] ) to fit the experimental data of 2d rough cracks . an important prediction implied by these expressionsis that the scale at which the crossover to a flat shape occurs is roughly , that is a geometric mean of a geometric length - scale and a disorder length - scale ( which is the density of disorder times the amplitude of the toughness fluctuations ) .this suggests that by varying the width of the strip in which the crack propagates and/or tuning the density and amplitude of the heterogeneities one can change the crossover scale .another interesting prediction is that by varying the sign of the combination from positive to negative ( for example by reducing the density of heterogeneities ) one can switch between a pure exponential decay of the shear perturbations to an oscillatory one . in the presence of many shear perturbations ,this property can be easily observed as a peak appearing in the power spectrum , as the transition to an oscillatory response occurs .it could be the case that some generalization of this effect is responsible to the transition reported in from oscillatory to rough cracks .it could be that an extension of the approach developed here , together with the recent results of to account for thermal effects , provide a proper explanation for this transition .an open question , not developed in this work , is how does the material micro - structure affect the roughness of the cracks paths . here, we considered only short - ranged disorder , modeled by -correlated noise for both kinds of disorder and ( corresponding to toughness and shear fluctuations respectively ) .it is clear that considering off - lattice heterogeneities and taking into account the different correlation properties of and would make the paths and their resulting power spectra more complex - a well known phenomenon in stochastic systems .furthermore , long - range power law correlations that are known to exist in certain materials such as quasi - crystals , porous materials and others could yield long range correlations in the toughness / shear fluctuations , and thus lead to yet richer phenomena . 
actually , a recent discrete numerical model of fracture has shown that long - range correlations in the disorder and its anisotropy can lead to non - universal scaling exponents .these results may change dramatically future approaches to problems of crack propagation in disordered materials .finally , a fundamental open question is whether this work could help to understand the enigma of crack roughness in 3d samples .it is hoped that the in - plane roughness , mentioned above , and studied experimentally , may provide an important starting point .more precisely , modeling the simplified 3d problem may be combined with the out - of - the - plane fluctuations ( namely the current work ) to yield a full 3d theory .various elements needed in that direction can be found in the literature but such a theory is still far from being formulated .we thank b. derrida , a. boudaoud , k. j. mly and s. santucci for fruitful discussions .for any random surface , one of the most studied quantities to characterize its geometry is the roughness exponent .for one dimensional crack paths embedded in two dimensional materials one single exponent is needed .this exponent is also known as the hurst exponent .there are many methods to measure self - affine exponents and we will shortly review here four that are commonly used .let us parameterize the path using the function with . using the variable bandwidth methods , the roughness exponent can be related to the scaling of the width in the -direction as a function of a window size in the -direction .more specifically one expects .what is left is to specify a way to define the width .a possible definition is given by the standard deviation ( or rms ) of the height profile ^ 2 dx } \sim \left(\delta x\right)^{\zeta } \label{scaling1}\ .\ ] ] we will refer to it as the rms ( root mean square ) method .another variable bandwidth method is the min - max method defined by a more elaborated way to extract the roughness exponent , which allows to test the self - affinity at the same time , is based on examining directly the probability distribution function ( pdf ) of the discrete gradient rather than just looking at its second moment as in the rms method . in order to implement this method , one needs to plot the ( properly normalized ) pdf of for every .if the shape is self - affine two conditions should be met .first , the pdfs emanating from different values of , namely , should collapse on to a single curve .second , the normalization factor of the various pdfs should scale as .these requirements can be summarized by where is the scale factor .an important remark is that the master curve for the pdf does not have to be gaussian ( although it can be ) in order to imply self - affinity , it just has to be the same across all the different scales .although this is a very rigorous and precise method that allows both testing for self - affinity and determining the roughness exponent , it is not always possible to implement it in experimental systems as it requires precise data that spans many orders of magnitude .the last method that we will describe is not based on real - space measurements , but rather on the fourier transform of the path , denoted here as , where is the wave - number .more precisely , the power - spectrum of a self - affine path is expected to scale as 99 d. hull , _ fractography _ ( cambridge university press , cambridge , 1999 ) . k. ravi - chandar and b. yang , j. mech .solids * 45 * , 591 ( 1997 ) . c. guerra , j. scheibert , d. bonamy , and d. 
| we study the stability and roughness of propagating cracks in heterogeneous brittle two - dimensional elastic materials . we begin by deriving an equation of motion describing the dynamics of such a crack in the framework of linear elastic fracture mechanics , based on the griffith criterion and the principle of local symmetry . this result allows us to extend the stability analysis of cotterell and rice to disordered materials . in the stable regime we find stochastic crack paths . using tools of statistical physics we obtain the power spectrum of these paths and their probability distribution function , and conclude they do not exhibit self - affinity . we show that a real - space fractal analysis of these paths can lead to the wrong conclusion that the paths are self - affine . to complete the picture , we unravel the systematic bias in such real - space methods , and thus contribute to the general discussion of reliability of self - affine measurements . |
gravitational waves , the ripples in the fabric of space - time , were predicted by einstein s general theory of relativity .although astronomical observations have inferred the existence of gravitational waves , they have yet to be detected directly .the laser interferometer gravitational - wave observatory ( ligo ) is one of the large - scale gravitational - wave detectors currently being built worldwide .the pre - stabilized laser ( psl ) subsystem is the light source for the ligo detector as shown in figure [ psl - position ] .the output of the psl is modematched into the suspended modecleaner before being coupled into the ligo interferometer . the term _ pre - stabilized _ is used because the laser undergoes two stages of stabilization prior to being injected into the interferometer .the 10-w laser used is configured as a master - oscillator - power - amplifier ( mopa ) , with a 700mw single - frequency , single - mode non - planar ring oscillator used as the master oscillator .the control strategy uses the actuators of the master oscillator in order to stabilize the frequency .power stabilization is achieved by control of the power amplifier output .the psl topology is shown in figure [ psl - parts ] .light from the laser is modematched into a high - throughput , ring fabry - perot cavity called the pre - modecleaner ( pmc ) .the psl has a design requirement that the output be close to the shot - noise limit for 600mw of detected light at the interferometer modulation frequency of 25mhz .as this is beyond the bandwidth of any electronics servo , it is done by passive filtering by the pmc . by appropriate choice of mirror reflectivity ,the pmc acts as a tracking bandpass filter with a pole at the cavity half - bandwidth .one of the pmc mirrors is epoxied to a piezoelectric transducer ( pzt ) to vary the length of the cavity .the servo electronics constantly adjusts the pzt voltage in order to keep the incident light resonant with the cavity .astrophysical models suggest that in order to plausibly detect candidate gravitational - wave sources , the ligo detector must achieve a displacement sensitivity of better than 10 at 100hz .this corresponds to a frequency noise of 10 at 100 hz .the frequency stabilization servo utilizes three frequency actuators inside the 10-w laser . a thermo - electric cooler ( tec ) bonded to the laser gain medium actuates on the laser frequency by thermally changing the optical path length .dc1hz adjustments to the laser frequency are made with the tec .this actuator , modeled as three poles at 0.1hz , has a coefficient of 4ghz / v and is used for large scale adjustments to the laser frequency .also bonded to the laser gain medium is a pzt , which covers dc10khz . 
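the passive filtering and actuator models described above lend themselves to a short numerical illustration . the sketch below ( python / numpy ) evaluates the magnitude response of a single real pole , used twice : once for the pmc acting as a tracking low-pass filter with a pole at its half-bandwidth , and once ( cubed ) for the tec path modelled above as three poles at 0.1 hz . the pmc half-bandwidth value is an arbitrary placeholder rather than a ligo design number ; only the pole structure and the 25 mhz modulation frequency are taken from the text . the pzt and pockels-cell paths discussed next extend this actuator hierarchy to higher frequencies .

```python
import numpy as np

def one_pole_mag(f, f_pole):
    """magnitude of a single real pole: |h(f)| = 1 / sqrt(1 + (f / f_pole)**2)."""
    return 1.0 / np.sqrt(1.0 + (f / f_pole) ** 2)

# hypothetical pmc half-bandwidth: a placeholder value, not a design number
f_half_bandwidth = 1.0e6   # hz (assumed)
f_mod = 25.0e6             # interferometer modulation frequency quoted in the text

pmc_att_db = 20.0 * np.log10(one_pole_mag(f_mod, f_half_bandwidth))
print("passive pmc attenuation at 25 mhz: %.1f db" % pmc_att_db)

# tec frequency actuator, modelled in the text as three real poles at 0.1 hz
def tec_mag(f):
    return one_pole_mag(f, 0.1) ** 3

for f in (0.01, 0.1, 1.0, 10.0):
    print("tec response at %5.2f hz: %.3e" % (f, tec_mag(f)))
```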
a voltage applied tothe pzt stresses the laser medium and induces refractive index changes to change the laser frequency .the pzt has a flat response to and is known to have a number of mechanical resonances beyond 100khz .fast frequency fluctuations beyond 10khz are handled by the third frequency actuator , a pockels cell located between the master oscillator and power amplifier .a small fraction of the output of the pmc is sampled and frequency shifted through an 80mhz acousto - optic modulator ( aom ) .the output of the aom is focussed into a phase modulator that imparts sidebands at 21.5mhz .the output of the phase modulator is then modematched into a high - finesse , linear fabry - perot cavity which is used as a frequency reference against which the laser frequency is stabilized .the frequency stabilization scheme employs the well - known pound - drever - hall technique in which the light incident on the reference cavity is phase - modulated .both the carrier and sideband light reflected from the reference cavity is focused onto a tuned photodetector .the output of the tuned photodetector is bandpass filtered and synchronously demodulated to derive the error signal . in order to ensure closed - loop stability ,the open - loop gain of the pzt actuator must be well below that of the pockels cell at the pzt mechanical resonance frequency . to ensure this ,the pzt actuator path is aggressively rolled off after the designed 10khz crossover . in the absence of the pockels cell ,the pzt path is naturally unstable at . with a dynamic range some 30 times greater than that of the pockels cell ,a self - sustaining oscillation may arise if saturation occurs in the pockels cell path .limiting the dynamic range of the pzt actuator prevents this instability .photons in the laser light induce a source of noise in the interferometer known as radiation pressure noise .this noise arises from the momentum imparted to the mirrors as statistically different numbers of photons reflect off the mirrors in the interferometer . to minimize the movement of the interferometer mirrors due to radiation pressure , the intensity fluctuations of the lasermust be stabilized to the level of / . currently in the prototype design phase, the intensity servo utilizes a current shunt for fast regulation of the power amplifier pump diode current .placed in parallel with the power amplifier pump diodes , the current shunt was designed to carry .the intensity stabilization servo adopts a dual - loop topology as illustrated in figure [ psl - iss ] .inputs from photodetectors located after the pmc and modecleaner are used in either a single or dual sensor configuration . in the single sensor configuration, the outer - loop photodetector provides the signal to the servo electronics . in the case where the modecleaner is not locked, the single - sensor signal comes from the inner loop photodetector . in the dual sensor case ,both the inner and outer feedback paths provide signals to the servo electronics . in the dual loop configuration , noise suppressionis established in two phases . closing the inner loop yields a high - bandwidth , well - behaved inner loop with partial noise suppression . the outer loop is then closed around the inner loop to provide the balance of the noise suppression .the user control and interface is via the experimental physics and industrial control system ( epics ) . 
through epics the operator can remotely monitor the performance of the psl and adjust the various servo loop gains and settings . the operator interface is a series of graphical screens that indicate the current status of the psl . processing the data and events is the input / output controller ( ioc ) , a baja4700e mips-based processor running the vxworks kernel . the ioc performs the real-world input / output tasks and local control tasks , and provides status information through the channel access network protocol . the control software for the psl is event-driven and is written in state notation language . although not fully debugged , automated operation from cold start through to full operation has been demonstrated . one software routine constantly adjusts the tec on the 10-w laser to keep the laser frequency well within the dynamic range of the pzt . one consequence of this is that lock re-acquisition is instantaneous once the cause of the loss of lock is removed . at present a dozen signals are acquired and logged through the ligo data acquisition system . fast signals are acquired at the rate of 16 khz whilst slower signals are acquired at 256 hz . all signals are recorded and logged . we thank the entire ligo team for assistance and support . this work is supported by the national science foundation under cooperative agreement phy-9210038 . a. abramovici _ et al . _ , `` ligo : the laser interferometer gravitational-wave observatory '' , science * 256 * , 325 ( 1992 ) . a. lazzarini and r. weiss , `` ligo science requirements document ( srd ) '' , internal ligo document e950018-02-e . p. king , r. savage and s. seel , `` ( infrared ) pre-stabilized laser ( psl ) design requirements '' , internal ligo document t970080-09-d . r. w. p. drever _ et al . _ , `` laser phase and frequency stabilization using an optical resonator '' , appl . phys . b * 31 * , 97 ( 1983 ) . | to meet the strain sensitivity requirements of the laser interferometer gravitational wave observatory ( ligo ) , the laser frequency and amplitude noise must initially be reduced by a factor of 1000 in the pre-stabilized portion of the interferometer . a control system was implemented to provide laser noise suppression , data acquisition interfaces , diagnostics , and operator control inputs . this paper describes the vme-based analog and digital controls used in the ligo pre-stabilized laser ( psl ) . |
the detection of a sparse set of facial landmarks in still images has been a widely-studied problem within the computer vision community . interestingly , many face analysis methods either systematically rely on video sequences ( e.g. , facial expression recognition ) or can benefit from them ( e.g. , face recognition ) . it is thus surprising that facial landmark tracking has received much less attention in comparison . our focus in this paper is on one of the most important problems in model-specific tracking , namely that of updating the tracker using previously tracked frames , also known as incremental ( face ) tracking . the standard approach to face tracking is to use a facial landmark detection algorithm initialised on the landmarks detected at the previous frame . this exploits the fact that the face shape varies smoothly in videos of sufficiently high framerates : if the previous landmarks were detected with acceptable accuracy , then the initial shape will be close enough for the algorithm to converge to a `` good '' local optimum for the current frame too . hence , tracking algorithms are more likely to produce highly accurate fitting results than detection algorithms that are initialised by the face detector bounding box . however , in this setting the tracker still employs a generic deformable model of the face built offline using a generic set of annotated facial images , which does not include the subject being tracked . it is well known that person-specific models are far more constrained and easier to fit than generic ones . hence one important problem in tracking is how to improve the generic model used to track the first few frames into an increasingly person-specific one as more frames are tracked . this problem can be addressed with incremental learning , which allows for the smart adaptation of pre-trained generic appearance models . incremental learning is a common resource for generic tracking , being used in some of the state-of-the-art trackers , and incremental learning for face tracking is by no means a new concept ; please see ross et al . for early work on the topic . more recently , incremental learning within cascaded regression , the state-of-the-art approach for facial landmark localisation , was proposed by xiong & de la torre and independently by asthana et al . however , in both works the model update is far from being sufficiently efficient to allow real-time tracking , with the model update reported to require 4.7 seconds per frame . note that the actual tracking procedure ( without the incremental update ) is faster than 25 frames per second , clearly illustrating that the incremental update is the bottleneck impeding real-time tracking . if the model update can not be carried out in real time , then incremental learning might not be the best option for face tracking : once the real-time constraint is broken , in practice one would be better off creating person-specific models in a post-processing step ( e.g. , re-train the models once the whole video is tracked and then track again ) . that is to say , without the need and capacity for real-time processing , incremental learning is sub-optimal and of little use . our main contribution in this paper is to propose the first incremental learning framework for cascaded regression which allows real-time updating of the tracking model .
to do this ,we build upon the concept of continuous regression as opposed to standard sampling - based regression used in almost all prior work , including and .we note that while we tackle the facial landmark tracking problem , cascaded regression has also been applied to a wider range of problems such as pose estimation , model - free tracking or object localisation , thus making our methodology of wider interest .we will release code for training and testing our algorithm for research purposes .our main contributions are as follows : * we propose a complete * new formulation for continuous regression * , of which the original continuous regression formulation is a special case .crucially , our method is now formulated by means of a * full covariance matrix capturing real statistics * of how faces vary between consecutive frames rather than on the shape model eigenvalues .this makes our method particularly suitable for the task of tracking , something the original formulation can not deal with .* we incorporate continuous regression in the cascaded regression framework ( coined cascaded continuous regression , or * ccr * ) and demonstrate its performance is equivalent to sampling - based cascaded regression .* we derive the * incremental learning for continuous regression * , and show that it * is an order of magnitude faster * than its standard incremental sdm counterpart . *we evaluate the incremental cascaded continuous regression ( * iccr * ) on the 300vw data set and show the importance of incremental learning in achieving state - of - the - art performance , especially for the case of very challenging tracking sequences .facial landmark tracking methods have often been adaptations of facial landmark detection methods .for example , active appearance models ( aam ) , constrained local models ( clm ) or the supervised descent method ( sdm ) were all presented as detection algorithms .it is thus natural to group facial landmark tracking algorithms in the same way as the detection algorithms , i.e. splitting them into discriminative and generative methods . on the generative side ,aams have often been used for tracking .since the model fitting relies on gradient descent , it suffices to start the fitting from the last solution .tracking is particularly useful to aams since they are considered to have frequent local minima and a small basin of attraction , making it important that the initial shape is close to the ground truth .aams have further been regarded as very reliable for person specific tracking , but not for generic tracking ( i.e. , tracking faces unseen during training ) .recently showed however that an improved optimisation procedure and the use of in - the - wild images for training can lead to well - behaving person independent aam . eliminating the piecewise - affine representation and adopting a part - based model led to the gauss - newton deformable part model ( gn - dpm ) , which is the aam state of the art . historically , discriminative methods relied on the training of local classifier - based models of appearance , with the local responses being then constrained by a shape model .these algorithms can be grouped into what is called the constrained local models ( clm ) framework . however , the appearance of discriminative regression - based models quickly transformed the state - of - the - art for face alignment .discriminative regressors were initially used within the clm framework substituting classifiers , showing improved performance . 
however , the most important contributions came with the adoption of cascaded regression and direct estimation of the full face shape rather than first obtaining local estimates .successive works have further shown the impressive efficiency and reliable performance of face alignment algorithms using cascaded regression .however , how to best exploit discriminative cascaded regression for tracking and , in particular , how to best integrate incremental learning , is still an open problem .in this section we revise the preliminary concepts over which we build our method . in particular , we describe the methods most closely related to ours , to wit the incremental supervised descent method and the continuous regressor , and motivate our work by highlighting their limitations . a face imageis represented by , and a face shape is a matrix describing the location of the landmarks considered .a shape is parametrised through a point distribution model ( pdm ) . in a pdm, a shape is parametrised in terms of \in \real^{m} ] .[ eq : train_problem_def ] has a closed form solution as : where and are the mean and covariance of the data term , .finally , we can see that eq .[ eq : closed_form_cont ] can be expressed in a more compact form .let us first define the following shorthand notation : ] and $ ] . then : where . through this arrangement , the parallels with the sampling - based regression formula are clear ( see eq .[ eq : linear_reg ] ) .it is interesting that , while the standard linear regression formulation needs to sample perturbed shapes from a distribution , the continuous regression training formulation only needs to extract the features and the jacobians on the ground - truth locations .this means that once these features are obtained , re - training a new model under a different distribution takes seconds , as it only requires the computation of eq .[ eq : compact_form_cont ] .now that we have introduced a new formulation with the continuous regression capable of incorporating a data term , it is straightforward to extend the cr into the cascade regression formulation : we take the distribution in equation [ eq : parsdm_sampling ] as the _ data term _ in eq .[ eq : train_problem_def ] .one might argue that due to the first - order taylor approximation required to solve equation [ eq : train_problem_def ] , ccr might not work as well as the sdm .one of the main experimental contributions of this paper is to show that in reality this is not the case : in fact ccr and sdm have equivalent performance ( see section [ sec : experimental_results ] ) .this is important because , contrary to previous works on cascaded regression , incremental learning within ccr allows for real time performance .once frame is tracked , the incremental learning step updates the existing training set with , where denotes the predicted shape parameters for frame .note that in this case consists of only one example compared to examples in the incremental sdm case .the update process consists of computing matrix , which stores the feature vector and its jacobian at and then , using the shorthand notation , updating continuous regressor as : in order to avoid the expensive re - computation of , it suffices to update its value using the woodbury identity : note that , where accounts for the number of shape parameters .we can see that computing eq .[ eq : closed_form_icont ] requires computing first , which is .this is a central result of this paper , and reflects a property previously unknown .we will examine in 
section [ sec : computational_complexity ] its practical implications in terms of real-time capabilities . in this section we first detail the computational complexity of the proposed iccr , and show that it is real-time capable . then , we compare its cost with that of incremental sdm , showing that our update rules are an order of magnitude faster . * iccr update complexity : * let us note the computational cost of the feature extraction as . the update only requires the computation of the feature vector at the ground truth , and in two adjacent locations to compute the jacobian , thus resulting in complexity . interestingly , this is independent from the number of cascade levels . then , the update equation ( eq . [ eq : closed_form_icont ] ) has a complexity dominated by the operation , which has a cost of . it is interesting to note that is a matrix of size and thus its inversion is extremely efficient . the detailed cost of the incremental update is : * incremental sdm update complexity : * incremental learning for sdm requires sampling at each level of the cascade . the cost per cascade level is , where denotes the number of samples . thus , for cascade levels the total cost of sampling is . the cost of the incremental update equations ( eqs . ( [ eq : incremental_discrete_eq1]-[eq : incremental_discrete_eq4 ] ) ) is in this case dominated by the multiplication , which is . the detailed computational cost is : * detailed comparison and timing : * one advantage of iccr comes from the much lower number of feature computations , being as low as 3 vs. the computations required for incremental sdm . however , the main difference is the complexity of the regressor update equation for the incremental sdm compared to for the iccr . in our case , , while . the feature dimensionality results from performing pca over the feature space , which is a standard procedure for sdm . note that if we avoided the use of pca , the complexity comparison would be even more in our favour . a detailed summary of the operations required by both algorithms , together with their computational complexity and the execution time on our computer , is given in algorithm [ algo_iccr ] . note that is the cost of projecting the output vector into the pca space , and that for incremental sdm the `` sampling and feature extraction '' step is repeated times . in summary , the full iccr update takes 72 ms in total on our machine , whereas the corresponding incremental sdm update takes 705 ms .
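the complexity argument above can be made concrete with a small numerical sketch . the exact matrices entering the iccr update are given by the ( omitted ) equations above , so the code below only illustrates the woodbury mechanics that the timing comparison relies on : when a regressor has the form a b^{-1} ( as in the sampling-based case ) and a new frame contributes a low-rank correction to b , the stored inverse can be refreshed by solving a system whose size is set by the small per-frame block ( related to the number of shape parameters ) rather than by the feature dimensionality . all dimensions in the sketch are arbitrary toy values .

```python
import numpy as np

def woodbury_update(B_inv, U):
    """given B_inv = B^{-1} and a new block U (d x k), return (B + U U^T)^{-1}
    without re-inverting the d x d matrix; only a k x k system is solved,
    which is what keeps the incremental step cheap when k << d."""
    k = U.shape[1]
    BU = B_inv @ U                                   # d x k
    small = np.eye(k) + U.T @ BU                     # k x k
    return B_inv - BU @ np.linalg.solve(small, BU.T)

rng = np.random.default_rng(0)
d, k = 500, 10          # toy stand-ins for feature dimensionality and per-frame block size
M = np.eye(d) + 0.01 * rng.standard_normal((d, d))
B = M @ M.T             # symmetric positive definite
U = rng.standard_normal((d, k))

B_inv = np.linalg.inv(B)
fast = woodbury_update(B_inv, U)
slow = np.linalg.inv(B + U @ U.T)
print("max deviation from direct re-inversion:", np.abs(fast - slow).max())
```

together with the much smaller number of feature computations , this is what separates the 72 ms iccr update from the 705 ms incremental sdm update quoted above .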
specifically , we use helen , lfpw , afw , ibug , and a subset of multipie .the training set comprises images .we have used the facial landmark annotations provided by the 300 faces in the wild challenge , as they offer consistency across datasets .the _ statistics _ are computed across the training sequences , by computing the differences of ground - truth shape parameters between consecutive frames .given the easiness of the training set with respect to the test set , we also included differences of several frames ahead . this way , higher displacements are also captured .* features : * we use the sift implementation provided by xiong & de la torre .we apply pca on the output , retaining 2000 dimensions .we apply the same pca to all of the methods , computed during our sdm training .* test data : * all the methods are evaluated on the test partition of the 300 videos in the wild challenge ( 300vw ) .the 300vw is the only publicly - available large - scale dataset for facial landmark tracking .its test partition has been divided into categories 1 , 2 and 3 , intended to represent increasingly unconstrained scenarios .in particular , category 3 contains videos captured in totally unconstrained scenarios .the ground truth has been created in a semi - supervised manner using two different methods .* error measure : * to compute the error for a specific frame , we use the error measure defined in the 300vw challenge .the error is computed by dividing the average point - to - point euclidean error by the inter - ocular distance , understood as the distance between the two outer eye corners . in order to demonstrate the performance capability of our ccr method against sdm, we followed the protocol established by the visual object tracking ( vot ) challenge organisers for evaluating the submitted tracking methods . specifically ,if the tracker error exceeds a certain threshold ( 0.1 in our case , which is a common definition of alignment failure ) , we proceed by re - initialising the tracker . 
in this case , the starting point will be the ground truth of the previous frame . this protocol is adopted to avoid the pernicious influence on our comparison of some early large failure from which the tracker is not able to recover , which would mean that successive frames would yield a very large error . results are shown in fig . [ fig : ccrvssdm ] ( * left * ) . we show that the ccr and the sdm provide similar performance , thus ensuring that the ccr is a good starting point for developing an incremental learning algorithm . it is possible to see from the results shown in fig . [ fig : ccrvssdm ] that the ccr compares better and even sometimes surpasses the sdm on the lower levels of the error , while the sdm systematically provides a gain for larger errors with respect to the ccr . this is likely due to the use of the first-order taylor approximation , which means that larger displacements are less accurately approximated . instead , the use of _ infinite _ shape perturbations rather than a handful of sampled perturbations compensates for this problem for smaller errors , and even sometimes provides some performance improvement . we now show the benefit of incremental learning with respect to generic models . the incremental learning needs to filter frames to decide whether a fitting is suitable or harmful for updating the models . that is , in practice , it is beneficial to filter out badly-tracked frames by avoiding incremental updates in these cases . we follow and use a linear svm trained to decide whether a particular fitting is `` correct '' , understood as being under a threshold error . despite its simplicity , this tactic provides a solid performance increase . results on the test set are shown in fig . [ fig : ccrvssdm ] ( * right * ) . we developed a fully automated system to compare against state-of-the-art methods . our fully automated system is initialised with a standard sdm , and an svm is used to detect whether the tracker gets lost . we assessed both our ccr and iccr in the most challenging category of the 300vw , consisting of 14 videos recorded in unconstrained settings . for a fair comparison , we have reproduced the challenge settings ( a brief description of the challenge and submitted methods can be found in ) . we compare our method against the top two participants . results are shown in fig . [ fig : soacomp ] . the importance of incremental learning in achieving state-of-the-art results is clear . importantly , as shown in the paper , our iccr allows for a real-time implementation . that is to say , our iccr reports state-of-the-art results whilst working in near real-time , something that could not be achieved by previous works on cascaded regression . code for our fully automated system is available for download at www.cs.nott.ac.uk/~psxes1 .
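to make the evaluation pipeline described above easier to follow , here is a schematic sketch of the fully automated system : detection on the first frame ( or after a reported failure ) , tracking from the previous shape otherwise , and an incremental model update only when the linear svm accepts the fitting . the error measure is the one used throughout the evaluation ( mean point-to-point error divided by the inter-ocular distance ) . all interfaces and the 68-point landmark indices are assumptions for illustration ; this is not the released code .

```python
import numpy as np

def normalised_error(pred, gt, outer_eyes=(36, 45)):
    """mean point-to-point euclidean error divided by the inter-ocular distance
    (distance between the two outer eye corners); the indices assume a 68-point
    markup and are an assumption of this sketch."""
    iod = np.linalg.norm(gt[outer_eyes[0]] - gt[outer_eyes[1]])
    return np.mean(np.linalg.norm(pred - gt, axis=1)) / iod

def track_video(frames, detect, tracker, fitting_is_good):
    """schematic fully automated loop; detect, tracker and fitting_is_good are
    hypothetical callables standing in for the face detector + generic fit,
    the (i)ccr tracker and the linear svm quality filter."""
    shapes, prev = [], None
    for frame in frames:
        if prev is None:
            prev = detect(frame)                      # (re-)initialisation
        shape = tracker.fit(frame, init=prev)         # cascaded regression step
        if fitting_is_good(frame, shape):
            tracker.incremental_update(frame, shape)  # cheap iccr model update
            prev = shape
        else:
            prev = None                               # flag a failure; re-detect next frame
        shapes.append(shape)
    return shapes
```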
compared to previous incremental learning methodologies, it can produce much faster incremental updates without compromising on accuracy .this was achieved by firstly extending the continuous regression framework , and then incorporating it into the cascaded regression framework to lead to the ccr method , which we showed provides equivalent performance to the sdm .we then derived the incremental learning update formulas for the ccr , resulting in the iccr algorithm .we further show the computational complexity of the incremental sdm , demonstrating that iccr is an order of magnitude simpler computationally .this removes the bottleneck impeding real - time incremental cascaded regression methods , and thus results in the state of the art for real - time face tracking .the work of snchez - lozano , martinez and valstar was supported by the european union horizon 2020 research and innovation programme under grant agreement no 645378 , aria - valuspa .the work of snchez - lozano was also supported by the vice - chancellor s scholarship for research excellence provided by the university of nottingham .the work of tzimiropoulos was supported in part by the epsrc project ep / m02153x/1 facial deformable models of animals .we are also grateful for the given access to the university of nottingham high performance computing facility , and we would like to thank jie shen and grigoris chrysos for their insightful help in our tracking evaluation .as shown in the paper , linear regression aims at minimising the average expected training error , respect to .the average expected error is formulated as follows : where represents the ground truth shape parameters , and the parameters displacement . due to the intractability of the integral ,this is typically solved by a mcmc ( sampling - based ) approximation , in which samples are taken from the distribution .the continuous regression framework avoids the need to sample by performing the first order taylor approximation of the function , defined as : where , evaluated at , is the jacobian of the feature representation of image , respect to shape parameters , at . combining this approximation with the integral in eq .[ eq : train_problem_def_appx ] leads to : d \delta \p \hspace{-5pt } \label{eq : long_deriv_ccr_appx}\end{aligned}\ ] ] where recall is the shorthand of .if we group independent , linear , and quadratic terms , respect to ,we can express eq .[ eq : long_deriv_ccr_appx ] as : d \delta \p , \label{eq : long_deriv_ccr2_appx}\end{aligned}\ ] ] where and .let us assume that is parametrised by its mean and covariance .then , it follows that : which means that the expected error , for the -th training example , has a closed - form solution as follows : now , is obtained after minimising eq .[ eq : closed_form_appx ] , whose derivatives are obtained as follows : this leads to the solution presented in the paper : can see that our new formulation generalises that of . more specifically , if we solve eq .[ eq : train_problem_def ] for the non - rigid parameters only , we can define to be a uniform distribution defined within the limits , with the eigenvalue assciated to the -th basis of the pdm , and the number of standard deviations considered for that eigenvalue . 
in such a case , eq . [ eq : train_problem_def ] would be defined , for non-rigid parameters , as : which is the problem definition that appeared in . moreover , we can see that such a uniform distribution would be parametrised by a zero-mean vector and a diagonal covariance matrix whose entries are . in such a case , eq . [ eq : closed_form_cont ] would be reduced to the solution presented in . that is to say , the continuous regression presented in assumed a uniform distribution , without connection to tracking statistics , and no correlation between target dimensions was possible . instead , our formulation accepts a `` data term '' , which correlates the target dimensions , and allows for its solution for rigid parameters as well . this `` data term '' is crucial to the performance of the ccr . | this paper introduces a novel real-time algorithm for facial landmark tracking . compared to detection , tracking has both additional challenges and opportunities . arguably the most important aspect in this domain is updating a tracker 's models as tracking progresses , also known as incremental ( face ) tracking . while this should result in more accurate localisation , how to do this online and in real time without causing a tracker to drift is still an important open research question . we address this question in the cascaded regression framework , the state-of-the-art approach for facial landmark localisation . because incremental learning for cascaded regression is costly , we propose a much more efficient yet equally accurate alternative using continuous regression . more specifically , we first propose cascaded continuous regression ( ccr ) and show its accuracy is equivalent to the supervised descent method . we then derive the incremental learning updates for ccr ( iccr ) and show that it is an order of magnitude faster than standard incremental learning for cascaded regression , bringing the time required for the update from seconds down to a fraction of a second , thus enabling real-time tracking . finally , we evaluate iccr and show the importance of incremental learning in achieving state-of-the-art performance . code for our iccr is available from http://www.cs.nott.ac.uk/~psxes1 . |
time - reversal techniques have attracted great attention recently due to the large variety of interesting potential applications .the basic idea of time - reversal ( often also referred to as phase - conjugation in the frequency domain ) can be roughly described as follows : a localized source emits a short pulse of acoustic , electromagnetic or elastic energy which propagates through a richly scattering environment .a receiver array records the arriving waves , typically as a long and complicated signal due to the complexity of the environment , and stores the time - series of measured signals in memory . after a short while, the receiver array sends the recorded signals time - reversed ( i.e. first in - last out ) back into the same medium . due to the time - reversibility of the wave fields , the emitted energy backpropagates through the same environment , practically retracing the paths of the original signal , and refocuses , again as a short pulse , on the location where the source emitted the original pulse .this , certainly , is a slightly oversimplified description of the rather complex physics which is involved in the real time - reversal experiment . in practice ,the quality of the refocused pulse depends on many parameters , as for example the randomness of the background medium , the size and location of the receiver array , temporal fluctuations of the environment , etc .surprisingly , the quality of the refocused signal increases with increasing complexity of the background environment .this was observed experimentally by m. fink and his group at the laboratoire ondes et acoustique at universit paris vii in paris by a series of laboratory experiments ( see for example ) , and by w. a. kuperman and co - workers at the scripps institution of oceanography at university of california , san diego , by a series of experiments performed between 1996 and 2000 in a shallow ocean environment ( see for example ) .the list of possible applications of this time - reversal technique is long .in an iterated fashion , resembling the power method for finding maximal eigenvalues of a square matrix , the time - reversal method can be applied for focusing energy created by an ultrasound transducer array on strongly scattering objects in the region of interest .this can be used for example in lithotripsy for localizing and destroying gall - stones in an automatic way , or more generally in the application of medical imaging problems .detection of cracks in the aeronautic industry , or of submarines in the ocean are other examples .see for example , and for related work . the general idea of time - reversibility , and its use in imaging and detection , is certainly not that new . looking into the literature for example of seismic imaging , the application of this basic idea can be found in a classical and very successful imaging strategy for detecting scattering interfaces in the earth , the so - called migration technique . however , the systematic use and investigation of the time - reversal phenomenon and its experimental realizations started more recently , and has been carried out during the last 1015 years or so by different research groups .see for example for experimental demonstrations , and for theoretical and numerical approaches .one very young and promising application of time - reversal is * communication*. 
in this paper we will mainly concentrate on that application , although the general results should carry over also to other applications as mentioned above .the paper is organized as follows . in section [ tro.sec ]we give a very short introduction into time - reversal in the ocean , with a special emphasis on underwater sound communication .wireless communication in a mimo setup , our second main application in this paper , is briefly presented in section [ mimo.sec ] . in section [ sym.sec ]we discuss symmetric hyperbolic systems in the form needed here , and examples of such systems are given in section [ exa.sec ] .in section [ spa.sec ] , the basic spaces and operators necessary for our mathematical treatment are introduced . the inverse problem in communication , which we are focusing on in this paper ,is then defined in section [ opt.sec ] . in section [ mat.sec ], we derive the basic iterative scheme for solving this inverse problem .section [ adj.sec ] gives practical expressions for calculating the adjoint communication operator , which plays a key role in the iterative time - reversal schemes presented in this paper . in section [ atrm.sec ] the acoustic time - reversal mirroris defined , which will provide the link between the acoustic time - reversal experiment and the adjoint communication operator .the analogous results for the electromagnetic time - reversal mirror are discussed in section [ etrm.sec ] .section [ tra.sec ] combines the results of these two sections , and explicitly provides the link between time - reversal and the adjoint imaging method .sections [ gmc.sec ] , [ mns.sec ] , and [ rls.sec ] propose then several different iterative time - reversal schemes for solving the inverse problem of communication , using this key relationship between time - reversal and the adjoint communication operator .the practically important issue of partial measurements ( and generalized measurements ) is treated in section [ pm.sec ] .finally , section [ sfr.sec ] summarizes the results of this paper , and points out some interesting future research directions .the ocean is a complex wave - guide for sound . in addition to scattering effects at the top and the bottom of the ocean , also its temperature profile and the corresponding refractive effectscontribute to this wave - guiding property and allow acoustic energy to travel large distances .typically , this propagation has a very complicated behaviour .for example , multipathing occurs if source and receiver of sound waves are far away from each other , since due to scattering and refraction there are many possible paths on which acoustic energy can travel between them .surface waves and air bubbles at the top of the ocean , sound propagation through the rocks and sedimentary layers at the bottom of the ocean , and other effects further contribute to the complexity of sound propagation in the ocean .when a source ( e.g. the base station of a communication system in the ocean ) emits a short signal at a given location , the receiver ( e.g. a user of this communication system ) some distance away from the source typically receives a long and complicated signal due to the various influences of the ocean to this signal along the different connecting paths . if the base station wants to communicate with the user by sending a series of short signals , this complex response of the ocean to the signal needs to be resolved and taken into account . 
in a classical communication system , the base station which wants to communicate with a user broadcasts a series of short signals ( e.g. a series of zeros and ones ) into the environmentthe hope is that the user will receive this message as a similar series of temporally well - resolved short signals which can be easily identified and decoded .however , this almost never occurs in a complex environment , due to the multipathing and the resulting delay - spread of the emitted signals .typically , when the base station broadcasts a series of short signals into such an environment , intersymbol interference occurs at the user position due to the temporal overlap of the multipath contributions of these signals . in order to recover the individual signals ,a significant amount of signal processing is necessary at the user side , and , most importantly , the user needs to have some knowledge of the propagation behaviour of the signals in this environment ( i.e. he needs to know the channel ) .intersymbol interference can in principle be avoided by adding a sufficient delay between individual signals emitted at the base station which takes into account the delay - spread in the medium .that , however , slows down the communication , and reduces the capacity of the environment as a communication system .an additional drawback of simply broadcasting communication signals from the base station is the obvious lack of interception security .a different user who also knows the propagation behaviour of signals in the environment , can equally well resolve the series of signals arriving at his location and decode them .several approaches have been suggested to circumvent the above mentioned drawbacks of communication in multiple - scattering environments .some very promising techniques are based on the time - reversibility property of propagating wave - fields .the basic idea is as follows .the user who wants to communicate with the base station , starts the communication process by sending a short pilot signal through the environment .the base station receives this signal as a long and complex signal due to multipathing .it time - reverses the received signal and sends it back into the environment .the backpropagating waves will produce a complicated wave - field everywhere due to the many interfering parts of the emitted signal .however , due to the time - reversibility of the wave - fields , we expect that the interference will be constructive at the position of the user who sent the original pilot signal , and mainly destructive at all other positions .therefore , the user will receive at his location a short signal very similar to ( ideally , a time - reversed replica of ) the pilot signal which he sent for starting the communication .all other users who might be in the environment at the same time will only receive noise speckle due to incoherently interfering contributions of the backpropagating field .if the base station sends the individual elements ( ones and zeros ) of the intended message in a phase - encoded form as a long overlapping string of signals , the superposition principle will ensure that , at the user position , this string of signals will appear as a series of short well - separated signals , each resembling some phase - shifted ( and time - reversed ) form of the pilot signal . 
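the refocusing argument sketched above is easy to reproduce numerically . the toy experiment below builds an arbitrary discrete multipath channel , convolves a short pilot pulse with it ( the long , complicated signal recorded at the base station ) , and then sends the time-reversed recording back through the same channel ; the user then receives the channel autocorrelation convolved with the reversed pilot , which is sharply peaked . all numbers ( tap count , delays , pulse length ) are arbitrary illustration choices , not measured ocean data .

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048

# toy multipath channel: a handful of arrivals with random delays and amplitudes
h = np.zeros(n)
taps = rng.choice(np.arange(50, 1500), size=20, replace=False)
h[taps] = rng.standard_normal(taps.size)

# short pilot pulse emitted by the user
pilot = np.zeros(n)
pilot[:8] = np.hanning(8)

# signal recorded at the base station: the pilot spread out by the channel
recorded = np.convolve(pilot, h)[:n]

# the base station re-emits the time-reversed recording; the user receives
# (h correlated with h) convolved with the reversed pilot -> a sharp peak
refocused = np.convolve(recorded[::-1], h)

print("delay spread of the recorded signal (samples):",
      np.ptp(np.flatnonzero(np.abs(recorded) > 1e-6)))
ar = np.abs(refocused)
print("peak-to-median ratio after time reversal: %.1f" % (ar.max() / np.median(ar[ar > 0])))
```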
in order to find out whether this theoretically predictedscenario actually takes place in a real multiple - scattering environment like the ocean , kuperman have performed a series of four experiments in a shallow ocean environment between 1996 and 2000 , essentially following the above described scenario .the experiments have been performed at a mediterranean location close to the italian coast .( a similar setup was also used in an experiment performed in 2001 off the coast of new england which has been reported in yang . ) a schematic view of these experiments is shown in figure [ figure1 ] .the single user is replaced here by a probe source , and the source - receive array ( sra ) plays the role of the base station. an additional vertical receive array ( vra ) was deployed at the position of the probe source in order to measure the temporal and spatial spread of the backpropagating fields in the neighbourhood of the probe source location . in this shallow environment ( depths of about m , and distances between and km )the multipathing of the waves is mostly caused by multiple reflections at the surface and the bottom of the ocean .the results of the experiments have been reported in .they show that in fact a strong spatial and temporal focusing of the backpropagating waves occurs at the source position . a theoretical explanation of the temporal and spatial refocusing of time - reversed waves in random environments has been given in blomgren .the underwater acoustics scenario described above directly carries over to situations which might be more familiar to most of us , namely to the more and more popular wireless communication networks using mainly electromagnetic waves in the microwave regime . starting from the everyday use of cell - phones , ranging to small wireless - operating local area networks ( lan ) for computer systems or for private enterprise communication systems ,exactly the same problems arise as in underwater communication .the typically employed microwaves of a wavelength at about 1030 cm are heavily scattered by environmental objects like cars , fences , trees , doors , furniture etc .this causes a very complicated multipath structure of the signals received by users of such a communication system .since bandwidths are limited and increasingly expensive , a need for more and more efficient communication systems is imminent .recently , the idea of a so - called multiple - input multiple - output ( mimo ) communication system has been introduced with the potential to increase the capacity and efficiency of wireless communication systems .the idea is to replace a single antenna at the base station which is responsible for multiple users , or even a system where one user communicates only with his own dedicated base antenna , by a more general system where an array of multiple antennas at the base station is interacting simultaneously and in a complex way with multiple users .a schematic description of such a mimo system ( with seven base antennas and seven users ) is given in figure [ figure2 ] .see for example for recent overviews on mimo technology .time - reversal techniques are likely to play also here a key role in improving communication procedures and for optimizing the use of the limited resources ( especially bandwidth ) which are available for this technology .one big advantage of time - reversal techniques is that they are automatically adapted to the complex environment and that they can be very fast since they do not require heavy signal processing at the 
receiver or user side . in , an * iterated time - reversal scheme * for the optimal refocusing of signals in such a mimo communication systemwas proposed , which we will describe in more details in section [ gmc.sec ] . in the present paperwe will establish a direct link between the time - reversal technique and solution strategies for inverse problems . as an application of this relationship, we will derive iterative time - reversal schemes for the optimization of wireless or underwater acoustic mimo communication systems .the derivation is performed completely in time - domain , for very general first order symmetric hyperbolic systems describing wave propagation phenomena in a complex environment .one of the schemes which we derive , in a certain sense the basic one , will turn out to be practically equivalent to the scheme introduced in , although the derivation uses different tools .therefore , it provides a new interpretation of that scheme . the other schemes which we introduceare new in this application , and can be considered as either generalizations of the basic scheme , or as independent alternatives to that scheme .each of them addresses slightly different objectives and has its own very specific characteristics .we treat wave propagation in communication systems in the general framework of symmetric hyperbolic systems of the form here , and are real - valued time - dependent -vectors , , and ] , such that we will always have .\ ] ] the _ energy density _ is defined by the _ total energy _ in at a given time is therefore the _ flux _ is given by the following , we want to give some examples for symmetric hyperbolic systems as defined above .of special interest for communication are the system of the linearized acoustic equations and the system of maxwell s equations .we will discuss these two examples in detail in this paper .another important example for symmetric hyperbolic systems is the system of elastic waves equations , which we will however leave out in our discussion for the sake of shortness .we only mention here that all the results derived here apply without restrictions also to linear elastic waves .elastic wave propagation becomes important for example in ocean acoustic communication models which incorporate wave propagation through the sedimentary and rock layers at the bottom of the ocean . as a model for * underwater sound propagation * , we consider the following linearized form of the acoustic equations in an isotropic medium here , is the velocity , the pressure , the density , and the compressibility .we have , and ( where as a superscript always means transpose ) .moreover , we have with the notation , we can write as the operators , , can be recovered here from by putting the energy density is given by and the energy flux is we mention that the dissipative case can be treated as well in our framework , and yields analogous results to those presented here . as a second example, we will consider maxwell s equations for an anisotropic medium with some energy loss due to the inherent conductivity .this can for example model * wireless communication * in a complex environment . we have , and , .moreover , here , and are symmetric positive definite matrices modelling the anisotropic permittivity and permeability distribution in the medium , and is a symmetric positive semi - definite matrix which models the anisotropic conductivity distribution . 
in wireless communication , this form can model for example dissipation by conductive trees , conductive wires , rainfall , pipes , etc .the operator can be written in block form as with the operators , , can be recovered here from by putting the energy density is given by the energy flux is described by the _ poynting vector _ as already mentioned , also elastic waves can be treated in the general framework of symmetric hyperbolic systems . for more detailswe refer to .for our mathematical treatment of time - reversal we introduce the following spaces and inner products .we assume that we have users , in our system , and in addition a base station which consists of antennas , .each user and each antenna at the base station can receive , process and emit signals which we denote by for a given user , , and by for a given base antenna , .each of these signals consists of a time - dependent -vector indicating measured or processed signals of time - length . in our analysiswe will often have to consider functions defined on the time interval ] .for simplicity , we will use the same notation for the function spaces defined on ] .it will always be obvious which space we refer to in a given situation .lumping together all signals at the users on the one hand , and all signals at the base station on the other hand , yields the two fundamental quantities with )^n\big)^{{j } } , \qquad z\,=\,\big(l_2([0,t])^n\big)^{{k}}.\ ] ] the two signal spaces and introduced above are equipped with the inner products }\big\langle { { \bf s}}^{(1)}_j(t ) , { { \bf s}}^{(2)}_j(t)\big\rangle_n\,dt \\\left\langle{{\bf r}}^{(1)},{{\bf r}}^{(2)}\right\rangle_{z}&= & \sum_{k=1}^{{k}}\int_{[0,t]}\big\langle { { \bf r}}^{(1)}_k(t ) , { { \bf r}}^{(2)}_k(t)\big\rangle_n\,dt.\end{aligned}\ ] ] the corresponding norms are each user and each antenna at the base station can send a given signal or , respectively .this gives rise to a source distribution , , or , , respectively . here andin the following we will use in our notation the following convention .if one symbol appears in both forms , with and without a hat ( ^ ) on top of this symbol , then all quantities _ with _ the hat symbol are related to the users , and those _ without _ the hat symbol to the antennas at the base station .each of the sources created by a user or by a base antenna will appear on the right hand side of ( [ sym.1 ] ) as a mathematical source function and gives rise to a corresponding wave field which satisfies ( [ sym.1 ] ) , ( [ sym.2 ] ) .when solving the system ( [ sym.1 ] ) , ( [ sym.2 ] ) , typically certain sobolev spaces need to be employed for the appropriate description of the underlying function spaces ( see for example ) . 
for our purposes, however , it will be sufficient to assume that both , source functions and wave fields , are members of the following canonical function space which is defined as )^{n}\,,\,{{\bf u}}=0\ ; \mbox{on}\;\partial \omega\times [ 0,t]\,,\,\;\|{{\bf u}}\|_{u}<\infty\},\ ] ] and which we have equipped with the usual energy inner product }\int_{\omega}\left\langle\gamma({{\bf x}}){{\bf u}}({{\bf x}},t ) , { { \bf v}}({{\bf x}},t)\right\rangle_n\,d{{\bf x}}dt,\ ] ] and the corresponding energy norm also here , in order to simplify the notation , we will use the same space when considering functions in the shifted time interval ] .typically , when a user or an antenna at the base station transforms a signal into a source distribution , it is done according to a very specific antenna characteristic which takes into account the spatial extension of the user or the antenna .we will model this characteristic at the user by the functions , , and for base antennas by the functions , . with these functions, we can introduce the linear source operators and mapping signals at the set of users and at the set of base antennas into the corresponding source distributions and , respectively .they are given as we will assume that the functions are supported on a small neighbourhood of the user location , and that the functions are supported on a small neighbourhood of the antenna location . moreover , all these neighbourhoods are strictly disjoint to each other . for example, the functions could be assumed to be -approximations of the dirac delta measure concentrated at the user locations , and the functions could be assumed to be -approximations of the dirac delta measure concentrated at the antenna locations .both , users and base antennas can also record incoming fields and transform the recorded information into signals .also here , this is usually done according to very specific antenna characteristics of each user and each base antenna . for simplicity( and without loss of generality ) , we will assume that the antenna characteristic of a user or base antenna for receiving signals is the same as for transmitting signals , namely for the user and for a base antenna .( the case of more general source and measurement operators is discussed in section [ pm.sec ] . ) with this , we can define the linear measurement operators and , respectively , which transform incoming fields into measured signals , by finally , we define the linear operator mapping sources to states by where solves the problem ( [ sym.1 ] ) , ( [ sym.2 ] ) . as already mentioned , we assume that the domain is chosen sufficiently large and that the boundary is sufficiently far away from the users and base antennas , such that there is no energy reaching the boundary in the time interval ] ) due to the finite speed of signal propagation . therefore , the operator is well - defined .formally , we can now introduce the two linear * communication operators * and which are at the main focus of this paper .they are defined as the operator models the following situation .the base station emits the signal which propagates through the complex environment . 
the users measure the arriving wave fields and transform them into measurement signalsthe measured signals at the set of all users is .the operator describes exactly the reversed situation .all users emit together the set of signals , which propagate through the given complex environment and are received by the base station .the corresponding set of measured signals at all antennas of the base station is just .no time - reversal is involved so far .in the following , we outline a typical problem arising in communication , which gives rise to a mathematically well - defined inverse problem .a specified user of the system , say , defines a ( typically but not necessarily short ) pilot signal which he wants to use as a template for receiving the information from the base station .the base station wants to emit a signal which , after having travelled through the complex environment and arriving at the user , matches this pilot signal as closely as possible .neither the base station nor any other user except of are required ( or expected ) to know the correct form of the pilot signal for this problem . as an additional constraint, the base station wants that at the other users , , as little energy as possible arrives when communicating with the specified user .this is also in the interest of the other users , who want to use a different channel for communicating at the same time with the base antenna , and want to minimize interference with the communication initiated by user .the complex environment itself in which the communication takes place ( i.e. the channel ) is assumed to be unknown to all users and to the base station . in order to arrive at a mathematical description of this problem, we define the ideal signal received by all users as each user only knows his own component of this signal , and the base antenna does not need to know any component of this ideal signal at all . * the inverse problem of communication*:in the terminology of inverse problems , the above described scenario defines an inverse source problem , which we call for the purpose of this paper the inverse problem of communication. the goal is to find a source distribution at the base station which satisfies the data at the users : the state equation relating sources to data is given by the symmetric hyperbolic system ( [ sym.1 ] ) , ( [ sym.2 ] ) .notice that the basic operator in ( [ opt.2 ] ) is unknown to us since we do not know the complicated medium in which the waves propagate .if the operator ( together with ) would be known at the base station by some means , the inverse source problem formulated above could be solved using classical inverse problems techniques , which would be computationally expensive but in principle doable . 
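Since the defining displays were stripped, a hedged sketch of the operator compositions and of the data-fit equation may help; the letters A, B, F, Q, M (with hats for the user-side versions) are placeholders that follow the verbal description above.

\[ A = \widehat{M}\,F\,Q : Z\to S \quad(\text{base station}\to\text{users}), \qquad B = M\,F\,\widehat{Q} : S\to Z \quad(\text{users}\to\text{base station}), \]

and, with the ideal signal carrying the pilot of user j_0 in its j_0-th component only,

\[ \hat{\bf s}=(0,\dots,0,\hat{\bf s}_{j_{0}},0,\dots,0)\in S, \qquad \text{find } {\bf r}\in Z \text{ such that } A\,{\bf r}=\hat{\bf s}. \]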
in our situation, we are able to do physical experiments , which amounts to applying the communication operator to a given signal .determining the operator explicitly by applying it to a set of basis functions of would be possible , but again too expensive .we will show in the following that , nevertheless , many of the classical solution schemes known from inverse problems theory can be applied in our situation even without knowing this operator .the basic tool which we will use is a series of time - reversal experiments , applied to carefully designed signals at the users and the base station .a practical template for an * iterative scheme for finding an optimal signal * at the base station can be envisioned as follows .user starts the communication process by emitting an initial signal into the complex environment .this signal , after having propagated through the complex environment , finally arrives at the base station and is received there usually as a relatively long and complicated signal due to the multiple scattering events it experienced on its way .when the base station receives such a signal , it processes it and sends a new signal back through the environment which is received by all users . after receiving this signal , allusers become active .the user compares the received signal with the pilot signal .if the match is not good enough , it processes the received signal in order to optimize the match with the pilot signal when receiving the next iterate from the base station .all other users identify the received signal as unwanted noise , and process it with the goal to receive in the next iterate from the base station a signal with lower amplitude , such that it does not interfere with their own communications .all users send now their processed signals , which together define , back to the base station .the base station receives them all at the same time , again usually as a long and complicated signal , processes this signal and sends a new signal back into the environment which ideally will match the desired signals at all users better than the previously emitted signal .this iteration stops when all users are satisfied , i.e. , when user receives a signal which is sufficiently close to the pilot signal , and the energy or amplitude of the signals arriving at the other users has decreased enough in order not to disturb their own communications . after this learning process of the channel has been completed , the user can now start communicating safely with the base antenna using the chosen pilot signal for decoding the received signals .similar schemes have been suggested in in a single - step fashion , and in performing multiple steps of the iteration .the main questions to be answered are certainly which signals each user and each base antenna needs to emit in each step , how these signals need to be processed , at which stage this iteration should be terminated , and which optimal solution this scheme is expected to converge to ._ one goal of this paper is to provide a theoretical framework for answering these questions by combining basic concepts of inverse problems theory with experimental time - reversal techniques . _a standard approach for solving problem ( [ opt.2 ] ) in the situation of noisy data is to look for the least - squares solution in order to practically find a solution of ( [ mat.2 ] ) , we introduce the cost functional in the * basic approach * we propose to use the * gradient method * for finding the minimum of ( [ mat.3 ] ) . 
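A hedged reconstruction of the least-squares formulation and of the gradient iteration announced here (τ_n is the step size; the symbols again stand in for the stripped displays):

\[ J({\bf r})=\tfrac12\,\big\|A\,{\bf r}-\hat{\bf s}\big\|_{S}^{2}, \qquad {\bf r}_{0}=0, \qquad {\bf r}_{n+1}={\bf r}_{n}+\tau_{n}\,A^{*}\big(\hat{\bf s}-A\,{\bf r}_{n}\big). \]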
in this method , in each iteration a correction is sought for a guess which points into the negative gradient direction of the cost functional ( [ mat.3 ] ) . in other words , starting with the initial guess , the iteration of the gradient method goes as follows : where is the step - size at iteration number . noticethat the signals are measured at the base station , whereas the difference is determined at the users .in particular , is the pilot signal only known by the user who defined it , combined with zero signals at the remaining users . is the signal received by the users at the -th iteration step .we see from ( [ mat.4 ] ) that we will have to apply the adjoint operator repeatedly when implementing the gradient method . in this sectionwe provide practically useful expressions for applying this operator to a given element of .[ lemma.adj.1 ] we have _ proof : _ this follows from . [ theorem.adj.1 ] we have _ proof : _ the proof is given in appendix a. next , we want to find an expression for , , where is the adjoint of the operator .[ theorem.adj.2 ] let be the solution of the adjoint symmetric hyperbolic system .\ ] ] then * proof : * the proof is given in appendix b. this procedural characterization of the adjoint operator is often used in solution strategies of large scale inverse problems , where it naturally leads to so - called backpropagation strategies. see for example and the references given there .notice that in the adjoint system ( [ adj.3])([adj.5 ] ) final value conditions are given at in contrast to ( [ sym.1])([sym.4 ] ) where initial value conditions are prescribed at .this corresponds to the fact that time is running backward in ( [ adj.3])([adj.5 ] ) and forward in ( [ sym.1])([sym.4 ] ) .in the following we want to define an operator such that holds . we will call this operator the *acoustic time - reversal operator*. we will also define the *acoustic time - reversal mirrors* and , which act on the signals instead of the sources or fields . we consider the acoustic system with ] .we want to calculate the action of the _ adjoint operator _ on a vector .[ theorem.atrm.1 ] let and let be the solution of the adjoint system with ] .then we have _ proof:_this theorem is just an application of theorem [ theorem.adj.2 ] to the acoustic symmetric hyperbolic system . for the convenience of the reader , we will give a direct proof as well in appendix c. we define the * acoustic time - reversal operator * by putting for all ( and similarly for all ) we define the * acoustic time - reversal mirrors * and by putting for all and all the following lemma is easy to verify .[ lemma.atrm.1 ] we have the following commutations [ theorem.atrm.2 ] for we have _ proof:_for the proof it is convenient to make the following definition . *acoustic time - reversal experiment : * for given define and and perform the following physical experiment with ] , and , in addition , reverse the directions of the velocities .we call this experiment the acoustic time - reversal experiment. notice that the time is running forward in this experiment. 
the solution of this experiment can obviously be represented by in order to show that the so defined time - reversal experiment is correctly modelled by the adjoint system derived above , we make the following change in variables : which just corresponds to the application of the operator to .we have ] and zero boundary conditions .we want to calculate the action of the _ adjoint operator _ on a vector .[ theorem.etrm.1 ] let and let be the solution of the adjoint system with ] and zero boundary conditions .doing this experiment means to process the data in the following way : time - reverse all data according to , ] . in these variablesthe time - reversal system ( [ etrm.11])([etrm.13 ] ) gets the form taking into account the definition of and , we see that where and solve the adjoint system ( [ etrm.4])([etrm.6 ] ) .therefore , according to theorem [ theorem.etrm.1 ] : since we have with ( [ etrm.14 ] ) , ( [ etrm.15 ] ) also the theorem is proven . + for electromagnetic waves , there is formally an alternative way to define the _ electromagnetic time - reversal operator _ ,namely putting for all accompanied by the analogous definitions for the _ electromagnetic time - reversal mirrors_. with these alternative definitions , theorem [ theorem.etrm.2 ] holds true as well , with only very few changes in the proof .which form to use depends mainly on the preferred form for modelling applied antenna signals in the given antenna system .the first formulation directly works with applied electric currents , whereas the second form is useful for example for magnetic dipole sources .define , , for the acoustic case , and , , for the electromagnetic case .we call the * time - reversal operator * and , the * time - reversal mirrors*. we combine the results of lemma [ lemma.atrm.1 ] and lemma [ lemma.etrm.1 ] into the following lemma .[ lemma.tra.1 ] we have the following commutations moreover , combining theorem [ theorem.atrm.2 ] and theorem [ theorem.etrm.2 ] we get [ theorem.tra.1 ] for we have with this , we can prove the following theorem which provides the fundamental link between time - reversal and inverse problems .[ theorem.tra.2 ] we have _ proof : _ recall that the adjoint operator can be decomposed as . with theorem [ theorem.adj.1 ] , theorem [ theorem.tra.1 ] , and lemma [ lemma.tra.1 ] , it follows therefore that which proves the theorem . the above theorem provides a direct link between the adjoint operator , which plays a central role in the theory of inverse problems , and a physical experiment modelled by .the expression defines a time - reversal experiment. we will demonstrate in the following sections how we can make use of this relationship in order to solve the inverse problem of communication by a series of physical time - reversal experiments .we only mention here that the above results hold as well for * elastic waves * with a suitable definition of the * elastic time - reversal mirrors*. we leave out the details for brevity .the results achieved above give rise to the following experimental procedure for applying the gradient method ( [ mat.4 ] ) to the inverse problem of communication as formulated in sections [ opt.sec ] and [ mat.sec ] .first , the pilot signal is defined by user as described in ( [ mat.1 ] ) .moreover , we assume that the first guess at the base station is chosen to be zero . 
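In hedged summary (the displayed formulas in this stretch were stripped, so the symbols below are reconstructions): the acoustic time-reversal operator reverses time over [0,T] and flips the sign of the velocity components,

\[ \big(\gamma_{\Omega}\,({\bf v},p)\big)({\bf x},t)=\big(-{\bf v}({\bf x},T-t),\;p({\bf x},T-t)\big), \qquad \gamma_{\Omega}^{2}=\mathrm{id}, \]

the time-reversal mirrors Γ_S, Γ_Z act in the same way on recorded user and base-station signals, the adjoint of the solution operator satisfies

\[ F^{*}=\gamma_{\Omega}\,F\,\gamma_{\Omega}, \]

and the fundamental link of theorem [theorem.tra.2] then takes the form

\[ A^{*}=\Gamma_{Z}\,B\,\Gamma_{S}, \qquad\text{i.e.}\qquad A^{*}\hat{\bf s}=\Gamma_{Z}\Big(B\big(\Gamma_{S}\,\hat{\bf s}\big)\Big)\quad\text{for all } \hat{\bf s}\in S: \]

the adjoint of the base-to-users operator is realized physically by time-reversing a user signal, back-propagating it through the same environment to the base station, and time-reversing what the base station records.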
then , using theorem [ theorem.tra.2 ] , we can write the gradient method ( [ mat.4 ] ) in the equivalent form or , expanding it , in a more detailed form , we arrive at the following * experimental procedure for implementing the gradient method * , where we fix in this description for all iteration numbers for simplicity . 1 .the user chooses a pilot signal which he wants to use for communicating with the base station .the objective signal at all users is then . the initial guess at the base station is defined to be zero , such that .2 . the user initiates the communication by sending the time - reversed pilot signal into the environment .this signal is .all other users are quiet .3 . the base station receives the pilot signal as .it time - reverses this signal and sends this time - reversed form , namely , back into the medium .the new signal arrives at all users as .all users compare the received signals with their components of the objective signal .they take the difference , and time - reverse it .they send this new signal back into the medium . 5 . the base station receives this new signal , time - reverses it , adds it to the previous signal , and sends the sum back into the medium as . 6 .this iteration is continued until all users are satisfied with the match between the received signal and the objective signal at some iteration number .alternatively , a fixed iteration number can be specified a - priori for stopping the iteration .needless to say that , in practical implementations , the laboratory time needs to be reset to zero after each time - reversal step .the experimental procedure which is described above is practically equivalent to the experimental procedure which was suggested and experimentally verified in .therefore , our basic scheme provides an alternative derivation and interpretation of this experimental procedure .we mention that several refinements of this scheme are possible and straightforward .for example , a weighted inner product can be introduced for the user signal space which puts different preferences on the satisfaction of the user objectives during the iterative optimization process .for example , if the importance of suppressing interferences with other users is valued higher than to get an optimal signal quality at the specified user , a higher weight can be put into the inner product at those users which did not start the communication process . a user who does not care about these interferences , simply puts a very small weight into his component of the inner product of .notice that there is no mechanism directly built into this procedure which prevents the energy emitted by the base antenna to increase more than the communication system can support .for example , if the subspace of signals is not empty , then it might happen that during the iteration described above ( e.g. due to noise ) an increasing amount of energy is put into signals emitted by the base station which are in this subspace and which all produce zero contributions to the measurements at all users .more generally , elements of the subspace of signals for a very small threshold , might cause problems during the iteration if the pilot signal chosen by the user has contributions in the subspace ( i.e. in the space of all with ) .this is so because in the effort of decreasing the mismatch between and , the base antenna might need to put signals with high energy into the system in order to get only small improvements in the signal match at the user side . 
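To make the correspondence between this experimental recipe and the gradient iteration concrete, here is a small, purely illustrative numerical sketch — not the paper's setup. A discrete multipath channel built from random impulse responses plays the role of the base-to-users operator, the reciprocal channel (same impulse responses, reciprocity assumed) plays the role of the users-to-base operator, and the adjoint is realized exactly as two time reversals sandwiching a back-propagation. All sizes, impulse responses, the pilot, the step size, and the iteration count are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
J, K, T = 3, 4, 64          # users, base antennas, samples per signal (illustrative)

def conv_matrix(h):
    """T x T causal convolution (lower-triangular Toeplitz) matrix for impulse response h."""
    H = np.zeros((T, T))
    for t in range(T):
        for s in range(t + 1):
            H[t, s] = h[t - s]
    return H

# one random multipath impulse response per (user, antenna) pair
h = [[np.zeros(T) for _ in range(K)] for _ in range(J)]
for j in range(J):
    for k in range(K):
        h[j][k][rng.integers(0, T // 2, size=8)] += rng.normal(size=8)

# forward (base -> users) and reciprocal (users -> base) channel matrices
A = np.block([[conv_matrix(h[j][k]) for k in range(K)] for j in range(J)])
B = np.block([[conv_matrix(h[j][k]) for j in range(J)] for k in range(K)])

def TR(sig, n_ch):
    """time-reversal mirror: reverse the time axis of each channel of a stacked signal"""
    return np.concatenate([sig[i * T:(i + 1) * T][::-1] for i in range(n_ch)])

# the adjoint of A is realized by experiments: A^T q = TR_Z( B ( TR_S q ) )
q = rng.normal(size=J * T)
assert np.allclose(A.T @ q, TR(B @ TR(q, J), K))

# iterated time-reversal gradient scheme for the pilot of user j0
j0 = 1
s_hat = np.zeros(J * T)
s_hat[j0 * T + T // 2] = 1.0                    # short pulse pilot; zero objective elsewhere

r = np.zeros(K * T)
tau = 1.0 / np.linalg.norm(A, 2) ** 2           # safe step size for the gradient method
for n in range(500):
    residual = s_hat - A @ r                    # mismatch formed at the users
    r = r + tau * TR(B @ TR(residual, J), K)    # time-reverse, back-propagate, time-reverse, add

print("relative misfit:", np.linalg.norm(A @ r - s_hat) / np.linalg.norm(s_hat))
```

The assert checks, for this toy channel, the discrete analogue of the adjoint/time-reversal relation recalled above, and the loop is the six-step experimental procedure written as plain linear algebra.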
since the environment ( and therefore the operator ) is unknown a - priori , it is difficult to avoid the existence of such contributions in the pilot signal .one possible way to prevent the energy emitted by the base station to increase artificially would be to project the signals onto the orthogonal complements of the subspaces or ( if they are known or can be constructed by some means ) prior to their emission .alternatively , the iteration can be stopped at an early stage before these unwanted contributions start to build up .( this in fact has been suggested in ) . in the following subsectionwe introduce an alternative way of ensuring that the energy emitted by the base station stays reasonably bounded in the effort of fitting the pilot signal at the users .consider the regularized problem with some suitably chosen regularization parameter . in this problem formulation a trade - off is sought between a signal fit at the user side and a minimized energy emission at the base station .the trade - off parameter is the regularization parameter .instead of ( [ mat.3 ] ) we need to consider now the negative gradient direction is now given by , such that the regularized iteration reads : where we have replaced by .the time - reversal iteration can be expanded into the following practical scheme comparing with ( [ gmc.2 ] ) , we see that the adaptations which need to be applied in the practical implementation for stabilizing the basic algorithm can easily be done .in this section we want to propose an alternative scheme for solving the inverse problem of communication . as mentioned above, a major drawback of the basic approach ( [ mat.2 ] ) is that the energy emitted by the base station is not limited explicitly when solving the optimization problem .the regularized version presented above alleviates this problem .however , we want to mention here that , under certain assumptions , there is an alternative scheme which can be employed instead and which has an energy constraint directly built in . under the formal assumption that there exists at least one ( and presumably more than one ) solution of the inverse problem at hand ( i.e. the formally underdetermined case ), we can look for the * minimum norm solution * in hilbert spaces this solution has an explicit form .it is here , the operator acts as a filter on the pilot signal . instead of sending the pilot signal to the base station , the users send the filtered version of it .certainly , a method must be found in order to apply the filter to the pilot signal .one possibility of doing so would be to try to determine the operator explicitly by a series of time - reversal experiments on some set of basis functions of , and then invert this operator numerically. however , this might not be practical in many situations .( it certainly would be slow and it would involve a significant amount of signal - processing , which we want to avoid here . ) therefore , we propose an alternative procedure .first , we notice that there is no need to determine the whole operator , but that we only have to apply it to one specific signal , namely .let us introduce the short notation in this notation , we are looking for a signal such that .we propose to solve this equation in the least squares sense : moreover , as a suitable method for practically finding this solution , we want to use the * gradient method*. starting with the initial guess , the gradient method reads where is the adjoint operator to and is again some step - size . 
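One natural, but here necessarily hedged, reading of the stripped formulas is that the user-side filter is the inverse of the operator obtained by composing a users-to-base experiment with a base-to-users experiment, so that

\[ {\bf r}_{\mathrm{MN}}=A^{*}\big(AA^{*}\big)^{-1}\hat{\bf s}, \qquad AA^{*}\,\boldsymbol{\varphi}=\hat{\bf s}, \qquad \boldsymbol{\varphi}_{n+1}=\boldsymbol{\varphi}_{n}+\tau_{n}\,\big(AA^{*}\big)\big(\hat{\bf s}-AA^{*}\boldsymbol{\varphi}_{n}\big), \qquad {\bf r}=A^{*}\boldsymbol{\varphi}_{\mathrm{final}}, \]

where the self-adjointness of AA^{*} was used in the gradient step; every application of A and A^{*} is realizable as propagation and time-reversal experiments, so the filter can be applied without knowing the channel.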
expanding this expression , and taking into account and ,we arrive at in the practical implementation , we arrive at the following iterative scheme : this iteration can be implemented by a series of time - reversal experiments , without the need of heavy signal - processing. the final step of the above algorithm amounts to applying to the result of the gradient iteration for calculating , which yields then .this will then be the signal to be applied by the base station during the communication process with the user .in some situations it might be expected that the operator is ill - conditioned , such that its inversion might cause instabilities , in particular when noisy signals are involved . for those situations , a regularized form ofthe minimum norm solution is available , namely where denotes the identity operator in and is some suitably chosen regularization parameter .the necessary adjustments in the gradient iteration for applying to are easily done .we only mention here the resulting procedure for the implementation of this gradient method by a series of time - reversal experiments : again , the last step shown above is a final application of to the result of the gradient iteration for calculating , which yields then .this will then be the signal to be applied by the base station during the communication process with the user .we have introduced above the regularized least squares solution of the inverse problem of communication , namely with being the regularization parameter . in hilbert spaces ,the solution of ( [ rls.1 ] ) has an explicit form .it is where is the identity operator in .it is therefore tempting to try to implement also this direct form as a series of time - reversal experiments and compare its performance with the gradient method as it was described above . 
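For reference, a hedged reconstruction of the explicit regularized forms referred to in this passage, together with the standard identity that connects them (α > 0 the regularization parameter, I_Z and I_S the identity operators on the two signal spaces):

\[ {\bf r}_{\alpha}=\big(A^{*}A+\alpha I_{Z}\big)^{-1}A^{*}\hat{\bf s}=A^{*}\big(AA^{*}+\alpha I_{S}\big)^{-1}\hat{\bf s}; \]

the first form applies the filter at the base station and the second applies it to the user signal, which is exactly the distinction taken up next.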
as our last strategywhich we present in this paper , we want to show here that such an alternative direct implementation of ( [ rls.1 ] ) is in fact possible .notice that in ( [ rls.1 ] ) the filtering operator is applied at the base station , in contrast to the previous case where the user signal was filtered by the operator .analogously to the previous case , we need to find a practical way to apply this filter to a signal at the base station .we propose again to solve the equation in the least squares sense , where .defining and using and , we arrive at the following * gradient iteration * for solving problem ( [ rls.3 ] ) : this gives rise to the following practical implementation by a series of time - reversal experiments : will then be the signal to be applied by the base station during the communication process with the user .in many practical applications , only partial measurements of the whole wave - field are available .for example , in ocean acoustics often only pressure is measured , whereas the velocity field is not part of the measurement process .similarly , in wireless communication only one or two components of the electric field might be measured simultaneously , but the remaining electric components and all magnetic components are missing .we want to demonstrate in this section that all results presented above are valid also in this situation of partial measurements , with the suitable adaptations .mathematically , the measurement operator needs to be adapted for the situation of partial measurements .let us concentrate here on the special situation that only one component of the incoming wave field is measured by the users and the base station .all other possible situations will then just be combinations of this particular case .it might also occur the situation that users can measure a different partial set of components than the base station .that case also follows directly from this canonical situation .we introduce the new signal space at the base station )^{{k}} ] all fields along this boundary are identically zero .this is expressed by the boundary conditions given in ( [ sym.4 ] ) and ( [ adj.5 ] ) .let be a solution of ( [ sym.1]),([sym.2]),([sym.4 ] ) , and a solution of ( [ adj.3])([adj.5 ] ) .then the first term on the left hand side of ( [ pt2.1 ] ) and the third term on the right hand side cancel each other because of ( [ sym.1 ] ) .the second term on the left hand side and the first term on the right hand side cancel each other because of ( [ adj.3 ] ) .the -term and the term vanish due to ( [ adj.4 ] ) and ( [ sym.2 ] ) , respectively , and the boundary integral vanishes because of ( [ sym.4 ] ) and ( [ adj.5 ] ) . the remaining terms ( i.e. the third term on the left hand side and the second term on the right hand side ) can be written as with as defined in ( [ adj.6 ] ) . 
prove the lemma by using _greens formula _ : \,d{{\bf x}}dt\ ] ] \,d{{\bf x}}dt + \,\int_{0}^{t}\int_{\omega}\left [ { { \bf q}}_{{{\bf v}}}{{\bf v}}_a \,+\ , { { \bf q}}_pp_a \right]\,d{{\bf x}}dt\ ] ] \,d{{\bf x}}dt\ ] ] \,d{{\bf x}}dt + \,\int_{0}^{t}\int_{\omega}\left [ { { \bf q}}_{{{\bf v}}}{{\bf v}}_a \,+\ , { { \bf q}}_pp_a \right]\,d{{\bf x}}dt\ ] ] \,d{{\bf x}}\,+\,\int_{\omega } \kappa \big [ ( p_{f}p_{a})({{\bf x}},t )-(p_{f}p_{a})({{\bf x}},0)\big]\,d{{\bf x}}.\ ] ] this equation has the form ( [ pt2.1 ] ) .notice that we have augmented green s formula in ( [ pt3.1 ] ) , as already shown in ( [ pt2.1 ] ) , by some terms which appear in identical form on the left hand side and on the right hand side . the first term on the left hand side of equation ( [ pt3.1 ] ) and the third term on the right hand sidecancel each other due to ( [ atrm.1]),([atrm.2 ] ) . the second term on the left hand side and the first term on the right hand sidecancel each other because of ( [ atrm.4]),([atrm.5 ] ) .the ( )-terms and the ( )-terms vanish due to ( [ atrm.6 ] ) , ( [ atrm.3 ] ) , respectively , and the boundary terms vanish because of zero boundary conditions .we are left over with the equation\,d{{\bf x}}dt \,=\,\int_{0}^{t}\int_{\omega}\left [ { { \bf q}}_{{{\bf v}}}{{\bf v}}_a \,+\ , { { \bf q}}_p p_a \right]\,d{{\bf x}}dt\ ] ] defining by ( [ atrm.7 ] ) , this can be written as therefore , is in fact the adjoint of , and the lemma is proven .we prove the lemma by using _ green s formula _ : \,d{{\bf x}}dt\ ] ] \,d{{\bf x}}dt \,+\,\int_{0}^{t}\int_{\omega}\left[{{\bf q}}_e { { \bf e}}_a\,+\,{{\bf q}}_h{{\bf h}}_a \right]\,d{{\bf x}}dt\ ] ] \,d{{\bf x}}dt\ ] ] \,d{{\bf x}}dt \,+\,\int_{0}^{t}\int_{\omega}\left[{{\bf q}}_e { { \bf e}}_a\,+\,{{\bf q}}_h{{\bf h}}_a \right]\,d{{\bf x}}dt\ ] ] \,d{{\bf x}}\,+\,\int_{\omega}\mu\big [ ( { { \bf h}}_{f}{{\bf h}}_{a})(t)- ( { { \bf h}}_{f}{{\bf h}}_{a})(0)\big]\,d{{\bf x}}\ ] ] this equation has the form ( [ pt2.1 ] ) .notice that we have augmented green s formula in ( [ pt4.1 ] ) , as already shown in ( [ pt2.1 ] ) , by some terms which appear in identical form on the left hand side and on the right hand side . the first term on the left hand side of equation ( [ pt4.1 ] ) and the third term on the right hand sidecancel each other because of ( [ etrm.1 ] ) and ( [ etrm.2 ] ) . the second term on the left hand side and the first term on the right hand sidecancel each other because of ( [ etrm.4 ] ) , ( [ etrm.5 ] ) .the ( )-terms and the ( )-terms vanish due to ( [ etrm.3 ] ) and ( [ etrm.6 ] ) .the boundary terms vanish because of zero boundary conditions .we are left over with the equation \,d{{\bfx}}dt \,=\ , \int_{0}^{t}\int_{\omega}\left[{{\bf q}}_e { { \bf e}}_a\,+\,{{\bf q}}_h{{\bf h}}_a \right]\,d{{\bf x}}dt .\ ] ] defining by ( [ etrm.7 ] ) , this can be written as therefore , is in fact the adjoint of , and the lemma is proven .99 kuperman w a , hodgkiss w s , song h c , akal t , ferla c , and jackson d r 1998 phase conjugation in the ocean : experimental demonstration of an acoustic time reversal mirror _ j. acoust .* 103 * 25 - 40 | we establish a direct link between the time - reversal technique and the so - called adjoint method for imaging . using this relationship , we derive new solution strategies for an inverse problem which arises in telecommunication . these strategies are all based on iterative time - reversal experiments , which try to solve the inverse problem _ experimentally _ instead of computationally . 
we will focus in particular on examples from underwater acoustic communication and wireless communication in a multiple - input multiple - output ( mimo ) setup . |
it is common for dynamical systems to have two or more coexisting attractors . in predicting the long - term behavior of a such a system , it is important to determine sets of initial conditions of orbits that approach each attractor ( i.e. , the basins of attraction ) .the boundaries of such sets are often fractal ( , chapter 5 of , and references therein ) .the fine - scale fractal structure of such a boundary implies increased sensitivity to errors in the initial conditions : even a considerable decrease in the uncertainty of initial conditions may yield only a relatively small decrease in the probability of making an error in determining in which basin such an initial condition belongs . for discussion of fractal basin boundaries in experiments ,see chapter 14 of .thompson and soliman showed that another source of uncertainty induced by fractal basin boundaries may arise in situations in which there is slow ( adiabatic ) variation of the system .for example , consider a fixed point attractor of a map ( a node ) . asa system parameter varies slowly , an orbit initially placed on the node attractor moves with time , closely following the location of the solution for the fixed point in the absence of the temporal parameter variation . as the parameter varies , the node attractor may suffer a saddle - node bifurcation .for definiteness , say that the node attractor exists for values of the parameter in the range , and that the saddle - node bifurcation of the node occurs at .now assume that , for a parameter interval ] .the map }(x) ] by adding a function ( which depends on a parameter ) that will cause a saddle - node bifurcation of one of the attracting fixed points but not of the other two [ see figs .1(a ) and 1(b ) ] .we investigate }(x)+\mu\sin(3\pi x),\quad \mbox{where } \quad g(x)=3.832\,x(1-x).\end{aligned}\ ] ] numerical calculations show that the function satisfies all the conditions of the saddle - node bifurcation theorem for having a backward saddle - node bifurcation at and .figure 2(a ) displays how the basins of the three attracting fixed points of the map change with variation of . for third iterate of the logistic map is unperturbed , and it has three attracting fixed points whose basins we color - coded with blue , green and red . for every value of , the red region ] is the set of initial conditions attracted to the middle stable fixed point which we denote .the blue region ] of the newly created stable fixed point immediately has infinitely many disjoint intervals and its boundary displays fractal structure . according to the terminology of robert et al . , we may consider this bifurcation an example of an ` explosion ' .figure 3 graphs the computed dimension of the fractal basin boundary versus the parameter . for , we observe that appears to be a continuous function of .park et al . argue that the fractal dimension of the basin boundary near , for , scales as with the dimension at ( is less than the dimension of the phase space ) , and a positive constant .figure 3 shows that the boundary dimension experiences a discontinuous jump at the saddle - node bifurcation when .we believe that this is due to the fact that the basin ] , denoted , as a 0 , and the attractor of the red region ] and ] and ] , sufficiently far from the fractal basin boundary , and that is not too small ( i.e. , ) . 
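The construction just described is easy to explore numerically. The sketch below iterates a grid of initial conditions of f_mu(x) = g(g(g(x))) + mu sin(3 pi x) with g(x) = 3.832 x (1 - x), groups them by the attracting fixed point they reach, and reports the basin fractions; the sample values of mu, the grid sizes, and the tolerances are illustrative assumptions (in particular, no claim is made here about the exact parameter value of the saddle-node bifurcation), and orbits that escape [0,1] or fail to settle are simply left unlabelled.

```python
import numpy as np

def g(x):
    return 3.832 * x * (1.0 - x)

def f(x, mu):
    """third iterate of the logistic map plus the perturbation that induces the saddle-node"""
    return g(g(g(x))) + mu * np.sin(3.0 * np.pi * x)

def basins(mu, n_grid=4000, n_iter=4000):
    """iterate a grid of initial conditions and group them by the attracting fixed point reached"""
    x = np.linspace(0.0, 1.0, n_grid)
    for _ in range(n_iter):
        x = np.clip(f(x, mu), -5.0, 5.0)          # the clip only tames orbits escaping [0,1]
    settled = np.abs(f(x, mu) - x) < 1e-8         # orbits that have reached a fixed point
    fps = []
    for xi in x[settled]:
        if not any(abs(xi - p) < 1e-5 for p in fps):
            fps.append(float(xi))
    # keep only attracting fixed points (|f'| < 1), estimated by a centered difference
    fps = sorted(p for p in fps
                 if abs(f(p + 1e-6, mu) - f(p - 1e-6, mu)) / 2e-6 < 1.0)
    labels = -np.ones(n_grid, dtype=int)
    for i, p in enumerate(fps):
        labels[np.abs(x - p) < 1e-5] = i
    return fps, labels

if __name__ == "__main__":
    for mu in (-0.05, -0.01, 0.0, 0.01, 0.05):    # arbitrary sample values of the parameter
        fps, lab = basins(mu)
        fracs = [float(np.mean(lab == i)) for i in range(len(fps))]
        print(f"mu = {mu:+.3f}: {len(fps)} attracting fixed points at {np.round(fps, 4)}, "
              f"basin fractions {np.round(fracs, 3)}")
```

Scanning mu and watching where the number of attracting fixed points (and the character of their basins) changes is one crude way to locate the saddle-node bifurcation studied in the paper.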
if one changes the horizontal scale of fig .6(a ) from to [ see fig .6(b ) ] , the complex band structure appears asymptotically periodic .furthermore , we find that the period in of the structure in fig .6(b ) asymptotically approaches as becomes small . in order to explain this result, we again consider the map , the local approximation of in the region of the saddle - node bifurcation .equations can be approximated by we perform the following numerical experiment .we consider orbits of our approximate two dimensional map given by eq .starting at .we define a final state function of an orbit swept with parameter in the following way .it is 0 if the orbit has at least one iterate in a specified fixed interval far from the saddle - node bifurcation , and is 1 , otherwise .in particular , we take the final state of a swept orbit to be 0 if there exists such that , and to be 1 otherwise .figure 6(c ) graphs the corresponding numerical results .similar to fig .6(b ) , we observe periodic behavior in with period .in contrast to fig .6(b ) where the white band structure seems fractal , the structure within each period in fig .6(c ) consists of only one interval where the final state is 0 and one interval where the final state is 1 .this is because is a single interval , while the green basin [ denoted 0 in fig .6(b ) ] has an infinite number of disjoint intervals and a fractal boundary ( see fig . 2 ) .+ with the similarity between figs .6(b ) and 6(c ) as a guide , we are now in a position to give a theoretical analysis explaining the observed periodicity in . in particular , we now know that this can be explained using the canonical map , and that the periodicity result is thus universal [ i.e. , independent of the details of our particular example , eq . ]. for slow sweeping ( i.e. , small ) , consecutive iterates of in the vicinity of and differ only slightly , and we further approximate the system by the following ricatti differential equation , the solution of eq . can be expressed in terms of the airy functions and and their derivatives , denoted by and , where and is a constant to be determined from the initial condition .we are only interested in the case of slow sweeping , , and ( which is the stable fixed point of destroyed by the saddle - node bifurcation at ) .in particular , we will consider the case where and ( i.e. , ) .using to solve for yields \gg 1 ] , which can be satisfied even if ] or ^ 2\gg \xi_f ] .the difference ] in eq .can be neglected , and we get . figure 7 graphs numerical results of ^{-1} ] , which agrees well with the prediction of the above analysis and our numerical value for at the bifurcation , .an alternate point of view on this scaling property is as follows . for ( i.e. , ) and slow sweeping ( i.e. , small ) , the orbit closely follows the stable fixed point attractor of , until , and the saddle - node bifurcation takes place . however , due to the discreteness of , the first nonnegative value of depends on and ( see fig .now consider two values of , one satisfying , and another satisfying . because and are very close ( for large )_ and _ both lead to pass through ( one at time , and the other at time ) , it is reasonable to assume that their orbits for are similar ( except for a time shift ) ; i.e. 
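The sweeping experiment described here can be sketched with the canonical saddle-node form x -> x + x^2 + mu, mu -> mu + s; note that this specific map, the starting point on the pre-bifurcation node, the monitoring window, and all parameter values below are illustrative assumptions standing in for the stripped equations rather than the paper's exact choices.

```python
import numpy as np

def final_state(mu0, s, window=(0.5, 0.7), x_escape=5.0, n_max=200000):
    """sweep the canonical saddle-node map x -> x + x*x + mu, mu -> mu + s, starting on the
    pre-bifurcation node x0 = -sqrt(-mu0); return 0 if some iterate falls inside `window`
    (a fixed interval past the bifurcation), and 1 otherwise."""
    x, mu = -np.sqrt(-mu0), mu0
    for _ in range(n_max):
        x = x + x * x + mu
        mu += s
        if window[0] <= x <= window[1]:
            return 0
        if x > x_escape:
            return 1
    return 1

if __name__ == "__main__":
    s, mu_start = 1.0e-4, -0.01
    # scan mu0 over a few multiples of s: the final-state pattern is expected to repeat
    # (asymptotically) with period s in the initial parameter value
    offsets = np.linspace(0.0, 3.0, 601)                     # mu0 = mu_start + offset * s
    states = [final_state(mu_start + d * s, s) for d in offsets]
    switches = [round(float(offsets[i]), 3)
                for i in range(1, len(offsets)) if states[i] != states[i - 1]]
    print("final state switches at offsets (in units of s):", switches)
```

Comparing the switch positions in consecutive unit intervals of the offset makes the (asymptotic) periodicity of the band structure visible without any reference to the global dynamics.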
, they go to the same attractor .thus , the period of is approximately .+ we now consider the intervals of between the centers of consecutive wide white bands in fig .figure 9 graphs the calculated fractal dimension of the boundary between white bands in these consecutive intervals versus their center value of . from fig . 9, we see that as increases , the graph of the fractal dimension does not converge to a definite value , but displays further structure .nevertheless , numerics show that as becomes large ( i.e. , in the range of ) , varies around the value 0.952 .this is consistent with the numerics presented in fig .4(b ) which graphs the dimension of the fractal basin boundary for the time - independent map , at fixed values of the parameter where . thus , for large , provides an estimate of the dimension of the fractal basin boundary in the absence of sweeping at .+ we now discuss a possible experimental application of our analysis .the conceptually most straightforward method of measuring a fractal basin boundary would be to repeat many experiments each with precisely chosen initial conditions . by determining the final attractor corresponding to each initial condition , basins of attractioncould conceivably be mapped out .however , it is commonly the case that accurate control of initial conditions is not feasible for experiments . thus , the application of this direct method is limited , and , as a consequence , fractal basin boundaries have received little experimental study , in spite of their fundamental importance . if a saddle - node bifurcation occurs on the fractal basin boundary , an experiment can be arranged to take advantage of this . in this case , the purpose of the experiment would be to measure the dimension as an estimate of the fractal dimension of the basin boundary .the measurements would determine the final attractor of orbits starting at the attractor to be destroyed by the saddle - node bifurcation , and swept through the saddle - node bifurcation at different velocities ( i.e. , the experimental data corresponding to the numerics in fig . 6 ) .this does not require precise control of the initial conditions of the orbits .it is sufficient for the initial condition to be in the basin of the attractor to be destroyed by the saddle - node bifurcation ; after enough time , the orbit will be as close to the attractor as the noise level allows .then , the orbit may be swept through the saddle - node bifurcation .the final states of the orbits are attractors ; in their final states , orbits are robust to noise and to measurement perturbations .the only parameters which require rigorous control are the sweeping velocity ( i.e. , ) and the initial value of the parameter to be swept ( i.e. , ) ; precise knowledge of the parameter value where the saddle - node bifurcation takes place ( i.e. , ) is not needed .[ it is also required that the noise level be sufficiently low ( see sec .[ sec : noise ] ) . 
]a question of interest is how much time it takes for a swept orbit to reach the final attracting state .namely , we ask how many iterations with are needed for the orbit to reach a neighborhood of the attractor having the green basin .due to slow sweeping , the location of the attractor changes slightly on every iterate .if is a fixed point attractor of ( with constant ) , then a small change in the parameter , yields a change in the position of the fixed point attractor , we consider the swept orbit to have reached its final attractor if consecutive iterates differ by about ( which is proportional to ) . for numerical purposes , we consider that the orbit has reached its final state if . in our numerical experiments ,this condition is satisfied by every orbit before reaches its final value .we refer to the number of iterations with needed to reach the final state as the _ capture time _ of the corresponding orbit .figure 10 plots the capture time by the attractor [ having the green basin in fig . 2 ] versus for a range corresponding to one period of the structure in fig .no points are plotted for values of for which the orbit reaches the attractor .the capture time graph has fractal features , since for many values of the orbit gets close to the fractal boundary between ] . using the fact that the final destination of the orbit versus is asymptotically periodic[ see fig .6(b ) ] , we can provide a further description of the capture time graph .we consider the series of the largest intervals of for which the orbit reaches the attractor [ see fig .6(b ) ; we refer to the wide white band around and the similar ones which are ( asymptotically ) separated by an integer number of periods ] .orbits swept with at the centers of these intervals spend only a small number of iterations close to the common fractal boundary of ] .thus , the capture time of such similar orbits does not depend on the structure of the fractal basin boundary .we use eq . as an approximate description of these orbits .a swept orbit reaches its final attracting state as becomes large .then , the orbit is rapidly trapped in the neighborhood of one of the swept attractors of .thus , we equate the argument of the airy function in the denominator to its first root [ see ] , solve for , and substract ( the time for to reach the bifurcation value ) .this yields the following approximate formula for the capture time where is the largest root of the airy function .thus , we predict that for small , a log - log plot of the capture time of the selected orbits versus is a straight line with slope -1/3 .figure 11 shows the corresponding numerical results .the best fitting line ( not shown ) has slope -0.31 , in agreement with our prediction .we now consider the addition of noise .thus , we change our swept dynamical system to where is random with uniform probability density in the interval ] , and the time step used is .figure 16(a ) shows the dependence of the probability of approaching the attractor represented as a 1 versus the noise amplitude for three specially selected values of ( centers of white bands in the structure of fig .15 where the swept orbit reaches the attracting state represented by 1 ) spread over one decade . 
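Two quantitative statements from this part of the paper can be written out explicitly; both lines below are hedged reconstructions, the first obtained by following the recipe described above (equating the argument of the Airy function to its first zero), the second matching the 5/6-power scaling with the drift rate quoted in the abstract and anticipating the data collapse discussed next:

\[ n_{\mathrm{capture}}\;\approx\;|a_{1}|\,s^{-1/3}, \qquad \mathrm{Ai}(a_{1})=0, \quad a_{1}\simeq-2.338, \]

consistent with the predicted slope of -1/3 in the log-log plot, and the capture-probability curves obtained for sweeping rates spread over a decade are expected to collapse when the noise amplitude is rescaled by s^{5/6}.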
figure 16(b ) shows collapse of the data in fig .16(a ) to a single curve when the noise amplitude is rescaled by , as predicted by our previous one - dimensional analysis ( sec .[ sec : noise ] ) .thus , we believe that the scaling properties of the indeterminate saddle - node bifurcation we found in one - dimensional discrete maps are also shared by higher dimensional flows .in this section we consider the case of a one dimensional map having two attractors a and b , one of which ( i.e. , a ) exists for all ] ) .when an orbit is initially on b , and is slowly increased through , the orbit will always go to a ( which is the only attractor for ) .however , it is possible to distinguish between two ( or more ) different ways of approaching a. [ in particular , we are interested in ways of approach that can be distinguished in a coordinate - free ( i.e. , invariant ) manner . ] as we show in this section , the way in which a is approached can be indeterminate . in this case, the indeterminacy is connected with the existence of an invariant nonattracting cantor set embedded in the basin of a for . as an illustration ,we construct the following model calculations show that satisfies all the requirements of the saddle - node bifurcation theorem for undergoing a backward saddle - node bifurcation at and .figure 17(a ) shows the graph of versus at .figure 17(b ) shows how the basin structure of the map varies with the parameter . for positive values of , has only one attractor which is at minus infinity .the basin of this attractor is the whole real axis .as decreases through , a new fixed point attractor is created at .the basin of attraction of this fixed point has infinitely many disjoint intervals displaying fractal features [ indicated in black in fig .this is similar to the blue basin ] is decreasing and diverges to minus infinity .for each value of , let be the fixed point of to the right of at which .a point is colored green if its trajectory diverges to minus infinity and it passes through the interval , and it is colored red if its trajectory diverges to minus infinity and it does not pass through the interval .denote the collection of points that are colored green by ] .using the methods and techniques of , it can be shown that the collection of points which are common boundary points of ] is a cantor set ] .figure 18(b ) is a zoom of figure 18(a ) in the region of the saddle - node bifurcation . for values of , in the vicinity of ,one notices a fractal alternation of red and green stripes .the green and red stripe structure in fig .18(b ) shares qualitative properties with the structure in fig .all the analysis in sec .[ sec2 ] can be adapted straightfowardly to fit this situation .figure 19 shows how the chaotic saddle of the map varies with .the chaotic saddle is generated numerically using the pim - triple method .for an explanation of this method see nusse and yorke . using arguments similar to those in sec .[ sec : scdim ] , we predict that changing the horizontal axis of fig . 19 from to makes the chaotic saddle asymptotically periodic .numerical results confirming this are presented in fig .for given by , we were able to find a parameter value where changing the horizontal axis of fig .19 from to [ see fig . 20(b ) ] apparently makes the chaotic saddle asymptotically periodic [ with a different period than that of fig .20(a ) ] . 
as in the case discussed in sec .[ sec2 ] , past the saddle - node bifurcation of at , infinitely many other saddle - node bifurcations of periodic orbits take place on the invariant cantor set ] also contains a cantor set . by coloring this whole segment green ,this information is lost .therefore , the coloring scheme should be adapted if one wants to have the whole invariant cantor set represented for every .for example , if a trajectory that diverges to minus infinity contains a point that is greater than then the initial point is colored green , if a trajectory that diverges to minus infinity contains a point that is greater than but not greater than then the initial point is colored yellow .a point is colored red , if its trajectory diverges to minus infinity and does not have a point that is greater than . then the collection of boundary points ( a point is a boundary point if every open neighborhood of contains points of at least two different colors ) is a cantor set that contains the cantor set described above . | we analyze situations where a saddle - node bifurcation occurs on a fractal basin boundary . specifically , we are interested in what happens when a system parameter is slowly swept in time through the bifurcation . such situations are known to be indeterminate in the sense that it is difficult to predict the eventual fate of an orbit that tracks the pre - bifurcation node attractor as the system parameter is swept through the bifurcation . in this paper we investigate the scaling of ( 1 ) the fractal basin boundary of the static ( i.e. , unswept ) system near the saddle - node bifurcation , ( 2 ) the dependence of the orbit s final destination on the sweeping rate , ( 3 ) the dependence of the time it takes for an attractor to capture a swept orbit on the sweeping rate , and ( 4 ) the dependence of the final attractor capture probability on the noise level . with respect to noise , our main result is that the effect of noise scales with the 5/6 power of the parameter drift rate . our approach is to first investigate all these issues using one - dimensional map models . the simplification of treatment inherent in one dimension greatly facilitates analysis and numerical experiment , aiding us in obtaining the new results listed above . following our one - dimensional investigations , we explain that these results can be applied to two - dimensional systems . we show , through numerical experiments on a periodically forced second order differential equation example , that the scalings we have found also apply to systems that result in two dimensional maps . |
hydrodynamical and microphysical processes of baryonic matter play an important role in structure formation at the smaller sub cluster scales . indeed , microphysics can play a more important role than gravity , especially when the cooling time of the gas is much shorter than the dynamical or hubble times .protogalactic clouds , which may form , for example , at high redshifts in cdm models can collapse through cooling instabilities to form an early generation of stars .the feedback from primordial stars can change the physical state of the pre galactic medium and thus have considerable influence over the subsequent formation of stars and galaxies and the general state of the intergalactic medium ( couchman & rees 1986 ; tegmark et al .microphysical processes are also very important at the center most regions of cosmological sheets or pancakes. originally studied by zeldovich ( 1970 ) in the context of neutrino - dominated cosmologies , sheets are ubiquitous features in nonlinear structure formation simulations of cdm - like models with gas , and manifest on a spectrum of length scales and formation epochs .cooling processes occur on very short time scales at the center most densest parts of the pancake structures where stars and galaxies can form from the fragmentation of the gas .( bond et al .1984 ; shapiro & struck marcell 1985 ; yuan , centrella & norman 1991 ; anninos & norman 1996 ) .it is well known that when nonequilibrium atomic reactions are properly taken into account , the cooling time can be shorter than the hydrogen recombination time and that gas which cools to will likely cool faster than it can recombine .the effect of this nonequilibrium cooling is to leave behind a greater residual of free electrons and ions , as compared to the equilibrium case .the free electrons can be captured by neutral hydrogen to form that subsequently produce hydrogen molecules .if large concentrations of molecules can form , the cooling is dominated by the vibrational / rotational modes of molecular hydrogen which acts to efficiently cool the gas to about , thereby reducing the jeans mass of the gas .hydrogen molecules can therefore play a crucial role in the formation of stars as they provide the means for cloud fragments to collapse and dissipate their energy . however , the typically high computational requirements and technical difficulties needed to solve the chemical rate equations relevant for production in hydrodynamic flows have forced previous authors to impose simplifying assumptions such as the steady state shock condition which reduces the problem to zero dimension . 
in this case , only the time development of the hydrodynamic and thermodynamic variables are solved ( izotov & kolesnick 1984 ; maclow & shull 1986 ; shapiro & kang 1987 ; kang & shapiro 1992 ) .all of these studies have consistently found that the mass fraction of can reach behind sufficiently strong shocks , which is adequate to cool the gas to temperatures of order well within a hubble time .more recently haiman , thoul & loeb ( 1995 ) have investigated the formation of low mass objects in one dimensional numerical calculations .they confirm the importance of in the collapse of spherically symmetric isolated objects at high redshifts .although much insight has been gained about the chemical aspects of molecular hydrogen formation and cooling , it remains to incorporate chemical reaction flows in realistic cosmological models .this paper discusses a method that we have developed for solving the kinetic rate equations with multi species chemistry in nonequilibrium and self consistently with the hydrodynamic and equations in an expanding flrw universe .the method is based on a backward differencing formula ( bdf ) for the required stability when solving stiff sets of equations , and is designed for both accuracy but especially speed so that it may be used in three dimensional codes with a minimal strain on computational resources . in all , we solve for 28 kinetic reactions including collisional and radiative processes for nine different species : , and , which we track individually with their unique mass transport equations .we have also implemented a comprehensive model for the radiative cooling of the gas that includes atomic line excitation , recombination , collisional ionization , free - free transitions , molecular line excitations , and compton scattering of the cosmic background radiation by electrons .the set of hydrodynamic , , kinetic and cosmological equations that we solve are summarized in [ sec : equations ] .section [ sec : numerical ] discusses the bdf method and its integration into the hydrodynamic solver .several tests of our code are presented in [ sec : codetests ] , including 1d radiative shock waves , 2d cosmological sheets , and fully 3d simulations of cdm cosmological evolutions in which we compare the bdf method to results obtained when the packaged routine lsodar ( hindmarsh 1983 ; petzold 1983 ) is substituted in its place .we provide concluding remarks in [ sec : summary ] .finally we note that a companion paper has been written which discusses the chemical model and cooling functions in more detail ( abel et al .1996 ) . in that paper, we motivate the model and argue its comprehensive nature in the choice of reactions , insofar as the formation of hydrogen molecules in cosmological environments is concerned .we also provide more up to date and accurate fits to the different rate coefficients . 
for completeness ,we tabulate the reaction list and cooling processes in appendices a and b of this paper , but leave the rate coefficients to abel et al .the hydrodynamical equations for mass , momentum and energy conservation in an expanding frw universe with comoving coordinates are + 5{\frac{\dot{a}}{a}}{\rho_b}{{v}_{b , i } } = - { \frac{1}{a^2}}{\frac{\partial p}{\partial x_i } } - { \frac{\rho_b}{a^2}}{\frac{\partial \phi}{\partial x_i } } , \label{hydromom}\ ] ] where , and are the baryonic density , pressure and specific internal energy defined in the proper reference frame , is the comoving peculiar baryonic velocity , is the comoving gravitational potential that includes baryonic plus dark matter contributions , is the cosmological scale factor , and and are the microphysical cooling and heating rates . the equations for collisionless dark matter in comoving coordinates are the baryonic and dark matter components are coupled through poisson s equation for the gravitational potential where is the total density and is the proper background density of the universe .the cosmological scale factor is given by einstein s equation ^{1/2}\ ] ] where is the density parameter including both baryonic and dark matter contributions , is the density parameter attributed to the cosmological constant , and is the present hubble constant .in addition to the usual hydrodynamic equations ( [ hydromass ] ) ( [ hydroenergy ] ) , we must also solve equivalent mass conservation equations for the densities of each of the nine separate atomic and molecular species that we track where the signs of each term on the right hand side depend on whether the process creates or destroys the species .the are rate coefficients for the two body reactions and are functions of the gas temperature .explicit analytic fits for these coefficients over a broad range of temperatures , and a general discussion of the relevant chemical reactions , can be found in abel et al .( 1996 ) . in all, we include 28 rate coefficients , one for each of the chemical reactions shown in appendix a. the in equation ( [ hydrospecies ] ) are integrals due to photoionizations and photodissociations where is the intensity of the radiation field , is the flux , are the cross - sections for the photoionization and photodissociation processes , and are the frequency thresholds for the respective processes .we note that the nine equations represented by ( [ hydrospecies ] ) are not all independent . the baryonic matter is composed of hydrogen and helium with a fixed primordial hydrogen mass fraction of .hence we have the following three conservation equations where is the number density of free electrons and the proton mass . to complete the set of equations ( [ hydromass ] ) ( [ hydroenergy ] ), we must also specify the equation of state appropriate for an ideal gas where is the ratio of specific heats for the baryonic matter , is boltzmann s constant , is the gas temperature , and are the number densities for each of the different species .we also need to provide the necessary cooling and heating functions to the right - hand - side of equation ( [ hydroenergy ] ) where is the compton cooling ( or heating ) due to interactions of free electrons with the cosmic microwave background radiation , and are the cooling rates from two - body interactions between species and .the are integrals due to photoionizing and photodissociating heating we include a total of fourteen processes in the cooling function and three processes for heating . 
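as a concrete illustration of the closure relations and conservation constraints just summarized, the following sketch evaluates the ideal-gas temperature from the species number densities and checks the hydrogen, helium and charge constraints. the nine-species set, the function names and the hydrogen mass fraction value used here are illustrative assumptions, not quantities taken from the paper.

```python
K_BOLTZ = 1.380649e-16  # boltzmann constant, erg / K

def gas_temperature(e_specific, rho_b, n, gamma=5.0 / 3.0):
    """invert the ideal-gas closure p = (gamma - 1) rho_b e = (sum_i n_i) k T.
    `n` maps species names to proper number densities (cm^-3), including
    free electrons."""
    pressure = (gamma - 1.0) * rho_b * e_specific
    return pressure / (sum(n.values()) * K_BOLTZ)

def conservation_residuals(n, X=0.76):
    """relative violation of the hydrogen/helium and charge constraints for
    the nine-species set H, H+, H-, H2, H2+, He, He+, He++, e- (assumed);
    X is an illustrative hydrogen mass fraction."""
    n_h = n["H"] + n["H+"] + n["H-"] + 2.0 * (n["H2"] + n["H2+"])   # H nuclei
    n_he = n["He"] + n["He+"] + n["He++"]                           # He nuclei
    n_e = n["H+"] + n["He+"] + 2.0 * n["He++"] + n["H2+"] - n["H-"]  # net charge
    return {
        "H/He": abs(4.0 * X * n_he - (1.0 - X) * n_h) / max(n_h, 1e-30),
        "charge": abs(n["e-"] - n_e) / max(n["e-"], 1e-30),
    }
```

these two residuals are exactly what the constraint-enforcement step described below drives back toward zero by adjusting only the dominant species.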
the physical mechanisms and mathematical expressions for each process are given in appendix b.it is well known that the differential equations describing non - equilibrium atomic and molecular rate reactions can exhibit variations on extreme time scales .characteristic creation and destruction scales can differ by many orders of magnitude among the different species and reactions . as a result , explicit schemes for integration can be unstable unless unreasonably small time steps ( smaller than the shortest dynamical times in the reaction flow ) are taken , which makes any multi - dimensional computation prohibitively expensive . for this reason implicit methods are preferred for stiff sets of equations .these methods generally involve a newton s iterative procedure to achieve convergence , and for large dimensional jacobian matrices these implicit methods can also be very time consuming .a number of packaged routines exist which are based on identifying the disparity in time scales among the species and switching between stiff and nonstiff solvers .an example of such a package is the livermore solver for ordinary differential equations with automatic method switching for stiff and nonstiff problems lsodar ( hindmarsh 1983 ; petzold 1983 ) .however , an implementation of this solver in multi - dimensions is extremely costly in computer time and an alternative numerical scheme is desirable for fully three - dimensional calculations where computational speed is crucial .we use an operator and directional splitting of the hydrodynamic equations ( [ hydromass ] ) ( [ hydroenergy ] ) and ( [ hydrospecies ] ) to update the fourteen state variables , , and .six basic steps are utilized .first , the source step accelerates the fluid velocity due to pressure gradients and gravity , and modifies the velocity and energy equations to account for artificial viscosity where is a second rank tensor representing the artificial viscous stresses ( stone & norman 1992 ) .a staggered mesh scheme is utilized whereby the scalar variables , , , and the artificial viscosity are zone centered , while the velocities are located at the zone interfaces . the pressure ,potential , and viscosity gradients are thus naturally aligned with the momentum terms . 
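to make the first sub-step concrete, here is a minimal one-dimensional sketch of a staggered-mesh source update with a quadratic (von neumann-richtmyer type) artificial viscosity. gravity, the full comoving metric factors and the tensor form of the viscous stress are omitted, and the viscosity coefficient value is illustrative rather than taken from the paper.

```python
import numpy as np

def source_step(rho, e, v, dx, dt, a, gamma=5.0 / 3.0, c_q=2.0):
    """one-dimensional sketch of the source sub-step on a staggered mesh:
    rho, e (and hence the pressure) are zone centered, v is face centered.
    interior faces are accelerated by the pressure plus viscosity gradient,
    and compressing zones are heated by the viscous stress."""
    p = (gamma - 1.0) * rho * e                        # zone-centered pressure
    dv = v[1:] - v[:-1]                                # velocity jump across each zone
    q = np.where(dv < 0.0, c_q * rho * dv**2, 0.0)     # quadratic viscosity, compression only
    rho_face = 0.5 * (rho[1:] + rho[:-1])              # density at interior faces
    v[1:-1] -= dt * ((p[1:] + q[1:]) - (p[:-1] + q[:-1])) / (a**2 * dx * rho_face)
    e -= dt * q * dv / (rho * dx)                      # viscous heating of compressing zones
    return v, e
```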
in the second cooling / heating step ,the energy changes are computed from `` pdv '' work and radiative cooling and heating from microphysical processes we discuss solving this equation further in the following subsection .the third expansion step updates all state variables from the terms arising from the expansion of the universe the homogeneous nature of the expansion allows a simple solution , although a more generalized procedure is required for the energy equation in which an effective adiabatic index must be defined to eliminate the pressure term in the case of ionized gas ( anninos & norman 1994 ) .the fourth or transport step solves the advection terms several different monotonic schemes have been implemented , including donor cell , van leer , and piecewise parabolic advection .all results presented here use the second order van leer method which has the best accuracy - to - efficiency performance .a fifth step evolves the densities of the separate species according to the chemistry of the collisional and radiative kinetic equations the methods for solving these equations is the focus of the next subsection .the total baryonic density and the density of each individual species are updated independently in the expansion , transport and chemistry steps .we are thus able to monitor the accuracy of our methods by evaluating the constraint equations for hydrogen , helium and charge conservation , equations ( [ conserveh ] ) ( [ conservene ] ) .over the course of a typical 3d calculation of approximately one - thousand time steps , we find _ maximum _ errors , which are mostly concentrated at the shock fronts , to be of order 10 to 30% .however , for increased stability and accuracy , we introduce a sixth step in our scheme to enforce the constraint equations at every time step .an important point that one must consider in taking this approach is that the errors can be larger than the concentration of those species that are depleted .it is therefore necessary to modify only the concentrations of the dominant species . for a hot gas in which and are mostly ionized, the hydrogen and helium constraints should be solved for the ionized components and . in cold ( )gas the neutral hydrogen and neutral helium concentrations must be adjusted to satisfy the constraints . in this way, we are guaranteed to make only small ( typically much less than one percent ) fractional changes to any of the species at each timestep .more specific details of the numerical methods used in the hydrodynamic updates ( [ split1 ] ) ( [ split4 ] ) can be found elsewhere ( stone & norman 1992 , anninos et al . 
1994 ) .here we emphasize the solution to the chemistry step ( [ split5 ] ) .notice that equations ( [ split5 ] ) can be written schematically as where and is the atomic mass number of the element and is the proton mass .the are the collective source terms responsible for the creation of the species .the second terms involving represent the destruction mechanisms for the species and are thus proportional to .equation ( [ model ] ) suggests a backward difference formula ( bdf ) can be used in which all source terms are evaluated at the advanced time step .discretization of ( [ model ] ) yields lower order backward differentiation methods when applied to problems of the form are stiffly stable .this rather restrictive stability property is highly desirable when solving sets of stiff equations ( oran & boris 1987 ) .we have tried other less stable methods including higher order multi - step predictor - corrector schemes , various runge - kutta and adams - bashforth algorithms , and a newton s procedure to solve the backwards differenced linearized equations .all of these alternative schemes have either proven to be unstable , less accurate or more expensive computationally compared to the simple bdf method .the solver can be optimized further by noting that the intermediaries and in the molecular hydrogen production processes have large rate coefficients and low concentrations .they are thus very sensitive to small changes in the more abundant species .on the other hand , the low concentrations of and implies that they do not significantly influence the more abundant electron , hydrogen and helium concentrations .this suggests that the nine species can be grouped into two categories : fast and slow reacting .the fast reacting group , comprised of and , can be decoupled from the slower network and treated independently since the kinetic time scales for these species are much shorter than the characteristic times of the other seven species and the cosmological or gravitational times . and thus be considered in equilibrium at all times , independent of the hydrodynamic state variables .the expressions for the equilibrium abundances of and can be reduced by recognizing that reaction ( 19 ) , according to appendix a , can be neglected as a small order correction to , due to the low concentrations of both species and . neglecting reaction ( 19 ), the equilibrium abundance of can be written independent of where the variables are the rate coefficients with subscripts referring to the reaction number in appendix a. then given , the equilibrium abundance of can be written with no additional assumptions as the separation into fast and slow reacting systems helps to further increase the accuracy and stability of the bdf method when applied only to the slower network over the longer characteristic time scales required by the hubble , hydrodynamic courant , and gravitational free - fall times in cosmological simulations . 
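written out, the backward-differenced update that this schematic form suggests is the linearized backward-euler step below, with the destruction term treated implicitly so that positivity is preserved, while the fast species are simply slaved to the instantaneous balance of their creation and destruction terms. the exact grouping of terms used in the paper may differ slightly, so this is a sketch rather than a transcription.

```python
def bdf_update(n_old, C, D, dt):
    """first-order backward-differenced update for the schematic rate equation
    dn/dt = C - D * n, with C (creation) assembled from the most recently
    updated species and D (destruction per unit n) treated implicitly:

        n_new = (n_old + C * dt) / (1 + D * dt)

    the update stays non-negative for non-negative inputs, which is one reason
    this form is robust for stiff chemistry."""
    return (n_old + C * dt) / (1.0 + D * dt)

def equilibrium_abundance(C, D):
    """fast-reacting species (H- and H2+ in the text) are set to the
    instantaneous balance of creation and destruction, n_eq = C / D."""
    return C / D
```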
due to the intrinsic nonlinearity of equation ( [ model ] ) , not all source terms can be evaluated at the advanced time levels .significant errors ( of order 20% as measured by the final fractional abundance of hydrogen molecules ) can be introduced if the source terms and are evaluated at the current time level at which the species data is known .improvements to this crude approximation can be made by sequentially updating each species in order , rather than updating all species simultaneously from the data at the past time step .for example , the order in which we solve the rate equations was determined finally through experimentation to be , , , , and , followed by the algebraic equilibrium expressions for and , then in the end updating also using the bdf scheme ( [ bdf ] ) .the updated concentrations of the ( and previous ) species are used as source terms in the equation for .further improvements in accuracy and stability can be made by mimicking more closely a fully bdf scheme by subcycling the rate solve over a single hydrodynamic courant time step .the subcycle time steps are determined so that the maximum fractional change in the electron concentration is limited to 10% per timestep , ie . with .we note that this same subcylcing procedure can be used to update the energy in the cooling / heating step in equation ( [ split2 ] ) .the equation of state ( [ eqos ] ) couples the energy and pressure through the gas temperature , and although we have tried a newton - raphson iterative procedure , we found it to converge very slowly or sometimes not at all because the cooling / heating rates are strongly nonlinear and non monotonic functions of temperature ( anninos & norman 1994 ) .for this reason we solve equation ( [ split2 ] ) with an explicit method that subcycles the cooling source terms .the timesteps for each subcyle are determined as , where as in the rate equation solve .this algorithm has been tested to be both fast and accurate .the robustness of our methods has been verified by switching the order in which the rate equations are solved relative to the other updates .we have also experimented with the sensitivity to the temperature ( ie .time centered , retarded and advanced temperatures ) used in updating the rate equations .the results are stable and unchanged under all these sorts of permutations .the numerical scheme described in section [ sec : numerical ] has been implemented in two separate codes : zeus-2d and hercules .zeus-2d is a two dimensional eulerian hydrodynamics code originally developed by stone & norman ( 1992 ) and modified for cosmology by anninos & norman ( 1994 ) .hercules is a three - dimensional hydrodynamics code derived from a 3d version of zeus , but modified for cosmology and generalized to include a hierarchical system of nested grids by anninos et al .extensive tests of the two codes can be found in the references provided above . in this paper , we present tests only for the new additions to the codes , namely the multi species chemistry and the non equilibrium cooling . 
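putting the last two paragraphs together, a schematic subcycled chemistry step over one hydrodynamic time step might look as follows. the rates(n) interface is hypothetical, the species are updated simultaneously here rather than in the carefully chosen sequential order described above, and the 10% electron-change criterion is the one quoted in the text.

```python
def chemistry_substep(n, rates, dt_hydro, eps=0.1, max_cycles=100000):
    """subcycle the stiff network across one hydrodynamic step, limiting each
    subcycle so the electron concentration changes by at most a fraction eps.
    `rates(n)` is assumed to return two dicts (C, D) of creation and
    destruction terms keyed like `n`, with electrons under 'e-'."""
    t = 0.0
    for _ in range(max_cycles):
        if t >= dt_hydro:
            break
        C, D = rates(n)
        dne = abs(C["e-"] - D["e-"] * n["e-"])             # |dn_e/dt|
        dt_sub = min(eps * n["e-"] / max(dne, 1e-30), dt_hydro - t)
        for s in n:                                         # simultaneous update (sketch only)
            n[s] = (n[s] + C[s] * dt_sub) / (1.0 + D[s] * dt_sub)
        t += dt_sub
    return n
```

the same kind of fractional-change criterion, applied to the cooling and heating terms, drives the explicit subcycling of the cooling/heating step mentioned above.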
due to the intrinsic complexity of such systems ,the diversity of tests is rather limited .we consider two crucial and relevant ( for cosmology ) tests : radiative shock waves and cosmological sheets .in addition we have also developed an independent method of solution that is based on the well - tested packaged solver lsodar .fully three - dimensional calculations of large scale structure formation are presented , comparing results from our method to that of lsodar .chevalier & imamura ( 1982 ) and imamura , wolff & durisen ( 1984 ) have demonstrated through both analytic and numerical calculations that the fundamental mode of oscillations in one dimensional radiative shocks with cooling rates are unstable if . because the cooling rate is determined from the density of electrons and ions , a comparison to their published work provides an excellent test of both our cooling algorithm and the reaction network .the initial data for this test is characteristic of pre - shock flows expected from the collapse of zeldovich pancakes corresponding to a uniform flow of gas along the direction .reflection boundary conditions are imposed at and we use 100 zones to resolve a spatial extent of .we assume only bremsstrahlung cooling of the form in appendix b , which has an exponent of .a shock wave forms at the wall ( ) and propagates outward at a velocity . as the heated gas cools, the shock begins to lose pressure support and slows down . because the cooling rate is proportional to ( ie . ) , the cooled gas experiences an accelerated energy loss as it gets denser .eventually the higher density gas nearest the wall loses pressure support and the shock collapses and re - establishes a new pressure equilibrium closer to the wall .the shock front then begins again to propagate away from the wall , repeating the cycle of oscillations .this behavior is shown in figure [ fig : radshk ] where the shock position is plotted as a function of time .our numerical results indicate that the fundamental mode is indeed stable and damped , consistent with the analytic results of chevalier and imamura ( 1982 ) and the numerical simulations of imamura et al .( 1984 ) .the shock jump conditions provide a more quantitative check of our numerical results .for the choice of initial data ( [ initial_data ] ) , the jump conditions give and , in excellent agreement with our numerical results and .we can estimate the maximum distance the shock front will travel as where is the cooling time and is the shock speed .substituting the bremsstrahlung cooling formula at the temperature predicted by the jump conditions and assuming a fully ionized gas with and hydrogen mass fraction , gives .this is again consistent with our numerical result .chevalier and imamura also characterize their linearized analytic solutions by the frequency of oscillations in units of where is the average shock front position . 
defining the period of perturbations as the time for the shock to first collapse back to the wall, we find and , again in good agreement .we use the zeldovich ( 1970 ) solution to set up a linearized single mode perturbation for the collapse of gas in one dimension ^{-1},\label{pertd}\end{aligned}\ ] ] where and are the comoving positions and velocities , the proper density , the unperturbed coordinate , , the comoving wavelength of perturbation , the present hubble constant , and the redshift corresponding to the collapse time .parameters for the calculations presented here are the following at the higher temperatures ( ) characteristic of shock heated gas in high velocity pancake structures , the kinetic time scales are extremely short compared to the hubble time . neglecting the photoionization processes ,collisional ionization equilibrium is then a good approximation to the following pairs of reactions for a fully ionized gas , we have where is the mass fraction of hydrogen . the corresponding equilibrium fractional abundances from reactions ( [ eq_first ] ) ( [ eq_last ] )can then be written as which are functions only of the gas temperature . in figure[ fig : pancake1 ] we show the mass fractions throughout the pancake structure at redshift . at this time, the shock front is located at a distance of approximately from the central plane at , and is propagating outward at an average comoving velocity of about 110 ( 355 relative to the infalling gas ) .two distinct cooling layers form , as evidenced by the thick solid line representing the gas temperature .the gas between first cools mostly by atomic processes to a temperature of about . the second colder layer at results from cooling by hydrogen molecules which form from the residual electrons leftover from the nonequilibrium cooling through the first plateau at .( we refer the reader to anninos & norman ( 1996 ) for further details concerning the chemistry , dynamics and radiative cooling of cosmological sheets . ) although nonequilibrium effects are important as the gas cools through , collisional ionization equilibrium is a good approximation to the species concentrations in the hot gas between .the different symbols plotted across this region in figure [ fig : pancake1 ] represent the equilibrium abundances given by equations ( [ eq1 ] ) ( [ eq5 ] ) . 
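for orientation, the temperature-only character of these equilibrium abundances follows from balancing collisional ionization against recombination; a minimal sketch of that balance for a single ionization stage is given below (the rate-coefficient fits themselves are not reproduced here and would have to be supplied).

```python
def cie_ionized_fraction(k_coll_ion, k_recomb):
    """collisional ionization equilibrium for a single ionization stage:
    k_coll_ion * n_e * n_neutral = k_recomb * n_e * n_ion implies
    n_ion / n_neutral = k_coll_ion / k_recomb = r, so x_ion = r / (1 + r),
    a function of the gas temperature alone."""
    r = k_coll_ion / k_recomb
    return r / (1.0 + r)
```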
differences between the numerical results and the equilibrium estimates are less than 0.5% for all species except which differ by roughly 5% .however , the larger discrepancy in the mass fraction is not due to numerical errors , but to the inadequacy of equation ( [ eq5 ] ) to fully describe the kinetics .a more accurate equilibrium ratio is derived by considering all the reactions involving equation ( [ eq6 ] ) agrees with our numerical results to within about 0.1% .we have also run this same problem using the lsodar routines in place of the bdf method to solve for all nine species in full non - equilibrium .a comparison of the two results at redshift is shown in figure ( [ fig : pancake2 ] ) .the symbols represent the bdf calculation and the various line types are the lsodar results for the different species .the two methods agree to within about 0.5% throughout all the different pancake layers , hot and cold .notice also the excellent agreement in the mass fractions of and , which is further justification of the hybrid model ( in which the fast reacting species are singled out over cosmological dynamical times to be in equilibrium ) .finally we point out that the fractional abundance of hydrogen molecules that form in the central cooled gas is consistent with the steady state shock calculations of shapiro & kang ( 1987 ) .a final test is presented for an actual three dimensional cosmological calculation in which we compare our method of solving the rate equations to that of lsodar .both accuracy and computational efficiency are stressed in this comparison .the simulation is performed for a flat ( ) model universe with baryon mass fraction and hubble constant with .the baryonic matter is composed of hydrogen and helium in cosmic abundance with a hydrogen mass fraction . 
the initial data is the harrison - zeldovich power spectrum modulated with a transfer function appropriate for cold dark matter ( cdm ) adiabatic fluctuations and normalized to a bias factor of .we begin the simulations at redshift and evolve to the present time at .the computational box size is set to ( comoving ) resolved by cells .our calculation thus has a spatial grid resolution of and baryonic mass resolution of which is just marginally adequate to resolve cooling flows and the formation of hydrogen molecules .the computational demands of lsodar implemented in three dimensions prohibits comparative calculations of much higher resolution .we have run two separate simulations with identical initial conditions and model parameters .the only difference in the two runs is the rate equation solver : in the one case we use our bdf method , in the other lsodar .the results of comparison are shown in figures [ fig : hist ] to [ fig : cont2 ] .figure [ fig : hist ] shows the cell distributions ( defined by counting up the number of cells at a particular binned range of values ) for three key variables that would be particularly sensitive to errors in the rate solver : ( a ) gas temperature which is constructed from the concentrations of all the species combined , and ( b ) molecular hydrogen mass fraction and electron number density fraction .all results are shown at redshift .the prominent peaks in the and electron fraction distributions correspond closely to values initialized at the start of the runs .because most of the box volume is comprised of cosmic voids that do not undergo shock heating , the molecular hydrogen and electron fraction do not change significantly in most of the cells .the relative deviations between the bdf and lsodar results are 0.018 , 0.039 and 0.13 for the gas temperature , molecular hydrogen and electron fractions , respectively .we note , however , that deviations are due in part to the final output redshifts not being exactly the same in the two runs ; the relative difference is about 0.01 .the larger differences found here ( as compared to the cosmological sheet calculations ) can also be attributed to the coarse grid resolution and the resulting large courant time step in the hydrodynamical calculation .the agreements improve with resolution .figure [ fig : cont ] shows contour graphs of the fractional volume of those cells with a particular combination of temperature and baryonic density , electron number density fraction and molecular hydrogen mass fraction .two sets of graphs are presented : the three plots in the first column are results from the bdf method , the second column are the lsodar results .we note that the sharp boundaries in the electron contours at the level correspond to the initialized fraction at : ( peebles 1993 ) .the volume weighted distribution is concentrated at the initial value for since there is no mechanism to efficiently create nor destroy molecules at the low temperatures of the voids , and slightly lower than the initial value for the electron fraction since the expansion ( cosmological and gravitational ) of the voids continues to cool and recombine the gas .contours of the spatial distribution of and are shown in figure [ fig : cont2 ] .the data is projected ( and averaged ) along the -axis at redshift .the first row is the electron fraction , the second molecular hydrogen .the first column are the bdf results , the second lsodar .notice that hydrogen molecules form preferentially within the high dense filamentary structures 
but mostly in the knot like intersections of the filaments .these are the highest density regions where the gas cools rapidly and the electrons are depleted by recombination .peaks in the concentration therefore correspond to valleys in the electron distribution . in comparing results from the two solvers , bdf versus lsodar, we see the distributions in both figures [ fig : cont ] and [ fig : cont2 ] are basically the same .in addition to the accuracy of the bdf method , another very important point that should be stressed is the amount of computational time required to solve the rate equations .the bdf run takes about 1.2 cpu hours with the full reaction network on the ncsa convex c3880 , in contrast to the equivalent lsodar calculation which takes about 16.7 hours to complete on the same machine .the speedup of the bdf method over lsodar is roughly a factor of 14 .we have developed and tested a new scheme to solve a system of stiff kinetic equations appropriate for chemical reaction flows in cosmological structure formation .twenty eight chemical reactions for collisional and radiative processes are included in our model , which tracks nine separate atomic and molecular species : , , , , , , , , and .the reaction network is solved in a self - consistent manner with the hydrodynamic , n - body and cosmological expansion equations , and the accuracy of the solver has been verified by performing a series of test - bed calculations that includes radiative shock waves , cosmological sheets and monitering the conservation constraints . we have also implemented a publicly available and well - tested solver called lsodar in place of our scheme and made direct comparisons of the different results in one , two and three dimensions .we find our methods are both fast and accurate , making fully three - dimensional calculations of non - equilibrium cosmological reaction flows feasible .we have incorporated the species solver into two separate cosmological hydrodynamic codes : a two dimensional ratioed grid code to model zeldovich pancakes , and a more general three dimensional nested grid cosmological code hercules .applications to date include investigations of star and galaxy formation in cosmological sheets ( anninos & norman 1996 ) , primordial star formation in cdm models ( zhang et al .1996 ) , and simulations of the ly forest ( zhang , anninos & norman 1995 ; charlton et al .1996 ) . 
in the future, we plan to extend this work and develop a more sophisticated treatment of radiation to account for self shielding and to more accurately model the microphysics of optically thick gas .this work is done under the auspices of the grand challenge cosmology consortium ( gc ) and supported in part by nsf grant asc-9318185 .the calculations were performed on both the c90 at the pittsburgh supercomputing center and the convex c3880 at the national center for supercomputing applications at the university of illinois .the following is a list of all chemical reactions that we include in our calculations .further discussions and justification of the completeness of this set of reactions can be found in abel et al .there we also include explicit formulae for the different rate coefficients .= ( 16 ) = = + = = + = + ( 1 ) + + 2 + ( 2 ) + + + ( 3 ) + + 2 + ( 4 ) + + + ( 5 ) + + 2 + ( 6 ) + + + + ( 7 ) + + + ( 8) + + + ( 9 ) + + + ( 10 ) + + + ( 11 ) + + + ( 12 ) + + + ( 13 ) + + ( 14 ) + + 2 + ( 15 ) + + + ( 16 ) + + ( 17 ) + + + ( 18 ) + + ( 19 ) + + + + + ( 20 ) + + + ( 21 ) + + + ( 22 ) + + + ( 23 ) + + + ( 24 ) + + + ( 25 ) + + + ( 26 ) + + + ( 27 ) + + ( 28 ) + + the cooling rates and photoionization cross sections included in our calculations .we use units of for the rates , for the cross sections , and degrees kelvin for temperature . also , , with being the threshold frequencies of the species , and , and are the rate coefficients for the ionizing chemical reactions ( 1 ) , ( 3 ) and ( 5 ) listed in appendix a. = ( black 1981 ; cen 1992 ) : + + + + + * collisional ionization cooling * ( shapiro & kang 1987 ; cen 1992 ) : + + + + + + * recombination cooling * ( black 1981 ; spitzer 1978 ) : + + + ~ \exp{(-470000/t)}~n_e n_{he^+} ] + + * compton cooling or heating * ( peebles 1971 ) : + ~n_e ] + ] + | we have developed a method of solving for multi - species chemical reaction flows in non equilibrium and self consistently with the hydrodynamic equations in an expanding flrw universe . the method is based on a backward differencing scheme for the required stability when solving stiff sets of equations and is designed to be efficient for three - dimensional calculations without sacrificing accuracy . in all , 28 kinetic reactions are solved including both collisional and radiative processes for the following nine separate species : , , , , , , , , and . the method identifies those reactions ( involving and ) ocurring on the shortest time scales , decoupling them from the rest of the network and imposing equilibrium concentrations to good accuracy over typical cosmological dynamical times . several tests of our code are presented , including radiative shock waves , cosmological sheets , conservation constraints , and fully three - dimensional simulations of cdm cosmological evolutions in which we compare our method to results obtained when the packaged routine lsodar is substituted for our algorithms . |
we address the logarithmic korteweg de vries ( log - kdv ) equation derived in the context of solitary waves in granular chains with hertzian interaction forces : the log kdv equation ( [ logkdv ] ) has a two - parameter family of gaussian solitary waves where is a symmetric standing wave given by global solutions to the log kdv equation ( [ logkdv ] ) were constructed in in the energy space by a modification of analytic methods available for the log nls equation ( also reviewed in section 9.3 in ) . in the energy space , the following quantities for the momentum and energy , and dx\ ] ] are non - increasing functions of time .uniqueness , continuous dependence , and energy conservation are established in under the additional condition , which is not satisfied in the neighborhood of the family of gaussian solitary waves given by ( [ soliton - orbit ] ) and ( [ gaussian ] ) . as a result , orbital stability of the gaussian solitary waves was not established for the log kdv equation ( [ logkdv ] ) , in a sharp contrast with that in the log nls equation established in .a possible path towards analysis of orbital stability of gaussian solitary waves is to study their linear and spectral stability by using the linearized log kdv equation where is the schrdinger operator with a harmonic potential given by the differential expression the linearized log kdv equation ( [ linlogkdv ] ) arises at the formal linearization of the log kdv equation ( [ logkdv ] ) at the perturbation .the schrdinger operator is the hessian operator of the second variation of at .although in ( [ energy ] ) is not a functional at , the second variation of is well defined at by which is formally conserved in the time evolution of ( [ linlogkdv ] ) . with new estimates to be obtained for the linearized log kdv equation ( [ linlogkdv ] ) ,we may hope to develop an ultimate solution of the outstanding problem on the orbital stability of the gaussian solitary waves . indeed ,if we set for the solution to the log kdv equation ( [ logkdv ] ) , we obtain an equivalent evolution equation where the linearized part coincides with ( [ linlogkdv ] ) and the nonlinear term is given by .\ ] ] it is clear that the nonlinear term does not behave uniformly in unless decays at least as fast as in ( [ gaussian ] ) . on the other hand , if , where is a bounded function in its variables , then , where is analytic in for any .therefore , obtaining new estimates for the linearized log kdv equation ( [ linlogkdv ] ) in a function space with gaussian weights may be useful in the nonlinear analysis of the log kdv equation ( [ logkdv - w ] ) .the spectrum of in consists of equally spaced simple eigenvalues which include exactly one negative eigenvalue with the eigenvector ( defined without normalization ) .therefore , is not convex at in .nevertheless , is positive in the constrained space which corresponds to the fixed value in ( [ momentum ] ) at the linearized approximation .several results were obtained for the linearized log kdv equation ( [ linlogkdv ] ) . 
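the displayed formulas above were lost in extraction; to fix ideas, here is how the gaussian solitary wave arises under one common normalization of the log-kdv equation. this normalization is an assumption for illustration, and the paper's constants and scaling may differ.

```latex
% assumed normalization of the log-KdV equation:
\[
  u_t + u_{xxx} + \partial_x\bigl(u\ln|u|\bigr) = 0 .
\]
% travelling wave u(x,t) = U_c(x - ct): integrate once, with zero constant
% because U and U\ln|U| vanish at infinity,
\[
  -c\,U + U'' + U\ln|U| = 0 .
\]
% the Gaussian ansatz U(x) = a\,e^{-x^2/4} gives U'' = (x^2/4 - 1/2)\,U and
% \ln U = \ln a - x^2/4, so the quadratic terms cancel and \ln a = c + 1/2:
\[
  U_c(x) = \exp\Bigl(\tfrac12 + c - \tfrac{x^2}{4}\Bigr).
\]
% one Gaussian solitary wave for every speed c; together with spatial
% translations this gives the two-parameter family referred to above, and
% under this normalization the e^{-x^2/4} profile is what produces a
% harmonic x^2/4 potential in the linearized operator.
```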
in , _ linear orbital stability _ of gaussian solitary waves was obtained in the following sense : for every , there exists a unique global solution of the linearized log kdv equation ( [ linlogkdv ] ) which satisfies the following bound for some -independent positive constant .this result was obtained in from the conservation of in the time evolution of smooth solutions to the linearized log kdv equation ( [ linlogkdv ] ) , the symplectic decomposition of the solution , into the translational part and the residual part , and the coercivity of in the squared norm in the sense for some positive constant .the first two facts are rather standard in energy methods for linear pdes , whereas the last fact , that is , the inequality ( [ coercivity - assumed ] ) , should not be taken as granted . in ,the nonzero spectrum of the linear operator was studied by using the fourier transform that maps the third - order differential operator in physical space into a second - order differential operator in fourier space .indeed , the fourier transform applied to the linearized log kdv equation ( [ linlogkdv ] ) yields the time evolution in the form where is the fourier image of operator given by by reducing the eigenvalue problem for to the symmetric sturm liouville form , it was found in that the spectrum of in is purely discrete and consists of a double zero eigenvalue and a symmetric sequence of simple purely imaginary eigenvalues such that the double zero eigenvalue corresponds to the jordan block whereas the purely imaginary eigenvalues correspond to the eigenfunctions , which are smooth in but decay algebraically as .the fourier transform of is supported on the half - line and decays like a gaussian function at infinity .it follows from the spectrum of in that the gaussian solitary waves are _ spectral stable_. the eigenfunctions of were also used in for spectral decompositions in the constrained space in order to provide an alternative proof of the _ linear orbital stability _ of the gaussian solitary waves .this alternative technique still relies on the conjecture of the coercivity of in the squared norm , that is , on the inequality ( [ coercivity - assumed ] ) . because of the algebraic decay of the eigenfunctions of , it is not clear if a function of that decays like the gaussian function as can be represented as series of eigenfunctions .numerical simulations were undertaken in to illustrate that solutions to the linearized log kdv equation ( [ linlogkdv ] ) with gaussian initial data did not spread out as the time variable evolves .nevertheless , the solutions exhibited visible radiation at the left slopes .the present work is developed to obtain new estimates for the linearized log kdv equation ( [ linlogkdv ] ) . in the first part of this work, we rely on the basis of hermite functions in -based sobolev spaces and analyze the discrete operators that replace the differential operators . in the second part , we obtain dissipative estimates on the evolution of the linearized log kdv equation ( [ linlogkdv ] ) by representing solutions in terms of a convolution with the gaussian solitary wave . 
the paper is structured as follows .section 2 sets up the basic formalism of the hermite functions and reports useful technical estimates .section 3 is devoted to the proof of the coercivity bound ( [ coercivity - assumed ] ) .as explained above , this coercivity bound implies _ linear orbital stability _ of the gaussian solitary wave in the constrained space and it is assumed to be granted in .the proof of coercivity relies on the decomposition of in terms of the hermite functions .section 4 is devoted to the analysis of linear evolution expressed in terms of the hermite functions .it is shown that this evolution reduces to the self - adjoint jacobi difference operator with the limit circle behavior at infinity . as a result, a boundary condition is needed at infinity in order to define the spectrum of the jacobi operator and to obtain the norm - preserving property of the associated semi - group . both _linear orbital stability _ and _ spectral stability _ of gaussian solitary waves ( [ gaussian ] ) is equivalently proven by using the jacobi difference operator . in section 5 ,we give numerical approximations of eigenvalues and eigenvectors of the jacobi difference equation .we show numerically that there exist subtle differences between the representation of eigenvectors of in the physical space and the representation of these eigenvectors by using decomposition in terms of the hermite functions .section 6 reports weighted estimates for solutions to the linearized log kdv equation ( [ linlogkdv ] ) by using a convolution representation with the gaussian weight .we show that the convolution representation is invariant under the time evolution of the linearized log kdv equation ( [ linlogkdv ] ) , which is expressed by a dissipative operator on a half - line .the semi - group of the fundamental solution in the norm decays to zero exponentially fast as time goes to infinity .section 7 concludes the paper with discussions of further prospects .* notations : * we denote with the sobolev space of -times weakly differentiable functions on the real line whose derivatives up to order are in .the norm for in the sobolev space is equivalent to the norm in the lebesgue space .we denote with the space of square integrable functions with the weight .the set consists of all non - negative integers , whereas the set includes only positive integers .the sequence space includes squared summable sequences , whereas contains finite ( compactly supported ) sequences .* acknowledgements . *the author thanks gerald teschl and thierry gallay for help on obtaining results reported in sections 4 and 6 , respectively .the research of the author is supported by the nserc discovery grant .we recall definitions of the hermite functions ( * ? ? ? * chapter 22 ) : where denote the set of hermite polynomials , e.g. , hermite functions satisfy the schrdinger equation for a quantum harmonic oscillator : at equally spaced energy levels . by the sturm liouville theory , the set of hermite functions forms an orthogonal and normalized basis in .in connection to the self - adjoint operator given by ( [ schrodinger ] ) , we obtain the eigenfunctions of , from the correspondence . 
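as a computational aside, the normalized hermite functions are conveniently generated by a stable three-term recurrence; the sketch below uses the standard weight e^{-x^2/2} (eigenfunctions of -d^2/dx^2 + x^2), whereas the rescaled functions adapted to the x^2/4 potential used in the text differ only by factors of sqrt(2) in the argument and normalization.

```python
import numpy as np

def hermite_functions(x, n_max):
    """normalized hermite functions psi_0 ... psi_{n_max} on the grid x,
    via the stable recurrence
        psi_{n+1} = sqrt(2/(n+1)) * x * psi_n - sqrt(n/(n+1)) * psi_{n-1},
    starting from psi_0 = pi^{-1/4} exp(-x^2/2) and psi_1 = sqrt(2) x psi_0."""
    x = np.asarray(x, dtype=float)
    psi = np.empty((n_max + 1, x.size))
    psi[0] = np.pi ** (-0.25) * np.exp(-0.5 * x**2)
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(1, n_max):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                      - np.sqrt(n / (n + 1.0)) * psi[n - 1])
    return psi
```

expanding a function whose decay is only gaussian with the slower e^{-x^2/4} rate in this basis produces coefficients that decay only algebraically in the index, which is essentially the phenomenon quantified by the estimates that follow.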
with proper normalization , we define it follows from the well - known relations for hermite polynomials that functions in the sequence satisfy the differential relations the following elementary result is needed in further estimates .[ lemma - f - sequence ] let be given by then , there is a positive constant such that we write .\ ] ] by taylor series , for every and every , there is such that furthermore , we recall euler s constant given by the limit since is bounded as , the estimate ( [ taylor ] ) and the limit ( [ euler - gamma ] ) yield the bound for some positive constant . substituting ( [ bound - log ] ) into ( [ f - m - exp ] ) proves the desired bound ( [ bound - decay ] ) .the following technical result is needed for the proof of coercivity of the energy function .[ lemma - projections ] let by defined by .then , there is a positive constant such that multiplying the differential relation ( [ diff - hermite ] ) by and integrating by parts , we obtain integrating directly , we compute furthermore , using ( [ difference - integral ] ) at , we also compute thanks to orthogonality of hermite functions , the right - hand side of ( [ difference - integral ] ) is zero for and the numerical sequence satisfies the recurrence equation starting with the initial values for and in ( [ f-0 ] ) and ( [ f-1 ] ) . the recurrence equation ( [ difference - second - order ] ) admits the exact solution applying the bound ( [ bound - decay ] ) of lemma [ lemma - f - sequence ] with , or , yields the bound ( [ projection - decay ] ) .in order to prove the coercivity bound ( [ coercivity - assumed ] ) , we define the -compatible squared norm in space , dx\ ] ] the second variation is defined by ( [ energy - second - variation ] ) . the following theorem yields the coercivity bound for the energy function , which was assumed in without a proof .[ theorem - coercivity ] there exists a constant such that for every satisfying the constraints it is true that where .the upper bound in ( [ coercivity ] ) follows trivially from the identity whereas the lower bound holds if there is a constant such that for every satisfying constraints ( [ constraints ] ) , it is true that by the spectral theorem , we represent every by where the vector belongs to .it follows from the first constraint in ( [ constraints ] ) that . using the norm in ( [ norm - h-1 ] ), we obtain therefore , and coercivity ( [ coercivity - l-2 ] ) is proved if we can show that is bounded by up to a multiplicative constant . to show this, we use the second constraint in ( [ constraints ] ) .since , as it follows from ( [ f-1 ] ) , we have by lemma [ lemma - projections ] , there is a positive constant such that which follows from convergence of .hence , by using cauchy schwarz inequality in ( [ projection-2 ] ) , we obtain so that the bound ( [ coercivity - l-2 ] ) follows .the statement of the theorem is proven .the time evolution of the linearized log kdv equation ( [ linlogkdv ] ) is considered in the constrained energy space given by ( [ constrained - space ] ) . for a vector , we use the decomposition involving the hermite functions , by using and the differential relations ( [ diff - hermite ] ) , the evolution problem for the vector is written as the lattice differential equation it follows from ( [ lattice - c ] ) for that if ( so that ) , then and for every . 
if , then it follows from ( [ lattice - c ] ) for that the time evolution of a projection of to ( which is proportional to the translational mode ) is given by the projection is decoupled from the rest of the system ( [ lattice - c ] ) . therefore , introducing for , we close the evolution system ( [ lattice - c ] ) at the lattice differential equation since , then , so that we can introduce , with the vector . the sequence satisfies the evolution system in the skew - symmetric form the evolution system ( [ lattice - a ] ) can be expressed in the symmetric form by using the transformation the new sequence satisfies the evolution system written in the operator form where is the jacobi operator defined by the jacobi difference equation is . according to the definition in section 2.6 of , the jacobi operator is said to have a limit circle at infinity if a solution of with is in for some . by lemma 2.15 in ,this property remains true for all .the following lemma shows that this is exactly our case .[ lemma - circle ] the jacobi operator defined by ( [ jacobi - f ] ) has a limit circle at infinity .let us consider the case and define a solution of with . the numerical sequence satisfies the recurrence relation starting with .then , for even , whereas for odd is given by the exact solution by lemma [ lemma - f - sequence ] with and , there exists a positive constant such that . by lemma 2.16 in ,the jacobi operator with the domain is self - adjoint if for all , where the discrete wronskian is given by and . in order to define a self - adjoint extension of the jacobi operator with the limit circle at infinity , we need to define a boundary condition as follows : where is real . by lemma 2.17 and theorem 2.18 in , the operator , where and represents a self - adjoint extension of the jacobi operator with the limit circle at infinity .moreover , by lemma 2.19 in , the real spectrum of in is purely discrete . since there is at most one linearly independent solution of the jacobi difference equation thanks to the recurrence relation in ( [ jacobi - f ] ) , each isolated eigenvalue of the real spectrum of is simple . by lemma 2.20 in , all self - adjoint extensions of are uniquely defined by the choice in ( [ boundary - condition ] ) spanned by a linear combination of two linearly independent solutions of .since the value of plays no role for the jacobi operator thanks again to the recurrence relation in ( [ jacobi - f ] ) and the value can be uniquely normalized to , we have a unique choice for given by the solution of with . 
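before stating the result, it is worth sketching how eigenvalues of such a limit-circle jacobi operator can be located in practice (this anticipates the computation reported in section 5): shoot the three-term recurrence forward for a trial value of the spectral parameter, form the discrete wronskian against the fixed reference solution, and find the parameter values at which its limit vanishes. in the sketch below the coefficient sequences a, b and the reference solution v are placeholders, since the paper's explicit entries are not reproduced here, and the initialization at the left end simply stands in for the operator's first row.

```python
import numpy as np

def limiting_wronskian(lam, a, b, v, n_max):
    """shoot u(n+1) = ((lam - b_n) u_n - a_{n-1} u_{n-1}) / a_n forward and
    return the truncated wronskian W_n = a_n (u_n v_{n+1} - u_{n+1} v_n)
    at n = n_max against the reference solution v (a placeholder here).
    a, b must have length >= n_max + 1 and v length >= n_max + 2."""
    u = np.zeros(n_max + 2)
    u[1] = 1.0                      # stands in for the operator's first row
    for n in range(1, n_max + 1):
        u[n + 1] = ((lam - b[n]) * u[n] - a[n - 1] * u[n - 1]) / a[n]
    return a[n_max] * (u[n_max] * v[n_max + 1] - u[n_max + 1] * v[n_max])

def eigenvalues_by_bisection(a, b, v, lam_grid, n_max=2000, tol=1e-10):
    """scan lam_grid for sign changes of the limiting wronskian and refine
    each bracket by bisection; the zeros approximate the eigenvalues of the
    self-adjoint extension."""
    W = np.array([limiting_wronskian(l, a, b, v, n_max) for l in lam_grid])
    eigs = []
    for i in np.flatnonzero(W[:-1] * W[1:] < 0.0):
        lo, hi = lam_grid[i], lam_grid[i + 1]
        w_lo = W[i]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            w_mid = limiting_wronskian(mid, a, b, v, n_max)
            if w_lo * w_mid <= 0.0:
                hi = mid
            else:
                lo, w_lo = mid, w_mid
        eigs.append(0.5 * (lo + hi))
    return eigs
```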
combining these facts together, we have obtained the following result .[ lem - jacobi ] let be a unique solution of with determined in the proof of lemma [ lemma - circle ] .then , with the domain in ( [ domain - condition ] ) is a unique self - adjoint extension of the jacobi operator given by ( [ jacobi - f ] ) .moreover , the spectrum of consists of a countable set of simple real isolated eigenvalues .the following theorem and corollary provide the _ linear orbital stability _ of gaussian solitary waves ( [ gaussian ] ) expressed by using the decomposition in terms of hermite functions .the same result was obtained in by using alternative techniques involving either the energy method or the spectral decompositions .[ theorem - linear - evolution ] for every , there exists a unique solution to the evolution system ( [ lattice - a ] ) for every satisfying .the semi - group property of the solution operator both for and associated with the linear system ( [ lattice - f ] ) follows from the result of lemma [ lem - jacobi ] and the classical semi - group theory .the result is transferred to the sequence by using the transformation ( [ transformation - a ] ) .for every given by ( [ constrained - space ] ) , there exists a unique solution to the linearized log kdv equation ( [ linlogkdv ] ) for every satisfying . by using the transformations and for and the decomposition ( [ decomposition - lin - log - kdv ] ) , we obtain where is set uniquely in .recall that is an invariant reduction of the linearized log kdv equation ( [ linlogkdv ] ) .the assertion of the corollary follows from theorem [ theorem - linear - evolution ] and the equivalence ( [ equivalence - a - energy ] ) . at the first glance , the detached equation ( [ projection - c-1 ] )might imply that the projection to the translational mode may grow at most linearly as in the energy space with conserved .however , it follows directly from the linearized log kdv equation ( [ linlogkdv ] ) for the solution that therefore , we obtain as in the proof of theorem [ theorem - coercivity ] that where both terms are globally bounded for all .we discuss here numerical approximations of eigenvalues in the spectrum of the self - adjoint operator constructed in lemma [ lem - jacobi ] .let denote the simple real eigenvalues of with the ordering these real eigenvalues of transform to eigenvalues of the spectral problem by using the decomposition ( [ decomposition - lin - log - kdv ] ) .therefore , the result of lemma [ lem - jacobi ] also provides an alternative proof of the _ spectral stability _ of the gaussian solitary waves ( [ gaussian ] ) , which is also established in .however , by comparing the eigenvectors obtained in the two alternative approaches , we will see some sharp differences in the definition of function spaces these eigenvectors belong to . to proceed with numerical approximations, we note that if is a solution of like in the proof of lemma [ lemma - circle ] , then for even .let us denote for . from the bound ( [ bound - v ] ), we note that as .let be a solution of for .then , we denote and for .it follows from the definition ( [ jacobi - f ] ) that and satisfy the coupled system of difference equations : starting with .the discrete wronskian ( [ wronskian ] ) is now explicitly computed as since generally as , by applying lemma [ lemma - f - sequence ] with and to the first equation of system ( [ coupled - difference ] ) , the limit exists and is generally nonzero . 
moreover, the sign alternation of and ensures that the sequence is sign - definite for large enough , so that the limit is actually .this is confirmed in figure [ fig - wronskian](a ) , which shows the wronskian sequence given by ( [ wronskian - numerics ] ) for .as for .( b ) oscillatory behavior of versus .,title="fig : " ] as for .( b ) oscillatory behavior of versus .,title="fig : " ] computing numerically by truncation of at a sufficiently large , e.g. at , we plot versus on figure [ fig - wronskian](b ) .oscillations of are observed and the first two zeros of are located at these values are nicely compared to the first two eigenvalues computed in for : the numerical approximations confirm that the eigenvalues obtained by using the jacobi difference equation are the same as the eigenvalues obtained in from the sturm liouville problem derived in the fourier space .numerically , we find for the first two zeros of the limiting wronskian that the decay rate of the sequence remains generic : but the decay rate of the sequence becomes faster : let us now recall the correspondence of eigenvectors of the jacobi difference equation and eigenvectors of the linearized log kdv operator . from the previous transformations ,we obtain where are respectively the odd and even components of the eigenvector with respect to .thanks to the decay of the sequences and , we note that but that .therefore , generally defined by ( [ linearized - operator ] ) .thus , the eigenvector given by ( [ decomposition - numerics ] ) does not solve the eigenvalue problem in the classical sense compared to the eigenvectors constructed in with the fourier transform . in order to clarify the sense for the eigenvectors given by ( [ decomposition - numerics ] ), we denote and project the eigenvalue problem to and .the projection is uniquely found by which can also be obtained from the projection equation ( [ projection - c-1 ] ) . the component satisfies formally . after separating the even and odd parts of the eigenvalue problem ,we obtain the coupled system as we have indicated above , it is difficult to prove that each term of the coupled system ( [ linearized - formal ] ) belongs to if and are given by the decomposition ( [ decomposition - numerics ] ) in terms of the hermite functions . in order to formulate the coupled problem ( [ linearized - formal ] ) rigorously , we would like to show that the components of the eigenvector belong to and satisfy the coupled system where each term of system ( [ linearized - classical ] ) is now defined in . to show ( [ function - space ] ) and ( [ linearized - classical ] ) , we proceed as follows .according to ( [ y - odd - even ] ) and ( [ y - spaces ] ) , is odd , hence and the first constraint in ( [ function - space ] ) is satisfied . since the kernel of is spanned by the odd function , we have so that and as is given by the first equation in ( [ linearized - classical ] ) . similarly , from ( [ y - odd - even ] ) and ( [ y - spaces ] ) , we have so that the second equation in ( [ linearized - classical ] ) implies that . 
hence , the second constraint in ( [ function - space ] ) is satisfied .thus , the coupled system ( [ linearized - classical ] ) is well defined for the eigenvector of the eigenvalue problem defined in the function space ( [ function - space ] ) .note that the formulation ( [ linearized - classical ] ) also settles the issue of zero eigenvalue , which should not be listed as an eigenvalue of the problem .indeed , the first equation ( [ linearized - classical ] ) with implies , hence , where is a solution of .thus , the existence of the eigenvector for the eigenvalue of the jacobi difference equation does not imply the existence of the zero eigenvalue in the proper formulation ( [ function - space])([linearized - classical ] ) of the system .the same result can be obtained from the projection equation ( [ projection - eigenvalue - c-1 ] ) . if , then , which corresponds to the zero solution of the jacobi difference equation .delicate analytical issues in the decomposition ( [ decomposition - numerics ] ) involving hermite functions are likely to be related to the fact that eigenvectors of the eigenvalue problem decay algebraically as , while the decay of each hermite function in the decomposition ( [ y - odd - even ] ) is given by a gaussian function .for the kdv equation with exponentially decaying solitary waves , the exponentially weighted spaces were used to introduce effective dissipation in the long - time behavior of perturbations to the solitary waves and to prove their asymptotic stability . for the log kdv equation with gaussian solitary waves, it makes sense to introduce gaussian weights in order to obtain a dissipative evolution of the linear perturbations .here we show how the gaussian weights can be introduced for the linearized log kdv equation ( [ linlogkdv ] ) .let us represent a solution to the linearized log kdv equation ( [ linlogkdv ] ) in the following form with where are new variables to be found .it is clear that the representation ( [ convolution - y ] ) imposes restrictions on the class of functions of in the energy space .we will show that these restrictions are invariant with respect to the time evolution of the linearized log kdv equation ( [ linlogkdv ] ) .we assume sufficient smoothness and decay of the variable . by using the explicit computation with and ,we obtain dz \\ & = & \int_{-\infty}^0 w(t , z ) \left [ -u_0(x - z ) - z u_0'(x - z ) + \frac{1}{4 } z^2 u_0(x - z ) \right ] dz.\end{aligned}\ ] ] integrating by parts , we further obtain dz.\end{aligned}\ ] ] also recall that and . 
bringing together the left - side and the right - side of the linearized log kdv equation ( [ linlogkdv ] ) under the decomposition ( [ convolution])([convolution - y ] ) , we obtain the system of modulation equations and the evolution problem where the linear operator with is given by since is a regular singular point of the differential operator , no boundary condition is needed to be set at .we show that the differential operator is dissipative in .[ lemma - dissipative ] for every , we have ^ 2 + \int_{-\infty}^0 \left [ z ( \partial_z w)^2 + \frac{1}{4 } z w^2 \right ] dz { \leqslant}-\frac{1}{2 } \| w \|_{l^2(\mathbb{r}^-)}^2.\ ] ] the proof of the equality follows from integration by parts for every : dz & = & \left [ -z w w_z - w^2 + \frac{1}{8 } z^2 w^2 \right ] \biggr|_{z\to -\infty}^{z = 0 } + \int_{-\infty}^0 \left [ z ( \partial_z w)^2 + \frac{1}{4 } z w^2 \right ] dz \\ & = & - [ w(0)]^2 + \int_{-\infty}^0 \left [ z ( \partial_z w)^2 + \frac{1}{4 } z w^2 \right ] dz.\end{aligned}\ ] ] this yields the equality in ( [ h - form ] ) .the inequality in ( [ h - form ] ) is proved from the younge inequality where is at our disposal .picking yields the inequality in ( [ h - form ] ) .the semi - group theory for dissipative operators is fairly standard , so we assume existence of a strong solution to the evolution problem ( [ heat ] ) for every .the next result shows that this solution decays exponentially fast in the norm .[ cor - dissipation ] let be a solution of the evolution problem ( [ heat ] ) .then , the solution satisfies the decay behavior ( [ decay - l-2 ] ) is obtained from a priori energy estimates .indeed , it follows from ( [ h - form ] ) that gronwall s inequality yields the bound ( [ decay - l-2 ] ) .we recall that the solution needs to satisfy the constraint .the constraint is invariant with respect to the time evolution of the linearized log kdv equation ( [ linlogkdv ] ) .these properties are equivalently represented in the decomposition ( [ convolution])([convolution - y ] ) , according to the following lemma .[ lemma - constraint ] for every , we have where is constant in .moreover , if , then and furthermore , is constant in . as an alternative derivation, one can compute from the evolution problem ( [ heat ] ) that and then use the first modulation equation in system ( [ modulation - equation ] ) for integration in . if , then and .this yields uniquely by applying the decay bound ( [ decay - l-2 ] ) and the cauchy schwarz inequality , we obtain the decay bound ( [ decay - a ] ) . [ cor - modulation ]if , then there is such that as .it follows from the bound ( [ decay - a ] ) that if , then as . since decays to zero exponentially fast , the assertion of the corollary follows from integration of the second modulation equation in system ( [ modulation - equation ] ) .besides scattering to zero in the norm , the global solution of the evolution problem ( [ heat ] ) also scatters to zero in the norm .the following lemma gives the relevant result based on a priori energy estimates .[ lemma - scattering ] let be a smooth solution of the evolution problem ( [ heat ] ) in a subset of . then, there exist positive constants and such that the proof is developed similarly to the estimates in lemma [ lemma - dissipative ] and corollary [ cor - dissipation ] but the estimates are extended for . 
differentiating ( [ dissipative - operator ] ) in , multiplying by , and integrating by parts , we obtain for smooth solution : \biggr|_{z \to -\infty}^{z = 0 } + \int_{-\infty}^0 z w_{zz}^2 dz + \frac{3}{4 } \int_{-\infty}^0 z w_z^2 dz \\ & = & - \frac{3}{2 } [ \partial_z w(0)]^2 + \frac{1}{4 } [ w(0)]^2 + \int_{-\infty}^0 z w_{zz}^2 dz + \frac{3}{4 } \int_{-\infty}^0 z w_z^2 dz.\end{aligned}\ ] ] as a result , smooth solutions to the evolution problem ( [ heat ] ) satisfy the differential inequality ^ 2 + \int_{-\infty}^0 z w_{zz}^2 dz + \frac{3}{4 } \int_{-\infty}^0 z w_z^2 dz.\end{aligned}\ ] ] by using young s inequality , we estimate ^ 2 = 2 \int_{-\infty}^0 z w w_zdz { \leqslant}\beta^2 \| w_z\|_{l^2(\mathbb{r}^-)}^2 + \beta^{-2 } \| w\|_{l^2(\mathbb{r}^-)}^2\ ] ] and where and are to our disposal .picking and assuming , we close the differential inequality as follows thanks to the exponential decay in the bound ( [ decay - l-2 ] ) , we can rewrite the differential inequality in the form { \leqslant}\frac{1}{2 \beta^2 } \| w(0 ) \|_{l^2(\mathbb{r}^-)}^2 e^{-\frac{\beta^2}{2 } t}.\end{aligned}\ ] ] integrating over time , we finally obtain where is fixed arbitrarily .thus , the norm of the smooth solution to the evolution problem ( [ heat ] ) decays to zero exponentially fast as .the bound ( [ decay - l - infty ] ) follows by the sobolev embedding of to . combining the results of this section , we summarize the main result on the dissipative properties of the solutions to the linearized log kdv equation ( [ linlogkdv ] ) represented in the convolution form ( [ convolution])([convolution - y ] ) .[ theorem - convolution ] assume that the initial data is represented by the convolution form ( [ convolution])([convolution - y ] ) with some , , and .there exists a solution of the linearized log kdv equation ( [ linlogkdv ] ) represented in the convolution form ( [ convolution])([convolution - y ] ) with unique and .moreover , there is a such that the existence result follows from the existence of the semi - group to the evolution problem ( [ heat ] ) and the ode theory for the system of modulation equations ( [ modulation - equation ] ) . since , the scattering result ( [ scattering - result ] ) follows from the generalized younge inequality , as well as the results of corollary [ cor - dissipation ] , lemma [ lemma - constraint ] , corollary [ cor - modulation ] , and lemma [ lemma - scattering ] .we have obtained new results for the linearized log kdv equation . by using hermite function decompositions in section 4 ,we have shown analytically how the semi - group properties of the linear evolution in the energy space can be recovered with the jacobi difference operator .we have also established numerically in section 5 the equivalence between computing the spectrum of the linearized operator with the jacobi difference equation and that with the differential equation . finally , we have used in section 6 the convolution representation with the gaussian weight to show that the solution to the linearized log kdv equation can decay to zero in the norms .it may be interesting to compare these results with the fourier transform method used in the previous work . from analysis of eigenfunctions of the spectral problem , it is known that the eigenfunctions are supported on a half - line in the fourier space . 
the decomposition ( [ decomposition - lin - log - kdv ] ) in terms of the hermite functions in the physical space can be written equivalently as the decomposition in terms of the hermite functions in the fourier space .the jacobi difference equation representing the spectral problem does not imply generally that the decomposition in the fourier space returns an eigenfunction supported on a half - line .this property is not explicitly seen in the computation of eigenvectors with the jacobi difference operator .another interesting observation is as follows .the linear evolution of the linearized log kdv equation in the fourier space ( [ linlogkdv - fourier ] ) can be analyzed separately for and . since the time evolutionis given by the linear schrdinger - type equation , the fundamental solution is norm - preserving in the energy space . if the gaussian weight is introduced on the positive half - line as follows : then the time evolution is defined in the fourier space by , where the linear operator is given by if and in ( [ h - form ] ) and ( [ h - operator - fourier ] ) are extended on the entire line , then and are fourier images of each other .thus , a very similar introduction of the gaussian weights ( except , of course , the domains in the physical and fourier space ) may result in either dissipative or norm - preserving solutions of the linearized log kdv equation .although the results obtained in this work give new estimates and new tools for analysis of the linearized log kdv equation , it is unclear in the present time how to deal with the main problem of proving orbital stability of the gaussian solitary waves in the nonlinear log kdv equation .this challenging problem will remain open to new researchers . | the logarithmic kdv ( log kdv ) equation admits global solutions in an energy space and exhibits gaussian solitary waves . orbital stability of gaussian solitary waves is known to be an open problem . we address properties of solutions to the linearized log kdv equation at the gaussian solitary waves . by using the decomposition of solutions in the energy space in terms of hermite functions , we show that the time evolution is related to a jacobi difference operator with a limit circle at infinity . this exact reduction allows us to characterize both spectral and linear orbital stability of solitary waves . we also introduce a convolution representation of solutions to the log kdv equation with the gaussian weight and show that the time evolution in such a weighted space is dissipative with the exponential rate of decay . |
support vector machines [ svm ; ] are a popular tool for classification .two important aspects contributed a lot to this popularity .first , support vector machines handle high - dimensional , low sample size data very well , in terms of computational efficiency as well as prediction quality .therefore , they are well suited to tackle , for example , microarray data containing thousands of gene expression levels ( high dimensionality ) for a limited number of subjects ( low sample size ) ; see , for example , and .second , support vector machines allow for incorporating kernel functions via the so - called kernel trick .this way nonlinearity in the data can be handled , for example , using a polynomial or a gaussian kernel . moreover ,nonnumerical data can be modeled by designing an appropriate kernel function using a priori biological information about the data at hand .this strategy is reported to perform very well , for instance , in protein homology detection , for example , fisher svm [ ] , pairwise svm [ ] , spectrum kernel [ ] , mismatch kernel [ ] and local alignment kernel [ ] . for high - dimensional and complex data sets ,the assumption of clean , independent and identically distributed samples is not always appropriate . in and , for instance , several samples are regarded as suspicious .a potential drawback of support vectormachines is the sensitivity to an even very small number of outliers .outlier detection is thus important and many approaches have been proposed in the literature .although often useful , these methods come with some important drawbacks as well : * as discussed by , many techniques are limited to situations where the sample size exceeds the dimension , thus excluding modern high - dimensional data analysis .* several types of outliers exist .algorithms such as those proposed by , and focus on samples that are potentially mislabeled .however , not every outlier is a mislabeled observation and vice versa : a sample can be correctly labeled yet behave in a completely different way than its group members .such discrimination between several types of outliers is usually not provided . *most algorithms basically provide a ranking of the samples according to potential mislabeling .however , intuitively it is not always clear how many of the top ranked samples are serious outlier candidates .automatic cut - off procedures often turn out too conservative ( not detecting all outliers ) or too aggressive ( pointing out good samples as outliers ) . *the role of the kernel is highly undervalued .some methods [ , ] do not use support vector machines or kernels at all . use support vector machines , but restrict themselves to a linear kernel and even a constant regularization parameter , whereas optimization of hyperparameters through cross validation is preferred . in order to avoid some of these difficulties, we propose an outlier map for svm classification .outlier maps ( also called diagnostic plots ) are quite common in multivariate statistics , for example , for linear regressionand linear principal component analysis [ , ] .the idea is to start from a robust method guaranteeing resistance to potential outliers .based on this robust fit , appropriate measures of interest ( e.g. , residuals in regression ) are computed and plotted . 
in this paper a similar idea is developed for providing an outlier map which is easy to interpret , distinguishes different types of potential outliers , and works for any type of kernel .on the -axis of this map we put the stahel donoho outlyingness . in section [sect : sd ] we explain how to compute this outlyingness measure in a general kernel induced feature space . on the -axis of the outlier map we put the value of the classification function of a trimmed support vector machine .more details on this robustified svm are given in section [ sect : sdsvm ] .the main part of the paper is section [ sect : defom ] , where the outlier map is defined and illustrated in a simple two - dimensional example . in section [ sect : examples ] the outlier map is discussed in high - dimensional real life examples .let be a data set of -dimensional samples . in multivariate statistics the stahel donoho outlyingness of sample [ , ] is defined by with a robust univariate estimator of location and a univariate estimator of spread .popular choices are , for instance , the median for and the median absolute deviation ( mad ) for .the set is a set of directions in . in practice , this set is often constructed by selecting directions orthogonal to subspaces containing observations if is sufficiently small .another possibility is taking times a direction through randomly chosen observations .this strategy works in any dimension and since we will extend the outlyingness to high - dimensional kernel spaces , this is the strategy of our choice .the stahel donoho outlyingness plays a crucial role in several multivariate robust algorithms , for example , covariance estimation [ ] and pca [ ] .first we note that this outlyingness measure can be computed in an arbitrary kernel induced feature space .let be elements in a set .let be an appropriate kernel function with corresponding feature space and feature map such that the inner product between feature vectors in can be computed by : denote the matrix containing as entry .this matrix is called the kernel matrix . a typical kernel method such as svm consists of applying a linear method in the feature space such that the computations only depend on pairwise inner products and thus on the kernel matrix [ ] .we now show that the stahel donoho outlyingness ( [ eq : stdorig ] ) can be computed in such a manner .let be the direction in through feature vectors and : the projection of a feature vector onto the direction is then since the squared norm of an element equals the inner product of the element with itself , we have that the vector denotes the vector with entry equal to , entry equal to and all other entries equal to .then denote the vector containing the projections of all feature vectors onto the direction through feature vectors and : note that only the kernel matrix is needed and not the explicit feature vectors to compute the projections . from these projectionsthe stahel donoho outlyingness of a feature vector in can be calculated as follows : again and are univariate robust estimators of location and scale . from this point on we always take note that in ( [ eq : stdkern ] ) we have to check directions to find the maximum , where denotes the number of observations in the data set. then all directions through observations are considered .if is too large , a random subset of directions can be taken .typically a few hundred is already enough to provide a good approximation [ ] . 
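the projections above involve only entries of the kernel matrix , so the outlyingness can be coded in a few lines . the following is a minimal sketch in python ( the use of numpy , the function name and the small tolerance constants are ours , not part of the original algorithm ) ; it uses the median and the mad as the univariate location and scale estimators , and takes the maximum over directions through pairs of feature vectors , which can be subsampled as discussed next .

```python
import numpy as np

def kernel_sd_outlyingness(K, n_dir=None, rng=None):
    """stahel-donoho outlyingness of each sample in the kernel-induced
    feature space, computed from the kernel matrix K (n x n) only."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if n_dir is not None and n_dir < len(pairs):
        chosen = rng.choice(len(pairs), size=n_dir, replace=False)
        pairs = [pairs[k] for k in chosen]
    out = np.zeros(n)
    for i, j in pairs:
        # squared norm of the direction phi(x_i) - phi(x_j) in feature space
        norm2 = K[i, i] - 2.0 * K[i, j] + K[j, j]
        if norm2 <= 1e-12:
            continue  # skip (near-)identical feature vectors
        # projections of all feature vectors onto this direction
        proj = (K[:, i] - K[:, j]) / np.sqrt(norm2)
        med = np.median(proj)
        mad = np.median(np.abs(proj - med))
        if mad <= 1e-12:
            continue
        out = np.maximum(out, np.abs(proj - med) / mad)
    return out
```

since only the kernel matrix enters the computation , the same function applies unchanged to a linear , gaussian or string - type kernel .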
in our implementationwe use the full set if .otherwise we select directions at random .let us now turn to the typical svm setup .let be a data set of training samples in some set and let be a kernel function .let be the corresponding labels : if sample belongs to the negative group , if sample belongs to the positive group. denote by the number of samples with label and the number of samples with label .the following algorithm basically trims a fraction of the data with largest outlyingness and trains a standard svm on the remaining samples. we will refer to this algorithm as sd - svm ( sd stands for stahel donoho ) : 1 .set .denote and ( denotes the largest integer smaller than ) .trimming step : consider only the inputs with group label . compute the stahel donoho outlyingness for every sample in this set using ( [ eq : stdkern ] ) .retain the observations with smallest outlyingness .denote this set of size as .analoguously obtain the set containing the samples with group label with smallest outlyingness .training step : train a standard svm on the reduced training set .thus solve \\[-8pt ] & & \mbox{subject to}\qquad 0\leq\alpha_i\leq c \quad\mbox{and}\quad \sum_{x_i\in t}\alpha_i y_i=0.\nonumber\end{aligned}\ ] ] the classifying function is given by to predict the group membership of a sample , one takes . notethat the computations in the training step are exactly the same as for an ordinary svm .the only difference is that the reduced set containing the observations with smallest outlyingness is used , in order to avoid negative effects from possible outliers .the regularization parameter in ( [ eq : sdsvmalpha ] ) is sometimes set to as a default value .however , it is preferable to optimize the value of .sd - svm is of course compatible with any type of model selection strategy : it suffices to add the model selection strategy to the training step ( step 3 ) of the algorithm outlined in section [ sect : sdsvm ] . in all the examples of this paper ,10-fold cross - validation was used to optimize . to illustrate sd - svm ,consider the following simple experiment : samples ( negative group ) are generated , each with independent standard normal components .another samples ( positive group ) are generated with independent normal components with mean . in a second setupthe same data is used with additional outliers : samples are added to the negative group with independent normal components with mean . to the positive group , samples are added with independent normal components with mean . in both situationssd - svm with a linear kernel is applied for several values of .the fraction of misclassifications on newly generated test data is computed .figure [ fig : sim ] shows boxplots over simulation runs . .] 
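a corresponding sketch of the trimming and training steps of sd - svm , assuming scikit - learn with a precomputed kernel and reusing the outlyingness function sketched above , is given below ; the trimming fraction and the regularization parameter c are left to the user ( the text selects c by 10-fold cross - validation ) , since their default values are not recoverable here .

```python
import numpy as np
from sklearn.svm import SVC

def sd_svm_fit(K, y, delta, C):
    """trimmed svm: per class, keep the fraction `delta` of samples with
    smallest kernel stahel-donoho outlyingness, then train a standard svm
    on the precomputed kernel restricted to the retained samples."""
    keep = []
    for label in (-1, 1):
        idx = np.where(y == label)[0]
        h = int(np.floor(delta * len(idx)))
        out = kernel_sd_outlyingness(K[np.ix_(idx, idx)])
        keep.extend(idx[np.argsort(out)[:h]])  # h smallest outlyingness values
    keep = np.sort(np.array(keep))
    svm = SVC(C=C, kernel="precomputed")
    svm.fit(K[np.ix_(keep, keep)], y[keep])
    return svm, keep

# prediction for new samples, given their kernel values against the full
# training set (rows = test samples, columns = training samples):
#   scores = svm.decision_function(K_test[:, keep])
#   labels = np.sign(scores)
```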
in the case without outliers the number of misclassifications increases as decreases .this is quite expected since a lower means more trimming , which is unnecessary in this case since all samples are nicely generated from two gaussian distributions .thus , it is no surprise that a classical svm ( ) performs best .however , a relatively small amount of outliers ( 8 out of 58 ) changes things completely ( right - hand side of figure [ fig : sim ] ) .a classical svm ( ) is no better than guessing anymore ( more than 50 misclassifications ) .sd - svm with is not good enough either , since the trimming percentage is still smaller than the percentage of outliers .only if is chosen small enough , good performance is obtained .thus , a small provides protection against outliers at the cost of a slightly worse classification performance at uncontaminated data . for the outlier map it is most important to avoid the huge effects of outliers , whereas the small effect of unnecessary trimming is practically invisible .therefore , a default choice of turns out to be a good choice for the construction of the outlier map , and we retain this choice throughout the remainder of the paper .the following visualization is proposed : 1 .make a scatterplot of the outlyingness and the value of the classifier .thus , for , plot pairs where is the stahel donoho outlyingness of sample computed in the trimming step of the algorithm and can be calculated from ( [ eq : sdsvmf ] ) .2 . plot the inputs with group labels as circles and those with group labels as crosses .add a solid vertical line at horizontal coordinate .consider a simple example in dimensions as follows : observations are generated from a bivariate gaussian distribution with mean and identity covariance matrix .they have group label .thirty observations are generated from a bivariate gaussian distribution with mean and identity covariance matrix .they receive group label .apart from these observations , more are added , representing several types of outliers : data points ( denoted 6163 ) are placed around position with label .two observations ( denoted 64 and 65 ) with label are placed around .one point ( denoted ) is placed at position with label . a two - dimensional view of the data is given in figure [ fig : fig1](a ) .the solid line represents the sd - svm classification boundary with a linear kernel . despite the outliers in the data , sd - svm still manages to separate both groups quite nicely . [ figure [ fig : fig1 ] : ( a ) two - dimensional classification problem ; the solid line is the sd - svm classifying line . ( b ) corresponding outlier map visualizing the two main groups and the different types of outliers . ] figure [ fig : fig1](b ) shows the corresponding outlier map . on the vertical axis one reads the stahel donoho outlyingness .observations and are positioned in the center of their respective group .their outlyingness is indeed small .observations further away from the group center have a larger outlyingness , for example , , and .
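producing the map itself amounts to a few lines of plotting code ; a minimal sketch ( matplotlib assumed , function and variable names ours , and the assignment of markers to the two labels our own choice ) is the following , after which reading the map proceeds as described next .

```python
import matplotlib.pyplot as plt

def outlier_map(f_values, outlyingness, y):
    """scatter plot of the classifier value against the outlyingness."""
    neg, pos = (y == -1), (y == 1)
    plt.scatter(f_values[neg], outlyingness[neg], marker="o", label="label -1")
    plt.scatter(f_values[pos], outlyingness[pos], marker="x", label="label +1")
    plt.axvline(0.0, color="black")  # classification boundary f(x) = 0
    plt.xlabel("value of the classifying function")
    plt.ylabel("stahel-donoho outlyingness")
    plt.legend()
    plt.show()
```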
on the horizontal axis the value of the classifying function as in ( [ eq : sdsvmf ] ) can be read .the sign of this function determines the predicted group labels .the vertical line at divides the plot in two parts : every point left of the line is classified into the negative group by sd - svm and every point on the right is classified into the positive group .we can now see , for instance , that observation is a misclassification : it belongs to the positive group , but receives group label since it lies on the left of the vertical line in figure [ fig : fig1](a ) .the absolute value of the -coordinate in the diagnostic plot represents a distance to the classification boundary . in figure[ fig : fig1](a ) it can be seen , for example , that observations and are almost equally distant from the negative group center , but observation is much closer to the classification line .this information can be found in the outlier map in figure [ fig : fig1](b ) as well , since both have almost the same outlyingness ( vertical axis ) , but is much closer to the vertical line than sample ( horizontal axis ) .the outliers in the data can be detected and characterized too .observations 6163 are outlying with respect to the other data points in their group , which is clearly indicated by their large outlyingness. however , both samples still follow the classification rule .indeed , both are lying on the right side in figure [ fig : fig1](b ) .samples 64 and 65 , on the other hand , are outlying with respect to the other observations in their group as well as with respect to the classification line : their outlyingness is large and the value of the classification function is negative , although it should have been positive to obtain a correct classification . finally consider observation .its not extremely outlying with respect to the other data points in the positive group .however , taking the negative group and the classification line into account , it seems to share more characteristics with the negative group than with its own positive group colleagues .in the outlier map this is revealed by a moderate outlyingness and by its position almost in the middle of the left side of the vertical line .the first example considers a data set by .the data consist of microarrays from 128 different individuals with acute lymphoblastic leukemia ( all ) , publicly available in the all package in the software environment r. the number of gene expressions at each individual equals .there are adult patients with t - cell all and with b - cell all .figure [ fig : leuk ] presents the outlier map for svm with a linear kernel applied to this data set .it turns out that the data is well classified and that there are no samples with a very large outlyingness .both t - cell and b - cell form homogeneous groups as one would like when applying a linear svm .thus , the outlier map immediately shows that the data is clean and one can safely proceed analysis without worrying about outliers .the breast cancer data set from contains tumor samples that are either positive ( er ) or negative ( er ) to estrogen receptor .the expression levels of genes are given for each sample .for a linear kernel the corresponding outlier map is shown in figure [ fig : breast](a ) .samples , and immediately catch the eye .their outlyingness is unusually large . 
in samples and were already rejected and taken out of the analysis due to failed array hybridization .also sample was characterized as unusual .it was the only sample in the er group for which the out of sample prediction was highly unreliable in the analysis performed by .the samples and attract attention as well .they have a large outlyingness and both are clearly misclassified .it turns out that for this data the group membership er or er was determined not only by immunohistochemistry at time of diagnosis , but also by later immunoblotting . for samples and both methods returned different results . show via statistical analysis that the initial labeling er for and er for is probably wrong and that the immunoblotting results are more appropriate .this is clearly confirmed by the outlier map .it is worth noting that the same data set was analyzed in , where a comparison was made between a proposed stability criterion , a simple leave - one - out criterion and the algorithm from .however , none of these methods was able to detect the clear outliers discussed so far .five more suspicious samples were indicated in : , , , and . in figure [ fig : breast](b ) these samples are shown on a zoom - in from the full outlier map into the region on the vertical axis . except for , these samples are suspicious in the sense that they are not confidently classified , since the value of the classifying function is close to .it is no surprise that these samples are found by the algorithms compared in , since those methods are designed to detect potentially mislabeled samples .also note that some of these mislabeling detection algorithms pointed out samples and as suspicious , although these samples were not considered in . from the outlier map it can be seen that and are indeed wrongly classified by sd - svm . [ figure [ fig : breast ] : ( a ) outlier map for the breast cancer data ; some samples are outlying but well classified , while others are slightly outlying with respect to their groups but clearly wrongly classified , suggesting that they are mislabeled rather than erroneous , confirming the original analysis by west et al . ( b ) the same plot , but zoomed - in at the region on the vertical axis for better visibility ; observations flagged by algorithms searching for mislabelings are shown . ] the colon cancer data set from contains gene expression levels for tumor samples and normal samples .the outlier map with a linear kernel is shown in figure [ fig : alon ] . in the tumor group t2 , t33 , t36 and t30 are misclassified .sample t37 is classified correctly , but with low confidence : it is very close to the classification boundary . in the normal group n8 and especially n34 and n36 are the suspicious cases that behave differently from the other normal samples .the 8 aforementioned samples plus sample n12 were identified as possible outliers in the original paper by for biological reasons . thus , 8 out of 9 true outliers can be identified on the outlier map , only leaving n12 undetected . however , in none of the methods that were compared could detect n12 . moreover , the stability criterion proposed by malossini et al .
was unable to detect t37 and n8 too and incorrectly pointed at n2 and n28 as possibly suspicious samples .also note the interesting sample t6 . from the outlier mapwe see that this sample is classified correctly and with much confidence .nevertheless , its outlyingness with respect to the other tumor samples is rather large .this means that t6 behaves quite differently than the other tumor samples , but without distorting the classification . in malossinimost of the methods analyzed did not detect t6 at all .again this is no surprise since methods such as the stability criterion of malossini et al .specifically focus on mislabeled observations , whereas t6 is certainly not mislabeled .only the outlier detection method of is able to detect t6 , but does rather poorly on the other samples detecting only 5 out of 9 true outliers .the protein data set taken from contains protein sequences of the essentially ubiquitous glycolytic enzyme 3-phosphoglycerate kinase ( 3-pgk ) in three domains : archaea , bacteria and eukaryota . the data set is available in the protein classification benchmark collection at http://net.icgeb.org ( accession number pcb00015 ) .we consider here classification task number 10 where the positive group consists of 35 eukaryota .the negative group consists of 4 archaea and 40 bacteria . to classify these two groups of protein sequences , we use svm with the local alignment kernel [ ] .default parameter values were used : gap opening penalty 11 , gap extension penalty 1 , scaling parameter 0.5 .the outlier map is shown in figure [ fig : protein ] .one observes that the positive group of eukaryota is very heterogeneous as several clusters appear .these clusters all have a biological interpretation , as the group of eukaryota contains several subgroups of different phyla .for instance , observations 2931 are from the phylum of alveolata. samples 1317 are the euglenozoa . note that 18 ( named q8srz8 ) , which belongs to the fungi , was clustered in the group of euglenozoa by pollack et al . ;this is actually confirmed by the outlier map .finally , samples 33 and 34 are outlying with respect to the positive group .they form , together with 32 , the group of stramenopiles .note that the different behavior of sample 32 from its fellow stramenopiles is again a confirmation of the analysis by pollack et al .: their clustering method assigned 32 ( named q8h721 ) in the main group of eukaryota metazoa .also , in the outlier map 32 is situated in the main group , whereas 33 and 34 form a separate cluster . in the positive groupthe heterogeneity is less clear , although the 4 archaea ( 3639 ) do have the largest outlyingness compared to the other samples which are all bacteria .an outlier map is proposed for support vector machine classification .if the outlier map shows two homogeneous and well classified groups , one can safely proceed analysis without worrying about outliers .however , in some situations this may not be the case and the outlier map can be a simple and useful tool to detect this . 
moreover , the outlier map can be drawn for any choice of kernel , including rather exotic ones such as used in protein analysis .it can also be helpful to gain insight in the type of outliers , for example , whether outliers are mislabeled observations or not , or whether the outliers are isolated errors or rather a small subgroup of the group structure considered .this is important to know how to proceed analysis .if the outliers are truly erroneous observations , one should not take them into account to build a classifier , and one can manually discard them from the data set or apply a robust classifier . if the outliers are mislabeled observations , one probably should re - examine the labeling and change the label of the outlier if this seems indeed appropriate . if the outliers form a small subgroup of the data , one might reconsider the use of a binary classifier and turn to a more appropriate modeling technique . in any event, the outlier map can be helpful for practitioners of svm classification to make such decisions .alon , u. , barkai , n. , notterman , d. a. , gish , k. , ybarra , s. , mack , d. and levine , a. j. ( 1999 ) .broad patterns of gene expression revealed by clustering of tumor and normal colon tissues probed by oligonucleotide arrays .sci . _ * 96 * 64756750 .chiaretti , s. , li , x. , gentleman , r. , vitale , a. , vignetti , m. , mandelli , f. , ritz , j. and foa , r. ( 2004 ) . gene expression profile of adult t - cell acute lymphocytic leukemia identifies distinct subsets of patients with different response to therapy and survival . _ blood _ * 103 * 27712778 .furey , t. s. , cristianini , n. , duffy , d. , bednarski , w. , schummer , m. and haussler , d. ( 2000 ) .support vector machine classification and validation of cancer tissue samples using microarray expression data. _ bioinformatics _ * 16 * 906914 .kadota , k. , tominaga , d. , akiyama , y. and takahashi , k. ( 2003 ) . detecting outlying samples in microarray data : a critical assessment of the effect of outliers on sample classification ._ chem - bio .j. _ * 3 * 3045 .leslie , c. , eskin , e. and noble , w. s. ( 2002 ) .the spectrum kernel : a string kernel for svm protein classification . in _ proceedings of the pacific symposium on biocomputing 2002 _( r. b. altman , a. k. dunker , l. hunter , k. lauerdale and t. e. klein , eds . ) 564575 .world scientific , hackensack , nj . leslie , c. , eskin , e. , weston , j. and noble , w. s. ( 2003 ) .mismatch string kernels for svm protein classification . in_ advances in neural information processing systems _ ( s. becker , s. thrun and k. obermayer , eds . ) * 15 * 14411448 . mit press , cambridge , ma .li , l. , darden , t. a. , weinberg , c. r. , levine , a. j. and pedersen , l. g. ( 2001 ) .gene assessment and sample classification for gene expression data using a genetic algorithm / k - nearest neighbor method . _high throughput screen . _ * 4 * 727739 .liao , l. and noble , w. s. ( 2002 ) . combining pairwise sequence similarity and support vector machines for remote protein homology detection . in_ proceedings of the sixth international conference on computational molecular biology _ ( t. lengauer , ed . ) 225232 .acm press , new york .pochet , n. , de smet , f. , suykens , j. a. k. and de moor , b. ( 2004 ) .systematic benchmarking of microarray data classification : assessing the role of nonlinearity and dimensionality reduction ._ bioinformatics _ * 20 * 31853195. pollack , j. d. , li , q. and pearl , d. k. 
( 2005 ) .taxonomic utility of a phylogenetic analysis of phosphoglycerate kinase proteins of archaea , bacteria and eukaryota : insights by bayesian analyses .evol . _ * 35 * 420430 . west , m. , blanchette , c. , dressman , h. , huang , e. , ishida , s. , spang , r. , zuzan , h. , marks , j. r. and nevins , j. r. ( 2001 ) . predicting the clinical status of human breast cancer by using gene expression profiles ._ * 98 * 1146211467 . | support vector machines are a widely used classification technique . they are computationally efficient and provide excellent predictions even for high - dimensional data . moreover , support vector machines are very flexible due to the incorporation of kernel functions . the latter allow to model nonlinearity , but also to deal with nonnumerical data such as protein strings . however , support vector machines can suffer a lot from unclean data containing , for example , outliers or mislabeled observations . although several outlier detection schemes have been proposed in the literature , the selection of outliers versus nonoutliers is often rather ad hoc and does not provide much insight in the data . in robust multivariate statistics outlier maps are quite popular tools to assess the quality of data under consideration . they provide a visual representation of the data depicting several types of outliers . this paper proposes an outlier map designed for support vector machine classification . the stahel donoho outlyingness measure from multivariate statistics is extended to an arbitrary kernel space . a trimmed version of support vector machines is defined trimming part of the samples with largest outlyingness . based on this classifier , an outlier map is constructed visualizing data in any type of high - dimensional kernel space . the outlier map is illustrated on 4 biological examples showing its use in exploratory data analysis . . |
retrospective estimates of influenza activity ( ili activity level , as reported by the cdc ) were produced using our model , argo , for the time period of 2009 - 03 - 29 to 2015 - 07 - 11 , assuming we had access only to the historical cdc s ili reports up to the previous week of estimation .we compared argo s estimates with the ground truth : the cdc - reported weighted ili activity level , published typically with one or two weeks delay , by calculating a collection of accuracy metrics described in the materials section .these metrics include the root mean squared error ( rmse ) , mean absolute error ( mae ) , mean absolute percentage error ( mape ) , correlation with estimation target , and correlation of increment with estimation target . for comparison , we calculated these accuracy metrics for ( a ) gft estimates ( accessed on 2015 - 07 - 11 ) , ( b ) estimates produced using the method of santillana et al .2014 , ( c ) estimates produced by combining gft with an ar(3 ) autoregressive model , ( d ) estimates produced with an ar(3 ) autoregressive model , and ( e ) a naive method that simply uses the value of the prior week s cdc s ili activity level as the estimate for the current one . for fair comparison , all benchmark models ( b d )are dynamically trained with a two - year moving window .table 1 summarizes these accuracy metrics for all estimation methods for multiple time periods .the first column shows that argo s estimates outperform all other alternatives , in every accuracy metric for the whole time period .the other columns of table 1 show the performance of all the methods for the 2009 off - season h1n1 flu outbreak , and each regular flu season since 2010 .the panels of figure [ fig : all_pred ] display the estimates against the observed cdc - reported ili activity level .close inspection shows that , in the post-2009 regular flu seasons , argo uniformly outperformed all other alternative estimation methods in terms of root mean squared error , mean absolute error , mean absolute percentage error , and correlation .argo avoids the notorious over - shooting problem of gft , as seen in figure [ fig : all_pred ] . during the 2009 off - seasonh1n1 flu outbreak , argo had the smallest mean absolute percentage error . in terms of rootmean squared error and mean absolute error , argo ( relative rmse = 0.640 , relative mae = 0.584 ) had the second best performance , under - performing slightly only to gft+ar(3 ) model ( relative rmse = 0.580 , relative mae = 0.570 ) . in terms of correlation ,argo ( r=98.5% ) had similar performance to ( the potentially in - sample data of ) gft ( r=98.9% ) and gft+ar(3 ) model ( r=98.6% ) , while outperforming all the other alternatives . 
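for concreteness , the accuracy metrics used in these comparisons can be computed as in the following sketch ( numpy assumed ; the function name is ours ) :

```python
import numpy as np

def accuracy_metrics(estimate, target):
    """accuracy metrics comparing weekly estimates with the cdc-reported
    (weighted) ili activity level."""
    estimate = np.asarray(estimate, dtype=float)
    target = np.asarray(target, dtype=float)
    err = estimate - target
    return {
        "rmse": np.sqrt(np.mean(err ** 2)),
        "mae": np.mean(np.abs(err)),
        "mape": np.mean(np.abs(err) / target),
        "correlation": np.corrcoef(estimate, target)[0, 1],
        # correlation of increment: how well week-to-week changes are tracked
        "correlation of increment": np.corrcoef(np.diff(estimate),
                                                np.diff(target))[0, 1],
    }
```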
to assess the statistical significance of the improved prediction power of argo, we constructed a 95% confidence interval for the relative efficiency of argo compared to other benchmark methods .the relative efficiency of method 1 to method 2 is the ratio of the true mean squared error of method 2 to that of method 1 , which can be estimated by its observed value ( see eq ) ; its confidence interval can be constructed by stationary bootstrap of the error residual time series .table [ tab : relative_efficiency ] shows that argo is estimated to be at least twice as efficient as any other alternative and the improvement in accuracy is highly statistically significant .it is well - known that cdc reports undergo revisions , weeks after their initial publication , that respond to internal consistency checks and lead to more accurate estimates of patients with ili symptoms seeking medical attention .thus , the available historical cdc information , in a given week , is not necessarily as accurate as it will be .we tested the effect of using ( potentially inaccurate ) unrevised information by obtaining the historical unrevised and revised reports , and the dates when the reports were revised , from the cdc website for the time period of our study .we used only the information that would have been available to us , at the time of estimation , and produced a time series of estimates for the whole time period described before .we compared our estimates to all other methods and found that argo still outperformed them all .moreover , the values of all five accuracy metrics for argo essentially did not change , suggesting a desirable robustness to revisions in cdc s ili activity reports .the results are shown in table s1 in the supporting information .we faced an additional challenge in producing real - time estimates for the latest portion of the 2014 - 2015 flu season . at the time of writingthis article , the only data available to us for the week of march 28 , 2015 and later came from the _ google trends _ website .the information from _ google trends _ has even lower quality than from _ google correlate _ and changes every week .these undesired changes affected the quality of our estimates . 
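returning to the relative - efficiency comparison in table [ tab : relative_efficiency ] above , the estimate and its interval can be computed as in the sketch below ; the paired resampling of the two error - residual series , the geometric block lengths with mean 52 , the log scale and the basic bootstrap interval follow the description in the materials section , while the remaining implementation details are our assumptions .

```python
import numpy as np

def stationary_bootstrap_indices(n, mean_block=52, rng=None):
    """index series for the stationary bootstrap: circular blocks with
    geometrically distributed lengths (mean length 52 weeks = one year)."""
    rng = np.random.default_rng(rng)
    p = 1.0 / mean_block
    idx = []
    while len(idx) < n:
        start = rng.integers(n)
        length = rng.geometric(p)
        idx.extend((start + np.arange(length)) % n)
    return np.asarray(idx[:n])

def relative_efficiency_ci(err_bench, err_argo, n_boot=1000, rng=None):
    """relative efficiency mse(benchmark)/mse(argo) with a 95% basic
    bootstrap interval; the paired error residuals are resampled jointly."""
    rng = np.random.default_rng(rng)
    err_bench = np.asarray(err_bench, dtype=float)
    err_argo = np.asarray(err_argo, dtype=float)
    n = len(err_argo)
    point = np.mean(err_bench ** 2) / np.mean(err_argo ** 2)
    log_reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = stationary_bootstrap_indices(n, rng=rng)
        log_reps[b] = np.log(np.mean(err_bench[idx] ** 2) /
                             np.mean(err_argo[idx] ** 2))
    q_lo, q_hi = np.percentile(log_reps, [2.5, 97.5])
    log_point = np.log(point)
    # basic bootstrap interval on the log scale, exponentiated back
    return point, np.exp(2 * log_point - q_hi), np.exp(2 * log_point - q_lo)
```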
in order to assess the stability of argo in the presence of these variations in the data, we obtained the search frequencies of the same query terms from _ google trends _website on 25 different days during the month of april 2015 , and produced a set of 25 historical estimates using argo .the results of the accuracy metrics associated to these estimates are shown in table s2 in the supporting information .this table shows that , despite the observed variation in the _ google trends _ data , argo is threefold more stable than the method of , and still outperforms on average any other method ..estimate of relative efficiency of argo compared to other models with 95% confidence interval ( ci ) .relative efficiency being larger than one suggests increased predictive power of argo compared to the alternative method .[ cols=">,>,>",options="header " , ]the results presented here demonstrate the superiority of our approach both in terms of accuracy and robustness , when compared to all existing flu tracking models based on google searches .the value of these results is even higher given the fact that they were produced with low quality input variables .it is highly likely that our methodology would lead to even more accurate results if we were given access to the input variables that google uses to calculate their estimates .the combination of seasonal flu information with dynamic reweighting of search information , appears to be a key factor in the enhanced accuracy of argo .the level of ili activity last week typically has a significant effect on the current level of ili activity , and ili activity half a year ago and/or one year ago could provide further information , as shown in figure s1 of the supporting information , which reflects a strong temporal auto - correlation .the integration of time series information leads to a smooth and continuous estimation curve and prevents undesired spikes .however , simply adding gft to an autoregressive model is suboptimal compared to argo , because simply treating gft as an individual variable is incapable to adjust for time series information at the resolution of individual query terms , and many terms included in gft may no longer provide extra information once time series information is incorporated . in fact , once the time series information is included , fewer google search query terms remain significant .for example , among 100 _ google correlate _ query terms , argo selected 14 terms on average each week , whereas the method of and gft selected 38 and 45 terms each week on average , respectively .the combination of argo s smoothness and sparsity lead to a substantial reduction on the estimation error , as observed in tables 1 and 2 , where argo shows improved performance in all evaluation metrics over the whole time period and is twice as efficient as gft+ar(3 ) .our methodology allows us to transparently understand how google search information and historical flu information complement one another .time series models tend to be slow in response to sudden observed changes in cdc s ili activity level .the ar(3 ) model shows this `` delaying '' effect , despite its seemingly good correlation .google searches , on the other hand , are better at detecting sudden ili activity changes , but are also very sensitive to public s over - reaction . 
to investigate further the responsiveness ( co - movement ) of argo towards the change in ili activity , we calculated the correlation of increment between each estimation model and cdc s ili activity level .the correlation of increment between two time series and is defined as , which measures how well captures the changes in .table 1 shows that argo has similar capability in capturing the changes in ili level to that of gft and the method of , while outperforming the time series model ar(3 ) uniformly .time series information ( seasonality ) tends to pull argo s estimate towards the historical level .this was evident at the onset of the off - season h1n1 flu outbreak ( week ending at 05/02/2009 ) , which resulted in argo s under - estimation .argo self - corrected its performance the following week by shifting a portion of model weights from the time series domain to the google searches domain .inversely , at the height of 2012 - 13 season , argo , gft and the method of all missed the peak due to an unprecedented surge of search activity .argo achieved the fastest self - correction by redistributing the weights not only across google terms but also across time series terms , missing the peak by only 1 week , as opposed to 2 weeks for and about 4 weeks for gft .it is important to note that while we have used cdc s ili as our gold standard for influenza activity in the us population , and data from google correlate / trends as our independent variables , our methodology can be immediately adapted to any other suitable ili gold standard and/or set of independent variables .while argo displays a clear superiority over previous methods , it is not fail - proof .since it relies on the public s search behavior , any abrupt changes to the inner works of the search engine or any changes in the way health - related search information is displayed to users will affect the accuracy of our methodology .we expect that argo will be fast at correcting itself if any such change takes place in the future . as in any predictive method ,the quality of past performance does not guarantee the quality of future performance . in this article, we fixed the search query terms after 2010 so as to directly compare our results with gft , which kept the same query terms since 2010 ; future application of argo may update search terms more frequently .argo can be easily generalized to any temporal and spatial scales for a variety of diseases or social events amenable to be tracked by internet searches or services .further improvements in influenza prediction may come from combining multiple predictors constructed from disparate data sources .after the submission of this article , google announced that gft would be discontinued and that their raw data would be made accessible to selected scientific teams .this announcement happened soon after the gft team published a manuscript that proposed a new time - series based method for the ( now discontinued ) gft engine .this new development makes our contribution timely and useful in providing a transparent method for disease tracking in the future .all data used in this article are publicly available .therefore , irb approval is not needed . 
to avoid forward - looking information in our out - of - sample predictions , and to make the search term selection in our approach consistent with the main revision to gft immediately after the h1n1 pandemic, we obtained the highest correlated terms to the cdc s ili using google correlate ( www.google.com/trends/correlate ) for two different time periods .for the first time period ( pre - h1n1 period ) , we inserted only cdc s ili data from jan 2004 to march 28 , 2009 into google correlate , and used the resulting most highly correlated search terms as independent variables for our out - of - sample predictions for the time period april 4 , 2009 - may 22 , 2010 .for the second time period ( post - h1n1 ) , we inserted only cdc s ili data from jan 2004 to may 22 , 2010 into google correlate to select new search terms as done in .these last search terms were used as independent variables for all subsequent predictions presented in this work .tables s4 and s5 in the supporting information show all query terms identified . for the pre - h1n1 period ( the first time period ) ,the terms from google correlate include spurious ( or over - fitted ) terms like `` march vacation '' or `` basketball standings '' , as discussed in .however , figure s1 in the supporting information shows that these spurious terms were often not selected by argo , i.e. , argo would give them zero weights , demonstrating its robustness . for the post - h1n1 time period ,the updated query terms from google correlate include mostly flu - related terms ( see table s5 in supporting information ) .this suggests that spurious terms were `` filtered - out '' by including off - season flu data . for the time period of march 28 , 2015 up to the date of submission of this article ,we acquired search frequencies for this set of query terms from google trends ( www.google.com/trends . date of access : july 11 , 2015 ) as google correlate only provides data up to march 28 , 2015 at the time of writing this article .google correlate standardizes the search volume of each query to have mean zero and standard deviation one across time and contains data only from 2004 to mar 2015 . 
to make google correlate data compatible with google trends data , we linearly transformed the google correlate data to the same scale of 0 to 100 in our analysis .we used google correlate data up to its last available date , and then switched to google trends data afterwards .this is indicated in figure [ fig : all_pred ] by different shades of the background .we used the latest version of google flu trends ( 4th version , revised in oct 2014 ) weekly estimates of ili activity level as one of our comparison methods .gft is available at www.google.org/flutrends/us/data.txt ( date of access : 2015 - 07 - 11 ) .we use the weighted version of cdc s ili activity level as the estimation target ( available at gis.cdc.gov/grasp/fluview/fluportaldashboard.html .date of access : 2015 - 07 - 11 ) .the weekly revisions of cdc s ili are available at the cdc website for all recorded seasons ( from week 40 of a given year to week 20 of the subsequent year ) .for example , ili report revision at week 50 of season 2012 - 13 is available at www.cdc.gov/flu/weekly/weeklyarchives2012-2013/data/senallregt50.htm ; ili report revision at week 9 of season 2014 - 15 is available at www.cdc.gov/flu/weekly/weeklyarchives2014-2015/data/senallregt09.html .our model argo is motivated by a hidden markov model .the _ logit_-transformed cdc - reported ili activity level is the intrinsic time series of interest .we impose an autoregressive ( ar ) model with lag on it , which implies that the collection of vectors is a markov chain ( this captures the clinical fact that flu lasts for a period , but not indefinitely ) .the vector of _log_-transformed normalized volume of google search queries at time , , depends only on the ili activity at the same time , ( this follows the intuition that flu occurrence causes people to search flu related information online ) . the markovian property on block leads to the ( vector ) hidden markov model structure . our formal mathematical assumptions are : + ( 1 ) + ( 2 ) + ( 3 ) conditional on , is independent of + where , and is the covariance matrix . to make the variables more normal, we transform the original ili activity level from ] to using the log function , obtaining the .the log function is appropriate because google search frequencies usually have exponential growth rate near peaks and are artificially scaled to ] , which can be estimated by the 95% confidence interval can be constructed by time series stationary bootstrap method , where the replicated time series of the error residual is generated using geometrically distributed random blocks with mean length 52 ( which corresponds to one year ) .we obtain the basic bootstrap confidence interval for and then recover the original scale by exponentiation .the non - parametric bootstrap confidence interval takes the autocorrelation and cross - correlation of the errors into account , and is insensitive to the mean block length .s. c. kou s research is supported in part by nsf grant dms-1510446 . 10 ginsberg j et al .( 2009 ) detecting influenza epidemics using search engine query data .457:10121014 .polgreen pm , chen y , pennock dm , nelson fd , weinstein ra ( 2008 ) using internet searches for influenza surveillance .47(11):14431448 .yuan q et al .( 2013 ) monitoring influenza epidemics in china with search query from baidu . 8(5):e64323 .paul mj , dredze m , broniatowski d ( 2014 ) twitter improves influenza forecasting . 
.mciver dj , brownstein js ( 2014 ) wikipedia usage estimates prevalence of influenza - like illness in the united states in near real - time .10(4):e1003581 .santillana m , nsoesie eo , mekaru sr , scales d , brownstein js ( 2014 ) using clinicians search query data to monitor influenza epidemics .59(10):14461450 .wesolowski a et al .( 2014 ) commentary : containing the ebola outbreak the potential and challenge of mobile network data . .chan eh , sahai v , conrad c , brownstein js ( 2011 ) using web search query data to monitor dengue epidemics : a new model for neglected tropical disease surveillance . 5(5):e1206 .preis t , moat hs , stanley he ( 2013 ) quantifying trading behavior in financial markets using google trends .bollen j , mao h , zeng x ( 2011 ) twitter mood predicts the stock market .2(1):18 .wu l , brynjolfsson e ( 2015 ) _ the future of prediction : how google searches foreshadow housing prices and sales_. in _ economic analysis of the digital economy _avi goldfarb sg , tucker c. ( university of chicago press ) , pp .89118 .helft m ( 2008 ) google uses searches to track flu s spread ( the new york times ) .butler d ( 2013 ) when google got flu wrong . 494(7436):155 .cook s , conrad c , fowlkes al , mohebbi mh ( 2011 ) assessing google flu trends performance in the united states during the 2009 influenza virus a ( h1n1 ) pandemic . 6(8):e23610 .lazer d , kennedy r , king g , vespignani a ( 2014 ) the parable of google flu : traps in big data analysis. 343(6176):12031205 .santillana m , zhang dw , althouse bm , ayers jw ( 2014 ) what can digital disease detection learn from ( an external revision to ) google flu trends ? 47(3):341347 .stefansen c ( 2014 ) google flu trends gets a brand new engine .googleresearch.blogspot.com/2014/10/google-flu-trends-gets-brand-new-engine.html .\(2014 ) influenza ( seasonal ) , fact sheet number 211 . accessed april , 2015 .shaman j , karspeck a ( 2012 ) forecasting seasonal outbreaks of influenza .109(50):2042520430 .lipsitch m , finelli l , heffernan rt , leung gm , redd sc ( 2011 ) improving the evidence base for decision making during a pandemic : the example of 2009 influenza a / h1n1 .9(2):89115 .nsoesie eo , brownstein js , ramakrishnan n , marathe mv ( 2014 ) a systematic review of studies on forecasting the dynamics of influenza outbreaks .8(3):309316 .chretien jp , george d , shaman j , chitale ra , mckenzie fe ( 2014 ) influenza forecasting in human populations : a scoping review . 
9(4):e94130 .nsoesie e , mararthe m , brownstein j ( 2013 ) forecasting peaks of seasonal influenza epidemics .soebiyanto rp , adimi f , kiang rk ( 2010 ) modeling and predicting seasonal influenza transmission in warm regions using climatological parameters .5(3):e9450 .shaman j , karspeck a , yang w , tamerius j , lipsitch m ( 2013 ) real - time influenza forecasts during the 20122013 season .yang w , lipsitch m , shaman j ( 2015 ) inference of seasonal and pandemic influenza transmission dynamics .112(9):27232728 .paolotti d et al .( 2014 ) web - based participatory surveillance of infectious diseases : the influenzanet participatory surveillance experience .20(1):1721 .dalton c et al .( 2009 ) flutracking : a weekly australian community online survey of influenza - like illness in 2006 , 2007 and 2008 .33(3):31622 .smolinski ms et al .( 2015 ) flu near you : crowdsourced symptom reporting spanning two influenza seasons .( 0 ) e1-e7 .althouse bm , ng yy , cummings da ( 2011 ) prediction of dengue incidence using search query surveillance .5(8):e1258 .ocampo aj , chunara r , brownstein js ( 2013 ) using search queries for malaria surveillance , thailand .12(1):390 .scarpino sv , dimitrov nb , meyers la ( 2012 ) optimizing provider recruitment for influenza surveillance networks .8(4):e1002472 .davidson mw , haim da , radin jm ( 2015 ) using networks to combine `` big data '' and traditional surveillance to improve influenza predictions . 5 .burkom hs , murphy sp , shmueli g ( 2007 ) automated time series forecasting for biosurveillance .26(22):42024218 .everitt bs , skrondal a ( 2002 ) the cambridge dictionary of statistics . .politis dn , romano jp ( 1994 ) the stationary bootstrap .89(428):13031313 .tsukayama h ( 2014 ) google is testing live - video medical advice ( the washington post ) .accessed on april 20 , 2015 .gianatasio d ( 2014 ) how this agency cleverly stopped people from googling their medical symptoms : the right ads at the right time ( adweek , online ) .accessed on april 20 , 2015 .yang ac , tsai sj , huang ne , peng ck ( 2011 ) association of internet search trends with suicide death in taipei city , taiwan , 20042009 . 132(1):179184. cavazos - rehg pa et al .( 2014 ) monitoring of non - cigarette tobacco use using google trends . .tibshirani r ( 1996 ) regression shrinkage and selection via the lasso .58(1):267288 .hoerl ae , kennard rw ( 1970 ) ridge regression : biased estimation for nonorthogonal problems .12(1):5567 .zou h , hastie t ( 2005 ) regularization and variable selection via the elastic net .67(2):301320 .lampos v , miller ac , crossan s , stefansen c ( 2015 ) advances in nowcasting influenza - like illness rates using search query logs . 
.5:12760 santillana m , nguyen at , dredze m , paul mj , nsoesie e , brownstein js ( 2015 ) combining search , social media , and traditional data sources to improve influenza surveillance , 11(10):e1004513 * supporting information *details of our methodology are presented as follows .first , the predictive distribution in the formulation of the argo model and the corresponding assumptions are described ; second , the statistical strategy to determine the hyper parameters of the argo model is explained ; third , the results of two sensitivity analysis aimed at testing the robustness of the argo methodology(a ) with respect to subsequent revisions of cdc s ili activity reports , and ( b ) with respect to observed variation of the input variables coming from _ google trends _ data are presented ; fourth , the exact search query terms identified by google correlate with different data access dates are presented ; fifth , a heatmap showing the coefficients for the time series and google search terms dynamically trained by argo is included .to improve normality for both the input variables and the dependent variables , the cdc - reported ili activity level was _logit_transformed , and the linearly normalized volume of google search queries were _ log_transformed . to avoid taking the log of 0 , we add a small number before the log - transformation .these transformations led to two sets of variables , the intrinsic ( influenza epidemics activity ) time series of interest , and the ( google search ) variable vector at time ( that depends only on ) , respectively .our formal mathematical assumptions are : 1 . 2 . 3 .conditional on , is independent of where , and is the covariance matrix . the predictive distribution is given by which is a normal distribution , whose mean is a linear combination of and , and whose variance is a constant .the optimized parameters of the argo model , , , are obtained by the training period consists of a two year ( weeks ) rolling window that immediately precedes the desired date of estimation .the hyper parameters are .we tested the performance of argo with the following specifications of hyper parameters : 1 .restrict and , cross validate on .this is our proposed argo with the same penalty for google search terms and autoregressive lags.[item : same_l1 ] 2 .restrict , cross validate on .this is argo with separate penalties for google search terms and autoregressive lags.[item : sep_l1 ] 3 .restrict and , cross validate on .this is argo with the same penalty for google search terms and autoregressive lags.[item : same_l2 ] 4 .restrict , cross validate on .this is argo with separate penalties for google search terms and autoregressive lags.[item : sep_l2 ] 5 .restrict , cross validate on .this is argo with the same elastic net ( both and ) penalty for google search terms and autoregressive lags.[item : enet ] table [ tab : hyperpar ] summarizes the in - sample estimation performance for our proposed argo , together with the other specifications of hyper parameters .it is apparent from the table that the penalty generally outperforms penalty .the penalty tends to shrink the coefficients of unnecessary independent variables to be exactly zero , and thus eliminates redundant information ; on the other hand , the penalty can only shrink the coefficients to be close to zero . 
as a result, penalized coefficients are not as sparse as their counterparts .furthermore , from table [ tab : hyperpar ] , we see that argo with separate penalties ( specification [ item : sep_l1 ] ) outperforms argo with separate penalties ( specification [ item : sep_l2 ] ) , in terms of both root mean squared error and mean absolute error .similarly , argo with the same penalty ( specification [ item : same_l1 ] ) outperforms argo with the same penalty ( specification [ item : same_l2 ] ) , in terms of both root mean squared error and mean absolute error .the elastic net model , which combines penalty and penalty , does not provide any error reduction . in the cross - validation process of setting for the elastic net model, 70 weeks out of 116 in - sample weeks showed that the smallest cross - validation mean error when restricting ( i.e. zero penalty ) is within one standard deviation of the global smallest cross - validation mean error , suggesting that restricting penalty term to be zero ( i.e. ) will introduce little bias .therefore , for the simplicity and sparsity of the model , we drop the penalty terms and use only penalty .next we want to decide between the remaining two specifications , argo with separate penalties ( specification [ item : sep_l1 ] ) , and argo with the same penalty ( specification [ item : same_l1 ] ) .one might argue that google search terms and autoregressive lags are different sources of information and thus should have different penalties .however , empirical evidence in table [ tab : hyperpar ] shows that , again , giving extra flexibility to does not generate improvement compared to fixing . in the cross - validation process of setting for separate penalties ,99 weeks out of 116 in - sample weeks showed that the smallest cross - validation mean error when restricting ( i.e. same penalty ) is within one standard deviation of the global smallest cross - validation mean error .this may well be due to the gain from variance reduction when imposing the restriction . based on the same simplicity and sparsity consideration, we finally decided to restrict and in the setting of hyper parameters for argo .within a flu season , cdc reports are constantly revised to improve their accuracy as new information is incorporated .thus , cdc s weighted ili figures displayed in previously published reports may change in subsequent weeks . as a consequence , in a given week the available cdc ili information from the most recent weeks may be inaccurate . to test the robustness of argo in the presence of these revisions andmimic the real - time tracking in our retrospective predictions , we trained argo and all other alternative models based on the following schedule .suppose is the cdc - reported ili activity level of week accessed at week . since cdc s ili activity report is typically delayed for one week , on week the historical ili activity level data we have is .due to revisions , ili activity level of week accessed at different weeks may be different but will converge to a finalized value eventually .hence , to avoid using forward looking information , in week , we train all models with the ili activity level accessed at that week . in this sense , any future revision beyond week will not be incorporated in the training at week . 
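as a concrete illustration of the two pieces just described (the penalized regression on transformed variables, and the rolling no-look-ahead training schedule), here is a minimal sketch in python. it is not the authors' code: the input names, the 52-lag choice, the 104-week window, and the generic cross-validated lasso call are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.special import logit, expit
from sklearn.linear_model import LassoCV

def argo_estimate(ili_asof, searches, t, n_lags=52, window=104):
    """One-week ARGO-style estimate for week t.

    ili_asof : pd.Series of %ILI in (0, 1), indexed like `searches`,
               with NaN for weeks whose report is not yet available at t.
    searches : pd.DataFrame of linearly normalized google search volumes.
    The 52 autoregressive lags and the 104-week (two-year) training window
    mirror the design described above but are illustrative choices here.
    """
    eps = 1e-6                                   # avoids logit(0) and log(0)
    y = pd.Series(logit(ili_asof.clip(eps, 1 - eps)), index=ili_asof.index)
    X_search = np.log(searches + eps)
    lags = pd.DataFrame({f"lag_{k}": y.shift(k) for k in range(1, n_lags + 1)})
    X = pd.concat([lags, X_search], axis=1)

    idx = X.index.get_loc(t)                     # estimate week t using the
    X_train = X.iloc[idx - window:idx].dropna()  # `window` weeks before it
    y_train = y.loc[X_train.index]

    # generic cross-validated lasso; features left unstandardized for brevity
    model = LassoCV(cv=10, max_iter=20000).fit(X_train.values, y_train.values)
    y_hat = model.predict(X.iloc[[idx]].fillna(0.0).values)[0]
    return float(expit(y_hat))                   # back to the %ILI scale

def retrospective_run(vintages, searches, weeks):
    """vintages[t] is the ILI series exactly as reported at week t, so no
    revision made after week t can leak into the estimate for week t."""
    return {t: argo_estimate(vintages[t], searches, t) for t in weeks}
```

in argo itself a single l1 penalty is shared between the lag and search-term coefficients and chosen by cross-validation, as discussed above; the sketch leaves that tuning to the generic lassocv call for brevity.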
yet for the accuracy metrics , the estimation target remains the finalized the ili activity level ( ) .table [ tab : results ] shows the estimation results when using the aforementioned schedule .note that argo still outperforms all other alternative models .moreover , the absolute values of all four accuracy metrics for argo trained this way essentially do not change compared to argo trained with finalized ili activity level in the main text , indicating the robustness of argo . the weekly revisions of cdc s ili activity reports are available at cdc website from week 40 of the year to week 20 of the subsequent year for all seasons studied in this article .for example , ili activity level revisions at week 50 of season 2012 - 2013 are available at http://www.cdc.gov/flu/weekly/ weeklyarchives2012- 2013/data / senallregt50.htm ; ili activity report revision at week 9 of season 2014 - 2015 is available at http://www.cdc.gov /flu / weekly / weeklyarchives2014 - 2015/data / senallregt09.html ( the webpage has suffix `` htm '' for seasons before 2014 - 2015 and suffix `` html '' for 2014 - 2015 season ) . in this retrospective case study , when the revisions of ili activity level were not available for a particular week during off - season period , the finalized ili activity level was used instead ._ google trends _ historical data constantly change as a consequence of re - normalizations and algorithm updates .to study the robustness of argo to _ google trends _ data revisions , we obtained the search frequencies of the search query terms identified by _ google correlate _ on may 22 , 2010 ( see figure 2 in the main text and table [ tab : phrases ] below ) from the _ google trends _ website ( http://www.google.com/trends ) on 25 different days in april 2015 .we studied the variability of argo s performance when using these 25 different versions of _ google trends _ data as input variables for the common time period of sep 28 , 2014 to mar 29 , 2015 .we studied the 2014 - 15 flu season only partially ( up to march 2015 ) because this is the longest study period covered by all the obtained versions of _ google trends _ data , at the time ( may 1 , 2015 ) of the first submission of this article .we want to emphasize that google correlate data were only available up to feb 2014 when accessed in april 2015 . despite the inevitable variation to the revision of the low - quality data from _ google trends _ ,argo still achieves considerable stability compared to the method of santillana et al . during this time period .table [ tab : sensi_gt ] suggests that argo is threefold more robust than the method of .the incorporation of time series information helps argo achieve the stability . as an extreme example , ar(3 )model focuses entirely on the time series information and is thus independent of _ google trends _ data revisions .gft , formulated with the original search variables as inputs , is by construction insensitive to the changes in _ google trends _ data . for this portion of the study, we included the signal from gft for context only and we treat it as exogenous in our analysis .based on the results from previous time periods , it is highly likely that if we had access to google s internal raw data ( i.e. , historical search volume for disease - related phrases ) we would have achieved the same stability as well . 
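for concreteness, the two headline accuracy metrics used in these comparisons, computed against the finalized ili values as noted above, can be written as the following short helpers (illustrative python):

```python
import numpy as np

def rmse(estimates, finalized):
    """Root mean squared error against the finalized ILI values."""
    e, f = np.asarray(estimates, dtype=float), np.asarray(finalized, dtype=float)
    return float(np.sqrt(np.mean((e - f) ** 2)))

def mae(estimates, finalized):
    """Mean absolute error against the finalized ILI values."""
    e, f = np.asarray(estimates, dtype=float), np.asarray(finalized, dtype=float)
    return float(np.mean(np.abs(e - f)))
```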
yet even with these low - quality data , argo outperforms gft uniformly on all versions of data in terms of both root mean squared error and mean absolute error .tables [ tab : phrases09 ] and [ tab : phrases ] list the search query phrases identified by google correlate as of march 28 , 2009 and of may 22 , 2010 , respectively .the march 2009 version included spurious terms such as `` college.basketball.standings '' , `` march.vacation '' , `` aloha.ski '' , `` virginia.wrestling '' , etc .these spurious terms did not appear in the may 2010 version .figure [ fig : coef ] shows the coefficients for the time series and google search terms dynamically trained by argo via a heatmap .the level of ili activity last week is seen to have a significant effect on the current level of ili activity , and ili activity half a year ago and/or one year ago could provide further information as the figure shows . among _google correlate _ query terms , argo selected 14 terms out of 100 on average each week . | accurate real - time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives . we propose an influenza tracking model , argo ( autoregression with google search data ) , that uses publicly available online search data . in addition to having a rigorous statistical foundation , argo outperforms all previously available google - search based tracking models , including the latest version of google flu trends , even though it uses only low - quality search data as input from publicly available google trends and google correlate websites . argo not only incorporates the seasonality in influenza epidemics but also captures changes in people s online search behavior over time . argo is also flexible , self - correcting , robust , and scalable , making it a potentially powerful tool that can be used for real - time tracking of other social events at multiple temporal and spatial resolutions . this is the preprint of the paper published at pnas : dx.doi.org/10.1073/pnas.1515373112 . there are some minor differences between this preprint and the published paper . big data sets are constantly generated nowadays as the activities of millions of users are collected from internet - based services . numerous studies have suggested great potential of these big data sets to detect / manage epidemic outbreaks ( influenza , ebola , dengue ) , predict changes in stock prices and housing prices , etc . in 2009 , google flu trends ( gft ) , a digital disease detection system that uses the volume of selected google search terms to estimate current influenza - like illnesses ( ili ) activity , was identified by many as a good example of how big data would transform traditional statistical predictive analysis . however , significant discrepancies between gft s flu estimates and those measured by the centers for disease control ( cdc ) in subsequent years led to considerable doubt about the value of digital disease detection systems . while multiple articles have identified methodological flaws in gft s original algorithm and have led to incremental improvements , a statistical framework that is theoretically sound and capable of accurate estimation is still lacking . here we present such a framework that culminates in a new method that outperforms all existing methodologies for tracking influenza activity using internet search data . influenza outbreaks cause up to 500,000 deaths a year worldwide , and an estimated 3,000 to 50,000 deaths a year in the usa . 
our ability to effectively prepare for and respond to these outbreaks heavily relies on the availability of accurate real - time estimates of their activity . existing methods to predict the timing , duration and magnitude of flu outbreaks remain limited . well - established clinical methods to track flu activity , such as the cdc s ilinet , report the percentage of patients seeking medical attention with ili symptoms ( www.cdc.gov/flu/ ) . while cdc s % ili is only a proxy of the flu activity in the population , it can help officials allocate resources in preparation for potential surges of patient visits to hospital facilities . see for further discussion . cdc s ili reports have a delay of one to three weeks due to the time for processing and aggregating clinical information . this time lag is far from optimal for decision - making purposes . in order to alleviate this information gap , multiple methods combining climate , demographic and epidemiological data with mathematical models have been proposed for real - time estimation of flu activity . in recent years , methods that harness internet - based information have also been proposed , such as google , yahoo , and baidu internet searches , twitter posts , wikipedia article views , clinicians queries , and crowd sourced self - reporting mobile apps such as influenzanet ( europe ) , flutracking ( australia ) , and flu near you ( usa ) . among them , gft has received most attention and has inspired subsequent digital disease detection systems . interestingly , google has never made their raw data public , thus , making it impossible to reproduce the exact results of gft . we highlight three limitations of the original gft algorithm , previously identified in . first , it was shown that a static approach , which does not take advantage of newly available cdc s ili activity reports as the flu season evolves , produced model drift , leading to inaccurate estimates . second , the idea of aggregating the multiple query terms ( the independent variables in the gft model ) into a single variable did not allow for changes in people s internet search behavior over time ( and thus changes in query terms abilities to track flu ) to be appropriately captured . third , gft ignored the intrinsic time series properties , such as seasonality of the historical ili activity , thus overlooking potentially crucial information that could help produce accurate real time ili activity estimates . the new methodology presented here produces robust and highly accurate ili activity level estimates by addressing the three aforementioned shortcomings of the multiple gft engines . in addition , we provide a theoretical framework that , for the first time , justifies the prevailing usage of linear models in the digital disease detection literature by incorporating causality arguments through a hidden markov model . this theoretical framework contains as a special case the model developed in . 
our new model not only achieves the goal of ( a ) dynamically incorporating new information from cdc reports as they become available and ( b ) automatically selecting the most useful google search queries for estimation as in , but also largely improves estimation by ( c ) including the long - term cyclic information ( seasonality ) from past flu seasons on record as input variables , and ( d ) using a two - year moving window ( which immediately precedes the desired date of estimation ) for the training period to capture the most recent changes in people s search patterns and time series behavior . our methodology efficiently builds a prediction model from individual search frequency as well as the past records of ili activity . it utilizes both sources of information more efficiently than simply combining gft with autoregressive terms as suggested in , since gft is not optimally aggregated to provide additional information on top of time series information . furthermore , we provide a quantitative efficiency metric that measures the statistical significance of the improvement of our methodology over other alternatives . for example , our method is twice as accurate as the method that combines gft with autoregressive terms ( see table [ tab : relative_efficiency ] ) . finally , even though we use as input only the publicly available , low - quality data from the _ google correlate _ and _ google trends _ websites , our method has significant improvement over the latest version of gft . we name our model argo , which stands for autoregression with google search data . statistically speaking , argo is an autoregressive model with google search queries as exogenous variables ; argo also employs ( and potentially ) regularization in order to achieve automatic selection of the most relevant information . |
the stochastic block model is one of the oldest and most ubiquitous models for studying clustering and community detection .it was first introduced by holland et al . more than thirty years ago and since then it has received considerable attention within statistics , computer science , statistical physics and information theory .the model defines a procedure to generate a random graph : first , each of the nodes is independently assigned to one of communities , where is the probability of being assigned to community .next , edges are sampled independently based on the community assignments : if nodes and belong to communities and respectively , the edge occurs with probability independent of all other edges , where is an symmetric matrix .the goal is to recover the community structure either exactly or approximately from the graph . since its introduction, the stochastic block model has served as a testbed for the diverse range of algorithms that have been developed for clustering and community detection , including combinatorial methods , spectral methods , mcmc , semidefinite programs and belief propagation .recently , the stochastic block model has been thrust back into the spotlight , the goal being to establish tight thresholds for when community detection is possible , and to find algorithms that achieve them . towards this end ,decelle et al . made some fascinating conjectures ( which have since been resolved ) in the case of two equal - sized communities with constant average degree , that we describe next . throughout this paper , we will focus on the two - community case and use and to denote the within - community and between - community connection probabilities respectively .we will be interested in the sparse setting where .moreover we set , so that the two communities are roughly equal - sized , and we assume ( although we discuss the case in appendix [ sec : dissortative ] ) .we will use to denote the corresponding random graph model .the setting of parameters above is particularly well - motivated in practice , where a wide variety of networks have been observed to have average degree that is bounded by a small constant .it is important to point out that when the average degree is constant , it is impossible to recover the community structure exactly in the stochastic block model because a constant fraction of nodes will be isolated .instead , our goal is to recover a partition of the nodes into two communities that has non - trivial agreement ( better than random guessing ) with the true communities as , and we refer to this as _ partial recovery_. if then partial recovery in is possible , and if then partial recovery is information - theoretically impossible .this conjecture was based on deep but non - rigorous ideas originating from statistical physics , and was first derived heuristically as a stability criterion for belief propagation .this threshold also bears a close connection to known thresholds in the broadcast tree model , which is another setting in which to study partial recovery .we formally define the broadcast tree model in section [ sec : prelim ] , but it is a stochastic process described by two parameters and .kesten and stigum showed that partial recovery is possible if , and much later evans et al . showed that it is impossible if .the connection between the two models is that in the stochastic block model , the local neighborhood of a node resembles the broadcast tree model , and this was another compelling motivation for the conjecture . 
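for orientation, the thresholds referred to in this paragraph are usually quoted in the following equivalent forms, writing p = a/n and q = b/n for the within- and between-community edge probabilities. this is a standard parameterization stated here as background, not a quotation of this paper's notation.

```latex
% partial-recovery threshold for G(n, a/n, b/n) (the Decelle et al. conjecture,
% proved by Mossel--Neeman--Sly and Massoulie):
(a-b)^2 \;>\; 2(a+b)
% equivalently, with average degree d = (a+b)/2 and noise
% \varepsilon = b/(a+b) (the probability that an edge leads to the other label):
d\,(1-2\varepsilon)^2 \;>\; 1
% the same condition is the Kesten--Stigum bound for the corresponding
% two-state broadcast tree.
```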
in an exciting sequence of developments , mossel , neeman andsly proved a lower bound that even distinguishing from an erds rnyi graph is information - theoretically impossible if , by a careful coupling to the broadcast tree model .subsequently mossel , neeman and sly and massouli independently gave matching algorithms that achieve partial recovery up to this threshold , thus resolving the conjecture !mossel , neeman and sly later showed that for some constant , if then belief propagation works and moreover the agreement of the clustering it finds is the best possible .in fact , many other sorts of threshold phenomena have been found in different parameter regimes .abbe , bandeira and hall studied exact recovery in the logarithmic degree setting and showed that it is efficiently possible to recover the two communities exactly if and information - theoretically impossible if .abbe and sandon gave a precise characterization of when exact recovery is possible in the general stochastic block model for more than two communities with arbitrary relative sizes and arbitrary connection probabilities .this abundance of sharp thresholds begs a natural question : how robust are these reconstruction thresholds ?it is clear that if one substantially changes the distributional model , the thresholds themselves are likely to change .however there is a subtler issue .the algorithms that achieve these thresholds may in principle be over - fitting to a particular distributional model .random graphs are well - known to have rigid properties , such as sharp laws for the distribution of subgraph counts and a predictable distribution of eigenvalues .real - world graphs do not have such properties . in a remarkable paper , blum and spencer introduced the semirandom model as an intermediary between average - case and worst - case analysis , to address such issues .the details of the model vary depending on the particular optimization problem , and since we will focus on clustering we will be most interested in the variant used by feige and kilian .the adversary above is called ` monotone ' because it is restricted to making changes that seem to be helpful .it can only strengthen ties within each community , and break ties between them .. if we had we could define the adversary in the opposite way ( which we analyze in appendix [ sec : dissortative ] ) . ]the key is that a monotone adversary can break the sorts of rigid structures that arise in random graphs , such as predictable degree distributions and subgraph counts .an algorithm that works in a semirandom model can no longer rely on these properties . instead of a graph containing_ only _ random edges , all we are assured of is that it contains _ some _ random edges . in this paper, we use semirandom models as our notion of ` robustness . 'many forms of robustness exist in the literature for example , the independent work gives algorithms that are robust to non - monotone changes in addition to monotone changes . with most robustness models , any algorithm will break down after enough errors , and one can compare algorithms based on how many errors they can tolerate . 
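to make the monotone adversary concrete, the following minimal python sketch samples the two-community model and then applies an arbitrary monotone modification, meaning it only adds edges inside a community and only deletes edges that cross between communities. the particular random rule shown is purely illustrative and is not the adversary constructed later in this paper.

```python
import numpy as np

def sample_sbm(n, a, b, rng=None):
    """Two-community stochastic block model: spins are i.i.d. +/-1, an edge
    {i, j} appears with probability a/n if the spins agree and b/n otherwise.
    Returns (spins, edges) with edges stored as tuples (i, j), i < j."""
    rng = np.random.default_rng(rng)
    spins = rng.choice([-1, 1], size=n)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = a / n if spins[i] == spins[j] else b / n
            if rng.random() < p:
                edges.add((i, j))
    return spins, edges

def monotone_adversary(spins, edges, del_prob=0.5, add_prob=0.0, rng=None):
    """An (arbitrary, illustrative) monotone modification: cross-community
    edges may only be deleted, within-community non-edges may only be added."""
    rng = np.random.default_rng(rng)
    n = len(spins)
    kept = {(i, j) for (i, j) in edges
            if spins[i] == spins[j] or rng.random() >= del_prob}
    for i in range(n):
        for j in range(i + 1, n):
            if spins[i] == spins[j] and (i, j) not in kept and rng.random() < add_prob:
                kept.add((i, j))
    return kept
```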
in contrast , semirandom models distinguish between algorithms _qualitatively _ : as we will see , entire classes of algorithms continue to work under any number of monotone changes , while others do not .feige and kilian showed that semidefinite programs for _ exact recovery _ in the stochastic block model continue to work in the semirandom model in fact they succeed up to the threshold for the random model .since then , there have been many further developments including algorithms that work in semirandom models for planted clique , unique games , various graph partitioning problems , correlation clustering and the general planted partition model for all and for all , with , but the number of communities and their relative sizes are arbitrary constants . ] ( and in some cases for even more powerful adversaries ) .a common theme is that if you have a semidefinite program that works in some stochastic setting , then it often extends almost automatically to the semirandom setting .so are there semidefinite programs that achieve the same sharp partial recovery results as mossel , neeman and sly and massouli , and that extend to the semirandom model too ? or is there a genuine gap between what is achievable in the random vs. the semirandom setting ? recall that in the semirandom block model , a monotone adversary observes a sample from the stochastic block model and is then allowed to add edges within each community and delete edges that cross between communities .we will design a particularly simple adversary to prevent an algorithm from utilizing the paths of length two that go from a ` ' node to a ` ' node and back to a ` ' node , where the middle node has degree two .our adversary will delete any such path it finds ( with some additional technical conditions to locally coordinate these modifications ) , and our main result is that , surprisingly , this simple modification strictly changes the partial recovery threshold .we will state our bounds in terms of the average degree and ` noise ' , in which case the threshold becomes .note that this threshold requires .then : [ thm : intro - main ] for any , there exists so that and hence partial recovery in the stochastic block model is possible , and yet there is a monotone adversary so that partial recovery in the semirandom model is information - theoretically impossible .a common tool in the algorithms of mossel , neeman and sly and of massouli is the use of non - backtracking walks and spectral bounds for their transition matrices .our adversary explicitly deletes a significant number of these walks .this simple modification not only defeats these particular algorithms , but we can show that _ no _ algorithm can achieve partial recovery up to the threshold . to the best of our knowledge ,this is the first explicit separation between what is possible in the random model vs. the semirandom model , for _ any _ problem with a monotone adversary .we show a complementary result , that no monotone adversary can make the problem too much harder .various semidefinite programs have been designed for partial recovery .these algorithms work in the constant - degree regime , but are not known to work all the way down to the information - theoretic threshold . 
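one common form of such a semidefinite relaxation is sketched below (illustrative python using cvxpy). the exact program and the analysis of guédon and vershynin differ in details, so this should be read as a generic example of the approach rather than a reproduction of their method.

```python
import numpy as np
import cvxpy as cp

def sdp_partition(adj):
    """adj: symmetric 0/1 numpy adjacency matrix of the observed graph.
    Solves max <A, X> over X psd, diag(X) = 1, sum(X) = 0, then rounds the
    top eigenvector of the optimum to +/-1 labels.  Requires an SDP-capable
    solver such as SCS; practical only for modest n."""
    n = adj.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0,            # positive semidefinite
                   cp.diag(X) == 1,   # unit diagonal (elliptope)
                   cp.sum(X) == 0]    # encodes the balanced-community prior
    problem = cp.Problem(cp.Maximize(cp.trace(adj @ X)), constraints)
    problem.solve()
    eigvals, eigvecs = np.linalg.eigh(X.value)
    labels = np.sign(eigvecs[:, -1])
    labels[labels == 0] = 1
    return labels
```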
in particular , we follow gudon and vershynin and show that their analysis ( with a simple modification ) works as is for the semirandom model .this shows that semidefinite programs not only give rise to algorithms that work for the semirandom model in the exact recovery setting , but also for partial recovery too under some fairly general conditions .[ thm : intro - robust - sdp ] let and .there is a constant so that if and then partial recovery is possible in the semirandom block model .moreover it can be solved in polynomial time .our robustness proof only applies to a particular form of sdp analysis .given a different proof that the sdp succeeds in the random model for a larger range of parameters than above , there would be no guarantee that the sdp also succeeds in the semirandom model for that range of parameters .hence we can not formally conclude that it is impossible for the sdp to reach the information - theoretic threshold in the random model , though our results are suggestive in this direction .we remark that each possible monotone adversary yields a new distribution on planted community detection problems .hence , we can think of the algorithm in the theorem above as one that performs almost as well as information - theoretically possible across an entire family of distributions , simultaneously .this is a major advantage to algorithms based on semidefinite programming , and points to an interesting new direction in statistics .can we move beyond average - case analysis ?can we find robust , semirandom analogues to some of the classical , average - case thresholds in statistics ?the above two theorems establish upper and lower bounds for this semirandom threshold , that show that it is genuinely different from the average - case threshold .the usual notion of a threshold makes sense when we have exact knowledge of the process generating the instances we wish to solve .but when we lack this knowledge , semirandom models offer an avenue for exploration that can lead to new algorithmic and statistical questions . along the way to proving our main theorem, we show a random vs. semirandom separation for the broadcast tree model too .we define this model in section [ sec : prelim ] . 
in short ,it is a stochastic process in which each node is given one of two labels and gives birth to nodes of the same label and nodes of the different label , with the goal being to guess the label of the root given the labels of the leaves .there are many ways we could define a monotone adversary , and we focus on a particularly weak one that is only allowed to cut edges between nodes with different labels .we call this the cutting adversary , and we prove : [ thm : intro - treehard ] for any , there exists so that and hence partial recovery in the broadcast tree model is possible , and yet there is a monotone cutting adversary for which partial recovery in the semirandom broadcast tree model is information - theoretically impossible .furthermore we analyze the recursive majority algorithm and show that it is robust to an even more powerful class of monotone adversaries , which are allowed to entirely control the subtree at a node whose label is different than its parent .we call this a strong adversary , and we prove : [ thm : intro - recmaj ] if and then partial recovery in the broadcast tree model is possible , even with respect to a strong monotone adversary , where ` ' is taken as .these results highlight another well - studied model where the introduction of a monotone adversary strictly changes what is possible .nevertheless there is an algorithm that succeeds across the entire range of distributions that arise from the action of a monotone adversary , simultaneously .interestingly , our robustness results can also be seen as a justification for why practitioners use recursive majority at all .it has been known for some time that recursive majority does not achieve the kesten stigum bound the threshold for reconstruction in the broadcast tree model although taking the majority vote of the leaves does .the advantage of recursive majority is that it is robust to very powerful adversaries while majority is not , and this only becomes clear when studying these algorithms through semirandom models !here we formally define the models we will be interested in , as well as the notion of partial recovery .recall that denotes the stochastic block model on two communities with so that the communities are roughly equal sized .we will encode community membership as a _ label _ on each node .we will also refer to this as a _ spin _, following the convention in statistical physics .this numeric representation has the advantage that we can ` add ' spins in order to take the majority vote , and ` multiply ' them to compute the relative spin between a pair of nodes. we will be interested in the sparse regime where the graph has constant average degree , and we will assume .next , we formally define partial recovery in the stochastic block model . throughout this paper will will be interested in how our algorithms perform as ( number of nodes ) goes to .we say that an event holds a.a.s .( asymptotically almost surely ) if it holds with probability as .similarly , we say that an event happens for a.a.e .( asymptotically almost every ) if it holds with probability over a random choice of .we say that an assignment of spins to the nodes achieves * -partial recovery * if at least of these spins match the true spins , or at least match after a global flip of all the spins .moreover , an algorithm that outputs a vector of spins ( indexed by nodes ) is said to _ achieve partial recovery _ and there exists such that achieves -partial recovery a.a.s .in the limit . 
next , we define the broadcast tree model ( which we introduced informally earlier ) . the broadcast tree model is a stochastic process that starts with a single root node at level whose spin is chosen uniformly at random . each node in turn gives birth to same - spin children and opposite - spin children , where is the poisson distribution with expectation .this process continues until level at which point it stops , and the nodes at level are called the _ leaves_. ( the nodes on level that by chance do not give birth to any children are not considered leaves , even though they are leaves in the graph - theoretic sense . )an algorithm observes the spins at the leaf nodes and the topology of the tree , and the goal is to recover the root spin : an algorithm that outputs a spin is said to _ achieve partial recovery _ on the tree if there exists such that with probability at least , as .the reparameterization in terms of becomes particularly convenient here : each node gives birth to children , so is the average branching factor .moreover , each child has probability of having spin opposite to that of its parent .it is known that taking the majority vote of the leaves is optimal in theory , in the sense that it achieves partial recovery for and that for , partial recovery is information - theoretically impossible .this is called the kesten stigum bound , and it can also be interpreted as a condition on the second - largest eigenvalue of an appropriately defined transition matrix .there are many other natural variants of the broadcast tree model , that are more general instances of multi - type branching processes . however , even for simple extensions , the precise information - theoretic threshold is still unknown . in the potts model , where nodes are labeled with one of labels , sly showed that the kesten stigum bound is not tight , as predicted by mzard and montanari . and for an asymmetric extension of the binary model above , borgs et al . showed that the kesten stigum bound is tight for some settings of the parameters . in our setting, this historical context presents a substantial complication because if we apply a monotone adversary to a broadcast tree model and it results in a complex propagation rule , there may not be good tools to prove that partial recovery is impossible .there is a close connection between the stochastic block model and the broadcast tree model , since the local neighborhood of any vertex in the graph is locally tree - like .hence , if our goal is to prove a random vs. semirandom gap for community detection , a natural starting point is to establish such a gap for the broadcast tree model .it turns out that there is more than one natural way to define a monotone adversary for the broadcast tree model , but it will not be too difficult to establish lower bounds for either of them .the more difficult task is in finding an adversary that can plausibly be coupled to a corresponding adversary in the stochastic block model , and this will require us to put many sorts of constraints on the type of adversary that we should use to obtain a separation for the broadcast tree model . 
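the following small python sketch simulates the broadcast tree model just described together with the majority-of-leaves estimator. the offspring rates pois(a/2) for same-spin children and pois(b/2) for opposite-spin children are assumptions chosen to match the local coupling with the block model mentioned above, where a vertex has on average a/2 same-community and b/2 cross-community neighbours.

```python
import numpy as np

def broadcast_tree_leaves(a, b, depth, rng=None):
    """One sample of the two-state broadcast tree: the root spin is uniform
    in {-1, +1}; every node independently produces Pois(a/2) children with
    its own spin and Pois(b/2) children with the opposite spin (assumed
    rates; see the lead-in).  Returns (root_spin, spins at the given depth)."""
    rng = np.random.default_rng(rng)
    root = int(rng.choice([-1, 1]))
    level = [root]
    for _ in range(depth):
        nxt = []
        for spin in level:
            nxt += [spin] * int(rng.poisson(a / 2.0))
            nxt += [-spin] * int(rng.poisson(b / 2.0))
        level = nxt
    return root, level

def majority_guess(leaf_spins, rng=None):
    """Majority vote of the leaf spins, breaking ties uniformly at random."""
    rng = np.random.default_rng(rng)
    s = sum(leaf_spins)
    return int(np.sign(s)) if s != 0 else int(rng.choice([-1, 1]))

def empirical_advantage(a, b, depth, trials=2000, seed=0):
    """Monte Carlo estimate of 2*P(majority guesses the root) - 1."""
    rng = np.random.default_rng(seed)
    wins = sum(majority_guess(leaves, rng) == root
               for root, leaves in (broadcast_tree_leaves(a, b, depth, rng)
                                    for _ in range(trials)))
    return 2.0 * wins / trials - 1.0
```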
in the broadcast tree model , we will work with two notions of a monotone adversary .one is weak and will be used to show our separation results : our other adversary is stronger , and we will establish upper bounds against this adversary ( with the recursive majority algorithm ) to give a strong recoverability guarantee : an upper bound against this latter model amounts to a recovery guarantee without any assumptions as to what happens after a ` mutation ' of labels for example , a genetic mutation might affect reproductive fitness and change the birth rule for the topology. other variants of monotone adversaries could also be justified .[ [ majority - fails ] ] majority fails + + + + + + + + + + + + + + it is helpful to first see how adversaries in these models might break existing algorithms .recall that in the broadcast tree model , taking the majority vote of the leaves yields an algorithm that works up to the kesten stigum bound , and this is optimal since reconstruction is impossible beyond this . in the language of and ,each node gives birth to nodes of the same label and nodes of the opposite label .hence in a depth tree we expect leaves , but the total bias of the spins can be recursively computed as . the fact that majority works can be proven by applying the second moment method and comparing the bias to its variance .however , an overwhelming number of the leaves are on some path that has a flip in the label at some point : we only expect nodes with all - root - spin ancestry , a vanishing proportion as .the strong monotone adversary has control over all the rest , and can easily break the majority vote .even the monotone cutting adversary can control the majority vote , by cutting leaves whose spin matches the root but whose parents have the opposite label .this happens for a constant fraction of the leaf nodes , and this change overwhelms the majority vote .so majority vote fails against the semirandom model , for _ all _ nontrivial and .this is an instructive example , but we emphasize that breaking one algorithm does not yield a lower bound .for example , if the algorithm knew what the adversary were doing , it could potentially infer information about where the adversary has cut edges based on the degree profile , and could use this to guess the label of the root .[ [ the - problem - of - orientation ] ] the problem of orientation + + + + + + + + + + + + + + + + + + + + + + + + + + many first attempts at a separation in the broadcast tree model ( which work ! ) rely on knowing the label of the root . however , such adversaries present a major difficulty in coupling them to a corresponding adversary in the graph .each graph gives rise to many overlapping broadcast trees ( the local neighborhood of each vertex ) and a graph adversary needs to simultaneously make all of these tree reconstruction problems harder .this means a graph adversary can not focus on trying to hide the spin of a specific tree root ; rather , it should act in a local , orientation - free way that inhibits the propagation of information in all directions .a promising approach is to look for nodes whose neighbors in all have the opposite label , and cut all of these edges .such nodes serve only to further inter - connect each community , and cutting their edges would seem to make community detection strictly harder . 
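a minimal sketch of this cutting move, using the edge representation from the earlier sampler sketch: with some probability q, remove both edges of a degree-2 vertex whose two neighbours carry the opposite label. both removed edges necessarily cross between the communities, so the move is monotone. the adversary actually used in the paper adds vertex markings and local non-interference conditions that are omitted here.

```python
import numpy as np
from collections import defaultdict

def degree2_cutting_adversary(spins, edges, q, rng=None):
    """spins: array of +/-1 labels; edges: set of (i, j) with i < j, as in
    the sampler sketched earlier.  Each degree-2 vertex whose two neighbours
    both carry the opposite label is selected independently with probability
    q, and both of its edges are removed.  (The paper's adversary also marks
    vertices and rules out interfering cuts, e.g. chains of degree-2 nodes;
    that bookkeeping is omitted here.)  Returns (remaining_edges, cut_nodes)."""
    rng = np.random.default_rng(rng)
    nbrs = defaultdict(set)
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    cut = [v for v in range(len(spins))
           if len(nbrs[v]) == 2
           and all(spins[w] == -spins[v] for w in nbrs[v])
           and rng.random() < q]
    removed = {(min(v, w), max(v, w)) for v in cut for w in nbrs[v]}
    return {e for e in edges if e not in removed}, cut
```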
in the corresponding broadcast tree model, these nodes represent flips in the label that are entirely corrected back , and deleting them penalizes any sort of over - reliance on distributional flukes in how errors propagate .for example , majority reconstruction in the tree fully relies on predictable tail events whereby nodes with label different from the root may lead to subtrees voting in the correct direction nonetheless .[ [ the - problem - of - dependence ] ] the problem of dependence + + + + + + + + + + + + + + + + + + + + + + + + + now , however , a different sort of problem arises : if we were to naively apply the adversary described above to a broadcast tree , this would introduce complicated distance- dependencies in the distribution of observed spins , as certain diameter- spin patterns are banned in the observed tree ( as they would have been cut ) . in particular , the resulting distribution is no longer a markov random field .this is not inherently a problem , in that we could still hope to prove stronger lower bounds for such models beyond the kesten stigum bound .however , the difficulty is that even for quite simple models on a tree ( e.g. the potts model , asymmetric binary channels ) the threshold is not known , and the lower bound techniques that establish the kesten stigum bound seem to break down .an alternative is to specifically look for degree- nodes whose neighbors in have the opposite label , and cut both incident edges .although there are still issues about making this a markov random field , we can alleviate them by adding a -clique potential on each degree- node and its two neighbors .then after we marginalize out the label of the degree- node , the -clique potential becomes a -clique potential , and we return to a markov random field over a tree ! in other words , if we ignore the spin on a degree-2 node and treat its two incident edges like a single edge , we return to a well - behaved spin propagation rule .we are now ready to describe the adversary that we will use to prove theorems [ thm : intro - main ] and [ thm : intro - treehard ] .we only need two additional adjustments beyond the discussion above .instead of making every possible degree- cutting move as described earlier , we will only cut with probability .we will tune to ensure that the monotone changes we make do not overshoot and accidentally reveal more information about the underlying communities by cutting in too predictable a fashion .finally , our adversary adds local restrictions to where and how it cuts , to ensure that the changes it makes do not overlap or interfere with each other ( e.g. chains of degree- nodes ) .these details will not be especially relevant until section [ sec : no - lrc ] , where they simplify the combinatorics of guessing the original precursor graph from the observed graph .[ dist : graph ] let be given .write and .we sample a ` precursor ' graph , and apply the following adversary : we can now outline the proof of our main theorem . in order to show that partial recovery is impossible , it suffices to show that it is impossible to reconstruct the relative spin ( same or different ) of two random nodes and , better than random guessing . before applying the adversary , an -radius neighborhood around the broadcast tree model rooted at . 
after the adversaryis applied to the graph , resembles a broadcast tree model with a corresponding cutting adversary applied .this resemblance will be made precise by the coupling argument of section [ sec : coupling ] ; there will be some complications in this , and our tree will not be uniformly the same depth but will have a serrated boundary .we show in section [ sec : treehard ] that when the cutting adversary is applied to the tree , the tree reconstruction problem ( guess the spin of from the spins on the boundary of ) becomes strictly harder : the average branching factor becomes lower due to sufficient cutting , while the new spin propagation rule resembles classical noisy propagation with at least as much noise ( so long as we marginalize out the spins of nodes ) .we then apply the proof technique of evans et al . to complete the tree lower bound .the final step in the proof is to show that reconstructing the relative spin of and is at least as hard as reconstructing the spin of given the spins on the boundary of ( which separates and with high probability ) .this is one of the most technically involved steps in the proof . in the lower bound for the random model , this step called `` no long - range correlations '' was already an involved calculation on a closed - form expression for . in our setting , in order to get a closed - form expression for the conditional probability , we will sum over all the possible precursor graphs that could have yielded the observed graph .we characterize these precursors in lemma [ lemma : unsurgery ] , and the main reason for the and nodes in our adversary construction is to simplify this process .a natural open question is whether one can find the optimal monotone adversary .for instance , we have not even used the power to add edges within communities .note however , that our current adversary is delicately constructed in order to make each step of the proof tractable .it is not too hard to propose alternative adversaries that seem stronger , but it is likely that one of the major steps in the proof ( tree lower bound or no long - range correlations ) will become prohibitively complicated .recent predictions on the performance of sdp methods could be suggestive of the true semirandom threshold .it is well - known that sparse , random graphs are locally tree - like with very few short cycles .this is the basis of mossel neeman sly s approach in coupling between a local neighborhood of the stochastic block model and the broadcast tree model .hence , our first order of business will be to couple a typical neighborhood from our graph distribution ( distribution [ dist : graph ] ) to the following tree model , against which we can hope to show lower bounds . given the parameters , generate a tree with spins as follows : [ dist : tree ] * start with a root vertex , with spin chosen uniformly at random from .* until a uniform depth ( where the root is considered depth 0 ) , let each vertex give birth to children of the same spin and children of opposite spin .* apply the graph adversary ( from distribution [ dist : graph ] ) to this tree ; this involves assigning markings and , and cutting some nodes .keep only the connected component of the root . *remove all nodes of depth greater than ( the bottom levels ) . 
*remove any node at depth along with its siblings , exposing the parent as a leaf .the reason we trim levels at the bottom is to ensure that the markings and cuttings in match those in , since these depend on the radius- neighborhood of each node and edge , respectively . removing nodes at level ensure that the more complicated spin interactions of nodes and their neighbors do not span across the boundary of a tree neighborhood in the graph we want to cleanly separate out a tree recovery problem .we use a slightly non - conventional definition of ` leaves ' : [ def : leaves ] when we refer to the * leaves * of a tree sampled from distribution [ dist : tree ] we mean the nodes at depth plus any nodes at depth that are revealed during the last step .nodes at depth that happen to give birth to no children are not considered leaves ; if the root is and gets cut by the adversary so that the tree is a single node , this single node is not considered a leaf .we can couple the above tree model to neighborhoods of the graph : [ prop : coupling ] let be given , and let be any vertex of the graph .let there exists a joint distribution such that the marginals on and match the graph and tree models , respectively , while with probability : * there exists an isomorphism ( preserving edges , spins , and the markings and ) between the tree and a vertex - subset of , * the tree root corresponds to the vertex in the graph , and the leaves correspond to a vertex set that separates the interior from the rest of the graph .* we have . as proven in , there exists a coupling between the precursor graph and the precursor tree , such that matches the radius neighborhood of in , which has size ; this fails with probability .next the adversary assigns vertex markings ( and ) to both models .the marking of each vertex is deterministic and only depends on the topology of the radius- neighborhood of the vertex .thus the markings in and match up to radius .some nodes are _ cuttable _ , i.e. both their neighbors have opposite spin ; the cuttable nodes in and also match up to radius .the adversary now cuts the edges incident to a random subset of cuttable vertices ; we can trivially couple the random choices made on with those on up to radius .we keep only the connected component of the root in ; likewise let us keep only the corresponding vertices in , i.e. only those still connected to by a path in .after removing nodes of depth greater than , we have removed the subset of for which the markings and the action of the adversary differ from those in .thus at this stage , the tree exactly matches the radius neighborhood of in , along the isomorphism given by the coupling from .any boundary vertex of this neighborhood must have distance exactly from , thus its corresponding vertex in the tree has depth and is a leaf . passing to the final step of removing leaves in and their siblings ,we remove their corresponding nodes from ; this does not change the boundary - to - leaves correspondence .in this section we show that our tree distribution ( distribution [ dist : tree ] ) evidences a random vs. 
semirandom separation in the broadcast tree model .recall that the goal is to recover the spin of the root from the spins on the leaves , given full knowledge of the tree topology and the node markings ( and ) .recall the non - conventional definition of ` leaves ' ( definition [ def : leaves ] ) .let be the advantage over random guessing , defined such that is the probability that the optimal estimator ( maximum likelihood ) succeeds at the above task .we will show that our tree model is asymptotically infeasible as ) for a strictly larger range of parameters than that of the corresponding random model .[ prop : treehard ] for every real number , there exists such that , yet for a tree sampled from distribution [ dist : tree ] with parameters , and depth , we have as .recall that the condition is the classical kesten stigum bound , which is sufficient to beat random guessing in the random model .several decades later , this bound was found to be tight for the random model .there remain many open questions regarding the hardness of tree models , and some care was required in crafting an adversary for this problem that keeps the proof of this lower bound tractable . broadly , the kesten stigum bound asserts that recoverability depends on the average branching factor ( a contribution from the tree topology ) and the amount of noise ( a contribution from the spin propagation rule ) .the first step of our proof is to distinguish these in our distribution : we can first generate a tree topology from the appropriate marginal distribution , and then sample spins from the conditional distribution given this tree .we will show how to view this distribution on spins within the lower bound framework of .moreover , the new spin propagation rule is at least as noisy as the original , while our cutting adversary has strictly decreased the average branching factor .we first re - state our tree model in terms of topology generation followed by spin propagation . instead of letting each nodegive birth to same - spin children and opposite - spin children , we can equivalently let each node give birth to children and then independently choose the spin of each child to be the same as with probability and opposite to otherwise .( the equivalence of these steps is often known as ` poissonization ' . ) here the correspondence between and is as usual : and .this allows us to first sample the entire tree topology without spins. then we can add markings ( and ) , since these depend only on the topology .next , we sample the spins as above , by generating an appropriate independent value on each edge , indicating whether or not a sign flip occurs across that edge .finally , we cut edges according to the adversary s rule .given the parameters , generate a tree with spins as follows : [ dist : tree-1 ] * start with a root vertex . until a uniform depth ,let each vertex give birth to nodes . * mark nodes as and according to the rules of the graph adversary .* for each edge , generate an independent flip value which is with probability and otherwise . *choose the root spin uniformly from .propagate spins down the tree , letting where is the edge connecting to its parent .* cut edges according to the adversary s rule , keeping only the connected component of the root . 
*trim the bottom of the tree ( according to the last two steps of distribution [ dist : tree ] ) .it is clear that this tree distribution ( distribution [ dist : tree-1 ] ) is identical to the original tree distribution ( distribution [ dist : tree ] ) .our next step will be to re - state this model in yet another equivalent way .the goal now is to sample the final tree topology ( including which edges get cut ) before sampling the spins . consider a node , its parent , and its single child . instead of writing the spin propagation as independent flips and , we will write it as random variables and . here is equal to 1 if the adversary decides to cut ( and 0 otherwise ) , and is equal to 1 if ( and otherwise ) .this means if then is 1 with probability ( and 0 otherwise ) ; and if we do not have then .hence with probability . if then is irrelevant because the adversary will cut from the tree . conditioned on , takes the value with probability and otherwise .this means that for a ( but not cut ) node , the joint distribution of obeys a propagation rule that is equivalent to putting noise ( instead of ) on edges and , where ( using the definition of ) one can verify that for all . for most nodes in the tree , we are simply going to replace by on the incident edges , which gives the correct joint distribution of but not the correct joint distribution of .this is acceptable because the distribution of leaf spins ( given the root spin ) is still correct ; for this reason we have made sure that none of the leaves are nodes .the only time when we actually care about the spin of a node is in the case where the root is . in this case, the root might be cut by the adversary , yielding a 1-node tree ( with no revealed leaves ) .otherwise , if the root is but not cut , our spin propagation rule needs to sample the spins of the root s two children ( a node that is not cut must have degree ) from the appropriate joint distribution over ; let denote this distribution , conditioned on .it will not be important to compute explicitly ( although it is straightforward to do so ) .we are now ready to state the next tree model .this model is equivalent to the previous ones in that the joint distribution of the root spin , topology , markings , and leaf spins is the same .the spins on the nodes ( other than the root ) do not have the same distribution as before , but this is irrelevant to the question of recovering the root from the leaves . given the parameters ,generate a tree with spins as follows : [ dist : tree-2 ] * start with a root vertex . until a uniform depth ,let each vertex give birth to nodes . * mark nodes as and according to the rules of the graph adversary .* decide which nodes the adversary should cut : cut each node independently with probability . *let for edges that are incident to a node , and let for all other edges . * for each edge ,generate an independent flip value which is with probability and otherwise .* choose the root spin uniformly from .if the root is but not isolated : let be its children , draw , and let , .for all other nodes , propagate spins down the tree as usual , letting where is the edge connecting to its parent . 
*trim the bottom of the tree ( according to the last two steps of distribution [ dist : tree ] ) .our next step is to further modify this tree model in ways that only make it easier , in the sense that the advantage can only increase .first , we will address the issue of the complicated propagation rule in the case that the root is .suppose the root is ( but not cut ) and consider deterministically setting where are the root s two children .from there , spin propagation continues as usual .we claim that this new model can only be easier than the original one . to see this , note that the new model can ` simulate ' the original one : upon observing leaf spins drawn from the new model , drawn and then , for each and for each leaf descended from , replace by . for conveniencewe will also replace the first level of the tree by deterministic zero - noise propagation in the case where the root is not .we can similarly argue that the model only gets easier , since one can simulate the old model using the new model by sampling the noise on the first level .note that we now have a tree model such that once the topology is chosen , the spin - propagation rule is very simple : at each edge , a sign flip occurs independently with some probability .hardness results for such a tree model were studied by evans et al ., who established the following bound on the advantage for a fixed tree topology ( theorem 1.3 ) : and where , and denotes the unique path from the root to leaf . in our case ,the tree ( including both the tree topology and markings ) is random , so the advantage is ] ; then , since there is some nonzero probability that one of s descendants will be cut at the subsequent cutting level .the expected number of leaves in the entire tree is now at most , where the factor accounts for the first level and the last levels that are not followed by a base level .now , which goes to 0 as provided that hence this inequality suffices for impossibility of recovery . however , depends on the topology of the final tree model , which depends on , , and , so this inequality is slightly more complicated than it looks .we write to make this dependence explicit ; recall that is a ( continuous ) function of .we argued above that for all .in particular , at the critical value for the _ random _ model , we have but is a continuous function of , so if we take to be slightly less than , we must still have , while , as desired .this completes the proof of theorem [ thm : intro - treehard ] ( semirandom vs. random separation on trees ) . in appendix[ sec : lower - bound - explicit ] , we explicitly compute a lower bound on the separation .the lower bound of the previous section will form the core of a lower bound for the graph : we have already established that , for a large range of parameters , it is impossible to learn the spin of a node purely from the spins at the boundary of its tree - like local neighborhood .we will now see why this makes recovery in the graph impossible : there is almost nothing else to learn about from beyond its local neighborhood once we know the spins on .[ lemma : no - lrc ] let a graph ( including markings and ) and spins be drawn from distribution [ dist : graph ] .let , , be a vertex - partition of such that * separates from in ( no edges between and ) , * , * contains no nodes .then for asymptotically almost every ( a.a.e . ) and .here denotes the spins on and denotes the spins on . 
to clarify , when we refer to or nodes in we are referring to the original markings that they were assigned in .for instance , if a node is cut by the adversary , it is still considered in even though it no longer has degree .recall that the markings ( and ) in are revealed to the reconstruction algorithm . in lemma [ lemma : no - lrc ] , we do crucially use that does not contain nodes : if a node in were with spin , and we then revealed that its neighbor in has spin , this would strengthen our belief that the neighbor in has spin , as otherwise the node would have some substantial probability of having been cut , which is observed not to be the case .so lemma [ lemma : no - lrc ] would be false if we allowed nodes in . the proof of lemma [ lemma : no - lrc ] will require a thorough understanding of the distribution of spins given , which we can only obtain by understanding the possible precursors of under the adversary .the reason for the and nodes in our adversary construction is to make these precursors well - behaved .we start with some simple observations .let be a graph that yields ( with nonzero probability ) by application of the adversary .[ obs : good ] a node has degree at least in .a node has at least three non - degree- neighbors in and the adversary will not remove these .[ obs : create ] the adversary does not create any new degree- nodes .in other words , if a node has degree in then it has degree in ( with the same two neighbors ) . in order to create a degree- node , the adversary must cut at least one edge incident to .this can only happen if is either or .it can not be because it does not have degree in .but it also can not be or else it has degree at least 3 in ( by observation [ obs : good ] ) .the key property of the /construction is that the adversary can not change whether or not a node has the following ` goodness ' property .say that a node has the * goodness * property with respect to a graph if at least three of its neighbors do not have degree .[ lemma : goodness ] the nodes are precisely the nodes that have the goodness property with respect to . note that this is not tautological because the nodes are defined as the nodes that have the goodness property with respect to , not .we need to show that the goodness property is invariant under the adversary s action .first we assume has goodness in and show that it also has goodness in . 
since has the goodness property in , it has three neighbors in that do not have degree .each remains connected to in since the adversary only cuts edges that are incident to a degree- node .furthermore , each does not have degree in because degree- nodes can not be created ( observation [ obs : create ] ) .now we show the converse : assume does not have goodness in and show that it still does not have goodness in .the only way that could obtain goodness is if at least one of its degree- neighbors becomes non - degree- ( while remaining connected to ) .but this is impossible because whenever the adversary cuts an edge incident to a degree-2 vertex , it causes to become isolated in .next we state an easy fact about the structure of nodes in .[ lemma : marked ] the nodes in are precisely the degree- nodes with two neighbors , plus some isolated nodes .every node in has degree- with two neighbors .if the node is cut , it becomes isolated in ; otherwise it remains degree- with two neighbors .conversely , let be degree- in with two neighbors ; we will show is .since degree- nodes can not be created ( observation [ obs : create ] ) , has degree- in with the same two neighbors , and is therefore .now we are ready to characterize the possible precursors of a given .[ lemma : unsurgery ] suppose we have a graph ( including node markings ) and spins , drawn from distribution [ dist : graph ] .the probability that a graph yields under the action of the adversary is zero unless can be obtained from by connecting each isolated node of to exactly two nodes of opposite spin to . in this case , if has isolated nodes and nodes with two opposite - spin neighbors , then where we define in the case .recall that is the probability with which the adversary cuts each node that has two opposite - spin neighbors .first suppose can yield via the adversary .we will show that takes the desired form .the nodes cut by the adversary are precisely the isolated nodes of .every such node was originally connected to two opposite - spin nodes in .therefore can be obtained from by connecting each isolated node to exactly two opposite - spin nodes .conversely , let be obtained from by connecting each isolated node to exactly two opposite - spin nodes. the nodes in are ( by lemma [ lemma : goodness ] ) precisely the nodes that have the goodness property in .these are also precisely the nodes that have the goodness property in , since the process of connecting each isolated node to two nodes does not change goodness .consider running the adversary on .the nodes that it marks will be precisely the nodes in .also , the nodes that it marks will be precisely the nodes in ; this is clear from lemma [ lemma : marked ] .this means the adversary will output iff it chooses to cut the nodes that are isolated in and chooses not to cut the nodes that have two opposite - spin neighbors in .this happens with probability .we proceed as follows . in the ordinary stochastic block model ,given an observed graph , the probability of any set of spins factorizes as a product of pairwise interactions . 
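As a concrete, brute-force illustration of that pairwise factorization, the sketch below evaluates the pairwise part of log P(spins | graph) for an ordinary two-community block model; the edge probabilities p_in and p_out are stand-ins for the stripped parameters. The global combinatorial factor contributed by the adversary, obtained from the sum over precursors described next, is exactly what must be multiplied on top of this.

```python
import numpy as np

def pairwise_log_likelihood(sigma, edges, p_in, p_out):
    """Pairwise part of log P(sigma | G) in an ordinary two-community
    stochastic block model, up to an additive constant.  sigma is a vector
    of +/-1 spins, edges an iterable of vertex pairs, and p_in / p_out are
    assumed within- and between-community edge probabilities."""
    n = len(sigma)
    edge_set = {frozenset(e) for e in edges}
    logp = 0.0
    for u in range(n):
        for v in range(u + 1, n):
            p = p_in if sigma[u] == sigma[v] else p_out
            present = frozenset((u, v)) in edge_set
            logp += np.log(p) if present else np.log1p(-p)
    return logp
```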
in our model , by summing over all possible precursors that could have lead to via the adversary , we find the same pairwise interactions together with a further global , combinatorial interaction .we show that the neighborhood is too small to make a significant impact on this global interaction , while the pairwise interactions between and are weak , so that the only factors relevant to are the pairwise interactions within , which are independent of the spins in .let denote the graph before the action of the adversary .then factors into the following potentials on unordered pairs : with ranging over unordered pairs of distinct vertices , where where denotes adjacency in the graph . to leverage this description ,let us sum over all possible precursors of under the adversary .let denote the set of possible precursors of , as described by lemma [ lemma : unsurgery ] : is obtained from by connecting each of the isolated nodes to exactly two nodes of the opposite spin . proportionality constant hidden by ` ' depends on but not on .as is obtained from by replacing opposite - spin non - edges by opposite - spin edges , we have for every , thus none of the terms depend on the precise choice of : where we have dropped the constants that only depend on ( and not ) .now we compute .suppose there are isolated nodes of positive spin and of negative spin .suppose there are nodes of positive spin , and of negative spin .then the number of possible is we can establish that this global factor only barely depends on the spins of : [ lemma : binom - ratio ] for a.a.e . , it holds for all , that the proof proceeds via concentration of measure and taylor expansion , and is deferred to appendix [ sec : binom - ratio - proof ] . paraphrasing this lemma , a.a.s . over , there exists a ` good ' subset of a.a.e . such that is independent of up to a factor .let us also require of ` good ' that the census ( sum of spins ) is ; as consists of all but nodes of , it is equivalent to ask for the same concentration over all spins of , and this occurs a.a.s . by hoeffding .let be this set of ` good ' values . since a.a.s .we have for any set ( which will be taken as either or below ) , we include a rigorous proof of this statement in appendix [ sec : omega - fact - proof ] .it will be useful to factor the product of classical pairwise interactions into subsets of these interactions as here , for instance , denotes the product of over unordered pairs consisting of one vertex from and one from . the corresponding `` no long - range correlations '' proof in established that for ` good ' values , for a quantity depending only on and .their proof of this fact holds verbatim in our setting : it only requires that and that the number of spins in the graph is distributed as .similarly , we can factor the term as where , for instance , counts the number of nodes in with two opposite - spin neighbors .this factorization holds as there are no edges and no nodes in .we can now absorb these terms into the terms : define and similarly , and , so that for , at this point we roughly adapt the proof of conditional independence from factorization in a markov random field , following the corresponding proof in .we compute that for a.a.e . 
, as desired .we can now assemble all of the pieces to prove our main result .we first prove that it is impossible to estimate the relative spin of any fixed pair of nodes , in a strictly larger parameter range than for the random model .the impossibility of partial recovery will then easily follow .[ prop : rel - spin - hard ] for all , there exists such that , yet given a graph ( including markings / ) from distribution [ dist : graph ] with parameters , , we have that for any fixed vertices , for a.a.e . as . by ` fixed ' vertices mean that the vertices are fixed before and are chosen ; so by symmetry it does nt matter which pair we fix .given , choose as in proposition [ prop : treehard ] ( tree separation ) .with probability , the tree coupling of proposition [ prop : coupling ] centered at vertex succeeds . in this case ,the neighborhood of coupled to the tree is of size , and so with probability , lies outside this neighborhood .let be the boundary vertices of , i.e. those vertices in corresponding to the leaves of the tree .let be the interior , and let be the complement of in . by the law of total variance , .\ ] ] with further probability ,lemma [ lemma : no - lrc ] ( no long - range correlations ) succeeds , and we have so that but it follows from the coupling in proposition [ prop : coupling ] ( which includes spins and markings ) that since as , non - reconstruction in the tree model ( proposition [ prop : treehard ] ) implies that this latter variance converges to : the variance of a -valued random variable with expectation is .so we know that , with probability , a proportion of contribute a value to the expectation in ( [ eq : totvar ] ) .the remaining proportion of contribute a value bounded in magnitude by , as the variance of a -valued variable must be .it follows that , a.a.s ., = 1 - o(1),\ ] ] and the only way that this is possible in the -valued distribution is if as desired. it will follow immediately from proposition [ prop : rel - spin - hard ] that it is hard to find a partition that is correlated ( better than random guessing ) with the true one . 
for if it was possible to find such a partition then one could also guess the relative spins of and .we now complete the proof of our main theorem , restating it slightly more precisely .for any , there exists so that and hence partial recovery in the stochastic block model is possible , and yet against the monotone adversary given in section [ sec : models ] , for any , no estimator of the spins achieves -partial recovery with probability greater than as .let be some assignment of spins to vertices that achieves -partial recovery : agrees with the true spins on at least vertices , possibly after a global spin flip .consider the relative spins , where and are distinct vertices ; these match the true relative spins on at least ordered pairs of distinct vertices .if we choose two distinct vertices at random , the chance of correctly estimating their relative spin from is at least suppose for a contradiction that some estimator achieves -partial recovery , for some , with probability not converging to as .when -partial recovery succeeds , the process above recovers the relative spin of two random vertices with probability at least , and note .when partial recovery does not succeed , the process still recovers the relative spin of two random vertices with probability at least , as can be seen by plugging in to ( [ eq : eta ] ) .it follows that we can recover the relative spin of two random vertices with probability at least ] except on the diagonal .then , one solves an sdp to maximize over , where is the space of symmetric psd matrices satisfying ( i.e. the diagonal entries of are at most ) . to have additional constraints , but we will not need this . ]the goal of the analysis is to show that the sdp outputs some that is close to a `` ground truth '' , which in our case is the -valued matrix of true relative spins .the following result outlines the steps of the analysis .[ prop : three ] suppose we have such that the following three conditions hold for some value , some function , and some matrix norm : 1 . is a maximizer of the reference objective over + ( the reference sdp recovers the truth ) , 2 . + ( the observed objective is close to the reference one in cut norm ) , 3 . if and , then + ( good solutions to the reference sdp are close to the ground truth ) .then where is any maximizer of the empirical objective over , and is the grothendieck constant . here ,the cut norm ( or -to- norm ) of a matrix is defined as the proof of proposition [ prop : three ] follows from lemma 3.3 in , and uses grothendieck s inequality .the partial recovery results of proceed by verifying the three conditions of proposition [ prop : three ] for a particular choice of parameters : [ prop : check - three ] assume .let with , , and .conditions ( 1 - 3 ) of proposition [ prop : three ] hold a.a.s . with , and as the frobenius norm . concretely , this means that we solve the following sdp : [ sdp : gv ] maximize subject to and , where .note that the regularization constant is necessary because if we simply take then condition 1 of proposition [ prop : three ] fails . in estimate from the empirical average degree , but their arguments also apply to the case where we deterministically take .( this requires knowledge of the parameters but we will address this issue later . ) in , condition 1 is shown in lemma 5.1 , and conditions 2 and 3 are implicit in the proof of lemma 5.2 .these require a technical condition : , which for large simply amounts to .
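To make SDP [ sdp : gv ] concrete, the sketch below states it with cvxpy. The regularizer is written as lam times the all-ones matrix with its diagonal removed, and lam is left as a user-supplied constant on the order of the average edge density; both of these choices are assumptions of the illustration, since the exact matrix and constant are hidden by the stripped formulas. The last line performs the eigenvector rounding of proposition [ prop : rounding ].

```python
import numpy as np
import cvxpy as cp

def gv_style_sdp(A, lam):
    """Maximize <A - lam*(J - I), X> over symmetric PSD X with diagonal
    entries at most 1, then round via the top eigenvector.  A is the
    observed adjacency matrix; lam is an assumed regularization constant."""
    n = A.shape[0]
    B = A - lam * (np.ones((n, n)) - np.eye(n))
    X = cp.Variable((n, n), PSD=True)
    problem = cp.Problem(cp.Maximize(cp.trace(B @ X)), [cp.diag(X) <= 1])
    problem.solve()
    eigvals, eigvecs = np.linalg.eigh(X.value)
    return np.sign(eigvecs[:, -1])   # estimated spins, up to a global flip
```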
by propositions[ prop : three ] and [ prop : check - three ] , we now attain the result a.a.s .it is also shown in how to translate this to a precise partial recovery result : [ prop : rounding ] let .for any , -partial recovery succeeds a.a.s . in the stochastic block model ,provided that , by taking the signs in the top eigenvector of .[ cor : random - recovery ] there exists a constant such that -partial recovery succeeds a.a.s . in the stochastic block model as and .the constant is quite large : sets , although no attempt is made to optimize this constant .nevertheless this result is only off by a constant from the threshold , which is .our random vs. semirandom separation becomes small as grows , so it remains plausible that semidefinite programs can achieve partial recovery when as .in fact , sdps are known to distinguish random block model graphs from erds rnyi graphs in such a range , and it would be interesting to determine whether this carries through to partial recovery in the semirandom model .we now turn to a semirandom view of the general gudon vershynin framework . in the semirandom block model, a monotone adversary is allowed to make changes aligned with the ground truth ; more formally , it can add a matrix to the observed adjacency , where is symmetric and has s in some same - spin entries where is , and s in some opposite - spin entries where is .it is easily verified that maximizes over : every matrix in has entries in ] of the function .equivalently , is the greatest solution in ] , with , , and . by expanding the probability mass function of the binomial distribution , is a degree polynomial , and thus smooth .less obviously , is strictly convex on ] .the line of slope ( ) does not intersect the graph of nontrivially , since on ] .the line of slope ( ) certainly intersects the graph of at and .hence there exists some maximal , at which the line of slope intersects the graph of tangentially at some .equivalently , we can characterize this slope as the maximum slope of the line defined by the origin and any point on the graph of : } \frac{m_k(q)}{q}.\ ] ] by the concavity and convexity properties of , the graph of lies below the line of slope on , and above on .it follows that . as we sweep from to , the slope passes continuously from to , granting an intersection satisfying for every ] except at , so the limiting success probability is .it is interesting that , as the noise varies , we see the success probability jump discontinuously from to zero .this contrasts with the behavior of recursive majority in the ordinary broadcast tree model , in which the error probability transitions continuously to at a threshold . for any fixed , computing the critical value amounts to maximizing the polynomial , or equivalently finding the unique root of its derivative in ] , we know that in analyzing recursive majority in the random model , mossel computed this derivative using russo s formula .let us assume for the moment that is odd ; the even case is similar . 
then we evaluate this at , we obtain : in the second - to - last step , we have used the asymptotic identity as , which still holds when depends on so long as .now for , the derivative tends to as , while for , this tends to as .thus , writing , we must have .we immediately obtain a lower bound on : for a matching upper bound , apply hoeffding s inequality for a binomial tail bound , to find and use the maximality of to find so that is an asymptotic upper bound for every , which completes the proof .note that is the probability of success at the threshold , so this proof also provides some sense of the critical success probability : so we observe a very strong threshold : as we vary , there is a discrete jump from very likely success of recursive majority to almost - sure failure ! this result does not change if we pass back to the poisson - birth tree used throughout the rest of this paper , rather than the -regular tree . here ] leaves , in expectation ; but the level- degree- node and its subtree gets cut with probability .the expected number of leaves cut in the subtree descending from a level- node is : \ , \ee[\pois(k ) \given\ ; \ne 1 ] ) \\ & = k^3 p\delta \eps^2 \cdot \prob{\pois(k(1-p ) ) \geq 3}^2 \cdot ( kp + \ee[\pois(k(1-p ) ) \given\ ; \geq 3 ] \ , \ee[\pois(k ) \given\ ; \ne 1]).\end{aligned}\ ] ] whenever and , we must have and ( from the definition ( [ eq : delta ] ) of ) . in this rangewe have \ , \ee[\pois(k ) \given\ ; \ne 1 ] ) \defeq \mathcal{k}(k).\ ] ] now , choosing any such that the semirandom model will be impossible while the random model is possible .this concludes our explicit computation of a separation .we have not made any attempt to optimize it , and that remains an interesting open question for future work .recall that is the number of isolated nodes in , and is the number of nodes .let be the number of excess spins among the isolated nodes , i.e. there are isolated nodes of spin and of spin .similarly let be the number of excess spins among the nodes .write and , where denotes the number of excess spins among the isolated nodes of , and likewise for the others. we will need a number of results on the sizes of these values .[ lemma : conc ] we have the following results a.a.s .* , * , * .the first result is easy : and are bounded above by , which is by assumption .we will establish the remaining concentration results through the bounded differences method ; however , this bounded difference property will require controlling the maximum degree of a node .note that a.a.s .every node of ( and thus ) has degree at most , by a chernoff - and - union argument .let denote the graph obtained from by simultaneously removing all edges incident to nodes of degree larger than .note now that the radius- neighborhood of any node in has size at most .let denote the census ( number of spins minus number of spins ) among those nodes that are ` cuttable ' in , i.e. nodes that are degree- in , with two opposite - sign ( to ) neighbors that each have at least non - degree- neighbors .this property of depends only on the radius- neighborhood in .given any vertex in , if we add or remove any number of edges incident to in , we can only change by at most , as we can only change the ` cuttable ' status of the previous and new radius- neighborhoods of in .( here we need radius instead of because by changing edges incident to we can push the neighbors of over the -degree cutoff . 
)this constitutes a bounded differences property for the function .we now apply the following concentration inequality : let be a function of a random graph of size with independent edges ( not necessarily identically distributed ) .suppose that , when we add or remove any number of edges incident to any given vertex , the value of changes by at most . then \given \geq \lambda c \sqrt{n } } \leq 2 \exp(-\lambda^2/2).\ ] ] this result is classical , and is based on applying azuma hoeffding to a _vertex exposure martingale _ ; see for example chapter 7 of .note that = 0 ] and $ ] are ; this amounts to showing that the probability of any vertex being isolated in , or in , is bounded above zero as .but these two properties are local , depending only on a neighborhood in the graph , so this is clear .this completes the proof of the concentration results ( lemma [ lemma : conc ] ) .now we proceed to the proof of lemma [ lemma : binom - ratio ] .we need to show that for a.a.e . , it holds for all that we will need sufficiently tight asymptotics for powers of binomials in this regime : we now apply these asymptotics to the task at hand : as every non - error term cancels .the result now follows : if the logarithm of an expression is then the expression itself is .enumerate all possible values for .let and let .we know and for some .we want to show that for some we have with probability ; here the probability is over drawn proportional to . if is the set of for which , we have .this means and so . we need and so it suffices to take any such that and , i.e. goes to 0 slower than does .throughout this paper , we have assumed ; such block models are often called ` assortative ' . however , everything does carry through equally in the ` dissortative ' case . herewe briefly sketch the relevant changes .our semirandom models are certainly designed for an assortative model , and are entirely too powerful in the dissortative case for example , by randomly adding and removing edges appropriately , the current semirandom block model can simulate the erds rnyi distribution when starting from any dissortative model , which clearly reveals no community structure . instead, the semirandom model in the dissortative case should add edges between communities , and remove edges within communities , so that these monotone changes are aligned with the latent structure to be recovered .similarly , the semirandom tree models are able to cut or replace any subtree that follows a same - spin edge .this leads to many sign changes throughout the paper .we can still couple the resulting graph neighborhoods to a tree distribution ; although this tree distribution might look quite different at a glance , it couples perfectly with an assortative tree model , corresponding via , by flipping all spins at odd levels of the tree .thus we obtain a random vs. semirandom separation for the dissortative tree model .our recovery results for recursive majority in section [ sec : recmaj ] carry through this coupling also , guaranteeing robust recovery by recursive anti - majority in the dissortative tree model . much of the `` no long - range correlations '' argument of section [ sec : no - lrc ] carries through unchanged , but precursors of now reconnect isolated nodes with two _ same - spin _ nodes .hence the new formula for is which still satisfies lemma [ lemma : binom - ratio ] , by negating each variable everywhere in the proof in appendix [ sec : binom - ratio - proof ] .so we also obtain a random vs. 
semirandom separation in the dissortative block model .the sdp upper bounds in section [ sec : upper - bound ] carry through with very few changes of sign .one verifies that the unchanged reference objective does maximize where is the matrix form of any semirandom change as redefined above .a sign does flip in the proof of proposition [ prop : new - sdp ] : we now have overall we obtain the same guarantees on semirandom partial recovery as in the assortative model , requiring rather than . | the stochastic block model is one of the oldest and most ubiquitous models for studying clustering and community detection . in an exciting sequence of developments , motivated by deep but non - rigorous ideas from statistical physics , decelle et al . conjectured a sharp threshold for when community detection is possible in the sparse regime . mossel , neeman and sly and massouli proved the conjecture and gave matching algorithms and lower bounds . here we revisit the stochastic block model from the perspective of semirandom models where we allow an adversary to make ` helpful ' changes that strengthen ties within each community and break ties between them . we show a surprising result that these ` helpful ' changes can shift the information - theoretic threshold , making the community detection problem strictly harder . we complement this by showing that an algorithm based on semidefinite programming ( which was known to get close to the threshold ) continues to work in the semirandom model ( even for partial recovery ) . this suggests that algorithms based on semidefinite programming are robust in ways that _ any _ algorithm meeting the information - theoretic threshold can not be . these results point to an interesting new direction : can we find robust , semirandom analogues to some of the classical , average - case thresholds in statistics ? we also explore this question in the broadcast tree model , and we show that the viewpoint of semirandom models can help explain why some algorithms are preferred to others in practice , in spite of the gaps in their statistical performance on random models . |
several learning problems in modern cyber - physical network systems involve a large number of very - high - dimensional input data .the related research areas go under the names of _ big - data analytics _ or _ big - data classification_. from an optimization point of view , the problems arising in this area involve a large number of constraints and/or local cost functions typically distributed among computing nodes communicating asynchronously and unreliably .an additional challenge arising in big - data classification problems is that not only the number of constraints and local cost functions is large , but also the dimension of the decision variable is big and may depend on the number of nodes in the network .we organize the literature in two parts .first , we point out some recent works focusing the attention on _ big - data optimization _ problems ,i.e. , problems in which all the data of the optimization problem are big and can not be handled using standard approaches from sequential or even parallel optimization .the survey paper reviews recent advances in convex optimization algorithms for big - data , which aim to reduce the computational , storage , and communications bottlenecks .the role of parallel and distributed computation frameworks is highlighted . in big - data , possibly non - convex ,optimization problems are approached by means of a decomposition framework based on successive approximations of the cost function . in dictionarylearning tasks motivate the development of non - convex and non - smooth optimization algorithms in a big - data context .the paper develops an online learning framework by jointly leveraging the stochastic approximation paradigm with first - order acceleration schemes .second , we review distributed optimization algorithms applied to learning problems and highlight their limitations when dealing with big - data problems .an early reference on peer - to - peer training of support vector machines is .a distributed training mechanism is proposed in which multiple servers compute the optimal solution by exchanging support vectors over a fixed directed graph .the work is a first successful attempt to solve svm problems over networks .however , the local memory and computation at each node does not scale with the problem and data sizes and the graph is time - invariant . in distributed alternating direction method of multipliers ( admm ) is proposed to solve a linear svm training problem , while in the same problem is solved by means of a random projected gradient algorithm . both the algorithms are proven to solve the centralized problem ( i.e. , all the nodes reach a consensus on the global solution ) , but again show some limitations : the graph topology must be ( fixed , , and ) undirected , and the algorithms do not scale with the dimension of the training vector space . in a survey on admm algorithms applied to statistical learning problems is given .in the problem of exchanging only those measurements that are most informative in a network svm problem is investigated . for separable problemsan algorithm is provided to determine if an element in the training set can become a support vector .the distributed optimization algorithm proposed in solves part of these problems : local memory is scalable and communication can be directed and asynchronous .however , the dimension of the training vectors is still an issue .the core - set idea used in this paper was introduced in as a building block for clustering , and refined in . 
in approach was shown to be relevant for several learning problems and the algorithm re - stated for such scenarios .a multi - processor implementation of the core - set approach was proposed in .however , differently from our approach , that algorithm : ( i ) is not completely distributed since it involves a coordinator , and ( ii ) does not compute a global core - set , but a larger set approximating it .the main contribution of this paper is twofold .first , we identify a distributed big - data optimization framework appearing in modern classification problems arising in cyber - physical network systems . in this frameworkthe problem is characterized by a large number of input data distributed among computing processors .the key challenge is that the dimension of each input vector is very - high , so that standard local updates in distributed optimization can not be used . for this big - data scenario, we identify a class of quadratic programs that model several interesting classification problems as , e.g. , training of support vector machines .second , for this class of big - data quadratic optimization problems , we propose a distributed algorithm that solves the problem up to an arbitrary tolerance and scales both with the number and the dimension of the input vectors . the algorithm is based on the notion of core - set used in geometric optimization to approximate the value function of a given set of points with a smaller subset of points . from an optimization point of view , a subset of active constraints is identified , whose number depends only on the tolerance .the resulting approximate solution is such that an -relaxation of the constraints guarantees no constraint violation .the paper is organized as follows . in section[ sec : distrib_optim_framework ] we introduce the distributed optimization problem addressed in the paper and describe the network model .section [ sec : distributed_svm ] motivates the problem set - up by showing a class of learning problems that can be cast in this set - up . in section[ sec : core - set consensus ] the core - set consensus algorithm is introduced and analyzed .finally , in section [ sec : simulations ] a numerical example is given to show the algorithm correctness .in this section we introduce the problem set - up considered in the paper .we recall that we will deal with optimization problems in which both the number of constraints and decision variables are `` big '' .we consider a set of processors , each equipped with communication and computation capabilities .each processor has knowledge of a vector and needs to cooperatively solve the quadratic program the above quadratic program is known in geometric optimization as _ minimum enclosing ball _problem , since it computes the center of the ball with minimum radius enclosing the set of points . by applying standard duality arguments, it can be shown that solving is equivalent to solving its dual with , .the problem can be written in a more compact form as where \in{{\mathbb{r}}}^{d\times n} ] and is meant component - wise .we will show in the next sections that this class of quadratic programs arises in many important big - data classification problems .each node has computation capabilities meaning that it can run a routine to solve a local optimization problem .since the dimension can be big , the distributed optimization algorithm to solve problem needs to be designed so that the local routine at each node scales `` nicely '' with . 
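To make the quadratic program concrete, the sketch below solves a standard centralized formulation of the minimum enclosing ball dual with cvxpy; the names of the point matrix, the squared-norm vector, and the simplex constraint are standard choices standing in for the stripped symbols, and this is only the centralized baseline, not the distributed algorithm developed later.

```python
import numpy as np
import cvxpy as cp

def meb_dual(P):
    """Centralized minimum enclosing ball via the standard dual QP:
    maximize  b^T g - ||P g||^2  subject to  sum(g) = 1, g >= 0,
    where the columns of P are the points and b_i = ||p_i||^2.
    Returns the center and radius of the enclosing ball."""
    d, m = P.shape
    b = np.sum(P * P, axis=0)
    g = cp.Variable(m, nonneg=True)
    problem = cp.Problem(cp.Maximize(b @ g - cp.sum_squares(P @ g)),
                         [cp.sum(g) == 1])
    problem.solve()
    center = P @ g.value
    radius = np.sqrt(max(float(b @ g.value - center @ center), 0.0))
    return center, radius

# The active constraints (support points) are those with g.value > 0;
# this is the set that a core-set approach tries to keep small.
```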
the communication among the processors is modeled by a time - varying ,directed graph ( digraph ) , where represents a slotted universal time , the node set is the set of processor identifiers , and the edge set characterizes the communication among the processors .specifically , at time there is an edge from node to node if and only if processor transmits information to processor at time .the time - varying set of outgoing ( incoming ) _ neighbors _ of node at time , i.e. , the set of nodes to ( from ) which there are edges from ( to ) at time , is denoted by ( ) .a static digraph is said to be _ strongly connected _ if for every pair of nodes there exists a path of directed edges that goes from to . for the time - varying communication graph we rely on the concept of a jointly strongly connected graph .[ ass : periodicconnectivity ] for every time instant , the union digraph is strongly connected .it is worth noting that joint strong connectivity of the directed communication graph is a fairly weak assumption ( it just requires persistent spreading of information ) for solving a distributed optimization problem , and naturally embeds an asynchronous scenario .we want to stress once more that in our paper all the nodes are peers , i.e. , they run the same local instance of the distributed algorithm , and no node can take any special role .consistently , we allow nodes to be asynchronous , i.e. , nodes can perform the same computation at different speed , and communication can be unreliable and happen without a common clock ( the time is a universal time that does not need to be known by the nodes ) .in this section we present a distributed set up for some fundamental classification problems and show , following , how they can be cast into the distributed quadratic programming framework introduced in the previous section .we consider classification problems to be solved in a distributed way by a network of processors following the model in section [ sec : distrib_optim_framework ] .each node in the network is assigned a subset of input vectors and the goal for the processors is to cooperatively agree on the optimal classifier without the help of any central coordinator .informally , the svm training problem can be summarized as follows .given a set of positively and negatively labeled points in a -dimensional space , find a hyperplane separating `` positive '' and `` negative '' points with the maximal separation from all the data points .the labeled points are commonly called _examples _ or _ training vectors_. linear separability of the training vectors is usually a strong assumption . in many important concrete scenariosthe training data can not be separated by simply using a linear function ( a hyperplane ) . to handle the nonlinear separability ,nonlinear kernel functions are used to map the training samples into a feature space in which the resulting features can be linearly separated .that is , given a set of points in the _ input space _they are mapped into a _ feature space _ through a function .the key aspect in svm is that does not need to be known , but all the computations can be done through a so called _ kernel function _ satisfying .it is worth noting that the dimension of the feature space can be much higher than the one of the input space , even infinite ( e.g. , gaussian kernels ) .following and we will adopt the following common assumption in svm . 
for any in the input space with independent of .this condition is satisfied by the most common kernel functions used in svm as , e.g. , the isotropic kernel ( e.g. , gaussian kernel ) , the dot - product kernel with normalized inputs or any normalized kernel .for fixed , let , , be a set of feature - points with associated label .the training vectors are said to be linearly separable if there exist and such that for all . the _ hard - margin svm training _problem consists of finding the optimal hyperplane , , ( is a vector orthogonal to the hyperplane and is a bias ) that linearly separates the training vectors with maximal margin , that is , such that the distance is maximized . combining the above equations it follows easily that .thus the svm training problem may be written as a quadratic program in most concrete applications the training data can not be separated without outliers ( or training errors ) . a convex program that approximatesthe above problem was introduced in .the idea is to introduce positive slack variables in order to relax the constraints and add an additional penalty in the cost function to weight them .the resulting classification problems are known as _ soft marging problems _ and the solution is called _ soft margin hyperplane_. next , we will concentrate on a widely used soft - margin problem , the _-norm problem _ , which adopts a quadratic penalty function .following , we will show that its dual version is a quadratic program with the structure of .the -norm optimization problem turns out to be solving problem is equivalent to solving the dual problem where if and otherwise .the vector defining the optimal hyperplane can be written as linear combination of training vectors , , where and only for vectors satisfying .these vectors are called _ support vectors_. support vectors are basically active constraints of the quadratic program .now , we can notice that defining , it holds so that the constant term , can be added to the cost function .thus , posing with the canonical vector ( e.g. , ^t ] , thus concluding the proof .a core - set is a set of `` active constraints '' in problem with a cost ( i.e. , ) .clearly , some of the constraints will be violated for this value of , but no one will be violated for , with being the optimal value of .an equivalent characterization for the core - set is that no constraint is violated if is relaxed to .this test is easier to run , since it does not involve the computation of the optimal value and will be used in the simulations .in this section we provide a numerical example showing the effectiveness of the proposed strategy .we consider a network with nodes communicating according to a directed , time - varying graph obtained by extracting at each time - instant an erds - rnyi graph with parameter .we choose a small value , so that at a given instant the graph is disconnected with high probability , but the graph turns out to be jointly connected .we solve a quadratic program , , with and choose a tolerance so that the number of vectors in the core - set is . in figure[ fig : rr_transient ] and figure [ fig : cc_transient ] the evolution of the squared - radius and center - norm of the core - sets at each node are depicted . as expected from the theoretical analysis ,the convergence of the radius to the consensus value is monotone non - decreasing . , .] , . 
]in this paper we have proposed a distributed algorithm to solve a special class of quadratic programs that models several classification problems .the proposed algorithm handles problems in which not only the number of input data is large , but furthermore their dimension is big .the resulting learning area is known as _ big - data classification_. we have proposed a distributed optimization algorithm that computes an approximate solution of the global problem .specifically , for any chosen tolerance , each local node needs to store only active constraints , which represent a solution for the global quadratic program up to a relative tolerance .future research developments include the extension of the algorithmic idea , based on core - sets , to other big - data optimization problems .v. cevher , s. becker , and m. schmidt , `` convex optimization for big data : scalable , randomized , and parallel algorithms for big data analytics , '' _ ieee signal processing magazine _ , vol .31 , no . 5 , pp . 3243 , 2014 .k. slavakis and g. b. giannakis , `` online dictionary learning from big data using accelerated stochastic approximation algorithms , '' in _ 2014 ieee international conference on acoustics , speech and signal processing ( icassp ) _ , 2014 , pp .y. lu , v. roychowdhury , and l. vandenberghe , `` distributed parallel support vector machines in strongly connected networks , '' _ ieee transactions on neural networks _ , vol .19 , no . 7 , pp .11671178 , 2008 .s. boyd , n. parikh , e. chu , b. peleato , and j. eckstein , `` distributed optimization and statistical learning via the alternating direction method of multipliers , '' _ foundations and trends in machine learning _ ,vol . 3 , no . 1 , pp . 1122 , 2011 .d. varagnolo , s. del favero , f. dinuzzo , l. schenato , and g. pillonetto , `` finding potential support vectors in separable classification problems , '' _ ieee transactions on neural networks and learning systems _ , vol . 24 , no . 11 , pp . 17991813 , 2013 .g. notarstefano and f. bullo , `` distributed abstract optimization via constraints consensus : theory and applications , '' _ ieee transactions on automatic control _ ,56 , no .10 , pp . 22472261 , october 2011 .s. lodi , r. nanculef , and c. sartori , `` single - pass distributed learning of multi - class svms using core - sets , '' in _ proceedings of the 2010 siam international conference on data mining _ , 2010 , pp .257268 .s. s. keerthi , s. k. shevade , c. bhattacharyya , and k. r. murthy , `` a fast iterative nearest point algorithm for support vector machine classifier design , '' _ ieee transactions on neural networks _ ,11 , no . 1 ,124136 , 2000 . | a new challenge for learning algorithms in cyber - physical network systems is the distributed solution of big - data classification problems , i.e. , problems in which both the number of training samples and their dimension is high . motivated by several problem set - ups in machine learning , in this paper we consider a special class of quadratic optimization problems involving a `` large '' number of input data , whose dimension is `` big '' . to solve these quadratic optimization problems over peer - to - peer networks , we propose an asynchronous , distributed algorithm that scales with both the number and the dimension of the input data ( training samples in the classification problem ) . 
the proposed distributed optimization algorithm relies on the notion of `` core - set '' which is used in geometric optimization to approximate the value function associated with a given set of points with a smaller subset of points . by computing local core - sets on a smaller version of the global problem and exchanging them with neighbors , the nodes reach consensus on a set of active constraints representing an approximate solution for the global quadratic program . distributed optimization , big - data optimization , support vector machine ( svm ) , machine learning , core set , asynchronous networks . |
consider a class of quantum many - body problems such that each member of the class is characterized by particles of type 1 and particles of a different type 2 , and by the interactions , both pairwise and multiparticle , that operate among any subset of particles . with the interactions in place , all of the states andall of the observables of any exemplar of the class are determined by the integers and .specifically , for any observable defined for the class , there exists a physical mapping from to .this notion is trivially extended to more than two type of constituents .quite obviously , the nuclear many - body problem defines a class of this kind , with and taken as the numbers and of protons and neutrons in a nuclide .other examples coming easily to mind : - clusters , binary and ternary alloys , etc .approaches to calculation or prediction of the properties of individual systems belonging to such a class span a broad spectrum from pure _ ab initio _ microscopic treatments to phenomenological models having few or many adjustable parameters , with hybrid macroscopic / microscopic and density - functional methods in between .these approaches are `` theory - thick '' to varying degrees , with the _ ab initio _ ones based in principle on exact theory and the phenomenological ones invoking physical intuition , semi - classical pictures , and free parameters .thinking in the spirit of edwin jaynes , inventor of the maxent method and charismatic proponent of bayesian probability, it becomes of special interest to go all the way in the `` theory - thin '' direction and ask the question : * _ to what extent does the existing data on property across the members of a system class , _ and only the data _ , determine the mapping ? _ in general , this mapping takes one of two forms , depending on whether is a continuous variable ( e.g. , the nuclear mass excess or quadrupole moment ) or a discrete variable ( e.g. , the nuclear spin and parity ) .the former case defines a problem of function approximation , while the latter defines a classification problem . during the past three decades, powerful new methods have been developed for attacking such problems .chief among these are advanced techniques of statistical learning theory , or `` machine learning , '' with artificial neural networks as a subclass . considering the concrete example of the mapping that determines the nuclear ( i.e. , atomic ) mass , a learning machine consists of ( i ) an input interface where and are fed to the device in coded form , ( ii ) a system of intermediate processing elements , and ( iii ) an output interface where an estimate of the mass appears for decoding . given a body of training data to be used as exemplars of the desired mapping ( consisting of input `` patterns , '' also called vectors , and their associated outputs ) , a suitable learning algorithm is used to adjust the parameters of the machine , e.g. , the weights of the connections between the processing elements in the case of a neural network .these parameters are adjusted in such a way that the learning machine ( a ) generates responses at the output interface that reproduce , or closely fit , the masses of the training examples , and ( b ) serves as a reliable predictor of the masses of test nuclei absent from the training set .this second requirement is a strong one the system should not merely serve as a lookup table for masses of known nuclei ; it should also perform well in the much more difficult task of prediction or _generalization_. 
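As a purely illustrative sketch of this train-and-generalize protocol, the snippet below fits a small feed-forward regressor to a hypothetical table of (Z, N, mass excess) triples and reports the error on nuclides withheld from training; the file name, the network size, and the split fraction are placeholders and are not taken from the studies discussed below.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical data file with columns Z, N, mass excess (MeV).
data = np.loadtxt("mass_table.csv", delimiter=",")
X, y = data[:, :2], data[:, 2]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

rms = np.sqrt(np.mean((net.predict(X_test) - y_test) ** 2))
print(f"rms error on withheld nuclides: {rms:.3f} MeV")
```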
the most widely applied learning machine is the multilayer perceptron ( mlp ) , consisting of a feedforward neural network with at least one layer of `` hidden neurons '' between input and output interfaces. mlps are usually trained by the backpropagation algorithm, essentially a gradient - descent procedure for adjusting weight parameters incrementally to improve performance on a set of training examples . a significant measure of success has been achieved in constructing global models of nuclear properties based on such networks , with applications to atomic masses , neutron and proton separation energies , spins and parities of nuclear ground states , stability vs. instability , branching ratios for different decay modes , and beta - decay lifetimes .( reviews and original references may be found in ref . 5 .) the support vector machine ( svm), a versatile and powerful approach to problems in classification and nonlinear regression , entered the picture in the 1990s .rooted in the strategy of structural - risk minimization, it has become a standard tool in statistical modeling .although multilayer perceptrons as well as support vector machines are in principle universal approximators , svms eliminate much of the guesswork of mlps , since they incorporate an automatic process for determining the architecture of the learning machine . not surprisingly , they have become the method of choice for a wide variety of problems .our selection of global nuclear systematics as a concrete example for the application of advanced machine - learning algorithms is neither accidental nor academic .there exists a large and growing body of excellent data on nuclear properties for thousands of nuclides , providing the raw material for the construction of robust and accurate statistical models .moreover , interest in this classic problem in nuclear physics has never been greater .the advent of radioactive ion - beam facilities , and the promise of the coming generation epitomized by the rare isotope accelerator ( ria ) , have given new impetus to the quest for a unified , global understanding of the structure and behavior of nuclei across a greatly expanded nuclear chart .the creation of hundreds of new nuclei far from stability opens exciting prospects for discovery of exotic phenomena , while presenting difficult challenges for nuclear theory . following the pattern indicated above , traditional methods for theoretical postdiction or prediction of the properties of known or unknown nuclides include _ab initio _ many - body calculations employing the most realistic nuclear hamiltonians and , more commonly , density functional approaches and semi - phenomenological models . since computational barriers limit _ ab initio _ treatment to light nuclei ,viable global models inevitably contain parameters that are adjusted to fit experimental data on certain reference nuclei .global models currently representing the state of the art are hybrids of microscopic theory and phenomenology ; most notably , they include the macroscopic / microscopic droplet models of mller et al. and the density functional theories employing skyrme , gogny , or relativistic mean - field lagrangian parametrizations of self - consistent mean - field theory. 
from the standpoint of data analysis , these approaches are inherently theory - thick , since their formulation rests on a deep knowledge of the problem domain .it is evident that data - driven , `` theory - thin , '' statistical models built with machine - learning algorithms can never compete with traditional global models in providing new physical insights .nevertheless , in several respects they can be of considerable value in complementing the traditional methods , especially in the present climate of accelerated experimental and theoretical exploration of the nuclear landscape . *a number of studies suggest that the quality and quantity of the data has already reached a point at which the statistical models can approach and possibly surpass the theory - thick models in sheer predictive performance . in this contribution , we shall present strong evidence from machine learning experiments with support vector machines that this is indeed the case . * in spite of their `` black - box '' origin , the machine - learning predictions can be directly useful to nuclear experimentalists working at radioactive ion - beam facilities , as well as astrophysicists developing refined models of nucleosynthesis . *although not straightforward , it will in fact be possible to gain some insights into the inner workings of nuclear physics through statistical learning experiments , by applying techniques analogous to gene knock - out studies in molecular biology .* it is fundamental interest to answer , for the field of nuclear physics , the jaynesian question that was posed above .in technical jargon , the support vector machine is a _ kernel method_, which , in terms of a classification problem , means that it implicitly performs a nonlinear mapping of the input data into a higher - dimensional _ feature space _ in which the problem becomes separable ( at least approximately ) .architecturally , the support vector machine and multilayer perceptron are close relatives ; in fact the svm includes the mlp with one hidden layer as a special case .however , svms in general offer important advantages over mlps , including avoidance of the curse of dimensionality through extraction of the feature space from the training set , once the kernel and error function have been specified .support vector machines may be developed for function approximation ( i.e. , nonlinear regression ) as well as classification . in either case , the output of the machine ( approximation to the function or location of the decision hyperplane , respectively ) is expressed in terms of a representative subset of the examples of the mapping contained in the training set .these special examples are the _ support vectors_. the basic ideas underlying svms are most readily grasped by first considering the case of a classification problem involving linearly separable patterns .suppose some of the patterns are green and the others are red , depending on some input variables defining the -dimensional input space . to find a decision surface that separates red from green patterns ,one _ seeks the hyperplane that provides the maximum margin between the red and green examples_. the training examples that define the margin are just the support vectors . in this simple case , an exact solution is possible , but in general errors are unavoidable . 
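A minimal illustration of the separable case, using an off-the-shelf SVM library on synthetic data; the two point clouds, the large penalty constant approximating a hard margin, and the margin formula 2/||w|| are choices made for the example rather than details taken from the text.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
green = rng.normal(loc=[-2.0, -2.0], size=(50, 2))   # one class of patterns
red = rng.normal(loc=[2.0, 2.0], size=(50, 2))       # the other class
X = np.vstack([green, red])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1e6)   # very large C approximates the hard margin
clf.fit(X, y)

w = clf.coef_[0]
print("number of support vectors:", len(clf.support_vectors_))
print("margin width 2/||w||:", 2.0 / np.linalg.norm(w))
```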
when faced with a problem involving nonseparable patterns , the objective then is to locate a decision hyperplane such that the misclassification error , averaged over the training set , is minimized .guided by the principle of structural - risk minimization, the svm approach determines an optimal hyperplane by minimizing a cost function that includes a term to reduce the vc dimension ( thereby enhancing generalization capability ) and a term that governs the tradeoff between machine complexity and the number of nonseparable patterns . in practice ,the svm strategy actually involves two steps .the first is to implement a nonlinear mapping : , from the space of input vectors into a higher - dimensional feature space , which is `` hidden '' from input and output ( and corresponds to the hidden layer in mlps ) .this is done in terms of an inner - product kernel certain mathematical conditions , notably mercer s theorem. the second step is to find a hyperplane that separates ( approximately , in general ) the features identified in the first step .this is accomplished by the optimization procedure sketched above .a self - contained introduction to the svm technique is beyond the scope of the present contribution .excellent treatments are available in the original work of vapnik as expounded in refs .6,7 and in haykin s text ( see also ref .19 ) . to provide some essential background ,let us consider a regression problem corresponding to a map , where is an input vector with components , and suppose that training examples indexed by are made available .then the optimal approximating function takes the form solution of the optimization problem stated above determines the parameters and , and the support vectors of the machine are defined by those training patterns for which .different choices of the inner - product kernel appearing in eq .( [ est ] ) yield different versions of the support vector machine .common choices include corresponding to the _ polynomial learning machine _ with user - selected power ; a gaussian form containing a user - selected width parameter , which generates a radial - basis - function ( rbf ) network ; and which realizes a two - layer ( one - hidden - layer ) perceptron , only one of the parameters , being independently adjustable .we also draw attention to a generalization of the rbf kernel ( [ rbf ] ) introduced recently as a simplified version of what is called anova decomposition, having the form \right)^d \ , , \label{anova}\ ] ] the support vector machine may be considered as a feedforward neural network in which the inner - product kernel , through an appropriate set of elements , defines a layer of hidden units that embody the mapping from the -dimensional input space to the -dimensional feature space .these hidden units process the input patterns nonlinearly and provide outputs that are weighted linearly and summed by an output unit . 
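The kernels just listed can be written down directly. In the sketch below the offset in the polynomial kernel, the Gaussian normalization, and the precise form of the ANOVA-type kernel are standard textbook choices standing in for the stripped formulas, so the parametrizations should be read as assumptions rather than the exact definitions used here.

```python
import numpy as np

def polynomial_kernel(x, y, p=2):
    """Polynomial learning machine: (x . y + 1)^p."""
    return (np.dot(x, y) + 1.0) ** p

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian radial-basis-function kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, y, kappa=1.0, c=-1.0):
    """tanh kernel realizing a one-hidden-layer perceptron (Mercer's
    condition holds only for some parameter values)."""
    return np.tanh(kappa * np.dot(x, y) + c)

def anova_kernel(x, y, gamma=1.0, d=2):
    """Simplified ANOVA-type kernel: (sum_j exp(-gamma (x_j - y_j)^2))^d."""
    return float(np.sum(np.exp(-gamma * (x - y) ** 2)) ** d)
```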
as already pointed out , the familiar structures of radial - basis - function networks and two - layer perceptrons can be recaptured as special cases by particular choices of kernel .however , the svm methodology transcends these limiting cases in a very important way : it automatically determines the number of hidden units suitable for the problem at hand , whatever the choice of kernel , by finding an optimally representative set of support vectors and therewith the dimension of the feature space .in essence , the support vector machine offers a generic and principled way to control model complexity .by contrast , approaches to supervised learning based on mlps trained by backpropagation or conjugate - gradient algorithms depend heavily on rules of thumb , heuristics , and trial and error in arriving at a network architecture that achieves a good compromise between complexity ( ability to fit ) and flexibility ( ability to generalize ) .in this section we summarize the findings of recent explorations of the potential of support vector machines for global statistical modeling of nuclear properties .the discussion will focus on the predictive reliability of svm models relative to that of traditional `` theory - thick '' models .the properties that are directly modeled in these initial studies , all referring to nuclear ground states , are ( i ) the nuclear mass excess , where is the atomic mass , measured in amu , ( ii ) -decay lifetimes of nuclides that decay 100% via the mode , and ( iii ) nuclear spins and parities . the requisite experimental data are taken from the on - line repository of the brookhaven national nuclear data center ( nndc ) at http://www.nndc.bnl.gov/. the experimental mass values are those of the ame03 compilation of audi et al. extensive preliminary studies have been performed to identify inner - product kernels well suited to global nuclear modeling .earlier work converged on the anova kernel ( 5 ) as a favorable choice , and corresponding results have been published in ref . 19 .more recently , we have introduced a new kernel that yields superior results , formed by the sum of polynomial and anova kernels and named the pa kernel .( satisfaction of mercer s theorem is conserved under summation . )the new kernel contains three parameters ( , , and ) that may be adjusted by the user . aside from parameters contained in the inner - product kernel ,the svm procedure involves a constant giving the user control over the tradeoff between complexity and flexibility , plus an additional control constant in the regression case , measuring the tolerance permitted in the reproduction of training data .thus , svm models developed with the pa kernel contain four or five adjustable parameters ( five in all applications reported here ) . to allow for a meaningful evaluation of predictive performance ( whether interpolation or extrapolation ) , the existing database for the property being modeled is divided into three subsets , namely the _ training set _, _ validation set _ , and _test set_. these sets are created by random sampling , consistently with approximate realization of chosen numerical proportions among them , e.g. ( 1002):: for training , validation , and test sets , respectively , with . 
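a minimal sketch of the random partitioning into training , validation , and test sets , stratified by even - oddness class as described above , might look as follows ( the nuclide list and the target 80:10:10 fractions are supplied by the caller ) .

```python
import numpy as np

def even_oddness_class(Z, N):
    # 'ee', 'eo', 'oe', or 'oo' according to the parities of Z and N
    return ("e" if Z % 2 == 0 else "o") + ("e" if N % 2 == 0 else "o")

def random_partition(nuclides, fractions=(0.8, 0.1, 0.1), seed=1):
    """Randomly split a list of (Z, N) pairs into train/validation/test sets,
    separately within each even-oddness class, so that the target proportions
    are realized approximately, as in the text."""
    rng = np.random.default_rng(seed)
    by_class = {}
    for zn in nuclides:
        by_class.setdefault(even_oddness_class(*zn), []).append(zn)
    splits = {"train": [], "validation": [], "test": []}
    for members in by_class.values():
        members = [members[i] for i in rng.permutation(len(members))]
        n_tr = int(round(fractions[0] * len(members)))
        n_va = int(round(fractions[1] * len(members)))
        splits["train"] += members[:n_tr]
        splits["validation"] += members[n_tr:n_tr + n_va]
        splits["test"] += members[n_tr + n_va:]
    return splits

# usage with a placeholder nuclide list (in practice, read from the AME03 table)
demo = [(z, n) for z in range(8, 40) for n in range(8, 40)]
print({name: len(s) for name, s in random_partition(demo).items()})
```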
the training set is used to find the support vectors and construct the machine for given values of the adjustable parameters , , , , and .the validation set is used to guide the optimal determination of these parameters , seeking good performance on both the training and validation examples .the test set remains untouched during this process of model development ; accordingly , the overall error ( or error rate ) of the final model on the members of the test set may be taken as a valid measure of predictive performance .when one considers how svm models might be applied in nuclear data analysis during the ongoing exploration of the nuclear landscape , it seems reasonable that consistent predictive performance for 80:10:10 or 90:5:5 partitions into training , validation , and test sets would be sufficient for the svm approach to be useful in practice .the svm approach has been applied to generate a variety of global models of nuclear mass excess , beta - decay lifetimes , and spin / parity , corresponding to different kernels , databases , partitions into training / validation / test sets , and tradeoffs between the relative performance on these three sets . herewe will focus on those models considered to be the best achieved to date .moreover , due to limited space , we will restrict the discussion to the most salient features of those models and to an assessment of their quality relative to favored traditional global models and to the best available mlp models .further , more detailed information may be found at the web site http://abacus.wustl.edu/clark/svmpp.php , which generates svm estimates of the listed nuclear properties for pairs entered by visitors .this web site will be periodically updated as improved svm models are developed .development and testing of the svm mass models to be highlighted here are based on the ame03 data for all nuclides with and experimental masses having error bars below 4% .this set of nuclides is divided into the four classes : even--even- ( ee ) even--odd- ( eo ) , odd--even- ( oe ) , and odd--odd- ( oo ) .separate svm regression models were constructed for each such `` even - oddness '' class .this does introduce some minimal knowledge about the problem domain into the modeling process ; one might therefore say that the models developed are not absolutely theory - free .however , the data itself gives strong evidence for the existence of different mass surfaces depending on whether and are even or odd .knowledge of the integral character of and may , quite properly , bias the svm toward inclusion of associated quantum effects. table 1 displays performance measures for models based on an 80:10:10 target partitioning of the full data set among training , validation , and test sets , respectively .inspection of the actual distributions of these sets in the plane shows that substantial fractions of the validation and test sets lie on the outer fringes of the known nuclei , significantly distant from the line of stable nuclides .accordingly , performance on the test set measures the capability of the models in extrapolation as well as interpolation .performance on a given data set is quantified by the corresponding root - mean - square ( rms ) error of model results relative to experiment ( as is standard in global mass modeling ) . 
the `` optimized '' model parameters are included in table 1 . they show enough differences from one even - oddness class to another to justify development of separate models for the four classes . the results in table 1 attest to a quality of performance , in both fitting and prediction , that is on a par with the best available from traditional modeling and from mlp models trained by an enhanced backpropagation algorithm. to emphasize this point qualitatively , we display in table 2 some representative rms error figures that have been achieved in recent work with all three approaches . ( we must note , however , that the data sets used for the different entries in the table may not be directly comparable , and the division into training , validation , and test sets does not necessarily have the strict meaning assigned here . ) the second svm model listed in the table was developed for a partitioning of the data into training , validation , and test sets of approximately 90:5:5 , obtained by random transfer of nuclides from the validation and test sets of the 80:10:10 model to the training set . the quality of representation that can be realized through the svm methodology may be highlighted in another way . employing the nuclear mass excess values generated by the svm models of table 1 , we have calculated the values for eight alpha - decay chains of the superheavy elements 110 , 111 , 112 , 114 , 115 , 116 , and 118 . ( the alpha - decay -value is defined as , where be stands for the binding energy of the indicated nuclide . ) results are presented in graphical and tabular form on the web site http://haochen.wustl.edu/svm/svmpp.php . for the models of table 1 ( based on an 80:10:10 partition of the assumed ame03 data set ) , the average rms error of the 38 estimates of is 0.82 mev , while the average absolute error is 0.64 mev .
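the q - values quoted above follow from simple differences of ground - state masses . an equivalent form in terms of mass excesses ( the quantity the svm mass models actually estimate ) is sketched below ; the lookup table and the numbers in the usage line are placeholders , and the helium - 4 mass excess is quoted only approximately .

```python
ALPHA_MASS_EXCESS = 2.4249  # MeV, mass excess of 4He (approximate)

def q_alpha(mass_excess, Z, N):
    """Alpha-decay Q-value (MeV) from ground-state mass excesses (MeV).

    mass_excess is any lookup (Z, N) -> mass excess, e.g. a dict filled from
    the AME03 table or from the predictions of a global mass model; the
    atomic-mass-unit terms cancel, so mass excesses can be used directly."""
    return mass_excess[(Z, N)] - mass_excess[(Z - 2, N - 2)] - ALPHA_MASS_EXCESS

# usage sketch for one link of a decay chain (mass-excess values hypothetical)
table = {(110, 159): 1.0, (108, 157): -10.0}
print(q_alpha(table, 110, 159))
```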
we emphasize that these estimates are predictions ( rather than fits ) , since none of the nuclei involved belongs to the validation or test set . moreover , due to the situation of these superheavy nuclides in the plane , prediction of the associated values provides a strong test of extrapolation . the performance of svm mass models documented in tables 1 and 2 and in the alpha - chain predictions gives assurance that this approach to global modeling will be useful in guiding further exploration of the nuclear landscape . however , it is important to gain some sense of when and how it begins to fail . the performance figures for the two sets of svm models involved in table 2 are consistent with the natural expectation that if one depletes the validation and test sets of the 80:10:10 partition in favor of an enlarged training set , the predictive ability of the model is enhanced . conversely , one should be able to `` break '' the svm modeling approach by random depletion of the training set of the 80:10:10 model in favor of larger validation and test sets . eventually the training set will become too small for the method to work at all . the results of a quantitative study of this process are shown in figure 1 . writing the generic partition as ( 1002):: , the error measure increases roughly linearly with for greater than 10 . in addition to direct statistical modeling using either svms or mlps , a promising hybrid approach is being explored . recently , the _ differences _ between experimentally measured masses and the corresponding theoretical masses given by the finite - range droplet model ( frdm ) of mller , nix , and collaborators have been modeled with a feedforward neural network of architecture 46661 trained by a modified backpropagation learning algorithm. ( the integers denote the numbers of neurons in successive layers , from input to output . ) the rms errors on training ( 1276 ) , validation ( 344 ) , and test ( 529 ) sets are respectively 0.40 , 0.49 , and 0.41 mev , where the numbers of nuclides in each of these sets are given in parentheses . in a similar experiment , we have constructed svm models for ee , eo , oe , and oo classes using pa kernels. overall rms errors of 0.19 , 0.26 , and 0.34 mev were achieved on the training ( 1712 ) , validation ( 213 ) , and test ( 213 ) sets , respectively , with little variation over even - oddness classes . error figures over comparable subsets for the frdm model in question run around 0.7 mev , again with relatively little variation from subset to subset . these results suggest that mlps and svms are capable of capturing some 1/2 to 2/3 of the physical regularities missed by the frdm . it remains to be seen whether the residual error has a systematic component or instead reflects a large number of small effects that will continue to elude global description . another important problem in global modeling involves the prediction of beta - decay halflives of nuclei .
as in the case of atomic masses , this is a problem in nonlinear regression .here we restrict attention to nuclear ground states and to nuclides that decay 100% through the mode .for this presentation , we make the further restriction to nuclides with halflives below s ( although we have also included the longer - lived examples in another set of modeling experiments ) .the brookhaven nndc provides 838 examples fitting these criteria .since the examples still span 9 orders of magnitude in , it is natural to work with and seek an approximation to the mapping in the form of svms .again , we construct separate svms for the ee , eo , oe , and oo classes , and again a kernel of type pa is adopted .the full data set is divided by random distribution into training , validation , and test sets in approximately the proportions 80:10:10 .the performance of the favored models is quantified in table 3 .here we use two measures to assess the accuracy of the svm results for training , validation , and test nuclides .these are the rms error , again denoted by , and the mean absolute error of the model estimates of , relative to experiment .detailed studies of beta - decay systematics within the established framework of nuclear theory and phenomenology include those of staudt et al. ( 1990 ) , hirsch et al. ( 1993 ) , homma et al. ( 1996 ) , and mller et al. ( 1997 ) .however , comparison of the performance of the svm models with that of the models resulting from these studies is obscured by the differences in the data sets involved .most significantly , the data set employed here is considerably larger than those used previously , including as it does many new nuclides far from stability .analysis of svm performance on subsets of the data set , now in progress , will yield useful information on the efficacy of svm models relative to the more traditional ones , as we continue to develop improved global models of beta - decay systematics . on the other hand , mlp models for the beta - halflife problemhave been generated for the same data set as used in our svm study , allowing a meaningful comparison to be made .the best mlp models created to date show values for the rms error over all even - oddness classes of 0.55 in training , 0.61 in validation , and 0.64 in prediction .these values are somewhat larger than those seen in table 3 .however , it must be pointed out that the mlp results were obtained with a smaller training set , so the efficacy of the two statistical methods appears to be about equal at this stage of development . that being the case , it is relevant to note that the recent mlp models represent a distinct advance earlier versions, and that those earlier statistical models already showed better performance over short - lived data sets than the conventional models of homma et al. and mller et al. the applications to prediction of atomic masses and beta - decay lifetimes demonstrate the predictive power of svms in two important problems of global nuclear modeling that involve function estimation .the final two applications will probe the performance of svms in global modeling of the discrete nuclear properties of parity and spin .in essence , these are problems of classification : `` which of a finite number of exclusive possibilities is associated with or implied by a given input pattern ? ''support vector machines were first developed to solve classification problems , and good svm classifier software is available on the web. 
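a minimal sketch of the regression set - up described above for the beta - decay problem , fitting log10 of the halflife so that the nine decades spanned by the data enter on an even footing ; one such model per even - oddness class is assumed to be built by the caller , and the kernel and parameter choices are left unspecified here .

```python
import numpy as np
from sklearn.svm import SVR

def fit_halflife_model(Z, N, T_half_seconds, **svr_options):
    # regress log10(T) on (Z, N); the log keeps the ~9 orders of magnitude
    # spanned by the short-lived beta emitters on a comparable scale
    X = np.column_stack([Z, N]).astype(float)
    y = np.log10(np.asarray(T_half_seconds, dtype=float))
    return SVR(**svr_options).fit(X, y)

def log_halflife_errors(model, Z, N, T_half_seconds):
    # rms and mean absolute error of log10(T), the two measures used in table 3
    y = np.log10(np.asarray(T_half_seconds, dtype=float))
    err = model.predict(np.column_stack([Z, N]).astype(float)) - y
    return np.sqrt(np.mean(err**2)), np.mean(np.abs(err))
```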
however , for convenience and uniformity we prefer to treat the parity and spin problems with the same svm regression technique as in the other examples , also using the pa choice of inner - product kernel . in the parity problem , the decision of the regression svm is interpreted to be positive parity [ negative parity ] if the machine s output is positive [ negative ] . in the spin problem , the spin assigned by the machine is taken to be correct if and only if the numerical output ( after rescaling ) is within of the correct value ( in units ) . as before , all data are taken from the brookhaven site . for parity and spin , it is especially natural to create separate svm models for the different even - oddness classes . however , as is well known , all ee nuclei have spin / parity . modeling this property is trivial for svms , so the ee class may be removed from further consideration . the data in and of itself permits us to do so . moreover , in the case of spin , the data itself establishes , with a high degree of certainty , that the spin of eo or oe nuclides takes half - odd integral values ( in units of ) , while the spin of oo nuclides is integral . although this formulation of the parity and spin problems introduces significant domain knowledge into the model - building process , the data alone provides adequate motivation . nuclei with spin values larger than 23/2 were not considered . the predictive performance that may be achieved with svm models of parity and spin is illustrated in tables 4 and 5 . performance is measured by the percentage of correct assignments . construction of both parity and spin models is based on an 80:10:10 partition of the data into training , validation , and test sets . ( as usual , the target distribution is realized only approximately . ) averaged over even - oddness classes , the overall performance of the parity svms is 97% correct on the training set and 95% on the validation set , with a predictive performance on the test set of 94% . obviously , assigning parity to nuclear ground states is an extremely easy task for support vector machines . one might expect quite a different situation for the spin problem : since there are 12 legitimate spin assignments for the eo or oe nuclides considered ( i.e. , obeying the rules for addition of angular momenta ) and also 12 for the oo class , the chance probability of a correct guess is low . it is then most remarkable that the svm spin models we have developed perform with very high accuracy in prediction as well as fitting and validation . while some success has been achieved previously in mlp modeling of parity and spin, consistent predictive quality within the 80 - 90 percentile range has been elusive . within main - stream nuclear theory and phenomenology , the problem of global modeling of ground - state spins has received little attention , and the few attempts have not been very successful . as a baseline , global nuclear structure calculations within the macroscopic / microscopic approach reproduce the ground - state spins of odd- nuclei with an accuracy of 60% ( agreement being found in 428 examples out of 713 ) . it should be mentioned that in the preliminary investigations described in ref . 19 , the tasks of global modeling of parity and spin with svms were in fact treated as classification rather than function - estimation problems . corresponding svm classifiers were created using established procedures.
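the regression - based decision rules described above reduce to a sign test for parity and a tolerance test for spin ; a sketch is given below . the tolerance value is illustrative only , since the number quoted in the original text is not reproduced here .

```python
def parity_decision(svm_output):
    # positive machine output -> positive parity, negative -> negative parity
    return +1 if svm_output >= 0.0 else -1

def spin_assignment_correct(svm_output, true_spin, tol=0.25):
    # the (rescaled) output counts as a correct spin assignment iff it lies
    # within `tol` (in hbar units) of the tabulated ground-state spin;
    # tol=0.25 is an assumed illustrative value
    return abs(svm_output - true_spin) <= tol
```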
based on an rbf kernel , results were obtained that surpass the available mlp models in quality , but are inferior to those reported here in tables 4 and 5 .global statistical models of atomic masses , beta - decay lifetimes , and nuclear spins and parities have been constructed using the methodology of support vector machines .the predictive power of these `` theory - thin '' models , which in essence are derived from the data and only the data , is shown to be competitive with , or superior to , that of conventional `` theory - thick '' models based on nuclear theory and phenomenology .conservative many - body theorists may be troubled by the `` black - box '' nature of the svm predictors , i.e. , the impenetrability of their computational machinery . however ,this alternative , highly pragmatic approach may represent a wave of the future in many fields of science already visible in the proliferation of density - functional computational packages for materials physics and eventually molecular biology , which , for the user , are effectively black boxes . while it is true that the statistical models produced by advances in machine learning do not as yet yield the physical insights of traditional modeling approaches , their prospects for revealing new regularities of nature are by no means sterile .this research has received support from the u.s . national science foundation under grant no .we acknowledge helpful discussions and communications with s. athanassopoulos , m. binder , e. mavrommatis , t. papenbrock , s. c. pieper , and r. b. wiringa . in the regression studiesreported herein , we have found the mysvm software and instruction manual created by s. rping ( dortmund ) to be very useful . e. t. jaynes , _ probability theory : the logic of science _ ( cambridge university press , cambridge , 2003 ) .s. haykin , _ neural networks : a comprehensive foundation _ , second edition ( mcmillan , new york , 1999 ) .d. e. rumelhart , g. e. hinton , and r. j. williams , in _ parallel distributed processing : explorations in the microstructure of cognition _ ,vol . 1 , edited by d. e. rumelhart _ et al . _( mit press , cambridge , ma , 1986 ) . j. hertz , a. krogh , and r. g. palmer , _ introduction to the theory of neural computation _ ( addison - wesley , redwood city , ca , 1991 ) .j. w. clark , t. lindenau , and m. l. ristig , _ scientific applications of neural nets _( springer - verlag , berlin , 1999 ) .v. n. vapnik , _ the nature of statistical learning theory _ ( springer - verlag , new york , 1995 ) .v. n. vapnik , _ statistical learning theory _ ( wiley , new york , 1998 ) .s. c. pieper and r. b. wiringa , _ annu ._ * 51 * , 53 ( 2001 ) .p. mller and j. r. nix , _j. phys .g _ * 20 * , 1681 ( 1994 ) .p. mller , j. r. nix , w. d. myers , and w. j. swiatecki , _ at .data nucl .data tables _ * 59 * , 185 ( 1995 ) .m. samyn , s. goriely , p .- h .heenen , j. m. pearson , and f. tondeur , _ nucl .phys . _ * a700 * , 142 ( 2002 ) ; s. goriely , m. samyn , p .- h .heenen , j. m. pearson , and f. tondeur , _ phys .c _ * 66 * , 024326 ( 2002 ) .m. bender , p .- h .heenen , and p .-reinhard , _ rev .* 75 * , 121 ( 2003 ) ; m. bender , g. f. bertsch , and p .- h .heenen , _ phys ._ * 94 * , 102503 ( 2005 ) .g. f. bertsch , b. sabbey , and m. uusnkki , _ phys ._ c * 71 * , 054311 ( 2005 ) .k. a. gernoth and j. w. clark , _ neural networks _ * 8 * , 291 ( 1995 ) .j. w. clark , e. mavrommatis , s. athanassopoulos , a. dakos , and k. a. 
gernoth , _ fission dynamics of atomic clusters and nuclei _ , edited by d. m. brink , f. f. karpechine , f. b. malik , and j. da providencia ( world scientific , singapore , 2001 ) , p. 76 .[ nucl - th/0109081 ] s. athanassopoulos , e. mavrommatis , k. a. gernoth , and j. w. clark , _ nucl ._ * a743 * , 222 ( 2004 ) , and references therein .s. athanassopoulos , e. mavrommatis , k. a. gernoth , and j. w. clark , in _ advances in nuclear physics , proceedings of the hellenic symposium on nuclear physics _, in press ( 2005 ) .[ nucl - th/0509075 ] s. athanassopoulos , e. mavrommatis , k. a. gernoth , and j. w. clark , in _ advances in nuclear physics _ , proceedings of the hellenic symposium on nuclear physics , in press ( 2006 ) .[ nucl - th/0511088 ] h. li , j. w. clark , e. mavrommatis , s. athanassopoulos , and k. a. gernoth , in _ condensed matter theories _ , vol .20 , edited by j. w. clark , r. m. panoff , and h. li ( nova science publishers , hauppauge , ny , 2006 ) , p. 505 .[ nucl - th/0506080 ] j. mercer , _ transactions of the london philosophical society ( a ) _ * 209 * , 415 ( 1909 ). m. o. stitson , a. gammerman , v. vapnik , v. vovk , c. watkins , and j. weston , in _ advances in kernel methods support vector learning _ , edited by b. schkopf , c. burges , and a. j. smola ( mit press , cambridge , ma , 1999 ) , p. 285 .g. audi , a. h. wapstra , c. thibault , j. blachot , and o. bersillon _ nucl .phys . _ * a729 * ( 2003 ) .s. gazula , j. w. clark , and h. bohr , _ nucl .phys . _ * a540 * , 1 ( 1992 ) . s. athanassopoulos , e. mavrommatis , k. a. gernoth , and j. w. clark , to be published .a. staudt , e. bender , k. muto , and h. v. klapdor - kleingrothaus , _ at .data nucl .data tables _ * 44 * , 132 ( 1990 ) .m. hirsch , a. staudt , k. muto , and h. v. klapdor - kleingrothaus , _ at .data nucl .data tables _ * 53 * , 165 ( 1993 ) .h. homma , e. bender , m. hirsch , k. muto , and h. v. klapdor - kleingrothaus , _ phys .c _ * 54 * , 2972 ( 1996 ) .p. mller , j. r. nix , and k. l. kratz , _ at .data nucl .data tables _ * 66 * , 131 ( 1997 ) .n. costiris , a. dakos , e. mavrommatis , k. a. gernoth , and j. w. clark , to be published .t. joachims ( 2004 ) , multi - class support vector machine , http://www.cs.cornell.edu/people/tj/svm_light/svm_multiclass.html ( 2004 ) .j. w. clark , s. gazula , k. a. gernoth , j. hasenbein , j. s. prater , and h. bohr , in _ recent progress in many - body theories _ , vol . 3 , edited by t. l. ainsworth , c. e. campbell , b. e. clements , and e. krotscheck ( plenum , new york , 1992 ) , p. 371 .k. a. gernoth , j. w. clark , j. s. prater , and h. bohr , _ phys ._ * b300 * , 1 ( 1993 ) .p. mller and j. r. nix , _ nucl .phys . a _* 520 * , 369c ( 1990 ) .s. rping , mysvm , http://www-ai.cs.uni-dortmund.de/software/mysvm/ ( 2004 ) . | advances in statistical learning theory present the opportunity to develop statistical models of quantum many - body systems exhibiting remarkable predictive power . the potential of such `` theory - thin '' approaches is illustrated with the application of support vector machines ( svms ) to global prediction of nuclear properties as functions of proton and neutron numbers and across the nuclidic chart . based on the principle of structural - risk minimization , svms learn from examples in the existing database of a given property , automatically and optimally identify a set of `` support vectors '' corresponding to representative nuclei in the training set , and approximate the mapping in terms of these nuclei . 
results are reported for nuclear masses , beta - decay lifetimes , and spins / parities of nuclear ground states . these results indicate that svm models can match or even surpass the predictive performance of the best conventional `` theory - thick '' global models based on nuclear phenomenology . |
asymptotic analyses have been widely conducted in various research areas related to wireless communications .although they do not quite provide the same information as complete ( non - asymptotic ) results , they usually give very useful insights , while being much more tractable and available for larger classes of models .for example , the asymptotic coding gain in coding theory characterizes the difference of the signal - to - noise ratio ( snr ) levels between the uncoded system and coded system required to reach the same bit error rate ( ber ) in the high - snr regime ( or equivalently , the low - ber regime ) ; the diversity gain and the multiplexing gain introduced in are also high - snr asymptotic metrics that crisply capture the trade - off between the snr exponents of the error probability and the data rate in mimo channels ; in wireless networks , the asymptotic transmission capacity gives the network performance when the density of interferers goes to 0 , or equivalently , in the high signal - to - interference ratio ( sir ) regime .these asymptotic analyses provide simple and useful results that capture important design trade - offs . in this paper , we focus on the asymptotic analyses of the sir distribution in wireless networks , which is a key metric that determines many other performance metrics , such as the achievable reliability , transmission rate , and the delay it is instrumental for the analysis and design of interference - limited wireless networks .our analysis is not limited to one scenario but comprehensively covers a wide range of models , including both ad hoc and cellular networks , both singular and bounded path loss laws , and general stationary point processes and general fading unless otherwise specified . besides, we also consider networks where each location of the transmitter ( or antenna ) has another transmitter ( or antenna ) colocated , which results in non - simple point processes . in all scenarios, we mainly analyze the asymptotic properties of the sir distribution . for cellular networks with nakagami- fading and both singular and bounded path loss models , it has been observed in that the success probability , defined as the complementary cumulative distribution function ( ccdf ) of the sir , for different point processes are horizontally shifted versions of each other ( in db ) in the high - reliability regime .generally , in non - poisson networks , the success probability is intractable . under this observation ,however , we can obtain good approximations of the lower part of the sir distribution ( coverage probabilities above 3/4 ) for non - poisson networks if we know the result of the poisson networks and the corresponding shift amounts . 
for the tail of the sir distribution , a similar property holds , which has been proved in , if the singular path loss model is applied .in general , the horizontal gaps of the sir distributions between a point process and the poisson point process ( ppp ) at both ends differ slightly , so for higher accuracy , the two asymptotic regimes should be treated separately .this paper summarizes the known asymptotic properties , derives results for scenarios that have not been previously studied , and gives insight about the factors that mainly determine the behavior of the sir .the reasons that we focus on the asymptotic sir analysis include the following : 1 .it captures succinctly the performance of the various network models ( especially for the high - reliability regime ) .it permits the isolation of the key network properties that affect the sir distribution .3 . it gives insight into when it is safe to use the singular path loss model instead of a bounded one .4 . it shows when a nearest - interferer approximation is accurate .the tail determines whether the mean sir ( and higher moments ) exist . for poisson networks ,the sir distribution has been derived in exact analytical form in a number of cases , namely for bipolar ad hoc networks with general fading , for ad hoc and cellular networks with successive interference cancellation , for multitier cellular networks ( hetnets ) with strongest - on - average base station association and base station cooperation without retransmissions and with retransmissions , for cellular networks with intra - cell diversity and/or base station silencing and for multitier cellular networks with instantaneously - strongest base station association . while for some specific assumptions ,closed - form expressions are available , the results for the sir distribution typically involve one or more integrals . the only exact result for a non - poisson network is given in for cellular networks whose base stations form a ginibre process ; it contains several nested integrals and infinite sums and is very hard to evaluate numerically . from these exact results ,simple asymptotic ones can often be derived , see , e.g. , , where results on the diversity gain are extracted from the more complicated complete results .the true power of the asymptotic approach , however , becomes apparent when general non - poisson models are considered , for which essentially no exact results are available . in general ad hoc networks modeled using the bipolar model ,the asymptotic sir distribution as the interferer density goes to 0 has been analyzed for rayleigh fading in and for general fading in . in (* ch . 5 ), the interference in general ad hoc networks has been analyzed .relevant to our work here are the bounds on the ccdf of the interference and the asymptotic result on the interference distribution for both singular and bounded path loss models . in , we analyzed the asymptotic properties of the signal - to - interference - plus - noise ratio ( sinr ) distribution in general cellular networks in the high - reliability regime for nakagami- fading and both singular and bounded path loss models . 
in , a simple yet versatile analytical framework for approximating the sir distribution in the downlink of cellular systemswas proposed using the mean interference - to - signal ratio , while considers general cellular networks with general fading and singular path loss and studies the asymptotic behavior of the sir distribution both at 0 and at infinity .the main contribution of the paper is a comprehensive analysis of the asymptotic properties of the ccdf of the sir , often referred to as the _ success probability _ or _coverage probability_. regarding the transmitter / base station ( bs ) distributions , we do not restrict ourselves to simple point processes ( where there is only one point at one location almost surely ) , which are the ones used in almost all the literature , but also consider duplicated-2-point " point processes .the duplicated-2-point point processes are defined as the point processes where there are two points at the same location , i.e. , each point is duplicated .the motivation to study this model is three - fold : ( 1 ) by comparing the results with those for standard ( simple ) network models , it becomes apparent whether the asymptotic behavior critically depends on the fact that the distances to the desired transmitter and to the interferers are all different ( a.s . ) .( 2 ) there are situations where it is natural to consider a model where two nodes are at the same distance , such as when edge users ( who have two base stations at equal distance ) in cellular networks are analyzed or when spectrum sharing between different operators is assumed and the two operators share a base station tower .( 3 ) the only kind of networks that are not captured by simple models are those that are formed by two fully correlated point processes .the duplicated " models fills this gap .the asymptotic properties of are summarized in table [ t_total ] with respect to as and for both singular and bounded path loss models , both ad hoc models and cellular models , and both simple point processes and duplicated-2-point point processes .our results show that the asymptotic sir behavior is determined by two factors and . is the nakagami fading parameter and , where is the path loss exponent .as will be apparent from the proofs , the pre - constants are also known in some cases , not just the scaling behavior . the indicates that the results have been derived in the literature marked with the corresponding reference; the marker ( * ) indicates that the results are only proven for the poisson case with rayleigh fading , while ( * * ) indicates that the results are only proven for the case of rayleigh fading and the duplicated-2-point point process where the distinct locations form a ppp ..asymptotic properties ( simple " : simple point processes ; duplicated " : duplicated-2-point point processes ) [ cols="^,^,^",options="header " , ] [ t_sir0 ] we can simply modify the proof of theorem 1 in and prove that , as . here the nearest interferer at most at distance , since the point at is duplicated .for the singular path loss model , we have + \mathbb{p}(r\geq b ) \mathbb{e}_{r}[\mathbb{p}\left ( h_0 < \theta h \right ) \mid r \geq b ] \nonumber\\ & = \mathbb{p}(r < b ) \mathbb{e}_{r}[\mathbb{p}\left ( h_0 < \theta h \ell(b)^{-1 } \ell(r)\mid r \right)\mid r < b ] + \mathbb{p}(r\geq b ) \mathbb{p}\left ( h_0 < \theta h \right ) .\label{eq_adhoc}\end{aligned}\ ] ] from section [ sec::adsimpler0 ] , we know that the first term in is and the second term is .so , , as . 
for the bounded path loss model, we can apply the same methods as in the corresponding cases in section [ sec::adsimpler0 ] and obtain the results in table [ t_sir0 ] .we have .thus , . we can easily obtain the results in table [ t_sir0 ] .we observe that in all cases , the lower tail is identical to the one when all interferers are considered .for the duplicated point processes , this implies that duplicating only the nearest interferer ( as is the case for edge users in cellular networks ) again results in the same scaling . for both singular and bounded path loss models , we have since < \infty ] and < \infty ] and ] , we have & = \lim_{\theta \to \infty } \int_0^{\infty } \frac{m^m}{\gamma(m ) } \left ( \theta^{m+i } x^{i } e^{-\theta m x } \right ) x^{m-1 } e^{-mx } \mathrm{d}x \nonumber\\ & = \lim_{\theta \to \infty } \frac{\gamma(m+i)}{m^i \gamma(m ) } \left(\frac{\theta}{1+\theta}\right)^{m+i } \int_0^{\infty } \frac{\left(m \left(1+\theta\right ) \right)^{m+i}}{\gamma(m+i ) } x^{m+i-1 } e^{-m(1+\theta)x } \mathrm{d}x \nonumber\\ & = \lim_{\theta \to \infty } \frac{\gamma(m+i)}{m^i \gamma(m ) } \left(\frac{\theta}{1+\theta}\right)^{m+i } \nonumber\\ & = \frac{\gamma(m+i)}{m^i \gamma(m)}. \label{asymp_fading}\end{aligned}\ ] ] when , let , and we have & = \lim_{\theta \to \infty } \frac{\delta \gamma(m - i)}{m^{m - i } } \frac { \frac{\partial } { \partial \theta } \mathbb{e } \left [ 1- \int_0^{\theta mr^{\alpha } \tilde{i } } \frac{1}{\gamma(m - i ) } x^{m - i -1 } e^{-x } \mathrm{d}x \right]}{\frac{\partial } { \partial \theta } \theta^{-\delta } } \nonumber\\ & \stackrel{(a)}{= } \lim_{\theta \to \infty } \frac{\delta \gamma(m - i)}{m^{m - i } } \frac { \mathbb{e } \left [ 1- \int_0^{\theta mr^{\alpha } \tilde{i } } \frac{1}{\gamma(m - i ) } x^{m - i -1 } e^{-x } \mathrm{d}x \right]}{\theta^{-\delta } } \nonumber\\ & = \lim_{\theta \to \infty } \frac{\delta \gamma(m - i)}{m^{m - i } } \frac { \mathbb{e } \left [ \bar{f}_{g_i}(\theta mr^{\alpha } \tilde{i } ) \right]}{\theta^{-\delta } } , \label{asymp_casei } \end{aligned}\ ] ] where follows by applying the lhospital s rule reversely and is the ccdf of .using the same method as in the proof of theorem 4 in , we obtain }{\theta^{-\delta } } \nonumber\\ & = \lim_{\theta \to \infty } \theta^{\delta } \sum_{x \in \phis } \bar{f}_{g_i}\left ( \theta m \|x\|^{\alpha } \left(\sum_{y \in \phis \setminus \{x\}}\left ( \left(h_{y,1}+h_{y,2}\right ) \|y\|^{-\alpha}\right)\right)\right ) \mathbf{1}(\phi(b(o,\|x\| ) = 0 ) ) \nonumber\\ & \stackrel{(b)}{= } \lim_{\theta \to \infty } \frac{\theta^{\delta } \lambda}{2 } \int_{\mathbb{r}^2 } \mathbb{e}_o^ ! \left [ \bar{f}_{g_i}\left ( \theta m \|x\|^{\alpha } \left(\sum_{y \in \phi_x}\left ( \left(h_{y,1}+h_{y,2}\right ) \|y\|^{-\alpha}\right)\right)\right ) \mathbf{1}\left(b(o,\|x\| ) \ ; { \rm empty } \right ) \right ] \mathrm{d}x \nonumber\\ & \stackrel{(c)}{= } \lim_{\theta \to \infty } \frac{\lambda}{2 } \int\limits_{\mathbb{r}^2 } \mathbb{e}_o^ ! \left [ \bar{f}_{g_i}\left ( m \|x\|^{\alpha } \left(\sum_{y \in \phi_{x\theta^{-\delta/2}}}\left ( \left(h_{y,1}+h_{y,2}\right ) \|y\|^{-\alpha}\right)\right)\right ) \mathbf{1}\left(b(o,\|x\|\theta^{-\delta/2 } ) \ ; { \rm empty } \right ) \right ] \mathrm{d}x \nonumber\\ & \stackrel{(d)}{= } \frac{\lambda}{2 } \int_{\mathbb{r}^2 } \mathbb{e}_o^ ! \left [ \bar{f}_{g_i}\left ( m \|x\|^{\alpha } i_{\infty } \right ) \right ] \mathrm{d}x \nonumber\\ & \stackrel{(e)}{= } \frac{\lambda}{2 } m^{-\delta } \mathbb{e}_o^ ! 
\left [ i_{\infty}^{-\delta } \right ] \int_{\mathbb{r}^2 } \bar{f}_{g_i}\left ( \|x\|^{\alpha } \right ) \mathrm{d}x \nonumber\\ & = \frac{\lambda}{2 } m^{-\delta } \mathbb{e}_o^ ! \left [ i_{\infty}^{-\delta } \right ] \pi \delta \int_0^{\infty } r^{\delta - 1 } \bar{f}_{g_i}\left ( r \right ) \mathrm{d}r \nonumber\\ & = \frac{\lambda}{2 } \pi m^{-\delta } \mathbb{e}_o^ ! \left [ i_{\infty}^{-\delta } \right ] \mathbb{e}[g_i^{\delta } ] \nonumber\\ & = \frac{\lambda \pi m^{-\delta } \gamma(m - i+\delta)}{2\gamma(m - i ) } \mathbb{e}_o^ ! \left [ i_{\infty}^{-\delta } \right ] , \label{asymp_-delta}\end{aligned}\ ] ] where follows from the campbell - mecke theorem , is a translated version of , follows by using the substitution , follows by the dominated convergence theorem and the fact that and thus , follows by using the substitution .so , by substituting into , it yields that = \frac{\delta \lambda \pi \gamma(m - i)}{2m^{m - i+\delta } } \mathbb{e}_o^ ! \left [ i_{\infty}^{-\delta } \right ] \mathbb{e}[g_i^{\delta } ] .\label{asymp_casei_final}\end{aligned}\ ] ] when , let , and we have & = \lim_{\theta \to \infty } \mathbb{e } \left [ \theta^{\delta } e^{-\theta mr^{\alpha}\tilde{i } } \right ] \nonumber\\ & = \lim_{\theta \to \infty } \theta^{\delta } \mathbb{e } \left [ \bar{f}_{g_m}\left(\theta mr^{\alpha } \tilde{i } \right ) \right ] \nonumber\\ & \stackrel{(a)}{= } \frac{\lambda}{2 } \pi m^{-\delta } \mathbb{e}_o^ ! \left[ i_{\infty}^{-\delta } \right ] \mathbb{e}[g_m^{\delta } ] \nonumber\\ & = \frac{\lambda}{2 } \pi m^{-\delta } \gamma(1+\delta ) \mathbb{e}_o^ ! \left [ i_{\infty}^{-\delta } \right ] , \label{asymp_case0}\end{aligned}\ ] ] where follows from .r. k. ganti , j. g. andrews , and m. haenggi , `` high - sir transmission capacity of wireless networks with general fading and node distribution , '' _ ieee transactions on information theory _ ,57 , no . 5 ,31003116 , may 2011 .m. haenggi , j. g. andrews , f. baccelli , o. dousse , and m. franceschetti , `` stochastic geometry and random graphs for the analysis and design of wireless networks , '' _ ieee journal on selected areas in communications _27 , no . 7 , pp . 10291046 , sep .a. guo and m. haenggi , `` asymptotic deployment gain : a simple approach to characterize the sinr distribution in general cellular networks , '' _ ieee transactions on communications _ , vol .63 , no . 3 , pp .962976 , mar . 2015 .r. k. ganti and m. haenggi , `` asymptotics and approximation of the sir distribution in general cellular networks , '' _ ieee transactions on wireless communications _ , vol .15 , no . 3 , pp .21302143 , mar .f. baccelli , b. blaszczyszyn , and p. mhlethaler , `` stochastic analysis of spatial and opportunistic aloha , '' _ ieee journal on selected areas in communications _ ,27 , no . 7 , pp . 11051119 , sep. 2009 .x. zhang and m. haenggi , `` a stochastic geometry analysis of inter - cell interference coordination and intra - cell diversity , '' _ ieee transactions on wireless communications _ , vol . 13 , no . 12 , pp .66556669 , dec . 2014 .b. blaszczyszyn and h. p. keeler , `` studying the sinr process of the typical user in poisson networks by using its factorial moment measures , '' _ ieee transactions on information theory _ ,61 , no . 12 , pp .67746794 , dec . 2015 .r. giacomelli , r. k. ganti , and m. haenggi , `` outage probability of general ad hoc networks in the high - reliability regime , '' _ ieee / acm transactions on networking _ , vol .19 , no . 4 , pp . 11511163 ,aug . 2011 .a. guo , y. 
zhong , m. haenggi , and w. zhang , `` the gauss - poisson process for wireless networks and the benefits of cooperation , '' _ ieee transactions on communications _ ,64 , no . 7 , pp . 29852998 , jul . | in the performance analyses of wireless networks , asymptotic quantities and properties often provide useful results and insights . the asymptotic analyses become especially important when complete analytical expressions of the performance metrics of interest are not available , which is often the case if one departs from very specific modeling assumptions . in this paper , we consider the asymptotics of the sir distribution in general wireless network models , including ad hoc and cellular networks , simple and non - simple point processes , and singular and bounded path loss models , for which , in most cases , finding analytical expressions of the complete sir distribution seems hopeless . we show that the lower tails of the sir distributions decay polynomially with the order solely determined by the path loss exponent or the fading parameter , while the upper tails decay exponentially , with the exception of cellular networks with singular path loss . in addition , we analyze the impact of the nearest interferer on the asymptotic properties of the sir distributions , and we formulate three crisp conjectures that if true determine the asymptotic behavior in many cases based on the large - scale path loss properties of the desired signal and/or nearest interferer only . stochastic geometry , point processes , asymptotics , interference , sir distribution . |
it is well known that precision and accuracy of astronomical observations , both optical and radio , made through the earth s atmosphere depend on the elevation at which the object is observed . these errors grow with decreasing elevation due to larger air mass and difficulties in modelling of refraction effects at low elevation . from this point of view observations should be made in the near - zenith zone when possible . on the other hand , inclusion in processing of observations made at low elevations is important when definite groups of highly correlated parameters , for instance station coordinates and zenith troposphere delays , are estimated simultaneously . in such a case , using observations made in the widest possible range of elevation allows us to mitigate the correlations between unknowns and improve the solution . to meet these mutually exclusive requirements , proper elevation - dependent weighting ( edw ) of observations is used . in a special case of a step - like weighting function , i.e. rejection of the observations made at elevations less than a given limit, such a limit usually is called the cut - off elevation angle ( cea ) . it was shown in many studies that elevation - dependent weighting may have a significant impact on the results of processing of space geodesy observations . in particular , several studies of this effect were made by the goddard and vienna vlbi analysis groups in the framework of the ivs vlbi2010 committee activity . they investigated the influence of cea and edw on geodetic results such as earth orientation parameters ( eop ) , baseline length repeatability , troposphere parameters , and station heights . those results were based on simulation . gipson in used another approach to edw . he applied elevation - dependent additive noise to the measurement error instead of using a weighting factor as is usually done . he tested his method with the actual cont05 observations . results of the mentioned and other studies are sometimes contradictory . this gave an impulse to the present work , where results are presented of an investigation of the impact of the cea and edw on the baseline length repeatability and eop estimates .
for vlbi delay , measurement error coming from correlator is multiplied by two values computed for both the stations . in our test, were used for cea test , and were used for continuous edw mode .test results obtained with different cea are shown in figure [ fig : cea_baselines ] .the case of includes all the observations without weighting , since no cont05 observations were made at the elevation less than 4 .= 0.48 + = 0.48 table [ tab : edw_baseline ] shows edw test results .different edw modes are denoted as w_e_p , where e and p are and in eq.([eq : edw ] ) .test results are given for quadratic approximation in percent with respect to the case of cea with .one can see that several edw modes show about the same improvement in the baseline length repeatability ..comparison of the baseline repeatability obtained with different edw models .see explanation in text . [cols="^,^,^,^,^ " , ]the preliminary conclusions from this test are the following . *the baseline length repeatability steadily grows with the cea increasing , remaining practically the same in the cut - off angle range from 3 ( i.e. no cut - off for the cont05 ) to 9 . *the best result is obtained when the edw elevation - depending weighting is applied to the low - elevation observations .however , the test results are not always unambiguous .further adjustment of the weighting method may be fruitful . *the xp , yp and ut1 uncertainties grow with the increasing cut - off angle after about 10 . most probably , this reflects the fact that only about 6% of the total number of cont05 observations were made at the elevations below 10 .the xc and yc uncertainties and scatter depend on the cea much less . *xp bias w.r.t .igs slightly depends on the cea , except the maximum tested cea values , evidently unrealistic .in contrast , yp bias substantially changes with increasing cea .most probably , this can be explained by the cont05 network orientation , for which the longitude of the central meridian just corresponds to the y direction of the terrestrial coordinate system .* some statistics such as the uncertainty and the scatter of the xc and yc , as well as the wrms of xp and yp w.r.t .igs have the minimum at the cea around , which is interesting and deserves a supplement investigation . *as one can expect , the correlations between eop comprising xp and yp grow with increasing cea , but remain small due to good cont05 network geometry .the same can be expected for the ivs2010 network .the correlation between xc and yc remain practically the same for all tested cea , except the maximum tested cea value , evidently unrealistic .finally , we can conclude that inclusion of the low - elevation observations , properly weighted , improves the baseline length repeatability and eop results . 
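for concreteness , a rough sketch of how such weighting might be applied to the delay errors is given below . the functional form of the smooth weighting factor is a stand - in only ( the exact expression and parameter values of the edw factor are not reproduced here ) ; only the general scheme multiplying the correlator error by a factor for each of the two stations , or inflating the error below a cut - off follows the text .

```python
import numpy as np

def edw_factor(elev_rad, e0=0.05, p=1.0):
    # hypothetical stand-in for the smooth elevation-dependent factor:
    # a mapping-function-like growth of the error towards low elevations
    return 1.0 / np.sin(elev_rad + e0) ** p

def reweight_delay_error(sigma, elev1_rad, elev2_rad, cutoff_rad=None):
    """Scale a VLBI delay error by elevation-dependent factors of both stations.

    If cutoff_rad is given, a step-like (CEA) scheme is mimicked instead by
    strongly inflating the error of observations below the cut-off."""
    sigma = np.asarray(sigma, dtype=float)
    if cutoff_rad is not None:
        low = (np.asarray(elev1_rad) < cutoff_rad) | (np.asarray(elev2_rad) < cutoff_rad)
        return np.where(low, 1.0e6 * sigma, sigma)
    return sigma * edw_factor(np.asarray(elev1_rad)) * edw_factor(np.asarray(elev2_rad))
```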
on the contrary , filtering the observations using the cut - off elevation method may lead to degradation of geodetic results . however , it should be mentioned that the conclusions drawn from the results obtained in this paper have been proven with standard geodetic vlbi observations , where rather few observations were made at low elevations , as mentioned above . perhaps , special r&d sessions with a more uniform distribution of observations over the sky , including observations at very low elevations , may be useful for a more detailed study of the impact of the low - elevation observations on geodetic parameters obtained from vlbi observations and of their optimal processing . it ought to be mentioned that all the edw modes considered in this paper in fact modify only diagonal elements of the corresponding covariance matrix . according to gipson s work , the best result can be achieved when correlations between observations are also accounted for . it seems interesting to investigate how this approach will work in the kf estimator . gipson , j. incorporating correlated station dependent noise improves vlbi estimates . in : j. boehm , a. pany , h. schuh ( eds . ) , proc . 18th european vlbi for geodesy and astrometry working meeting , vienna , austria , 12 - 13 apr 2007 , geowissenschaftliche mitteilungen , heft nr . 79 , schriftenreihe der studienrichtung vermessung und geoinformation , technische universitaet wien , 2007 , 129 - 134 . | in this paper , results are presented on studies which have been performed to investigate the impact of the cut - off elevation angle ( cea ) and elevation - dependent weighting ( edw ) on the eop estimates and baseline length repeatability . for this test , cont05 observations were processed with different cea and edw , keeping all other options the same as used during the routine processing . uncertainties and biases , as well as correlations between estimated parameters , have been investigated . it has been shown that a small cea , up to about 8 - 10 degrees , does not have a large impact on the results , and applying edw allows us to get better results ( smaller errors ) . however , this result has been proven with standard geodetic vlbi observations , where rather few observations were made at low elevations . perhaps , special r&d sessions with a more uniform distribution of observations over elevation may be useful for a more detailed study on the subject . 5th ivs general meeting , st . petersburg , russia , 3 - 6 march 2007 |
entropy is a measure of the _ size _ of a data distribution contained within a bounded region ( distribution support ) of some space . in a thermodynamic contextthis distribution size is interpreted as the number of quantum states accessible to a dynamical system given macroscopic constraints . more generally ,if a measure space is partitioned , the measure distribution size is estimated by the effective number of partition elements given the distribution weighting .definition of the space partition is a central element of entropy calculation .the partition is sometimes defined as a small - scale limiting partition of the space ( _ e.g. , _ thermodynamic limit , limit procedures in classical analysis ) , sometimes based on properties of the data distribution itself and/or on the analysis goals ( as in wavelet analysis ) .we define entropy as an explicit function of the partition definition , a scaled binning of the measure space .we calculate the entropy and related quantities as functions of the partition scale(s ) , similar to the multi - scale partition approaches used in fractal dimension calculations and image deconvolution . unlike some analysis methods we invoke no scale limits .quantities are defined on bounded scale intervals explictly excluding asymptotic limits ; the analysis system is in this sense _ scale local_. the end result of this approach is an entropy which represents _ arbitrary _ data correlations as a distribution on scale , particularly useful for problems where the detailed scaling behavior of correlations over substantial scale intervals is of interest ( _ e.g. , _ condensation , coalescence , critical phenomena , strange attractors ) , where instrumental effects may distort scale distributions , and where the correlation structure is not simply expressible as a power law or other elementary function . in this paper we describe precision binning methods , define the basic scale - local entropy measure and generalize other aspects of information theory to define the scale - dependent entropy difference or information between an object distribution and a model reference as a differential correlation measure . based on scale derivatives of entropy and informationwe define scale - local dimension and dimension transport as generalizations of conventional counterparts based on limit concepts .we apply these correlation measures to several simulations and real data analysis problems .the entropy definition employed here is based on , the rank- correlation integral at scale .given a data distribution in a -dimensional primary measure space spanned by variables , we consider a set of corresponding correlation spaces containing all possible -point clusters of data points ( -tuples ) .there is one such -point distribution ( in a -fold cartesian product space ) for each unique value .the -point correlation integral is the projection of the -point distribution onto its difference subspace spanned by , integrating over the sum variable(s ) . 
the integration limit of the correlation integral on the difference variables is in the simplest case ( isotropic binning ) the single scale of the analysis .the reciprocal of the correlation integral estimates the _ effective _ bin number in the -dimensional difference subspace .it s counterpart in the primary measure space is the root , the effective bin number in the primary space .defining entropy as the logarithm of the effective primary - space bin number is consistent with entropy as a logarithmic size measure and is a generalization of the thermodynamic definition , the logarithm of the number of accessible states .the correlation integral can be approximated by binning the primary measure space , and expressed in terms of normalized bin contents , in which case .this results in the rank- rnyi entropy \simeq \frac{1}{1-q}\log \left[\sum_{i=1}^{m(e)}p_{i}(e)^q\right].\ ] ] the entropy , the number of occupied bins , and the bin probability are explicit functions of the binning scale .more generally , a non - isotropic binning ( one utilizing bins without unit aspect ratio ) would imply a multidimensional scale space .given scale - local entropy we define scale - local information as a basis for differential comparisons between object and reference distributions , data and model .there are a number of possible information definitions from information theory and topology , with significant differences in performance .we define information here as the difference between entropies for object and reference distributions .this implies that the effective bin number for an object distribution is compared _ in ratio _ to that of a reference .nonzero information implies _ multiplicative _ reduction of effective bin number by increased correlation structure in the object distribution relative to the reference ( cumulant analysis is an alternative differential approach emphasizing _ linear or additive _ reduction of distribution size ) .scale - local information is then defined as information so defined provides a differential comparison between a data or _object _ distribution and a model or _reference_. for example , the reference distribution for an arbitrary point set would be a distribution with the same number of points which maximally ` fills ' the bounded support a uniform random distribution . since the uniform reference is useful in many applications we derive explicit analytic forms for its entropy and dimension .although these scale - local analysis methods are completely general as to the nature of the measure distribution , for the purpose of illustration we emphasize point sets in this paper .the measure distribution is then a set of points in a -dimensional embedding space ( _ e.g. , _ the distribution of particles from a heavy - ion collision in momentum space ) .the analysis begins by applying a partition to the embedding space . for algebraic simplicitywe consider a grid of -dimensional cubes , an isotropic binning . at each scalethere is a continuum of possibilities for the relative position of the binning system on the embedding space .differing partition placement effects the analysis in general , and there is no _ a priori _ reason to prefer any single placement .thus , we average ( dither ) over all partition placements to calculate the entropy at each scale .we define a dithering phase for each of the embedding - space dimensions .the relationship between the partition system and the measure distribution is controlled by varying . 
to implement ditheringwe calculate the correlation integral of an event times at each scale , incrementing each time .finally we average over these results to obtain the entropy .the dithered entropy is thus .\end{aligned}\ ] ] where is the bin probability of the bin for binning phase and scale . is normalized so that at each scale and dithering phase , the sum taken over all occupied bins in each partition .a simple application of scale - local entropy and dithered binning illustrates the analysis process .we obtain the scaled entropy of a 2d uniform distribution of randomly generated points on a unit - square support for several values of index .a monte carlo simulation with 50k points is shown in figure [ 1drgud ] along with analysis results for . .a box plot of the distribution itself is shown in the left panel , the right panel shows the measured entropy for ( dashed line ) , ( dotted line ) , and ( dot - dashed line).,width=384 ] to interpret these results we consider small- , intermediate- , and large - scale regimes . in the small - scale limit , well below the point - separation scale ( ) , each distribution point occupies a single bin ; there are n occupied bins ( ) , each with bin probability .thus , at small scale the entropy approaches =\log n.\ ] ] at intermediate scales we idealize the uniform point set to a continuum ( ) .the number of occupied bins is then simply and the bin probability is giving . at scales substantially greater than the boundary scale ( ) the entire distribution is contained in a single bin : and .the rank- entropy thus vanishes at scales much larger than the distribution boundary size .the detailed -dependence near the particle - separation and boundary scales is amenable to an analytic treatment .information is a relative quantity , a matter of definition .it is impossible to make an absolute determination of the information content of an arbitrary distribution . using a maximum - entropy reference we can measure information relative to a distribution which is minimally correlated ( given certain constraints ) .this motivates us to derive an algebraic form for the scale - local entropy of a _ bounded uniform distribution _ ( bud ) .this distribution represents a maximum - entropy hypothesis within a boundary , a maximum filling of the distribution support .this is a correlation reference from which any object distribution may deviate with reduced entropy .the derivation is presented in two parts : scales below and above the boundary scale . for partitions below the boundary scalethere is at least one bin in the interior of the embedding space . to derive an analytic form for the entropy of a bud we consider a two - dimensional distribution ( andlater generalize to dimensions ) .the bud is defined on a square support with side length . because the distribution is uniform the probability of finding a point in any given binis simply determined by the bin area .figure [ dithervariables2d ] shows a bud binned with a general rectangular binning ( thin dark lines ) .we calculate the bin contents for each bin ( as a function of scale ) and integrate over all possible dithering configurations to determine the analytic form of the entropy . for bins that are entirely contained within the embedding space ( interior bins )the bin probability is trivial : is the area of the bin divided by the total area of the support , independent of bin dithering . 
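written out in symbols (restating the statement above), an interior bin of a uniform distribution on a square support of side l, binned with rectangular bins of sides e_x and e_y, has probability
\[ p_{\text{int}} = \frac{e_x e_y}{l^2}, \]
so each interior bin contributes \((e_x e_y / l^2)^q\) to the dither-averaged correlation integral, independently of the dithering phases.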
for edge and corner binsthe problem is more complicated .we consider each bin type interior ( white ) , edge ( light grey ) and corner ( dark grey ) separately .corner bins contain both an x- and y - axis support edge .for all dithering configurations there are four corner bins .the effective area of these corner bins is scaled down by the fraction of the bin along the axis ( ] ) .we define and express the amount of overlap between the last bin on axis and the edge of the support along that axis as . with these definitions , calculating the contribution to the correlation integral of the corner bins is a matter of integrating over all values using the relevant bin probabilities .labeling the corner bins from right to left starting with the upper left bin we write down the -dependent corner bin probabilities as ^q\left ( \frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q \\ \nonumber p_{\text{c}}^q&=&[\phi_{\text{x}}+\delta_{\text{x}}-\text{int } ( \phi_{\text{x}}+\delta_{\text{x}})]^q(1-\phi_{\text{y}})^q\left ( \frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q \\ \nonumber p_{\text{d}}^q&=&[\phi_{\text{x}}+\delta_{\text{x}}-\text{int } ( \phi_{\text{x}}+\delta_{\text{x}})]^q[\phi_{\text{y}}+\delta_{\text{y}}-\text{int } ( \phi_{\text{y}}+\delta_{\text{y}})]^q\left ( \frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q.\end{aligned}\ ] ] we calculate the dither - averaged correlation integral by integrating over the different dithering configurations for each bin and summing . following this approachwe integrate the expressions over the two dithering variables and sum results to calculate the correlation integral .the integral over the first corner bin yields the second term is ^q\ , \rmd\phi_{\text{x}}\ , \rmd\phi_{\text{y } } \\ \nonumber & = & \left(\frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q\left(\frac{1}{1+q}\right)\left[\int_{\delta_{\text{y}}}^1 v^q\ , \rmd v+\int_0^{\delta_{\text{y } } } v^q\ , \rmd v\right ] \\\nonumber & = & \left(\frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q\left(\frac{1}{1+q}\right)^2.\end{aligned}\ ] ] the third and fourth terms are similar to the first and second ; each of the four corner bins contributes an term to the total dither - averaged correlation integral .the contributions from edge bins are simpler to calculate .the overlap fraction is unity in the direction parallel to the support edge ; along that axis we merely count the number of edge bins .the integral along the second axis is similar to the corner bins .again there are four terms , but symmetry simplifies the problem ( 1-\phi_{\text{y}})^q\ , \rmd\phi_{\text{x}}\ , \rmd\phi_{\text{y } } \\ \nonumber & = & \left(\frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q \int_0 ^ 1\int_0 ^ 1 \left[\text{int}\left(\frac{l}{e_{\text{x}}}+1+\phi_{\text{x}}\right ) -2\right ] [ \phi_{\text{y}}+\delta_{\text{y}}-\text{int}(\phi_{\text{y}}+\delta_{\text{y}})]^q \ , \rmd\phi_{\text{x}}\ , \rmd\phi_{\text{y } } \\ \nonumber & = & \left(\frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^q\left(\frac{l}{e_{\text{x}}}-1\right)\left(\frac{1}{1+q}\right).\end{aligned}\ ] ] which is the expression for the x - axis border bins .contributions from the y - axis bins are obtained by switching indices .the contribution from edge bins is thus .\ ] ] there remains the integral over interior bins , a simple matter of bin counting the full correlation integral can be assembled from the corner , edge , and interior bin integrals \\\nonumber \fl \;\;\;\;\;\;\;\;\;\ , = \left(\frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)^{q-1 } 
\left[1+\left(\frac{1-q}{1+q}\right)\left(\frac{e_{\text{x}}}{l}\right)\right]\left[1+\left(\frac{1-q}{1+q}\right)\left(\frac{e_{\text{y}}}{l}\right)\right].\end{aligned}\ ] ] inserting this result into the definition of the rank- entropy we find that \\\nonumber \fl \;\;\;\;\;\;\;\;\;\, = \frac{q-1}{1-q } \log \left(\frac{e_{\text{x}}e_{\text{y}}}{l^2}\right)+\frac{1}{1-q } \log \left[1+\left(\frac{1-q}{1+q}\right)\left(\frac{e_{\text{x}}}{l}\right)\right]+\frac{1}{1-q } \log \left[1+\left(\frac{1-q}{1+q}\right)\left(\frac{e_{\text{y}}}{l}\right)\right ] \\\nonumber \fl \;\;\;\;\;\;\;\;\;\ , = \left\{\log\left(\frac{l}{e_{\text{x}}}\right)+\frac{1}{1-q}\log\left[1+\left(\frac{1-q}{1+q}\right)\frac{e_{\text{x}}}{l}\right]\right\}+\left\{\log\left(\frac{l}{e_{\text{y}}}\right)+\frac{1}{1-q}\log\left[1+\left(\frac{1-q}{1+q}\right)\frac{e_{\text{y}}}{l}\right]\right\}.\end{aligned}\ ] ] because of the additivity of entropy under product this entropy expression contains two terms , one for each axis , which suggests the -dimensional generalization : adding an equivalent term for each additional dimension .for the derivation of eq .( [ genent ] ) we considered contributions from three types of bins : corner , border , and interior . in the 2d derivationthis approach is only valid when both and .if either of the scales is larger than the support size there are no _ interior _ bins and eq .( [ genent ] ) is not valid .thus , to obtain the entropy expression for a bud at large scale consider a single axis ( we now exploit the additivity of entropy with respect to dimension for uncorrelated systems ) with .when there is a bin edge in the support there are exactly two corner bins ; we express the bin probability of the second bin in terms of the first ^q \\ \nonumber p_2^q&=&[1-p_1]^q.\end{aligned}\ ] ] when the support fits completely inside a single bin ; the distance from the support edge to the nearest bin edge is larger than the size of the support ( ) .we now evaluate the relevant integrals ^q\ , \rmd\alpha=1-\frac{l}{e},\ ] ] since the definition of corner bins is arbitrary , we could relabel and get the same result ; symmetry requires that .these results can now be assembled to calculate the 1d scaled entropy for scales larger than the support size \\ \nonumber & = & \frac{1}{1-q } \log \left[1-\frac{l}{e}+\frac{l}{e}\left(\frac{2}{1+q}\right)\right ] \\\nonumber & = & \frac{1}{1-q } \log \left[1+\left(\frac{1-q}{1+q}\right)\frac{l}{e}\right].\end{aligned}\ ] ] combining results , we obtain the rank- scaled entropy for each degree of freedom of a bud over all scales ,&for \cr \frac{1}{1-q } \log\left[1+\left(\frac{1-q}{1+q}\right)\frac{l}{e}\right],&for .\cr}\ ] ] we have generated the exact expression for the scale - local entropy in the general case of a -dimensional bounded uniform distribution .dashed , dotted , dot - dashed lines ) with analytic brud entropy for ( solid line ) as a reference .data and reference entropy distributions differ near the mean interparticle spacing scale ( - 2.35 ) as discussed in the text.,width=384 ] this entropy is precise for a discrete , random point set and . 
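for reference, the two branches of this result can be collected explicitly for a single degree of freedom (this merely restates the expressions derived above; the d-dimensional entropy is the sum of d such terms):
\[
s^{(1)}_q(e)=\cases{\displaystyle \log\!\left(\frac{l}{e}\right)+\frac{1}{1-q}\log\!\left[1+\left(\frac{1-q}{1+q}\right)\frac{e}{l}\right], & for $e\le l$ \cr
\displaystyle \frac{1}{1-q}\log\!\left[1+\left(\frac{1-q}{1+q}\right)\frac{l}{e}\right], & for $e\ge l$. \cr}
\]
for e much smaller than l the first branch reduces to \(\log(l/e)\), i.e. one unit of dimension per degree of freedom, consistent with the intermediate-scale continuum behaviour discussed earlier.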
for the reference entropydiffers from the entropy for real point sets in a small scale interval near , the typical two - point separation .this information ( entropy difference ) derives from the fact that the form of rnyi entropy employed here is based on ordinary moments ( averaged powers of ) , whereas the point - set data are poisson distributed .a correlation integral based on factorial moments would give zero information for uniform ( uncorrelated ) point sets .we could conclude that a factorial - moment approximation to the correlation integral should be used for point sets at least , if not other applications .however , extending our previous observation that boundedness itself is a form of correlation in a self - consistent system we can also view the point set as a result of increasing correlation ( coalescence ) of a continuous distribution at a characteristic scale .the point set does have correlations additional to the boundedness of the continuous bud .the apparent discrepancies in the entropies of brud and real points sets ( the nonzero in fig .[ 50krud ] ) reveal a genuine correlation feature in a discrete point set compared to a continuum .the rnyi entropies based on ordinary moments are preferred as a _more general _ formulation applicable to arbitrary measure distributions .the information corresponding to the difference between factorial moments and ordinary moments for the uniform random point set is meaningful and can be expressed analytically ( the subject of a future paper ) .having obtained entropy and information as scale - local distributions we can similarly express the dimension of a distribution as a function of scale .we start with a conventional definition of dimension based on asymptotic limits = \lim_{e\rightarrow 0 } \biggl [ -\frac{s_q(e)}{\log ( e ) } \biggr].\ ] ] in this approach dimension is a single number defined in the limit of a zero - scale partition .the slope of the distribution is evaluated in the limit of zero scale .this definition favors certain specific types of correlation ( power laws ) , and assumes that the limit exists , which may not be true even in principle . for more general casesthe results can be misleading .we relax the asymptotic limit restriction as we did with scale - local entropy to obtain a more general _ scale - local _ dimension = -\frac{\partial [ s_q(e)]}{\partial [ \log ( e)]}.\end{aligned}\ ] ] applying this definition to the generalized entropy of a brud in eq .( [ point - ent ] ) yields dashed , dotted , dot - dashed lines ) .the solid lines shows the analytical results derived for a generalized brud.,width=384 ] dimension expressed as a scale - local distribution is a novel aspect of this entropy treatment . to understand how scale can affectthe inferred dimension of an object consider the apparent dimension of a planet in the solar system from different viewpoints .for an observer on one planet other planets appear to the eye as isolated points ( with zero dimension ) . with a powerful telescopethe resolution size ( scale ) of the observation decreases substantially .planets appear as 2d disks with 1d border .a radar probe orbiting a planet can determine that the planet at smaller scale has a rich 3d surface and internal structure that could not be supported by a 1d point or 2d plane .continuing to the atomic scale planets are made of atoms and molecules that appear point - like . 
at this scale a planets dimensionality returns to zero .this general principle is illustrated in figure [ dq ] .as an exercise in precision correlation analysis using scale - local entropy and dimension we model cluster formation via condensation by generating a hierarchical point distribution with correlation features distributed over a range of scales .this model is relevant to phase transitions and complex systems analysis . to create a two - dimensional , two - tier cluster hierarchy on a square region of side generate a uniform random distribution of cluster sites , providing correlations at the characteristic length scale the mean site separation . at each cluster sitewe throw a randomly generated uniform distribution of points with width , giving the distribution a second characteristic length scale . , & .the scaled dimension of the data ( dashed line ) is compared to the analytic brud reference with the corresponding number of points ( ) at scale ( black solid line ) as well as the analytic reference for at scales and ( gray solid lines).,width=384 ] if the two tiers of the hierarchy are sufficiently separated on scale , as in fig .[ 2d2steph ] , the sub - structure of the clusters is not evident to the analysis at large scale ( ) .the distribution appears to be a brud of random points . at smaller scale ( )the apparent structure is dominated instead by the internal cluster structure , a brud of points .this two - tiered hierarchical distribution is a first approximation to a self - similar distribution : it appears as a brud at scales and simultaneously ( see fig .[ 2d2steph ] ) . extendingthe hierarchy by recursive self - similar cluster generation would converge to the limiting case of a fractal point distribution over an arbitrarily large scale interval ( _ e.g. , _ cantor set ) .the ` fractal ' dimension would depend on the relations among the hierarchy scale separation , the cluster size and the point count .the lower right panel of figure [ 2d2steph ] shows the _ dimension transport _ for the two - tier hierarchy .dimension transport is defined as the scale derivative of information , /\partial [ \log ( e)]$ ] , and is a measure of the scale - dependent dimension difference between reference and object distributions .dimension transport measures increasing correlation as the transport of dimensionality from larger to smaller scale . in the example of the two - tier hierarchy correlation ( relative to the brud of the same multiplicity )anticorrelation of points at larger scale is achieved when points condense toward the cluster sites .the anti - correlation at larger scale results in a reduction of larger - scale dimensionality ( the system appears more point - like ) and an increase in the local point density and dimensionality at smaller scale . ) in a system with a fixed multiplicity ( ) .data are compared to a reference brud with equal multiplicity.,width=384 ] the extended two - tier condensation example in figure [ 6panelh ] shows how small - scale correlations increase by condensing points of a brud onto cluster sites . 
at the onset of cluster formation ( cluster sites , points per cluster )the transport of dimension to smaller scale is barely visible ( but still non - statistical ) .when the size of the clusters becomes significant ( cluster sites , points per cluster ) the analysis indicates what the eye perceives directly , that the distribution of points is in some way correlated .when the cluster size is 10% of the number of clusters ( cluster sites , points per cluster ) the dimension transport shows quite dramatically and quantitatively the transport of dimension from larger to smaller scale .we have developed a novel analysis system which is well - suited to the task of precision correlation analysis for general measure distributions and especially for systems which exhibit clustering or other self - similar behavior . by extending the rnyi - entropy concept to a locally - defined function of scalewe are able to establish a more complete picture of data correlations and make precision comparisons among data , simulations and model distributions in the context of information theory .comparison of monte carlo results and analytic distributions have led to a detailed understanding of scale - local entropy measures .analysis of simulated clustering data suggests the power of this method in the quantification of scale - dependent correlation structure .the authors would like to thank all of the people who have contributed to and supported this work .in particular , we would like to thank dhammika weerasundara who was instrumental in the development of scale - local entropy methods , as well as stephen bailey , justin prosser , and curtis reynolds who helped implement several applications of this analysis .5 baker g l and gollub j p 1990 _ chaotic dynamics _( cambridge : cambridge university ) grassberger p 1983 _ phys .lett . _ a * 97 * 224 - 230 kittel c and kroemer h 1980 _ thermal physics _( new york : freeman ) pando j and fang l 1998 _ phys .* 3553 - 3601 pantin e and starck j - l 1996 _ astron . astrophys .ser . _ * 118 * 575 - 585 reid j g 2002 event - by - event analysis methods and applications to relativistic heavy - ion collision data _ doctoral dissertation _nucl - ex/0302001 rnyi a 1960 _ mta iii .* 10 * 251 - 282 trainor t a 1998 scale - local topological measures , _ preprint _ , university of washington , 1998 | a novel method for correlation analysis using scale - dependent rnyi entropies is described . the method involves calculating the entropy of a data distribution as an explicit function of the scale of a -dimensional partition of -cubes , which is dithered to remove bias . analytic expressions for dithered scale - local entropy and dimension for a uniform random point set are derived and compared to monte carlo results . simulated nontrivial point - set correlations representing condensation and clustering are similarly analyzed . .2 in keywords : scale , entropy , dimension , information , fractal , complex system , phase transition |
magic are two imaging atmospheric cherenkov telescopes for gamma - ray astronomy located at the _ observatorio del roque de los muchachos _ ( european northern observatory , la palma island , spain ) .the production of a large amount of data during normal operation is inherent to the observational technique .the storage and processing of these data is a technical challenge which the magic collaboration has solved by profitting from infrastructures like those developed for lhc experiments . + during the last years , the magic groups at ifae and uab ( barcelona ) and ucm ( madrid ) have set up , in collaboration with pic , the magic data center .the facility became operational in february 2007 and , as of now , is equipped with the needed storage resources and computing capabilities to process the data from the first telescope and make them available to the magic collaboration .however , as magic ii is expected to start generating data this year , we expect the data volume to be increased by a factor of 3 with respect to the present single telescope situation . in consequence, we will increase the capabilities of the data center by providing it with the needed hardware and human resources to make it able to centralise the storage and analysis of the data .the main goals of this extension are to allow fast massive ( re-)processings of all stored data and to support data analysis for all magic collaborators .+ in what follows we will describe the present status of the data center and the foreseen upgrades required to deal with the data flow from the two - telescopes system .this will lead us to provide additional services useful for the collaboration and also for the astrophysical community .+ [ cols= " < , > , > , > , > " , ]the magic telescopes have in their focal plane a camera segmented into a number _ c _ of different pixels ( each equipped with a photo - multiplier ) , whose signals are digitized by the daq system . currently , magic i is in service as a single telescope with a camera of 577 pixels .magic ii is under commissioning , and has an improved camera with 1039 pixels for regular operation plus 42 additional pixels equiped with experimental high quantum efficiency photodetectors for test purposes .+ for every trigger , the single pixel signal is sampled _ s _ times .each sample is digitized with 12 bit precision and the resulting values stored in 2 byte fields .the information is then saved into a raw data file .the size of a magic raw event is given by _+ 2byte__ _ _ , where _ h _ is a fixed - size ( 4.5 kbyte ) header describing the event .the event rate depends on the observation conditions and trigger configuration , and can range between 200 and 700 hz .the average event rate during observation periods 67 - 73 ( may - dec 2008 ) was 350hz .we will use this value to compute the data volume and storage needs summarized in table [ tab : volume ] . 
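as a rough cross-check of the numbers summarized in that table, the event-size formula above can be turned into a back-of-the-envelope estimate of the raw data volume. the short sketch below (python) does this; the number of samples per pixel and the assumed observation hours per year are illustrative placeholders rather than values quoted in the text, so the resulting figures only indicate the order of magnitude.

```python
# Back-of-the-envelope raw data volume, following the event-size formula
# quoted above (c pixels, s samples of 2 bytes each, plus a 4.5 kB header,
# at an average trigger rate of 350 Hz).  n_samples and hours_per_year are
# illustrative placeholders, not values taken from the text.

HEADER_BYTES = 4.5 * 1024
RATE_HZ = 350.0

def raw_event_size(n_pixels, n_samples):
    """Raw event size in bytes: c * s * 2 byte samples + fixed header."""
    return n_pixels * n_samples * 2 + HEADER_BYTES

def yearly_volume_tb(n_pixels, n_samples, hours_per_year=1200.0):
    seconds = hours_per_year * 3600.0
    return raw_event_size(n_pixels, n_samples) * RATE_HZ * seconds / 1e12

for name, c in [("MAGIC I", 577), ("MAGIC I + II", 577 + 1039 + 42)]:
    for s in (30, 50):   # placeholder FADC sample depths
        print(f"{name:13s} s={s:3d}  "
              f"event = {raw_event_size(c, s)/1024:6.1f} kB  "
              f"~{yearly_volume_tb(c, s):6.0f} TB/yr")
```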
+ raw event filesare processed using the magic standard analysis and reconstruction software ( mars ) .the first step is a program dubbed _ callisto _ , which calibrates the cherenkov pulse s intensity and arrival time , producing a so - called calib data file .this part of the analysis is the most cpu demanding , therefore calib data files are saved before further data processing .the rest of the analysis chain consists of a set of executables taking as input the output of the previous program in the chain : _ star _ computes the parameters describing the cherenkov images of the individual telescope ; _ superstar _ merges the information of a given shower from the two telescopes ; and _ melibea _ computes the estimated energy , arrival direction and the so - called _ hadronness _ ( a parameter used for gamma / hadron discrimination ) . the output from _ star _ and _ melibea _ will be referred to as different steps reduced data files , and is also stored permanently .+ estimations of the data volume at the different stages of the analysis chain , and for the different telescope configurations are also shown in table [ tab : volume ] , the different phases of the standard analysis , the name of the standard programs and the input and output file formats are summarized in table [ tab : analysis ] . + a diferent route is followed by the monte - carlo simulated events that are used for the estimation of the energy and _ hadronness_. in this case , instead of raw files , atmospheric particle showers are generated using corsika and then digested in two steps ( _ reflector _ and _ camera _ ) that finally produce data - like files that are calibrated and reduced with the same programs used for real data .the parameters of the detector simulation are adapted to the telescope performance in different observation periods and configurations .therefore , several versions of the monte - carlo library are provided at the data center .presently , the simulated events are generated at the infn padova and udine , and the resulting files require 10tbyte of disk space . +currently , the magic data center takes care of the following tasks : * data transfer from la palma to pic , via internet and tapes * data storage on tapes and disk at pic * data access at all data processing levels ( raw , calib , reduced ) for all magic collaborators * real - time , automatic analysis of the data , processed with magic standard software * reanalysis of all stored data in case of software updates and bugfixes * magic data base * software repository and bug tracker * storage of the data quality control files during normal data taking on the site , raw data are stored into disk , and later recorded to tape .the tapes are then sent from la palma to pic via airmail , since currently there is not enough internet bandwidth between the island and europe to support the transfer of the files .+ the computing system on the site also performs the so - called _ onsite _ analysis , by which raw data are processed , producing calib files and reduced files right after the data taking .these files are indeed transferred to pic via internet , together with some log files generated by the subsystems of the telescope . 
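the chaining of the standard analysis steps described above can be summarized schematically as follows. the executable names are the mars programs mentioned in the text, while the command-line options and directory layout are hypothetical and shown only to illustrate how each step consumes the previous step's output.

```python
# Schematic driver for the standard reduction chain (hypothetical options
# and paths; only the program names come from the text).
import subprocess
from pathlib import Path

CHAIN = [
    ("callisto",  "raw",       "calib"),      # calibration: raw -> calib
    ("star",      "calib",     "star"),       # image parameters
    ("superstar", "star",      "superstar"),  # stereo merging (two telescopes)
    ("melibea",   "superstar", "melibea"),    # energy, direction, hadronness
]

def reduce_night(workdir: Path, source: str, night: str) -> None:
    for prog, in_tag, out_tag in CHAIN:
        in_dir = workdir / in_tag / source / night
        out_dir = workdir / out_tag / source / night
        out_dir.mkdir(parents=True, exist_ok=True)
        # each step reads the previous step's output directory
        subprocess.run([prog, "--in", str(in_dir), "--out", str(out_dir)],
                       check=True)

reduce_night(Path("/data/magic"), source="CrabNebula", night="2009_03_14")
```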
+ currently , all tapes received at pic are downloaded to a buffer disk and then written back to tape grouped by source and observation night in a single file ( an _ iso volume _ ) that can be mounted as an external unit in the file system .this procedure is obsolete and recently has been found more optimal to store the files directly in tapes .+ the data are organized in a data base and served to the collaboration through a user interface machine and a web site . until late 2008these data were in a nfs file system , but currently , all data are being migrated to a new grid - based file system ( dcache ) that allows more transparent access ( in the sense of making no difference between tape and disk storage ) to any level of raw and processed data .it is planned that in july 2009 the nfs will be finally dismantled and all applications and data access will be based on grid .+ the reduction of the files that arrive via internet is triggered by scripts that run automatically every few minutes and notice when transfers have successfully finished . when this happens , batch jobs are generated and submitted to the grid with a specific configuration that ensures that they will run at pic ( the only computing element where mars is currently installed ) .this set of scripts can also be used to massively reprocess all the data stored at pic in case that a bug is found in the software or some improvement in the analysis makes it worth .massive operations of this kind have been performed twice in 2008 and once more in 2009 . for the last one, we have estimated that we used 115 days of cpu time in two weeks .this means that we have the capability to run _ star _ on one year of data in about 8 days .it is worth to mention that this peak processing rate would have never been achieved with just the minimum number of cpu cores that the magic project is granted at pic according to its share .in fact , we should always have access to at least 7 cores any time , but in periods of low usage from other experiments we have got up to 10 times these resources .+ finally the data center also provides a `` concurrent version server '' ( _ cvs _ ) for the software development and the daily check , which generates a daily report on the data quality and the telescope stability .in a near future we want to provide additional services to allow a more agile analysis by any magic collaborator and also easier access of anyone to published data . for thiswe intend to : * extend the automatic data reduction up to _ melibea _ * provide resources and tools for high level analyses by any magic collaborator * open the magic public data to the whole scientific community by linking it to the european virtual observatory currently , the high level analysis , starting from melibea , is carried out by analysers that select a mc - gamma sample and a real data sample fitting well the observational conditions of the analyzed data .these samples are used in the training of the multidimensional technique of selection of gamma - like events ( the random forest method ) .we intend to automatize also this part of the analysis at pic in the near future , making the task of the analyzer simpler . 
+ alsothe computing power of pic can be more widely exploited by opening the job submission to the rest of the collaboration .the already working roles of the grid scheme allow to assign priorities to the cpu farm users according to their duties , securing that the official data reduction is not delayed .+ finally , we intend to establish a link with the european virtual observatory in order to share potentially interesting data for the astrophysical community . for this purpose a software that will translate root information in the widely used format in astronomy - fitsis currently being developed .the magic data center based at pic is already providing quality services to the magic collaboration , exploiting when possible the extra resources that a grid - based infrastructure implies .+ the success of the two year experience as official data center makes us push for the extension of the current facilities .we hope this will improve the access to the data by the magic analyzers , and will make it more transparent for interested astrophysicists outside the collaboration .+ 99 albert , j. et al .instr . meth .a 594 ( 2008 ) 407 orito , r. et al ._ these proceedings _ borla , d. , goebel , f. et al ._ these proceedings _ moralejo , a. et al . , _these proceedings _ bretz , t. , wagner , r. , 2003 proc . of the 28th icrc ( tsukuba ) hillas , a. m. 1985 proc . of the 19th icrc ( la jolla )aliu , e. et al . ,. phys . 30 ( 2009 )293 albert , j. et al .meth . a 588, 424 ( 2008 ) oya i. et al . ,highlights of spanish astrophysics v cd - rom ( 2009 ) | the magic i telescope produces currently around 100tbyte of raw data per year that is calibrated and reduced on - site at the observatorio del roque de los muchachos ( la palma ) . since february 2007 most of the data have been stored and further processed in the port dinformaci cientfica ( pic ) , barcelona . this facility , which supports the grid tier 1 center for lhc in spain , provides resources to give the entire magic collaboration access to the reduced telescope data . it is expected that the data volume will increase by a factor 3 after the start - up of the second telescope , magic ii . the project to improve the magic data center to meet these requirements is presented . in addition , we discuss the production of high level data products that will allow a more flexible analysis and will contribute to the international network of astronomical data ( european virtual observatory ) . for this purpose , we will have to develop a new software able to adapt the analysis process to different data taking conditions , such as different trigger configurations or mono / stereo telescope observations . massive data processing , grid , european virtual observatory |
this review will start with a sketch of the kinematical - algebraic aspects of the overlap dirac operator in the vector - like context .next comes a general discussion of numerical implementations of the overlap dirac operator .section 2 is devoted to an alternative domain wall model .this model is domain - wall like in the sense that an extra dimension is added and the computation of the light fermion propagator requires a single conjugate gradient procedure , albeit for a matrix representing fermions in five dimensions . on the other hand ,the model is designed so that its output in exact arithmetic is the same as that of an iterative method implementing the overlap dirac operator directly .the latter method requires a two - level nested conjugate gradient procedure . in section 3rigorous results on spectral properties of our model are presented and in section 4 these results are compared to true domain wall fermions .the main conclusion is that nested procedures typically are more efficient than implementations based on domain - walls .this counter - intuitive conclusion is explained by the condition number of the domain wall problem being the product of the condition number of the four dimensional problem by the inverse bare quark mass .the latter two factors govern individually the two nested cycles in direct implementations . in section 5it is shown that one can eliminate the requirement of linearly growing memory consumption for increasing accuracy at the cost of a factor of two in operations . in practice ,the factor of two is often not felt because the implementation is memory bound .section 6 contains some final comments .a large part of this talk is about work done in collaboration with rajamani narayanan .the overlap formulation of vector - like gauge theories on the lattice preserves chiral symmetries exactly , a property thought to be unattainable for many years .if one adds to the ginsparg wilson relation , as originally formulated in 1982 , a requirement of -hermiticity , the combination is equivalent to the overlap at the algebraic level .the set - up has been of interest to mathematicians much earlier as a generalization of the concept of angle between two straight intersecting lines in a plane .the plane is generalized to a vector space over the complex numbers and the two lines to two subspaces of the vector space . in our applicationthe dimension of the vector space will always be even , but the subspaces can have unequal dimensions - when they do , the angle concept looses its meaning .the kato setup is also meaningful for infinite dimensional hilbert spaces and has other applications to physics .the subspaces are defined by projectors onto them , in our case the projectors are hermitian and replaced by linearly related reflections , and . and .the eigenspaces of and are the subspaces in question .they are spanned by orthonormal sets denoted by and with .the information about the relative positioning of the subspaces is contained in the overlap matrix : the coarsest measure of relative orientation is obviously .the main identity is in our case and , where and is any lattice version of the dirac operator describing fermions with negative mass of order where is the lattice spacing . can be thought of as describing dirac fermions of positive infinite mass . is restricted by the requirement that be hermitian . 
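written out explicitly (schematically, and up to normalization conventions), the massive overlap dirac operator built from these two reflections is
\[
d(\mu)=\frac{1+\mu}{2}+\frac{1-\mu}{2}\,\gamma_5\,\epsilon(h_w),\qquad
\epsilon(h_w)=\frac{h_w}{\sqrt{h_w^2}},\qquad h_w=\gamma_5 d_w ,
\]
with \(0\le\mu\le 1\) the bare mass parameter; when \(h_w\) is hermitian, as just required, \(\gamma_5 d(\mu)\) is hermitian as well.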
the simplest choice , which minimizes operations in the computation of the action of on a vector , is to pick as the wilson lattice dirac operator .it is possible that when all practical aspects of a simulation are taken into account a seemingly more complicated choice would pay better off . depends on the background gauge field represented by a collection of unitary link matrices . has the following properties : * is too large to be stored in the computer memory in its entirety , but it is sparse , so its action on vectors can be implemented . *the spectrum of is bounded by a finite bound that does not depend on the gauge background . * if all products of gauge matrices on links around plaquettes , , obey , where is a small positive number independent of the gauge background , the spectrum of is also bounded by a finite number , independently of the gauge - field .the calculation of the action of on a vector must use sparse matrix techniques .the boundedness of the spectrum means that the sign function needs to be approximated accurately only in two finite segments symmetrical about zero , which contain the entire spectrum of , for any gauge background allowed by the pure gauge action. the main strain on the approximation occurs around zero , where the gap turns out in practice to be very small .this is where essentially all the numerical cost goes .there are two main approaches to the approximate implementation of the sign function : one is the direct approach ( overlap ) , and the other is indirect ( domain wall fermions ). in the direct approach one looks for a rational approximation for the sign function in the range defined by the bounds on the spectrum of .the rational approximation is written as a sum of pole terms .the crucial point is that the action of each pole term on a vector need not be calculated separately : rather , the action of all terms can be calculated simultaneously , in one single pass through the conjugate gradient algorithm .the approximation gets more accurate when the number of pole terms is increased .settling for a certain number of terms , , one obtains an approximation for , the `` masses '' have to be non - negative , but the weights can take either sign . in practice we need the action of on a vector , so the action of is needed many times for one evaluation .we end up with two levels of nested conjugate gradient algorithms . in the context of domain wall fermionsan approximation for the sign function is not constructed directly .rather , one invents a larger problem , defined by a matrix . is determined by but its dimensions are -times larger . is still sparse .one then arranges that in a given subspace of the larger space acts on one has above , and can differ from eq .( 2 ) by terms that disappear in the continuum limit .the exact form is a matter of convenience .one ends up needing to do only one inversion , using a single conjugate gradient algorithm ( cg ) . in the most common applications of domain wall fermionsone uses a construction of due to kaplan . 
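to make the pole-sum structure of the direct approach concrete before returning to the domain-wall construction, here is a small numerical sketch (python). the coefficients used are those of the polar-decomposition (tanh) form of the rational approximation, chosen here purely for illustration; other coefficient sets (e.g. zolotarev-type) follow the same pattern of non-negative pole "masses", and in this particular choice the weights also happen to be positive.

```python
import numpy as np

def sign_poles(n):
    """Weights b_s and pole 'masses' m_s of one standard rational
    approximation to sign(x) (polar-decomposition form),
        eps_n(x) = x * sum_s b_s / (x**2 + m_s),
    with b_s = 1/(n cos^2 t_s), m_s = tan^2 t_s, t_s = pi(s-1/2)/(2n).
    Chosen only as an illustration of the pole-sum structure.
    """
    s = np.arange(1, n + 1)
    t = np.pi * (s - 0.5) / (2 * n)
    return 1.0 / (n * np.cos(t) ** 2), np.tan(t) ** 2

def eps_n(x, n):
    b, m = sign_poles(n)
    x = np.asarray(x, dtype=float)
    return x * np.sum(b[:, None] / (x[None, :] ** 2 + m[:, None]), axis=0)

# accuracy improves with n but degrades as |x| -> 0: this is why small
# eigenvalues of H_w drive the cost of the approximation
x = np.array([0.01, 0.05, 0.2, 1.0])
for n in (8, 16, 32):
    print(n, np.abs(eps_n(x, n) - np.sign(x)))
```

each pole term amounts to one shifted inversion of \(h_w^2\), which is why all of them can be handled simultaneously in a single multishift conjugate gradient pass, as stated above.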
historically , kaplan s formulation came first , and was a prime motivator for subsequent developments .the main difference between the two methods is that one has a nested cg in one and a single cg in the other , but employing a larger matrix .the objective of my work with narayanan was to invent a model containing a version of which is close to kaplan s domain wall fermions , but also to the rational approximation so that a comparison of numerical costs may be carried out using rigorous methods .we wanted the output of either method to be the same in exact arithmetic , so it would be only the ways of doing the calculation that differ .one way is similar to domain wall fermions and the other is direct .let be the dirac field describing a light quark .we wish to end up with an effective action for given by : \psi .\ ] ] to be specific , we choose with this rational approximation can be easily replaced by others .we now add extra fields , and , pack them together with the light field into a combined field , . the total action is introduce and .the matrix is given by : our goal is attained by the model because of the identity : \times \nonumber\\ \int d\bar\psi d\psi e^{-\bar\psi \left ( { { 1+\mu}\over 2 } \gamma_5 + { { 1-\mu}\over { 2n}}\sum_{s=1}^n { 1\over { c_s^2 h_w + { { s_s^2}\over h_w } } } \right ) \psi } \end{aligned}\ ] ] the prefactor can be canceled by adding pseudofermions , which will be decoupled in the index .let us now roughly estimate computational costs . in the direct approach the number of inner iterations is approximately given by the condition number of , ( which is the square root of the condition number of ) .the number of outer iteration goes as the condition number of which is roughly .thus , the total number of actions is roughly given by . in the domain wall versionthe number of operations is governed by .every operation counts roughly as operations .we need to estimate in terms of and . to find need the maximal and minimal eigenvalues of .the basic trick for finding and is the derivation of an exact formula for . \times \nonumber\\ \det \left [ { { 1+\mu}\over 2}\gamma_5 + z + { { 1-\mu}\over 2 } f_n ( h_w , z ) \right ] \end{aligned}\ ] ] here , eigenvalues of are roots of the equation .all the roots come from roots of the last factor .( roots of the factors in the product over are canceled by poles in the last factor .so , the spectrum of is determined by the last factor . )we write where , the point is that one has explicit formulae for , so long and are real . if we have if , these formulae make the -dependence explicit .let me describe the main idea for deriving bounds on the eigenvalues of .we are looking for zero modes of : conditions on , depending on and spectral properties of , limit the eigenvalues of to some range .suppose is a zero mode of .it can exist only if is in the range of eigenvalue of .but , is bounded from above and below for and for .so , it is possible to exclude vicinities of and . in this way we obtain rigorous bounds : and is a function of the spectrum of and of .it is well defined and calculable but clumsy to write down . 
for ranges of practical interest .this leads us to : where the last factor is close to unity in practice .the meaning of the result is quite obvious : we need to overcome both the lattice artifact of having at times almost zero modes for and the physical effect of small quark mass ( , when close to zero , gives the bare lattice quark mass ) .clearly our conclusion speaks in favor of the direct approach , if we assume the bounds to be typically saturated ( the action of is more expensive than that of ) .but , before jumping to this conclusion we should try to decide how good our bounds are .it turns out they are quite good .the bound on the is typically saturated . as faras goes we can prove ^ 2 \right ) \end{aligned}\ ] ] one can not do better than either . for small enough and large enough either a very small eigenvalue of or a very small mass are guaranteed to make the smallest eigenvalue of very small too .the lower bound in eq . ( 18 ) , if optimal , indicates that it is possible for the smallest eigenvalue of to be as small as the product of these two small numbers .for true domain wall fermions with action we can prove only one kind of bound : ^ 2 \nonumber\\ + ( 1+\mu)^2 \left ( { 1\over n } -{1\over { n^2 } } \right ) \end{aligned}\ ] ] an approximate analysis yields upper bounds on are of order 10 in practice .we conclude that the matrices describing the version of domain wall fermions used in large scale simulations have conditions numbers that behave similarly to the condition number of our model .= 2.5 in in a modest numerical experiment on a two - dimensional model we compared domain wall fermions using our alternative model , true domain wall fermions and the direct overlap approach .the pure gauge action was of the single plaquette type , with a lattice coupling and the lattice size was .we performed the calculations needed to obtain the condensate .we required the norm of the residual to go down to and counted operations of on a vector .the value of used was .we did not use preconditioning in any of the three methods .results were obtained using 20 gauge field configurations .figure 1 shows the number of -on - vector operations as a function of quark mass .the comparison between the direct overlap and the alternative method is simple because , by design , the numbers one would get in exact arithmetic are the same in either method . comparing to true domain wall fermions is more difficult because one needs to match parameters to get similar physics , and this is ambiguous .for example we set the wilson mass parameter in to in all three cases and used the same mass parameter although this certainly is not correct for very small .we found that the best was to use the direct overlap approach , but this experiment was very limited and one should not immediately draw conclusions about numerical qcd .however , there is enough evidence by now to scrutinize seriously the question whether it is worth investing large scale computational resources into true domain wall fermions given that there is a direct overlap alternative . in particularwe have employed no preconditioning and ignored many other options of optimization .theoretically , one is more comfortable with overlap fermions , since they differ from other four dimensional fermions just by a more complicated action . 
if the domain wall approach were more effective computationally one could have the best of the two worlds by using our alternative domain wall fermions .whichever the case may be , it is hard to argue in favor of true domain wall fermions .until now we concerned ourselves only with counting operations .but , as is well known , this is only one aspect of a computation .memory requirements and access patterns matter as much , or more , depending on architecture . until now all methods required a factor of times more memory than on ordinary wilson fermion code would . in the direct approachthere is an option to trade a factor of order 2 in operations for reducing the memory needed to that of ordinary wilson fermions .basically , this is possible because the heart of the code is a conjugate gradient procedure , and only a certain linear combination of the vectors that are acted on is needed .the conjugate gradient procedure is closely related to the lanczos scheme for bringing a hermitian matrix to tri - diagonal form .it is well known that in the lanczos scheme , if one wishes additional information ( for example one needs an eigenvector ) one can avoid large memory consumption by going through the algorithm twice , once to collect the coefficients and a second time to accumulate the needed data .the same works here . in the multishift scheme iterates over an index updating vectors , starting from an input vector . at each iterationthese vectors are determined by a few -dependent scalars and by three -independent vectors that make up the core conjugate algorithm , . among them , is the residual .when the iteration is stopped we have vectors and they are summed into , our approximation to the vector actually , the iteration must be effectively stopped after a different number of steps for each , since for higher values of the `` mass '' is larger and , as a result , the convergence much faster .clearly , is made out of the basic krylov vectors generated in the core conjugate gradient ; the single reason we need to store vectors is that the components of in the krylov basis at step in the iteration are not yet known because they depend also on steps .the idea is now to use a first pass to calculate the needed conjugate gradient scalars which are dependent but not -dependent . using these scalars we can now compute in an iteration storing scalars only the extra and -dependent scalars needed for implementing the multishift .it is possible now to also compute a set of -independent scalars which have the property that becomes at the last iteration the desired approximation for .but , we need the vectors again so we need to run the basic conjugate gradient again . 
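the following dense-matrix sketch (python) is not the production algorithm; it only illustrates the two points made above: a single krylov sequence generated from the matrix (here an arbitrary positive hermitian stand-in) serves all shifted inversions at once, and the krylov basis v is the single memory-hungry object, which is exactly what the two-pass strategy avoids storing by regenerating it in a second, identical pass.

```python
import numpy as np

def lanczos(A, b, k):
    """k-step Lanczos tridiagonalization of a hermitian matrix A.

    Returns the Krylov basis V (the memory-hungry object) and the
    tridiagonal coefficients.  In the two-pass strategy described in
    the text, V is not stored: the coefficients are collected in a
    first pass and the solutions are accumulated in a second pass.
    """
    n = b.size
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta

def shifted_solves(A, b, shifts, k):
    """Approximate (A + sigma)^(-1) b for all shifts from ONE Krylov basis."""
    V, alpha, beta = lanczos(A, b, k)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(k); e1[0] = np.linalg.norm(b)
    return {s: V @ np.linalg.solve(T + s * np.eye(k), e1) for s in shifts}

# toy example with a random positive hermitian stand-in for H_w^2
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200)); A = M @ M.T / 200 + 0.05 * np.eye(200)
b = rng.standard_normal(200)
for s, x in shifted_solves(A, b, shifts=[0.01, 0.1, 1.0], k=80).items():
    print(s, np.linalg.norm((A + s * np.eye(200)) @ x - b))
```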
hence , a second pass is required , but one needs to store only four large vectors for any .a more detailed description of the algorithm can be found in .the surprise was that with a code written in a higher level language the two pass version can actually run faster than the single pass version , by about 30 percent .in the two pass version both the operations count and the memory usage are independent on !the speed up is certainly a surprise and must be strongly machine dependent .but , it turns out , that one gets the same amount of speed - up on an sgi o2000 as on a pentium iii pc .let us summarize roughly the situation we were looking at when we began to work on reference : although an approach based on the overlap dirac operator looked theoretically cleaner , true domain wall fermions were more attractive numerically .our analysis has led us to the conclusion that there is no evidence that true domain wall fermions have even a numerical advantage . in all cases we looked at ,one faces a problem related to almost zero modes of .this requires large numbers of extra fields in order to preserve chirality .it also affects adversely the condition numbers .whichever method we use , the worst case condition numbers are a product of the inverses of two main scale ratios : the first is the scale of the small eigenvalues of divided by an upper bound of the order of 5 - 10 in lattice units . the second scale ratio is the lattice physical quark mass squared divided by a number of order unity .each small scale ratio slows down inversion independently and the effect compounds in the worst case .thus , as far as we can see , at the numerical level , there are no _ a priori _ advantages to choosing true domain wall fermions over overlap fermions in the context of qcd . in both formulationsone faces similar numerical obstacles , and the overlap , to say the least , does not fare any worse than domain wall fermions . at the analytical levelwe are convinced that an approach based on the overlap ( or any other efficient replacement of the overlap dirac operator that might be found in the future ) is superior at presently attainable gauge couplings in numerical qcd .perturbation theory is more transparent to interpret and technically less complex in the overlap version .the chirality violating effects associated with the number of extra fields are much more explicit and therefore their impact should be easier to trace through .algorithmically we have been looking only under the lamp post and we are far from having exhausted the options there .nevertheless , i feel that true domain wall fermions are an inefficient way to incorporate the new ideas about lattice chirality into practical qcd simulations .for other recent work on similar issues the reader is referred to .i would like to urge you to use your imagination : there must exists much better ways than the ones we have tried until now .this research was supported in part by doe grant de - fg02 - 96er40949 and by a guggenheim fellowship .i would like to thank l. baulieu and the entire group at lpthe for their hospitality and support .9 r. narayanan , h. neuberger , phys .d62 , 074504 ( 2000 ) d. b. kaplan , phys .b288 , 342 ( 1992 ) ; r. narayanan and h. neuberger , phys .b302 , 62 ( 1993 ) ; nucl .b443 , 305 ( 1995 ) ; h. neuberger , phys .81 , 4060 ( 1998 ) ; phys .b417 , 141 ( 1997 ) ; phys .b427 , 125 ( 1998 ) ; phys .d57 , 5417 ( 1998 ) ; phys .d60 , 065006 ( 1999 ) ; phys .d61 , 085015 ( 2000 ) p. ginsparg , k. wilson , phys . 
rev .d25 , 2649 ( 1982 ) .t. kato , `` perturbation theory for linear operators '' , springer - verlag , berlin , 1984 .j. avron , r. seiler , b. simon , comm .phys 159 399 ( 1994 ) .a. frommer , s. gsken , t. lippert , b. nckel , and k. schilling , int .c6 , 627 ( 1995 ) ; b. jergerlehner , hep - lat/9612014 . h. neuberger , int . j. mod .c10 , 1051 ( 1999 ) .j. kiskis and r. narayanan , phys .d64 , 117502 ( 2001 ) . and private communication .r. edwards , u. heller , phys .d63 094505 ( 2001 ) . | an alternative to commonly used domain wall fermions is presented . some rigorous bounds on the condition number of the associated linear problem are derived . on the basis of these bounds and some experimentation it is argued that domain wall fermions will in general be associated with a condition number that is of the same order of magnitude as the _ product _ of the condition number of the linear problem in the physical dimensions by the inverse bare quark mass . thus , the computational cost of implementing true domain wall fermions using a single conjugate gradient algorithm is of the same order of magnitude as that of implementing the overlap dirac operator directly using two nested conjugate gradient algorithms . at a cost of about a factor of two in operation count it is possible to make the memory usage of direct implementations of the overlap dirac operator independent of the accuracy of the approximation to the sign function and of the same order as that of standard wilson fermions . |
secure communication is a topic that is becoming increasingly important thanks to the proliferation of wireless devices . over the years, several secrecy protocols have been developed and incorporated in several wireless standards ; e.g. , the ieee 802.11 specifications for wi - fi . however ,as new schemes are being developed , methods to counter the specific techniques also appear . breaking this cycle is critically dependent on the design of protocols that offer provable secrecy guarantees .the information theoretic secrecy paradigm adopted here , allows for a systematic approach for the design of low complexity and provable secrecy protocols that fully exploit the intrinsic properties of the wireless medium .most of the recent work on information theoretic secrecy is , arguably , inspired by wyner s wiretap channel . in this setup, a passive eavesdropper listens to the communication between two legitimate nodes over a separate communication channel . while attempting to decipher the message , no limit is imposed on the computational resources available to the eavesdropper .this assumption led to defining * perfect secrecy capacity * as the maximum achievable rate subject to zero mutual information rate between the transmitted message and the signal received by the eavesdropper . in the additive gaussian noise scenario , the perfect secrecy capacity turned out to be the difference between the capacities of the legitimate and eavesdropper channels . therefore ,if the eavesdropper channel has a higher channel gain , information theoretic secure communication is not possible over the main channel .recent works have shown how to exploit multipath fading to avoid this limitation .the basic idea is to opportunistically exploit the instants when the main channel enjoys a higher gain than the eavesdropper channel to exchange secure messages .this opportunistic secrecy approach was shown to achieve non - zero * ergodic secrecy capacity * even when * on average * the eavesdropper channel has favorable conditions over the legitimate channel .remarkably , this result still holds even when the channel state information of the eavesdropper channel is not available at the legitimate nodes .the ergodic result in applies only to delay tolerant traffic , e.g. , file downloads .early attempts at characterizing the delay limited secrecy capacity drew the negative conclusion that non - zero delay limited secrecy rates are not achievable , over almost all channel distributions , due to * secrecy outage * events corresponding to the instants when the eavesdropper channel gain is larger than the main one .later , it was shown in that , interestingly , a non - zero delay limited secrecy rate could be achieved by introducing * private key queues * at both the transmitter and the receiver .these queues are used to store private key bits that are shared * opportunistically * between the legitimate nodes when the main channel is more favorable than the one seen by the eavesdropper .these key bits are used later to secure the delay sensitive data using the vernam one time pad approach .hence , secrecy outages are avoided by simply storing the secrecy generated previously , in the form of key bits , and using them whenever the channel conditions are more advantageous for the eavesdropper . 
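the mechanism can be illustrated with a short monte carlo sketch (python). the rayleigh fading model, unit transmit power and the target secret rate used below are illustrative choices only, not parameters taken from the analysis that follows; the point is simply that buffering opportunistically harvested key bits removes most of the outages that would otherwise occur whenever the eavesdropper channel is stronger.

```python
import numpy as np

# Toy illustration of the key-buffering idea described above: key bits are
# harvested when the main channel is stronger than the eavesdropper's,
# stored in a key queue, and spent (one-time pad) to protect a fixed-rate
# delay-sensitive stream.  All numerical choices are illustrative.

rng = np.random.default_rng(1)
blocks, P, R = 200_000, 1.0, 0.2            # fading blocks, power, target rate
h_m = rng.exponential(1.0, blocks)          # main-channel power gains
h_e = rng.exponential(1.0, blocks)          # eavesdropper power gains

key_rate = np.maximum(np.log2(1 + h_m * P) - np.log2(1 + h_e * P), 0.0)

queue, outages = 0.0, 0
for k in range(blocks):
    queue += key_rate[k]                    # store newly generated key bits
    can_decode = np.log2(1 + h_m[k] * P) >= R
    if can_decode and queue >= R:           # enough key and channel supports R
        queue -= R                          # one-time pad the message
    else:
        outages += 1                        # secrecy (or channel) outage

print("outage fraction with key buffering :", outages / blocks)
print("fraction of blocks with h_e > h_m  :", np.mean(h_e > h_m))
```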
however , this work stopped short of proving sharp capacity results or deriving the corresponding optimal power control policies .these results can be recovered as special cases of the secrecy outage capacity and power control characterization obtained in the sequel .in particular , this work investigates the outage secrecy capacity of point - to - point block fading channels .we first consider the scenario where perfect knowledge about the main and eavesdropper channels are available _ a - priori _ at the transmitter . the outage secrecy capacity and corresponding optimal power control policy is obtained and then the results are generalized to the more practical scenario where only the main channel state information ( csi ) is available at the transmitter .finally , the impact of the _ private key queue _ overflow on secrecy outage probability is studied .overall , our results reveals interesting structural insights on the optimal encoding and power control schemes as well as sharp characterizations of the fundamental limits on secure communication of delay sensitive traffic over fading channels .the rest of this paper is organized as follows.we formally introduce our system model in section [ s : sysmodel ] . in section [ section : capacity] , we obtain the capacity results for the full and main csi scenarios .the optimal power control policies , for both cases , are derived in section [ section : power ] .the effect of key buffer overflow on the outage probability is investigated in section [ s : finitebuffer ] .we provide simulations to support our main results in section [ section : simulations ] .finally , section [ s : conclusion ] offers some concluding remarks . to enhance the flow of the paper , the proofsare collected in the appendices .we study a point - to - point wireless communication link , in which a transmitter is trying to send information to a legitimate receiver , under the presence of a passive eavesdropper .we divide time into discrete slots , where blocks are formed by channel uses , and blocks combine to form a super - block .let the communication period consist of super - blocks .we use the notation to denote the block in the super - block .we adopt a block fading channel model , in which the channel is assumed to be constant over a block , and changes randomly from one block to the next . within each block ,the observed signals at the receiver and at the eavesdropper are : and respectively , where is the transmitted signal , is the received signal by the legitimate receiver , and is the received signal by the eavesdropper . and are independent noise vectors , whose elements are drawn from standard complex normal distribution .we assume that the channel gains of the main channel and the eavesdropper channel are i.i.d .complex random variables .the power gains of the fading channels are denoted by and .we sometimes use the vector notation ] .note that , is the supremum of achievable main channel rates , without the secrecy constraint .also , is the non - negative difference between main channel and eavesdropper channel s supremum achievable rates .we show in capacity proofs that the outage capacity achieving power allocation functions lie in the space of stationary power allocation functions that are functions of instantaneous transmitter csi .hence for * full csi * , we constrain ourselves to the set of stationary power allocation policies that are functions of ] without the outage constraint . 
with the outage constraint , the fluctuations of due to fading are unacceptable , since can go below the desired rate when the channel conditions are unfavorable ( e.g. , when , ) .hence , we utilize secret key buffers to smoothen out these fluctuations to provide secrecy rate of ] .the channel outage constraint on the other hand is a necessary condition to satisfy the secrecy outage constraint in due to .[ e : fullcsi ] consider a four state system , where and takes values from the set and the joint probabilities are as given in table [ ex1:pr ] .let the average power constraint be , and there is no power control , i.e. , . the achievable instantaneous secrecy rate at each state is given in table [ ex1:rs ] .according to the pessimistic result in [ 6,8 ] , any non - zero rate can not be achieved with a secrecy outage probability in this case . however , according to theorem 1 , rate can be achieved with secrecy outage probability is continuous , the result similarly applies to discrete as well . ] , since =0.8 ] is the expected secrecy rate under the power allocation policy .the proof follows the approach in full csi case , hence we omit the details for brevity . define the sub - problem = & \max_{p(h_m)}\expect \left [ r_s({{\bfh}},p ) \right ] \label{eq : mainpowercontrol}\\ \textrm{subject to : } & p(h_m ) \geq 0,~ \forall h_m \nonumber\\ & \expect [ p(h_m ) ] \leq p_{\text{avg } } , \label{mainpowerconstraint}\\ & \pr\left(r_m({{\bf h}},p ) < { r}\right ) \leq\epsilon \label{mainserviceconstraint}\end{aligned}\ ] ] let be the power allocation function that solves this sub - problem .lemmas [ bmaxlemma ] and [ fullcsiuniquepwr ] also hold in this case .the only difference is the following lemma , which replaces lemma [ l : fullcsipower ] in full csi .[ t : maincsipower ] for any , where is a constant that satisfies , and is a constant that satisfies with equality .the proof is similar to the proof of lemma [ l : fullcsipower ] , and is provided in appendix [ a : maincsipower ] .the graphical solution in figure [ fig : fullcsipower ] to find also generalizes to the main csi case .the proofs of the capacity results of section [ section : capacity ] assume availability of _ infinite size _ secret key buffers at the transmitter and receiver , which mitigate the effect of fluctuations in the achievable secret key bit rate due to fading .finite - sized buffers , on the other hand will lead to a higher secrecy outage probability due to wasted key bits by the key buffer overflows .we revisit the full csi problem , and we consider this problem at ` packet ' level , where we assume a packet is of fixed size of bits .we will prove the following result .[ eq : buffervsoutage ] let .let be the buffer size ( in terms of packets ) sufficient to achieve rate with at most probability of secrecy outage .then , + ( c_f^{\epsilon})^2\epsilon(1-\epsilon)}{(\epsilon'-\epsilon)c_f^{\epsilon } } \log\left ( \frac{\var[r_s({{\bf h}},p^{c_f^{\epsilon } } ) ] + ( c_f^{\epsilon})^2\epsilon(1-\epsilon ) } { ( \epsilon'-\epsilon)^2 c_f^{\epsilon } } \right ) } \leq 1 \label{c : asymptoticbuffer}\end{aligned}\ ] ] before providing the proof, we first interpret this result .if buffer size is infinite , we can achieve rate with probability of secrecy outage . 
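the buffer-sizing statement above lends itself to a quick numerical reading. the bound is only partially legible in the source, so the sketch below assumes the shape m ≈ a/((ε'-ε) c) · log( a/((ε'-ε)^2 c) ) with a = var[r_s] + c^2 ε(1-ε), which matches the visible structure of theorem [eq:buffervsoutage]; all numerical values are purely illustrative.

```python
import numpy as np

def buffer_bound(var_rs, c, eps, eps_prime):
    """Assumed shape of the asymptotic buffer-size bound (in packets)."""
    a = var_rs + c**2 * eps * (1.0 - eps)
    delta = eps_prime - eps
    return a / (delta * c) * np.log(a / (delta**2 * c))

var_rs, c, eps = 0.5, 1.2, 0.05           # illustrative values only
for eps_prime in (0.06, 0.08, 0.10, 0.15):
    print(f"eps' = {eps_prime:.2f}  ->  buffer size ~ {buffer_bound(var_rs, c, eps, eps_prime):8.1f}")
```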
with finite buffer, we can achieve the same rate with probability of secrecy outage .considering this difference to be the price that we have to pay due to the finiteness of the buffer , we can see that the buffer size required scales with , as .achievability follows from simple modifications to the capacity achieving scheme described in appendix [ a : fullcsi ]. we will first study the key queue dynamics , then using the heavy traffic limits , we provide an upper bound to the key loss ratio due to buffer overflows .then , we relate key loss ratio to the secrecy outage probability , and conclude the proof . for the key queue dynamics , we use a single index to denote the time index instead of the double index , where .we consider transmission at outage secrecy rate of , and use power allocation function , which solves the problem - .let us define as the key queue process with buffer size , and let .then , during each block , 1. the transmitter and receiver agree on secret key bits of size using privacy amplification , and store the key on their secret key buffers .2 . the transmitter pulls key bits of size from its secret key buffer to secure the message stream of size using one time pad , and transmits over the channel .as explained in appendix [ a : fullcsi ] .the last phase is skipped if outage ( ) is declared , which is triggered by one of the following events * channel outage ( ) : the channel can not support reliable transmission at rate , i.e. . *key outage ( ) : there are not enough key bits in the key queue to secure the message at rate .this event occurs when . * artificial outage ( ) : outage is artificially declared , even though reliable transmission at rate is possible . due to the definition of , , and the set of events indexed by i.i.d .we choose such that is i.i.d . as well , and the dynamics of the key queue can therefore be modeled by note that , due to the definition of .let be the time average loss ratio over the first blocks , for buffer size , which is defined as the ratio of the amount of loss of key bits due to overflows , and the total amount of input key bits then , we can see that , follows from , , and the fact that .[ l : stationarity ] converges in distribution to an almost surely finite random variable . the proof is provided in appendix [ app : stationarity ] .this implies that exists .now , we provide our asymptotic result for the key loss ratio .we define the drift and variance of this process as \nonumber\\ & = \expect[r_s({{\bf h}},p^{{r}})]-{r}(1-\epsilon ) \label{mu_rate}\end{aligned}\ ] ] and \nonumber \end{aligned}\ ] ] respectively , where follows from the definition of .[ t : losspr ] for any , the key loss ratio satisfies the following asymptotic relationship e^{\frac{-2{r}|{\mu}_{{r}}|}{\sigma_{{r}}^2 } } } { { \sigma}_{{r}}^2 } \leq e^{-2m}\end{aligned}\ ] ] the proof is provided in appendix [ app : bufferoverflowproof ] . [ outagelemma ] if , then secrecy outage probability is satisfied .find such that for any . in 2-index time notation with , it corresponds to , . then . 
here , follows from the union bound , and second term follows from the equivocation analysis and in appendix a , which shows that there exists some packet size large enough such that .equation implies that secrecy outage probability is satisfied .let .since and , we have .this implies that ( since otherwise , key outage probability would be zero ) , which , due to implies & = ( 1- \lim_{t \to \infty}\pr({{{\cal o}}}_{\text{enc}}(t))){r}\nonumber \\ & = ( 1-\epsilon'){r}\label{corr1:ltm}\end{aligned}\ ] ] here , due to the choice of power allocation function , we have = \lim_{t\to\infty}\frac{1}{t}\sum_{t=1}^t r_s(t) ] and |_{{{r}}=c_f^{\epsilon}}=\expect[r_s({{\bf h}},p^{\ast})] ] is a continuous function of , hence for any given , there exists an such that , and =(1- \frac{\epsilon+\epsilon'}{2}){{r}} ] , therefore . from , we get since as , , and , where \\ & \leq \var[r_s({{\bf h}},p^{c_f^{\epsilon } } ) ] - c_f^{\epsilon}(1-\epsilon)\epsilon\end{aligned}\ ] ] the last inequality induces the upper bound , which concludes the proof .in this section , we conduct simulations to illustrate our main results with two examples . in the first example , we analyze the relationship between -achievable secrecy capacity and average power .we assume that both the main channel and eavesdropper channel are characterized by rayleigh fading , where the main channel and eavesdropper channel power gains follow exponential distribution with means 2 and 1 , respectively .since rayleigh channel is non - invertible , maintaining a non - zero secrecy rate with zero secrecy outage probability is impossible . in figure[ fig : capacityresults ] , we plot the -achievable secrecy capacity as a function of the average power , for outage probability , for both full csi and main csi cases .it can be clearly observed from the figure that the gap between capacities under full csi and main csi vanishes as average power increases , which support the result of theorem [ t : highpowercapacity ] .-achievable secrecy capacities as a function of average power , ,width=332 ] in the second example , we study the relationship between the buffer size , key loss ratio and the outage probability .we assume that both the main and eavesdropper channel gains follow a chi - square distribution of degree 2 , but with means 2 and 1 , respectively .we focus on the full csi case , and consider the scheme described in section [ s : finitebuffer ] .we consider transmission at secrecy rate of with the use of the power allocation policy that solves the problem - . for , and the average power ,we plot the key loss ratio , as a function of buffer size in figure [ fig : buffersizing1 ] , for , and , where is the -achievable secrecy capacity .it is shown in lemma [ t : losspr ] of section [ s : finitebuffer ] that expect the key loss ratio decreases as increases , which is observed in figure [ fig : buffersizing1 ] .finally , we study the relationship between the secrecy outage probability and the buffer size for a given rate . in figure[ fig : buffersizing2 ] , we plot the secrecy outage probabilities , denoted as , as a function of buffer size for the same encoder parameters . 
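a direct way to generate curves of this kind is to simulate the key queue itself, block by block, following the deposit/withdraw dynamics described above. the channel model, the target rate and all parameters in the sketch below are illustrative assumptions, not the configuration used for the paper's figures.

```python
import numpy as np

def simulate_key_queue(m, rate, p, n_blocks=100_000, seed=1):
    """finite key buffer of m bits; returns (key loss ratio, secrecy outage probability)."""
    rng = np.random.default_rng(seed)
    h_m = rng.exponential(2.0, n_blocks)                   # main channel power gains
    h_e = rng.exponential(1.0, n_blocks)                   # eavesdropper power gains
    r_m = np.log2(1.0 + h_m * p)
    r_s = np.maximum(r_m - np.log2(1.0 + h_e * p), 0.0)    # key bits harvested per block

    q, lost, generated, outages = 0.0, 0.0, 0.0, 0
    for t in range(n_blocks):
        # phase 1: store the freshly agreed key bits, capping the buffer at m
        generated += r_s[t]
        lost += max(q + r_s[t] - m, 0.0)                   # key bits wasted by buffer overflow
        q = min(q + r_s[t], m)
        # phase 2: withdraw `rate` key bits to one-time-pad the message, unless an outage occurs
        channel_outage = r_m[t] < rate                     # main channel cannot carry the rate
        key_outage = q < rate                              # not enough stored key bits
        if channel_outage or key_outage:
            outages += 1
        else:
            q -= rate
    return lost / generated, outages / n_blocks

for m in (5, 20, 80, 320):
    loss, outage = simulate_key_queue(m=m, rate=0.8, p=5.0)
    print(f"buffer {m:4d} bits : key loss ratio {loss:.3f} , secrecy outage probability {outage:.3f}")
```

curves like those in figures [fig:buffersizing1] and [fig:buffersizing2] are produced in this manner, with the paper's actual encoder, power-control and channel parameters in place of the toy ones above.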
on the same graph, we also plot our asymptotic result given in theorem [ eq : buffervsoutage ] , which provides an upper bound on the required buffer size to achieve outage probability for rate , with the assumption that is an equality for any .we can see that , this theoretical result serves as an upper bound on the required buffer size when , which is the additional secrecy outages due to key buffer overflows , is very small .another important observation from figures [ fig : buffersizing1 ] and [ fig : buffersizing2 ] is that , for a fixed buffer size , although the key loss ratio decreases as increases , secrecy outage probability increases .this is due to the fact that key bits are pulled from the key queue at a faster rate , hence the decrease in the key loss ratio does not compensate for the increase of the rate that key bits are pulled from the key queue , therefore the required buffer size to achieve same is higher for larger values of . , and outage probability ] , and outage probability ]this paper obtained sharp characterizations of the secrecy outage capacity of block flat fading channels under the assumption full and main csi at the transmitter . in the two cases ,our achievability scheme relies on opportunistically exchanging private keys between the legitimate nodes and using them later to secure the delay sensitive information .we further derive the optimal power control policy in each scenario revealing an interesting structure based by judicious time sharing between time sharing and the optimal strategy for the ergodic .finally , we investigate the effect of key buffer overflow on the secrecy outage probability when the key buffer size is finite .first , we prove the achievability . consider a fixed power allocation function .let us fix /(1-\epsilon) ] is an -achievable secrecy rate .the outage capacity is then found by maximizing /(1-\epsilon) ] and ) ] and ) ] * min - entropy as as .* conditional min - entropy of given as * -smooth min - entropy of as . without loss of generality ,we drop the block index and , and focus on the first block , and assume the event does not occur .let ] . for -\delta}{1-\epsilon} ] is achievable .the converse proof is also the same as in full csi case , and is omitted here .the parameter is the maximum value for which the problem - has a solution ; hence the average power constraint is active .moreover , the outage constraint is also active , and due to the fact that is a concave increasing function of , we have , since otherwise one can further increase to find a power allocation function that satisfies the equality . since for a given , the power allocation function that yields is , we have where the set of channel gains for which the system operates at rate , and . the set contains channel gains for which takes minimum values , so that the average power constraint is satisfied for the maximum possible . since is a decreasing function of , one can see that the choice of that yields is . since the probability density function of is well defined , , hence , which , along with , implies that .let .then , any that satisfies , would also satisfy .so , the set of power allocation functions that satisfy shrinks as increases , hence ] is continuous . 
from lemma[ l : fullcsipower ] , we know that where and are constants that satisfy and with equality with respect to parameters and , respectively .let us define another power allocation function such that it is easy to see that \leq \expect[r_s({{\bf h}},p^{{{r}}'})] ] is a left continuous function of .following a similar approach , it can also be shown that ] , then the unique solution of /(1-\epsilon) ] .it is easy to see that , }{{{r}}_{\max}}\leq(1-\epsilon) ] is continuous and strictly decreasing on ] and }{{{r}}_{\max}}\leq(1-\epsilon) ] .we use lagrangian optimization approach to find .we can express ] due to , , .hence , there is a minimum power constraint for set , as define as the set in which the minimum power constraint is not active , i.e. , where is complement of .first , we focus on the solution in the nonboundary set .since the optimal solution must satisfy the euler - lagrange equations , for , we get the following condition whose solution yields \nonumber\end{aligned}\ ] ] if for some , the value is negative , then due to the concavity of with respect to , the optimal value of is zero .therefore , the solution yields combining the result with the minimum power constraint inside set , the solution of yields , which concludes the proof .now , we find .we proceed by further simplifying the lagrangian in , for the case where , for a given as follows .({{\bf h}})d{{\bf h}}\nonumber\\ & \quad + \int_{{{\bf h}}\notin { { \cal g}}}\left [ r_s({{\bf h}},p)-\lambda p({{\bf h } } ) \right]f({{\bf h}})d{{\bf h}}\nonumber\\ & = \quad \int \left [ r_s({{\bf h}},p_{\text{wf}})-\lambda p_{\text{wf}}({{\bf h } } ) \right]f({{\bf h}})d{{\bf h}}\nonumber\\ & \quad+ \int_{{{\cal g } } } \left\ { \left[r_s({{\bf h}},p_{\text{inv}})-r_s({{\bf h}},p_{\text{wf}})\right]^+ \right.\nonumber\\ & \qquad-\lambda \left.\left[p_{\text{inv}}({{\bf h}})-p_{\text{wf}}({{\bf h}})\right]^+\right\ } f({{\bf h}})d{{\bf h}}\label{jeqnfinal2 } \end{aligned}\ ] ] after this simplification , the first term in does not depend on .we conclude the proof by showing that where the set is defined as follows , ^+ -\lambda \left[p_{\text{inv}}({{\bf h}})-p_{\text{wf}}({{\bf h}})\right]^+ \geq k \right\}\label{optimalg}\end{aligned}\ ] ] where the parameter is a constant that satisfies . we prove this by contradiction .first define ^+ -\lambda \left[p_{\text{inv}}({{\bf h}})-p_{\text{wf}}({{\bf h}})\right]^+ ] , since can be written as where and are i.i.d . ,and depends only on and .therefore , is independent of given , hence markovity follows .now , we prove that is a recurrent regenerative process where regeneration occurs at times such that .a sufficient condition for this is to show that has an accessible atom .assume , ] .consider another recursion with .it is clear that is also regenerative , where regeneration occurs at , where , and let be equal in distribution to .note that is regenerative both at states and .let ] denote the expected time to hit from .then , \leq \expect[\tau'_1 ] + \expect[\tau'_2 ] \label{tbar}\end{aligned}\ ] ] since the key queue has a negative drift , i.e. 
, <0 ] .now , we show that < \infty]th block ( on average ) .the last inequality follows from , and ratio test .this result , along with and lemma [ tlemma ] concludes that is a positive recurrent regenerative process , which concludes the proof .we follow an indirect approach to prove the lemma .let denote the key queue dynamics of the same system for the infinite buffer case ( ) .first , we use the heavy traffic results in to calculate the overflow probability of the infinite buffer queue .then , we relate the overflow probability of infinite buffer system to the loss ratio of the finite buffer queue .the dynamics of the infinite buffer queue is characterized by where .the heavy traffic results we will use are for queues that have a stationary distribution . since it is not clear whether is stationary or not , we will upper bound by another stationary process , and the buffer overflow probability result we will get for will serve as an upper bound for . 1 .if , then , using the facts and , we obtain which , using the described key queue recursions in , implies observe that , by ( [ eq : newqstar ] ) , which , in conjunction with ( [ eq : queuebound1 ] ) and , yields .2 . if , then . we further consider two cases. first , if , then , next , if , then which , combined with ( [ eq : queuebound3 ] ) , yields now , we show that converges in distribution to an almost surely finite random variable .first , we need to show that the expected drift of is negative .it is clear from that the expected drift of the process is equal to -{r}(1-\epsilon) ] is a non - increasing continuous function of .therefore , it is a continuous function of .furthermore , by definition of in , .combining these two facts , we conclude that , for . combining lemma [ mulemma ] with the classic results by loynes , we can see that converges in distribution to an almost surely finite random variable such that using , we finish the proof of the lemma . first , we prove that which is based on the heavy traffic limit for queues developed in , see also theorem 7.1 in . in order to prove , we only need to verify the following three conditions : i ) ; ii ) ; and iii ) the set of random variables indexed by is uniformly integrable .\iii ) note that , lies on the interval ] , , hence we can see that .therefore , this class of random variables is uniformly integrable .this completes the proof of .this result , in conjunction with lemma [ l : stablelemma ] completes the proof . using lemma 1 in , we relate the loss ratio of our finite buffer queue to the overflow probability of the infinite buffer queue as follows \limsup_{t \to\infty } l^t(m ) \leq \int_{x = m}^{\infty } \limsup_{t\to\infty}\pr(q(t)>x ) dx \label{lossoverflow}\end{aligned}\ ] ] combining lemma [ l : bufferoverflow ] with , the proof is complete .u. maurer and s. wolf , `` information - theoretic key agreement : from weak to strong secrecy for free , '' _ advances in cryptology - euro - crypt 2000 , lecture notes in computer science 1807 _ , pp.351 - 368 , 2000 .nascimento , j. barros , s. skludarek , and h. imai , `` the commitment capacity of the gaussian channel is infinite , '' _ ieee transactions on information theory _ , vol.54 , no.6 , pp.2785 - 2789 , june 2008 h. s. kim and n. b. shroff , `` on the asymptotic relationship between the overflow probability in an infinite queue and the loss probability in a finite queue , '' _ advances in applied probability _ ,836 - 863 , dec . 
2001 | this paper considers point to point secure communication over flat fading channels under an outage constraint . more specifically , we extend the definition of outage capacity to account for the secrecy constraint and obtain sharp characterizations of the corresponding fundamental limits under two different assumptions on the transmitter csi ( channel state information ) . first , we find the outage secrecy capacity assuming that the transmitter has perfect knowledge of the legitimate and eavesdropper channel gains . in this scenario , the capacity achieving scheme relies on opportunistically exchanging private keys between the legitimate nodes . these keys are stored in a key buffer and later used to secure delay sensitive data using the vernam s one time pad technique . we then extend our results to the more practical scenario where the transmitter is assumed to know only the legitimate channel gain . here , our achievability arguments rely on privacy amplification techniques to generate secret key bits . in the two cases , we also characterize the optimal power control policies which , interestingly , turn out to be a judicious combination of channel inversion and the optimal ergodic strategy . finally , we analyze the effect of key buffer overflow on the overall outage probability . |
local helioseismic diagnostic methods such as time - distance helioseismology , helioseismic holography and ring - diagram analysis , have over the years provided us with unprecedented views of the structures and flows under sunspots and active regions . however , a growing body of evidence appears to suggest that interpretations of the measured statistical changes in the properties of the wave - field may be rendered inaccurate by complexities associated with the observations and wave propagation physics . incorporating the full mhd physics and understanding the contributions of phase and frequency filters , and differences in the line formation height ,are thought to be central to future models of sunspots .one of the earliest studies that highlighted the interaction of waves with sunspots was the fourier - hankel analysis of , who found that sunspots can absorb up to half of the incident acoustic - wave power and shift the phases of interacting waves quite significantly ( see also ) .these results were echoed over the years by a steady steam of theoretical results ( e.g. ; ; ; ; ) that have consistently emphasized the need for more sophisticated modeling and interpretation of wave propagation in strongly magnetized regions .important advances in our observational understanding of sunspots were also achieved by and , who inferred the presence of flows underneath sunspots , and who estimated the sub - surface wave - speed topology . however , while the inversion procedures applied to derive these results fail to directly account for the tensorial nature of magnetic field effects , the action of the field is mimicked via changes in the acoustic properties of the medium ( the so - called wave speed ) .recently however , numerical forward models of helioseismic wave ( e.g. ; ) and ray propagation in magnetized atmospheres have been developed and are beginning to make inroads into this problem .in particular , the results of and cameron ( 2008 ; private communication ) strongly suggest that active - region magnetic fields play a substantial role in influencing the wave field , and that the complex interaction of magnetic fields with solar oscillations , as opposed changes in the wave - speed , are the major causes of observed travel - time inhomogeneities in sunspots .we study the impact of strong magnetic fields on wave propagation and the consequences for time - distance helioseismology using two numerical forward models , a 3d ideal mhd solver and mhd ray theory .the simulated data cubes are analyzed using the traditional surface - focused center - to - annulus method frequently applied in the time - distance analyses of sunspots ( e.g. 
, ) .furthermore , we apply the same method as to also isolate and analyze the thermal contribution to the observed travel - time shifts .the background stratification is given by a convectively stable , hydrostatic polytrope below the photosphere , smoothly connected to an isothermal atmosphere .the magneto - hydrostatic ( mhs ) sunspot model that we embed in the background is similar in construction to that of and , where the flux tube is modeled by an axisymmetric magnetic field geometry based on the self - similar solution .this approximation requires the following choices for the radial ( ) and vertical ( ) components of the magnetic field : where and are the radial and vertical coordinates in cylindrical geometry .the term controls the magnitude of the magnetic field ( peak field strength of 3000 g at the photosphere ) and the flux ( ) .the horizontal extent of the tube and its rate of divergence with altitude is set by ( see figure [ fig : mhd ] a ) . upon solving the mhs equations of pressure and lorentz support ( described in detail in ) ,we obtain the altered thermodynamic stratification of the underlying magnetized plasma ( figure [ fig : mhd ] c ) .the linearized ideal mhd wave equations are integrated according to the recipe of .we implement periodic horizontal boundaries and place damping sponges adjacent to the vertical boundaries to enhance the absorption and transmission of outgoing waves .waves are excited via a pre - computed deterministic source function that acts on the vertical momentum equation . in a manner similar to ,the forcing term is also multiplied by a spatial function that mutes source activity in a circular region of 10 mm radius to simulate the suppression of granulation related wave sources in a sunspot .figure [ fig : mhd ] shows some of the resulting properties of the simulated vertical ( doppler ) velocity data extracted from the mhd simulations . detail the steps involved in using mhd ray theory to model helioseismic ray propagation in magnetized atmospheres . here , we provide a brief description of the magneto - acoustic ray tracing procedure .the ray paths are computed in cartesian geometry in the vertical - plane assumed to contain both magnetic field lines and ray paths . in this case , we only require the 2d dispersion relation with the alfvn wave factored out : where , represents the sound speed , the alfven speed , is the squared brunt - visl frequency and is the square of the isothermal acoustic cut - off frequency .the remaining term , ( where is the prescribed magnetic field ) , represents the component of the wavevector parallel to the magnetic field .the full 3d dispersion relation is presented in .the construction of is completed by specifying the governing equations of the ray paths using the zeroth order eikonal approximation which are solved using a fourth - order runge - kutta numerical method .the magneto - acoustic rays stay on the fast - wave dispersion branch at all times .it should be noted that neither forward model ( i.e. , [mhd.hanasoge ] , [ mhd.moradi ] ) accounts for the presence of flows .the simulated doppler velocity data - cube of [mhd.hanasoge ] ( extracted at an observational height of 200 km above the photosphere ) had dimensions of mm minutes , with a spatial resolution of mm and a cadence of 1 minute . 
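the ray-tracing step described above can be sketched compactly: given a dispersion relation ω = Ω(x, z, k_x, k_z), the ray equations dx/dt = ∂Ω/∂k and dk/dt = -∂Ω/∂x are integrated with a fourth-order runge-kutta scheme. the toy code below uses a purely acoustic dispersion relation with made-up sound-speed and cutoff profiles; the full magneto-acoustic relation of the paper (with the alfvén speed, brunt-väisälä and cutoff terms) would simply replace the `omega` function, so everything here is a schematic stand-in rather than the actual solver.

```python
import numpy as np

def omega(x, z, kx, kz):
    """toy dispersion relation: omega^2 = c(z)^2 |k|^2 + omega_c(z)^2 (acoustic term plus a cutoff).

    the background is horizontally uniform here, so x does not enter; both profiles are made up.
    """
    c2 = 50.0 + 40.0 * np.tanh(z / 5.0)        # made-up squared sound-speed profile
    wc2 = 1.0e-3 * np.exp(-z**2 / 50.0)        # made-up squared acoustic-cutoff profile
    return np.sqrt(c2 * (kx**2 + kz**2) + wc2)

def ray_rhs(state, h=1e-6):
    """hamiltonian ray equations dx/dt = dW/dk, dk/dt = -dW/dx via centred differences."""
    grad = np.array([(omega(*(state + h * e)) - omega(*(state - h * e))) / (2.0 * h)
                     for e in np.eye(4)])
    # state = (x, z, kx, kz); grad = (dW/dx, dW/dz, dW/dkx, dW/dkz)
    return np.array([grad[2], grad[3], -grad[0], -grad[1]])

def rk4_step(state, dt):
    k1 = ray_rhs(state)
    k2 = ray_rhs(state + 0.5 * dt * k1)
    k3 = ray_rhs(state + 0.5 * dt * k2)
    k4 = ray_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# launch a single ray and integrate its path; the travel time is simply n_steps * dt
state = np.array([0.0, 0.0, 0.5, 0.3])         # (x, z, kx, kz) in illustrative units
path = [state[:2].copy()]
for _ in range(500):
    state = rk4_step(state, dt=0.2)
    path.append(state[:2].copy())
path = np.asarray(path)
print("ray endpoint (x, z):", path[-1])
```

keeping the dispersion relation behind a single function is also what makes it straightforward to switch the magnetic terms off while retaining the modified thermodynamic background, which is essentially how the thermal contribution is isolated later in the paper.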
for the time - distance calculations , we compute cross covariances of oscillation signals at pairs of points on the photosphere ( source at , receiver at ) based on a single - skip center - to - annulus geometry ( see e.g. ) .we cross correlate the signal at a central point with signals averaged over an annulus of radius around that center .firstly , we filter out the -mode ridge . subsequently ,standard gaussian phase - speed in conjunction with gaussian frequency filters centered at 3.5 , 4.0 and 5.0 mhz with 0.5 mhz band - widths are applied in order to study frequency dependencies of travel times ( e.g. ; ) .the annular sizes and phase - speed filter parameters used in estimating the times shown in figures [ fig : dtmaps ] and [ fig : azim ] ( including the central phase speed ( ) and full width at half - maximum ( fwhm ) used ) are outlined in table [ tab : filters ] . the point - to - annulus cross - covariancesare fitted by two gabor wavelets ( e.g. , ) to extract the required travel times . in order to compare theory with simulation, we estimate centre - to - annulus mean time shifts , , using the mhd ray tracing technique of [mhd.moradi ] for the same sunspot model ( [sunspot ] ) .the single - skip magneto - acoustic rays do not require filtering .instead , they are propagated from the upper turning point of their trajectories , in both the positive and negative directions , at a prescribed frequency with horizontal increments of 1 mm across the sunspot .the required range of horizontal skip distances are obtained by altering the shooting angle at which the rays are initiated .the skip distances are then binned according to their travel path lengths , , while the travel times are averaged across both the positive and negative horizontal directions . for both forward models (i.e. , [mhd.hanasoge ] , [ mhd.moradi ] ) , we only concern ourselves with the mean _ phase _ time shifts .figure [ fig : dtmaps ] shows maps of as well as the frequency filtered azimuthal averages of obtained using time - distance center - to - annulus measurements for the measurement geometries indicated in table [ tab : filters ] .the map for mm clearly displays positive travel - time shifts , reaching a maximum of around 25 seconds at spot center .a similar travel - time shift is observed from the azimuthal average of when a frequency filter centered at 3.5 mhz is applied to the data .we also observe the magnitude of the positive steadily decrease as we increase the frequency filter to 4.0 mhz , with negative starting to appear in the profile , and by 5.0 mhz the travel times observed inside the sunspot are completely negative . for the larger annuli , negative time shifts of increasing magnitude are consistently observed as we increase the central frequency of the filter .in fact , all maps for larger than mm that we measured displayed similar behavior to the and mm bins ( albeit with smaller time shifts ) . it is important to take note of both the signs of the travel - time perturbations and their apparent frequency dependence .positive have traditionally been interpreted as indicative of a region of slower wave propagation in the shallow subsurface layers beneath the spot , while negative times are of a wave - speed enhancement .so in essence , the profiles that we have derived from the simulation would appear to indicate a traditional two - layered " wave - speed structure ( e.g. 
, ; ) beneath the sunspot .however as can be seen in figure [ fig : mhd ] , the thermal profile of our model atmosphere is a `` one - layer '' sunspot model ( ) and of the order of .similarly , changes in the sub - surface wave speed , ( where is the unperturbed sound speed ) , lie only in the positives 0250% ( not shown here ) , with the greatest enhancements seen near the surface .the large decrease in the sound speed we observe in our model also raises the possibility that current methods of linear inversion may lie beyond their domains of applicability .we also observed that travel times associated with the smallest measurement geometry are most sensitive to the phase - speed filter used , i.e. , when the phase - speed parameters were adjusted to filter all background power below the ridge , negative were obtained .this behavior was noted by , who determined the causative factor to be the background power between the and ridges .it is unsettling that the sign of the time shift may be reversed at will , through small changes in the filter width and center .figure [ fig : azim ] ( frames a - c ) show the resultant profiles derived from the mhd ray tracer for identical measurement geometries as used for the time - distance calculations .the similarities between between the ray profiles and their time - distance counterparts in figure [ fig : dtmaps ] are striking .firstly , the ray travel - time perturbation profiles contain predominantly negative travel - time shifts for all frequencies , albeit with slightly smaller magnitudes .secondly , a similar frequency dependence of is also observed .generally , high frequency rays propagated within the confines of a magnetic field are expected to i ) travel faster and ii ) propagate longer distances than low frequency rays .however , one significant difference we can observe in these profiles is the absence of any positive travel - time shifts for the mm bin .this is significant because the exclusively negative we observe across all geometries not only reflects the one - layered wave- and sound - speed profiles below the surface , but also highlights the effects that phase - speed filtering can have on time - distance measurements ( recall that ray calculations require no such filtering ) .nonetheless , the overall self - consistency between these results and those in [mhdtt ] are very encouraging , despite the 2d nature of the ray calculations . given the fact that ray theory appears to succeed in capturing the essence of the travel - time variations as derived from the mhd simulations , we can isolate the thermal component of the measured using the same approach as to ascertain the contribution to the travel - time shifts from the underlying thermal structure . 
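for reference, the center-to-annulus measurement feeding this comparison boils down, per target point, to cross-correlating the filtered central signal with the annulus-averaged signal and reading a travel time off the correlation peak. the sketch below does this on synthetic data and uses a simple parabolic refinement of the peak in place of the two-gabor-wavelet fit used in the paper, so it illustrates the procedure rather than reproducing the pipeline (no phase-speed or frequency filtering is applied here). how the thermal component is then isolated is described next.

```python
import numpy as np

def travel_time(center, annulus_avg, cadence=60.0):
    """location (seconds) of the cross-correlation peak between centre and annulus signals."""
    a = center - center.mean()
    b = annulus_avg - annulus_avg.mean()
    c = np.correlate(a, b, mode="full")
    lags = np.arange(-len(b) + 1, len(a)) * cadence
    i = int(np.argmax(c))
    if 0 < i < len(c) - 1:
        # refine the peak with a parabola through the three samples around the maximum
        num = c[i - 1] - c[i + 1]
        den = 2.0 * (c[i - 1] - 2.0 * c[i] + c[i + 1])
        return lags[i] + cadence * num / den
    return lags[i]

# synthetic check: the "annulus" signal is the central signal delayed by 5 samples (300 s)
rng = np.random.default_rng(2)
n, cadence = 512, 60.0
center = np.convolve(rng.standard_normal(n), np.ones(10) / 10.0, mode="same")
annulus = np.roll(center, 5) + 0.05 * rng.standard_normal(n)
# prints roughly -300 s: with numpy's correlation convention the peak sits at minus the delay
print("cross-correlation peak (s):", travel_time(center, annulus, cadence))
```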
to do this, we essentially re - calculate the ray paths in the absence of the flux tube while maintaining the modified sound - speed profile obtained in [sunspot ] .the resulting _ thermal _ travel - time perturbations , , would then be purely a result of thermal ( sound - speed ) variations along the ray path .the resulting profiles , presented in figure [ fig : azim ] ( frames d - f ) , surprisingly show that , even without the magnetic field , ray theory produces negative travel times the exception being for rays propagated at 5.0 mhz .this indicates that the contribution from the underlying thermal structure is significant enough to modify the upper turning point of the ray paths , thus shortening the ray travel times .the appearance of negative travel times for a model with a decrease in sound speed would appear to be somewhat counterintuitive , since from standard ray theory , one would expect negative time shifts with increases in sound speed .the most likely explanation for this phenomenon , is that since both the sound speed and density differ from the quiet sun , consequent changes in the acoustic cut - off frequency ( , where is the density scale height ) in the near - surface regions of our model modifies the the ray path for waves with frequencies less than 5.0 mhz quite significantly , thereby causing negative travel - time shifts .however , when comparing with the time perturbations derived from calculations that include the magnetic field ( i.e. figure [ fig : azim ] frames a - c ) , mhd effects appear to be dominant contributors to the observed time shifts .this is perhaps most evident for mm ( figure [ fig : azim ] d ) , where the thermal contribution at spot center appears to make up approximately 11% of ( i.e. figure [ fig : azim ] a ) at 5.0 mhz , 18% at 4.0 mhz and 28% at 3.5 mhz . for the largest bin ,we see a similar contribution at 4.0 and 5.0 mhz , but a much greater contribution at 3.5 mhz ( 45% of ) .incorporating the full mhd physics into the various forward models used in local helioseismology is essential for testing inferences made in regions of strong magnetic fields . by comparing numerical simulations of mhd wave - field and ray propagation in a model sunspot, we find that : i ) the observed travel - time shifts in the vicinity of sunspots are strongly determined by mhd physics , although sub - surface thermal variations also appear to affect ray timings by modifying the acoustic cut - off frequency , ii ) the time - distance travel - time shifts are strongly dependent on frequency , phase speed filter parameters and the background power below the ridge , and finally iii ) mhd ray theory succeeds in capturing the essence of center - to - annulus travel - time variations as derived from the mhd simulations .the most unsettling aspect about this analysis is that despite using a background stratification that differs substantially from model s ( christensen - dalsgaard et al .1996 ) and a flux tube that clearly lacks a penumbra , the time shifts still look remarkably similar ( at least qualitatively ) to observational time - distance analyses of sunspots .preliminary tests conducted with different sunspot models ( e.g. , different field configurations , peak field strengths etc . 
)have also provided similar results .given the self - consistency of these results , as derived from both forward models , it could imply that we are pushing current techniques of local helioseismology to their very limits .it would appear that accurate inferences of the internal constitution of sunspots await a clever combination of forward modeling , observations , and a further development of techniques of statistical wave - field analysis . | we investigate the direct contribution of strong , sunspot - like magnetic fields to helioseismic wave travel - time shifts via two numerical forward models , a 3d ideal mhd solver and mhd ray theory . the simulated data cubes are analyzed using the traditional time - distance center - to - annulus measurement technique . we also isolate and analyze the direct contribution from purely thermal perturbations to the observed travel - time shifts , confirming some existing ideas and bring forth new ones : ( i ) that the observed travel - time shifts in the vicinity of sunspots are largely governed by mhd physics , ( ii ) the travel - time shifts are sensitively dependent on frequency and phase - speed filter parameters and the background power below the ridge , and finally , ( iii ) despite its seeming limitations , ray theory succeeds in capturing the essence of the travel - time variations as derived from the mhd simulations . |
let be a standard one - dimensional brownian motion , be an integer , and let be a symmetric element of . denote by the -tuple wiener - it integral of with respect to .it is well known that multiple wiener - it integrals of different orders are uncorrelated but not necessarily independent . in an important paper , stnel andzakai gave the following characterization of the independence of multiple wiener - it integrals .[ uzthmintro ] let be integers and let and be symmetric . then , random variables and are independent if and only if rosiski and samorodnitsky observed that multiple wiener - it integrals are independent if and only if their squares are uncorrelated : this condition can be viewed as a generalization of the usual covariance criterion for the independence of jointly gaussian random variables ( the case of ) . in the seminal paper , nualart and peccati discovered the following surprising central limit theorem .[ npthmintro ] let , where is fixed and are symmetric .assume also that = 1 ] .we also assume that is generated by . for every , let be the wiener chaos of , that is , the closed linear subspace of generated by the random variables of the type , where is the hermite polynomial defined as we write by convention . for any , the mapping can be extended to a linear isometry between the symmetric tensor product equipped with the modified norm and the wiener chaos .for we write , .it is well known ( wiener chaos expansion ) that can be decomposed into the infinite orthogonal sum of the spaces . therefore, any square integrable random variable admits the following chaotic expansion where ] .also , for every element and every nonempty set ] such that each element of ] and ) ; 2 .let be functions such that for every ( in particular , each is a function of variables ) .then })\big| \leq \prod_{i=1}^q \|h_i\|_{l^2(\mu^{|c_i|})}.\ ] ] moreover , if for some , then } ) \big|\leq \|h_j \otimes_{c_0 } h_k\|_{l^2(\mu^{|c_j \triangle c_k| } ) } \prod_{i\ne j , k}^q \|h_i\|_{l^2(\mu^{|c_i|})},\ ] ] where ( notice that when and are symmetric . )_ proof_. in the case , is just the cauchy - schwarz inequality and is an equality .assume that hold for at most functions and proceed by induction . among the sets at least two , say and , have nonempty intersection .set , as above .since does not have common elements with for all , by fubini s theorem } ) = \int_{a^{c-|c_0| } } h_j\otimes_{c_0 } h_k({\bf z}_{c_j \triangle c_k } ) \prod_{i\ne j , k}^q h_i({\bf z}_{c_i } ) \ , \mu^{c-|c_0|}(d{\bf z}_{[c ] \setminus c_0}).\ ] ] observe that every element of \setminus c_0 ] for all ; 2 . for all ; 3 . for all and all ; moreover , if the distribution of each is determined by its moments , then ( a ) is equivalent to that 1 . are independent .[ rm1 ] 1 .theorem [ main ] raises a question whether the moment - independence implies the usual independence under weaker conditions than the determinacy of the marginals .( recall that a random variable having all moments is said to be determinate if any other random variable with the same moments has the same distribution . )the answer is negative in general , see ( * ? ? ? * theorem 5 ) .2 . assume that ( for simplicity ) .in this case , ( ) becomes for all . in view of theorem [ uzthmintro ] of stnel and zakai, one may expect that ( ) could be replaced by a weaker condition ( ) : .+ however , the latter is false . to see it ,consider a sequence such that and . by theorem [ nupec ] below , .putting , we observe that ( ) holds but ( ) does not , as .3 . 
taking into account that assumptions and of theorem [ nupec ] are equivalent , it is natural to wonder whether assumption ( ) of theorem [ main ] is equivalent to its symmetrized version : the answer is negative in general , as is shown by the following counterexample .let ^ 2\to { \mathbb{r}} and }\\ 1&\quad\mbox{if and }\\ 0&\quad\mbox{elsewhere } , \end{array } \right.\ ] ] so that and .the condition of moment - independence , ( ) of theorem [ main ] , can also be stated in terms of cumulants .recall that the joint cumulant of random variables is defined by }_{\big| t_1=0,\ldots , t_m=0},\ ] ] provided . when all are equal to , then , the usual cumulant of , see .then theorem [ main]( ) is equivalent to * _ for all integers , , and _ theorem [ main ] was proved in the first version of this paper .our proof of the crucial implication involved tedious combinatorial considerations .we are thankful to an anonymous referee who suggested a shorter and more transparent line of proof using malliavin calculus .it significantly reduced the amount of combinatorial arguments of the original version but requires some basic facts from malliavin calculus .we incorporated referee s suggestions and approach into the proof of a more general theorem [ mainblock ] . even though theorem [ main ] becomes a special case of theorem [ mainblock ] ( see corollary [ marginalblock ] ) , we keep its original statement for a convenient reference . for each ,let be a family of real - valued random variables indexed by a finite set .consider a partition of into disjoint blocks , so that .we say that vectors , are asymptotically moment - independent if each admits moments of all orders and for any sequence of non - negative integers , -\prod_{k=1}^d e\big[\prod_{i\in i_k } f_{i , n}^{\ell_i}\big ] \big\}=0.\ ] ] the next theorem characterizes the asymptotic moment - independence between blocks of multiple wiener - it integrals .[ mainblock ] let be a finite set and be a sequence of non - negative integers .for each , let be a family of multiple wiener - it integrals , where with .assume that for every < \infty.\ ] ] given a partition of into disjoint blocks , the following conditions are equivalent : 1 .random vectors , are asymptotically moment - independent ; 2 . for every from different blocks ; 3 . for every from different blocks and ._ proof _ : the implication is obvious . to show , fix belonging to different blocks . by we have which yields = \sum_{r=0}^{q_i\wedge q_j } r!^2\binom{q_i}{r}^2\binom{q_j}{r}^2(q_i+q_j-2r ) !\|f_{i , n}\widetilde{\otimes}_r f_{j , n}\|^2.\ ] ] moreover , [f_{j , n}^2 ] = q_i!q_j!\|f_{i , n}\|^2\|f_{j , n}\|^2.\ ] ] applying ( [ sym - norm ] ) to the second equality below , we evaluate as follows : this bound yields the desired conclusion .now we will prove .we need to show for fixed . writing as and enlarging and s accordingly , we may and do assume that all . we will prove ( [ blocks ] ) by induction on .the formula holds when or 1 .therefore , take and suppose that ( [ blocks ] ) holds whenever .fix and set assume that , otherwise the inductive step follows immediately .let denote the divergence operator in the sense of malliavin calculus and let be the malliavin derivative , see ( * ? ? ?- 1.3 ) . using the duality relation (1.3.1(ii ) ) and the product rule for the malliavin derivative ( * ? ? 
?* theorem 3.4 ) we get & = e\big[i_{q_{i_1}}(f_{i_1,n})x_ny_n\big ] = e\big[\delta(i_{q_{i_1}-1}(f_{i_1,n}))x_ny_n\big]\\ & = e\big[i_{q_{i_1}-1}(f_{i_1,n})\otimes_1 d(x_ny_n)\big]\\ & = e\big [ y_n \,i_{q_{i_1}-1}(f_{i_1,n})\otimes_1 dx_n \big ] + e\big [ x_n \,i_{q_{i_1}-1}(f_{i_1,n})\otimes_1 dy_n \big ] \\ & = a_n+b_n.\end{aligned}\ ] ] first we consider . using the product rule for we obtain \\ & = \sum_{j\in i \setminus i_1 } q_j e \big[ i_{q_{i_1}-1}(f_{i_1,n})\otimes_1 i_{q_j-1}(f_{j , n } ) \prod_{i\in i \setminus \{i_1 , j\ } } f_{i , n } \big].\end{aligned}\ ] ] by the multiplication formula ( [ multiplication ] ) we have since and belong to different blocks , condition ( c ) of the theorem applied to the above expansion yields that converges to zero in . combining this with and lemma [ usefullemma1 ]we infer that .now we consider . if , then by convention and so .hence -e\big[f_{i_1,n}\big ] \prod_{k=2}^d e\big[\prod_{i\in i_k } f_{i , n}\big ] \big\}=\lim_{n \to \infty } b_n=0.\ ] ] therefore , we now assume that .write ] .also , for every element and every nonempty set ] such that , and , and then choose ] and .therefore , each element of ]. then is a linear combination of functions of the form where the summation goes over all choices under the constraint that the sets and are fixed .this constraint makes unique , .let , and notice that either or since .suppose , the other case is identical .applying lemma [ l : cs ] with fixed we get and make a disjoint partition of , and additional integration with respect to yields as . this yields and completes the proof of theorem [ mainblock ] .[ rm2 ] condition ( ) of theorem [ mainblock ] is equivalent to * _ for every _ where denotes the euclidean norms in and respectively . indeed , condition ( * b * ) of theorem [ mainblock ] implies ( * b * ) and the converse follows from as the squares of multiple wiener - it integrals are non - negatively correlated , cf . .the following corollary is useful to deduce the joint convergence in law from the convergence of marginals .it is stated for random vectors , as is theorem [ mainblock ] , but it obviously applies in the setting of theorem [ main ] when all vectors are one - dimensional .[ marginalblock ] under notation of theorem [ mainblock ] , let be a random vector such that * as , for each ; * vectors , are independent ; * condition or of theorem [ mainblock ] holds [ equivalently , or of theorem [ main ] when all are singletons ] ; * is determined by its moments for each .then the joint convergence holds , _ proof : _ by ( i ) the sequence is tight .let be a random vector such that as along a subsequence . from lemma [ usefullemma1](ii )we infer that condition of theorem [ mainblock ] is satisfied .it follows that each has all moments and for each . by ( iv ) , the laws of vectors and are determined by their joint moments , respectively , see ( * ? ? ?* theorem 3 ) . 
under the assumption ( iii ) , the vectors , are asymptotically moment independent .hence , for any sequence of non - negative integers , - e\big[\prod_{i\in i } u_{i}^{\ell_i}\big ] & = e\big[\prod_{i\in i } v_{i}^{\ell_i}\big ] - \prod_{k=1}^d e\big[\prod_{i\in i_k } u_{i}^{\ell_i}\big ] \\ & = \lim_{n_j\to\infty } \big\ { e\big[\prod_{i\in i } f_{i , n_j}^{\ell_i}\big ] -\prod_{k=1}^d e\big[\prod_{i\in i_k } f_{i , n_j}^{\ell_i}\big ] \big\}=0.\end{aligned}\ ] ] thus .we can give a short proof of the difficult and surprising part implication of the fourth moment theorem of nualart and peccati , that we restate here for a convenience .[ nupec ] let be a sequence of the form , where is fixed and .assume moreover that =q!\|f_n\|^2=1 ] ; 3 . for all ; 4 . for all ._ proof of _ .assume .since the sequence is bounded in by the assumption , it is relatively compact in law . without loss of generalitywe may assume that and need to show that .let be an independent copy of of the form with .this can easily be done by extending the underlying isonormal process to the direct sum .we then have as , where stands for an independent copy of .since = e[f_{n}^4 ] - 3 \to 0,\ ] ] and are moment - independent .( if they were independent , the classical bernstein theorem would conclude the proof . )however , in our case condition ( ) in says that taking we get where we used the multilinearity of and the fact that and are i.i.d . since , , and for , we infer that . applying our approach, one can add a further equivalent condition to a result of peccati and tudor .as such , theorem [ pectud ] turns out to be the exact multivariate equivalent of theorem [ nupec ] .[ pectud ] let , and let be positive integers .consider vectors with .assume that , for , as , let be a centered gaussian random vector with the covariance matrix .then the following two conditions are equivalent ( ) : 1 . ; 2 .\to e\big[\|n\|^4\big] ] .let be the associated gaussian random vector , . 1 .assume that is invertible .then , for any lipschitz function we have - e[h(n ) ] \big| \leq \sqrt{d}\,\|\sigma\|_{op}^{1/2}\|\sigma^{-1}\|_{op}\|h\|_{lip } \sqrt{e\|f\|^4-e\|n\|^4},\ ] ] where denotes the operator norm of a matrix and .2 . for any -function have - e[h(n ) ] \big| \leq \frac12\|h''\|_\infty \sqrt{e\|f\|^4-e\|n\|^4},\ ] ] where . _ proof _ : the proof is divided into three steps. _ step 1 _ : recall that for a lipschitz function ( * ? ? ? * theorem 6.1.1 ) yields - e[h(n ) ] \big|\leq \sqrt{d}\,\|\sigma\|_{op}^{1/2}\|\sigma^{-1}\|_{op}\|h\|_{lip}\sqrt{\sum_{i , j=1}^d e\left\ { \big ( \sigma_{ij}-\frac{1}{q_j}\langle df_i , df_j\rangle \big)^2 \right\}},\ ] ] while for a -function with bounded hessian ( * ? ? ?* theorem 6.1.2 ) gives - e[h(n ) ] \big| \leq \frac12\|h''\|_\infty\sqrt{\sum_{i , j=1}^de\left\ { \big ( \sigma_{ij}-\frac{1}{q_j}\langle df_i , df_j\rangle \big)^2 \right\}}.\ ] ] _ step 2 _ : we claim that for any , indeed , by ( * ? ? ?* identity ( 6.2.4 ) ) and the fact that if , we have on the other hand , from ( [ cov ] ) we have the claim follows immediately ._ step 3 _ : applying we get - \sigma_{ii } \sigma_{jj } - 2 \sigma_{ij}^2 \big)\\ & = \sum_{i , j=1}^d \big\{{\rm cov}(f_i^2,f_j^2 ) -2\sigma_{ij}^2\big\}.\end{aligned}\ ] ] combining steps 1 - 3 gives the desired conclusion .here we will prove a multivariate extension of a result of nourdin and peccati .such an extension was an open problem as far as we know . 
in what follows, will denote a random variable with the centered distribution having degrees of freedom .when is an integer , then , where are i.i.d .standard normal random variables . in general, is a centered gamma random variable with a shape parameter and scale parameter .nourdin and peccati established the following theorem .[ noupec ] fix and let be as above .let be an even integer , and let be such that = e[g(\nu)^2]=2\nu ] .then , the following four assertions are equivalent , as : 1 . ; 2 . -12e[f_n^3 ] \to e[g(\nu)^4 ] -12e[g(\nu)^3 ] = 12\nu^2 - 48\nu ] for every .assume that : 1 . -12e[f_{i , n}^3 ] \to 12\nu_i^2 - 48\nu_i ] whenever .then where are independent random variables having centered distributions with degrees of freedom , respectively ._ proof_. using the well - known carleman s condition , it is easy to check that the law of is determined by its moments . by corollary [ marginalblock ]it is enough to show that condition of theorem [ main ] holds .fix as well as .switching and if necessary , assume that . from theorem [ noupec] get that for each and every , except when . using the identity ( see ( [ fubini ] ) ) together with the cauchy - schwarz inequality we infer that condition of theorem [ main ] holds for all values of , and , except of the cases : , , and .assumption together with ( [ cov2 ] ) show that for all .thus , it remains to verify condition of theorem [ main ] when .lemma [ technical ] ( identity ( [ azertiop ] ) therein ) yields using ( [ cs ] ) and theorem [ noupec ] and a reasoning as above , it is straightforward to show that the sum tends to zero as . on the other hand , the condition on the -th contraction in theorem [ noupec]( ) yields that as .moreover , we have ,\ ] ] which tends to zero by assumption .all these facts together imply that + as . using ( [ cs ] ) for get , showing that condition of theorem [ main ] holds true in the last remaining case .the proof of the theorem is complete .[ ] consider , where are even integers .suppose that \to 1 , \quad e[f_{1,n}^4 ] -6e[f_{1,n}^3 ] \to -3 , \quad \text{and } \\& e[f_{2,n}^2 ] \to 2 , \quad e[f_{2,n}^4 ] -6e[f_{2,n}^3 ] \to 0 , \quad \text{as } \ n\to \infty.\end{aligned}\ ] ] when or we require additionally : \to 0 \ ( q_2=2q_1).\ ] ] then theorem [ t : noupec ] ( the case , ) gives where are i.i.d .standard exponential random variables .[ t : bivariateblock ] let be positive integers .assume further that .consider with and .suppose that as where , the marginals of are determined by their moments , and are independent . if \to 0 ] for all and . by theorem [ nupec] , for all .observe that so that for , except possibly when . but in this latter case ,\big| \to 0\ ] ] by the assumption . corollary [ marginalblock ] concludes the proof .theorem [ t : bivariateblock ] admits the following immediate corollary .[ corindblocks ] let be positive integers .consider two stochastic processes and , where and .suppose that as where is centered and gaussian , the marginals of are determined by their moments , and are independent . 
if \to 0 ] .assume further that the covariance function has the form with and a function which is slowly varying at infinity and bounded away from and infinity on every compact subset of .the following result is due to taqqu .[ taqqu ] if then , as , where ^{-1/2} ] , and the double wiener - it integral is with respect to a two - sided brownian motion .let be an integer .the following result is a consequence of corollary [ corindblocks ] and theorems [ bm ] and [ taqqu ] .it gives the asymptotic behavior ( after proper renormalization of each coordinate ) of the pair when .since what follows is just mean to be an illustration , we will not consider the remaining case , that is , when ; it is an interesting problem , but to answer it would be out of the scope of the present paper .[ bm - dmt ] let be an integer , and let the constants and be given by theorems [ bm ] and [ taqqu ] , respectively . 1 . if then where is a standard brownian motion in .2 . if then where is a brownian motion independent of the rosenblatt process of parameter ._ proof _ : let us first introduce a specific realization of the sequence that will allow one to use the results of this paper .the space being a real separable hilbert space , it is isometrically isomorphic to either ( for some finite ) or .let us assume that , the case where being easier to handle .let be an isometry .set for each .we have =\int_0^\infty e_k(x)e_l(x)dx,\quad k , l{\geqslant}1.\label{ekrho}\ ] ] if denotes a standard brownian motion , we deduce that these two sequences being indeed centered , gaussian and having the same covariance structure . using ( [ mapping ] ) we deduce that has the same distribution than ( with the -tuple wiener - it integral associated to ) . hence , to reach the conclusion of point 1 it suffices to combine corollary [ corindblocks ] with theorem [ bm ] . for point 2 , just use corollary [ corindblocks ] and theorem [ taqqu ] , together with the fact that the distribution of is determined by its moments ( as is the case for any double wiener - it integral ) . to develop the next application we will need the following basic ingredients : 1 . a sequence of i.i.d . random variables , with mean 0 , variance 1 and all moments finite .two positive integers as well as two sequences , of real - valued functions satisfying for all and , 1 . for every permutation ; 2 . whenever for some ; 3 . .consider this series converges in , =0 ] .we have the following result . [ thmmoo ] as , assume that the contribution of each to is uniformly negligible , that is , and that , for any , then and are asymptotically moment - independent . _ proof : _ fix .we want to prove that , as , -e[q_{1,n}({\bf x})^m]e[q_{2,n}({\bf x})^n]\to 0.\ ] ] the proof is divided into three steps . _ step 1_. in this step we show that -e[q_{1,n}({\bf g})^mq_{2,n}({\bf g})^n]\to 0 \quad \text{as } \n \to \infty.\ ] ] following the approach of mossel , odonnel and oleszkiewicz , we will use the lindeberg replacement trick .let be a sequence of i.i.d . random variables independent of . for a positive integer , set , and put .fix and write for and , where means that the term is dropped ( observe that this notation bears no ambiguity : indeed , since vanishes on diagonals , each string contributing to the definition of contains the symbol exactly once ) . 
for each and , note that and are independent of the variables and , and that by the binomial formula , using the independence of from and , we have \\ & = \sum_{i=0}^m\sum_{j=0}^n \binom{m}{i}\binom{n}{j}e[u_{1,n , s}^{m - i}u_{2,n , s}^{n - j}v_{1,n , s}^{i}v_{2,n , s}^j]e[x_s^{i+j}].\end{aligned}\ ] ] similarly , \\ &= \sum_{i=0}^m\sum_{j=0}^n \binom{m}{i}\binom{n}{j}e[u_{1,n , s}^{m - i}u_{2,n , s}^{n - j}v_{1,n , s}^{i}v_{2,n , s}^j]e[g_s^{i+j}].\end{aligned}\ ] ] therefore - e[q_{1,n}({\bf w}^{(s)})^m q_{2,n}({\bf w}^{(s)})^n ] \\ & = \sum_{i+j \ge 3 } \binom{m}{i}\binom{n}{j } e[u_{1,n , s}^{m - i}u_{2,n , s}^{n - j}v_{1,n , s}^{i}v_{2,n , s}^j]\big ( e[x_s^{i+j } ] - e[g_s^{i+j}]\big).\end{aligned}\ ] ] now , observe that propositions 3.11 , 3.12 and 3.16 of imply that both and are uniformly bounded in all spaces .it also implies that , for any , and , ^{1/p } \leq c_p \ , e[v_{k , n , s}^2]^{1/2},\ ] ] where depends only on .hence , for , , , we have \big| \le c\ , e[v_{1,n , s}^2]^{i/2 } \ , e[v_{2,n , s}^2]^{j/2},\ ] ] where does not depend on .since =e[g_i]=0 ] , we get = q_kq_k!\sum_{i_2,\ldots , i_{q_k}=1}^{\infty } a_{k , n}(s , i_2,\ldots , i_{q_k})^2.\ ] ] when , then is bounded from above by where does not depend on , and we get a similar bound when .if , then ( ) , so isz bounded from above by and we have a similar bound when .taking into account assumption we infer that the upper - bound for is of the form where and is independent of .we conclude that - e[q_{1,n}({\bf w}^{(s)})^m q_{2,n}({\bf w}^{(s)})^n ] \big| \\ & & \hskip6 cm \le c \epsilon_n \sum_{k=1}^2 \sum_{i_2,\ldots , i_{q_k}=1}^{\infty } a_{k , n}(s , i_2,\ldots , i_{q_k})^2,\end{aligned}\ ] ] where does not depend on . since , for fixed , in as , by propositions 3.11 , 3.12 and 3.16 of , the convergence holds in all . hence -e[q_{1,n}({\bf g})^mq_{2,n}({\bf g})^n]\big| \\ & \le \sum_{s=1}^{\infty } \big| e[q_{1,n}({\bf w}^{(s-1)})^m q_{2,n}({\bf w}^{(s-1)})^n ] - e[q_{1,n}({\bf w}^{(s)})^m q_{2,n}({\bf w}^{(s)})^n ] \big| \\ & \le c \epsilon_n \sum_{k=1}^2 \sum_{i_1,\ldots , i_{q_k}=1}^{\infty } a_{k , n}(i_1,i_2,\ldots , i_{q_k})^2 = c \big((q_1!)^{-1}+ ( q_2!)^{-1}\big ) \epsilon_n . \end{aligned}\ ] ] this proves ._ step 2_. we show that , -e[q_{1,n}({\bf g})^m]\to 0\quad\mbox{and}\quad e[q_{2,n}({\bf x})^n]-e[q_{2,n}({\bf g})^n]\to 0.\ ] ] the proof is similar to step 1 ( and easier ) .thus , we omit it . _ step 3_. without loss of generality we may and do assume that , where is a standard brownian motion . for and , due to the multiplication formula ( [ multiplication ] ) , is a multiple wiener - it integral of order with respect to : \times\ldots\times [ i_{q_k}-1,i_{q_k } ] } \right).\ ] ] in this setting , condition ( [ mixed ] ) coincides with condition ( ) of theorem [ main ] ( or ( ) of theorem [ mainblock ] ) . therefore , -e[q_{1,n}({\bfg})^m]e[q_{2,n}({\bf g})^n]\to 0.\ ] ] combining ( [ moocsq ] ) , ( [ moocsq2 ] ) and ( [ mixedcsq ] ) we get the desired conclusion ( [ goal ] ). 
the conclusion of theorem [ thmmoo ] may fail if either ( [ influence ] ) or ( [ mixed ] ) are not satisfied .it follows from step 3 above that the theorem fails when does not hold and is gaussian .theorem [ thmmoo ] also fails when ( [ influence ] ) is not satisfied , ( [ mixed ] ) holds , and is a rademacher sequence , as we can see from the following counterexample .consider , and set then and , where are i.i.d .with .it is straightforward to check that ( [ mixed ] ) holds and obviously ( [ influence ] ) is not satisfied . since , we get \neq e[q_{1,n}({\bf x})^2]e[q_{2,n}({\bf x})^2],\ ] ] implying in particular that and are ( asymptotically ) moment - dependent .we are grateful to jean bertoin for useful discussions and to ren schilling for reference .we warmly thank an anonymous referee for suggesting a shorter proof of theorem [ main ] ( which evolved into the proof of a more general statement , theorem [ mainblock ] ) and for useful comments and suggestions , which together with the editor s constructive remarks , have led to a significant improvement of this paper .g. peccati and c.a .tudor ( 2005 ) .gaussian limits for vector - valued multiple stochastic integrals . in : _sminaire de probabilits xxxviii , 247 - 262 .lecture notes in math . *1857 * , springer - verlag , berlin ._ j. rosiski and g. samorodnitsky ( 1999 ) .product formula , tails and independence of multiple stable integrals ._ advances in stochastic inequalities _( atlanta , ga , 1997 ) , 169 - 194 , _ contemp . math ._ * 234 * , amer .soc . , providence , ri . | we characterize the asymptotic independence between blocks consisting of multiple wiener - it integrals . as a consequence of this characterization , we derive the celebrated fourth moment theorem of nualart and peccati , its multidimensional extension , and other related results on the multivariate convergence of multiple wiener - it integrals , that involve gaussian and non gaussian limits . we give applications to the study of the asymptotic behavior of functions of short and long range dependent stationary gaussian time series and establish the asymptotic independence for discrete non - gaussian chaoses . |
we consider the helmholtz equation of the form where is the wave number , represents a harmonic source , is a given data function , and is a spatial function describing the dielectric properties of the medium . here is a polygonal or polyhedral domain in . under the assumption that the time - harmonic behavior is assumed, the helmholtz equation ( [ pde ] ) governs many macroscopic wave phenomena in the frequency domain including wave propagation , guiding , radiation and scattering .the numerical solution to the helmholtz equation plays a vital role in a wide range of applications in electromagnetics , optics , and acoustics , such as antenna analysis and synthesis , radar cross section calculation , simulation of ground or surface penetrating radar , design of optoelectronic devices , acoustic noise control , and seismic wave propagation .however , it remains a challenge to design robust and efficient numerical algorithms for the helmholtz equation , especially when large wave numbers or highly oscillatory solutions are involved . for the helmholtz problem ( [ pde])-([bc ] ) , the corresponding variational form is given by seeking satisfying where and . in a classic finite element procedure ,continuous polynomials are used to approximate the true solution . in many situations ,the use of discontinuous functions in the finite element approximation often provides the methods with much needed flexibility to handle more complicated practical problems .however , for discontinuous polynomials , the strong gradient in ( [ wf ] ) is no longer meaningful .recently developed weak galerkin finite element methods provide means to solve this difficulty by replacing the differential operators by the weak forms as distributions for discontinuous approximating functions .weak galerkin ( wg ) methods refer to general finite element techniques for partial differential equations and were first introduced and analyzed in for second order elliptic equations . through rigorous error analysis , optimal order of convergence of the wg solution in both discrete norm and normis established under minimum regularity assumptions in .the mixed weak galerkin finite element method is studied in .the wg methods are by design using discontinuous approximating functions . in this paper, we will apply wg finite element methods to the helmholtz equation . the wg finite element approximation to ( [ wf ] )can be derived naturally by simply replacing the differential operator gradient in ( [ wf ] ) by a weak gradient : find such that for all we have where and represent the values of in the interior and the boundary of the triangle respectively .the weak gradient will be defined precisely in the next section .we note that the weak galerkin finite element formulation ( [ wg1 ] ) is simple , symmetric and parameter free . to fully explore the potential of the wg finite element formulation ( [ wg1 ] ), we will investigate its performance for solving the helmholtz problems with large wave numbers .it is well known that the numerical performance of any finite element solution to the helmholtz equation depends significantly on the wave number .when is very large representing a highly oscillatory wave , the mesh size has to be sufficiently small for the scheme to resolve the oscillations . 
to keep a fixed grid resolution , a natural rule is to choose to be a constant in the mesh refinement , as the wave number increases , it is known that , even under such a mesh refinement , the errors of continuous galerkin finite element solutions deteriorate rapidly when becomes larger .this non - robust behavior with respect to is known as the `` pollution effect '' . to the end of alleviating the pollution effect ,various continuous or discontinuous finite element methods have been developed in the literature for solving the helmholtz equation with large wave numbers .a commonly used strategy in these effective finite element methods is to include some analytical knowledge of the helmholtz equation , such as characteristics of traveling plane wave solutions , asymptotic solutions or fundamental solutions , into the finite element space . likewise, analytical information has been incorporated in the basis functions of the boundary element methods to address the high frequency problems . on the other hand , many spectral methods , such as local spectral methods , spectral galerkin methods , and spectral element methods have also been developed for solving the helmholtz equation with large wave numbers .pollution effect can be effectively controlled in these spectral type collocation or galerkin formulations , because the pollution error is directly related to the dispersion error , i.e. , the phase difference between the numerical and exact waves , while the spectral methods typically produce negligible dispersive errors .the objective of the present paper is twofold .first , we will introduce weak galerkin methods for the helmholtz equation .the second aim of the paper is to investigate the performance of the wg methods for solving the helmholtz equation with high wave numbers . to demonstrate the potential of the wg finite element methods in solving high frequency problems, we will not attempt to build the analytical knowledge into the wg formulation ( [ wg1 ] ) and we will restrict ourselves to low order wg elements .we will investigate the robustness and effectiveness of such plain wg methods through many carefully designed numerical experiments .the rest of this paper is organized as follows . in section 2, we will introduce a weak galerkin finite element formulation for the helmholtz equation by following the idea presented in .implementation of the wg method for the problem ( [ pde])-([bc ] ) is discussed in section 3 . in section 4, we shall present some numerical results obtained from the weak galerkin method with various orders .finally , this paper ends with some concluding remarks .let be a partition of the domain with mesh size .assume that the partition is shape regular so that the routine inverse inequality in the finite element analysis holds true ( see ) .denote by the set of polynomials in with degree no more than , and , , the set of polynomials on each segment ( edge or face ) of with degree no more than . for and given , we define the weak galerkin ( wg ) finite element space as follows where and are the values of restricted on the interior of element and the boundary of element respectively .since may not necessarily be related to the trace of on , we write . 
for a given , we define another vector space where is the set of homogeneous polynomials of degree and ( see ) .we will find a locally defined discrete weak gradient from this space on each element .the main idea of the weak galerkin method is to introduce weak derivatives for discontinuous functions and to use them in discretizing the corresponding variational forms such as ( [ wf ] ) .the differential operator used in ( [ wf ] ) is a gradient .a weak gradient has been defined in .now we define approximations of the weak gradient as follows . for each , we define a discrete weak gradient on each element such that where is locally defined on each element , and .we will use to denote .then the wg method for the helmholtz equation ( [ pde])-([bc ] ) can be stated as follows .a numerical approximation for ( [ pde ] ) and ( [ bc ] ) can be obtained by seeking such that for all denote by the projection onto , . in other words , on each element , the function is defined as the projection of in and is the projection of in . for equation ( [ pde ] ) with dirichlet boundary condition on ,optimal error estimates have been obtained in . for a sufficiently small mesh size , we can derive following optimal error estimate for the helmholtz equation ( [ pde ] ) with the mixed boundary condition ( [ bc ] ) .let and be the solutions of ( [ wg ] ) and ( [ pde])-([bc ] ) respectively and assume that is convex .then for , there exists a constant such that the proof of this theorem is similar to that of theorem 8.3 and theorem 8.4 in and is very long .since the emphasis of this paper is to investigate the performance of the wg method , we will omit details of the proof .first , define a bilinear form as then ( [ wg ] ) can be rewritten with the methodology of implementing the wg methods is the same as that for continuous galerkin finite element methods except that the standard gradient operator should be replaced by the discrete weak gradient operator . in the following , we will use the lowest order weak galerkin element ( =0 ) on triangles as an example to demonstrate how one might implement the weak galerkin finite element method for solving the helmholtz problem ( [ pde ] ) and ( [ bc ] ) .let and denote , respectively , the number of triangles and the number of edges associated with a triangulation .let denote the union of the boundaries of the triangles of .the procedure of implementing the wg method ( [ wg ] ) consists of the following three steps . 1 .find basis functions for defined in ( [ vh ] ) : where and 0 \quad \mbox{otherwise } , \end{array } \right .\psi_j=\left\ { \begin{array}{l } 1 \quad \mbox{on } \;\; e_j , \\ [ 0.08 in ] 0 \quad \mbox{otherwise } , \end{array } \right.\ ] ] for and .please note that and are defined on whole .+ 2 . substituting into ( [ wg8 ] ) and letting in ( [ wg8 ] )yield where and are the values of on the interior of the triangle and the boundary of the triangle respectively . in our computations ,the integrations on the right - hand side of ( [ sys ] ) are conducted numerically . 
in particular , a 7-points two - dimensional gaussian quadrature and a 3-points one - dimensional gaussian quadrature are employed , respectively , to calculate and numerically .form the coefficient matrix of the linear system ( [ sys ] ) by computing all integrations in ( [ bilinear ] ) are carried out analytically .finally , we will explain how to compute the weak gradient for a given function when .for a given , we will find , for example , we can choose as follows thus on each element , . using the definition of the discrete weak gradient ( [ d - g ] ) ,we find by solving the following linear system : the inverse of the above coefficient matrix can be obtained explicitly or numerically through a local matrix solver . for the basis function , is nonzero on only one or two triangles .in this section , we examine the wg method by testing its accuracy , convergence , and robustness for solving two dimensional helmholtz equations . the pollution effect due tolarge wave numbers will be particularly investigated and tested numerically . for convergence tests ,both piecewise constant and piecewise linear finite elements will be considered . to demonstrate the robustness of the wg method , the helmholtz equation in both homogeneous and inhomogeneous media will be solved on convex and non - convex computational domains . the mesh generation and all computationsare conducted in the matlab environment . for simplicity ,a structured triangular mesh is employed in all cases , even though the wg method is known to be very flexible in dealing with various different finite element partitions .two types of relative errors are measured in our numerical experiments .the first one is the relative error defined by the second one is the relative error defined in terms of the discrete gradient numerically , the -semi - norm will be calculated as for the lowest order finite element ( i.e. , piecewise constants ) . for piecewise linear elements , we use the original definition of to compute the -semi - norm .cc we first consider a homogeneous helmholtz equation defined on a convex hexagon domain , which has been studied in .the domain is the unit regular hexagon domain centered at the origin , see fig .[ fig.domain ] ( left ) .here we set and in ( [ pde ] ) , where . the boundary data in the robin boundary condition ( [ bc ] )is chosen so that the exact solution is given by where are bessel functions of the first kind .let denote the regular triangulation that consists of triangles of size , as shown in fig .[ fig.domain ] ( left ) for ..convergence of piecewise constant wg for the helmholtz equation on a convex domain with wave number .[ cols= " < , < , < ,< , < " , ] we finally investigate the performance of the wg method for the helmholtz equation with large wave numbers . as discussed above , without resorting to high order generalizations or analytical / special treatments , we will examine the use of the plain wg method for tackling the pollution effect .the homogeneous helmholtz problem of the subsection [ convex ] will be studied again .also , the and elements are used to solve the homogeneous helmholtz equation with the robin boundary condition . 
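Before reporting those results, the local weak-gradient solve of implementation step 3 above can be made concrete with a small sketch. Since the precise definition of the local gradient space W(T) is garbled in this extraction, the code below assumes, for illustration only, the three-dimensional RT0-type space spanned by (1,0), (0,1) and (x - xc, y - yc) on each triangle, which is consistent with the 3x3 local system mentioned in the text; it returns the weak-gradient coefficients of the interior basis function phi (v0 = 1 on T, vb = 0) and of the three edge basis functions psi_e (vb = 1 on one edge, zero elsewhere).

```python
import numpy as np

def weak_gradients_lowest_order(tri):
    """
    Discrete weak gradients on one triangle for the lowest-order WG element,
    expressed in the (assumed) local basis q1=(1,0), q2=(0,1), q3=(x-xc, y-yc).
    tri : (3, 2) array of vertex coordinates.
    Returns (g_phi, g_psi) with the coefficients for the interior basis and
    for the three edge bases.
    """
    p = np.asarray(tri, dtype=float)
    edges = [(1, 2), (2, 0), (0, 1)]                      # edge k opposite vertex k
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
    xc = p.mean(axis=0)                                   # centroid

    # local "mass" matrix M_ij = int_T q_i . q_j dx; off-diagonal terms vanish
    # because int_T (x - xc) dx = 0, and int_T |x - xc|^2 dx is evaluated with
    # the edge-midpoint rule, which is exact for quadratics.
    mids = np.array([(p[a] + p[b]) / 2.0 for a, b in edges])
    M = np.diag([area, area, area / 3.0 * np.sum((mids - xc) ** 2)])

    # interior basis: rhs_j = -int_T div q_j dx (div q1 = div q2 = 0, div q3 = 2)
    g_phi = np.linalg.solve(M, np.array([0.0, 0.0, -2.0 * area]))

    # edge bases: rhs_j = int_e q_j . n ds with outward unit normal n;
    # (x - xc) . n is constant along a straight edge, so the integral is exact.
    g_psi = []
    for (a, b), mid in zip(edges, mids):
        t = p[b] - p[a]
        length = np.linalg.norm(t)
        n = np.array([t[1], -t[0]]) / length
        if np.dot(n, mid - xc) < 0.0:                     # orient n outward
            n = -n
        rhs = np.array([n[0] * length, n[1] * length,
                        np.dot(n, mid - xc) * length])
        g_psi.append(np.linalg.solve(M, rhs))
    return g_phi, np.array(g_psi)

g_phi, g_psi = weak_gradients_lowest_order([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print("weak gradient of the interior basis :", g_phi)
print("weak gradients of the edge bases    :\n", g_psi)
```

On the reference triangle this gives a weak gradient of the interior basis proportional to (x - xc, y - yc), and one can verify directly that each returned coefficient vector satisfies the defining relation for the discrete weak gradient against every test field in the assumed local space.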
since this problem is defined on a structured hexagon domain , a uniform triangular mesh with a constant mesh size throughout the domain is used .this enables us to precisely evaluate the impact of the mesh refinements .following the literature works , we will focus only on the relative semi - norm in the present study .cc to study the non - robustness behavior with respect to the wave number , i.e. , the pollution effect , we solve the corresponding helmholtz equation by using piecewise constant wg method with various mesh sizes for four wave numbers , , , and , see fig .[ fig.kh ] ( left ) for the wg performance . from fig .[ fig.kh ] ( left ) , it can be seen that when is smaller , the wg method immediately begins to converge for the cases and .however , for large wave numbers and , the relative error remains to be about % , until becomes to be quite small or is large .this indicates the presence of the pollution effect which is inevitable in any finite element method . in the same figure, we also show the errors of different values by fixing .surprisingly , we found that the relative error does not evidently increase as becomes larger .the convergence line for looks almost flat , with a very little slope .in other words , the pollution error is very small in the present wg result .we note that such a result is as good as the one reported in by using a penalized discontinuous galerkin approach with optimized parameter values .in contrast , no parameters are involved in the wg scheme . on the other hand, the good performance of the wg method for the case does not mean that the wg method could be free of pollution effect .in fact , it is known theoretically that the pollution error can not be eliminated completely in two- and higher - dimensional spaces for galerkin finite element methods . in the right chart of fig .[ fig.kh ] , we examine the numerical errors by increasing , under the constraint that is a constant . huge wave numbers , up to , are tested .it can be seen that when the constant changes from to and , the non - robustness behavior against becomes more and more evident .however , the slopes of =constant lines remain to be small and the increment pattern with respect to is always monotonic .this suggests that the pollution error is well controlled in the wg solution .cc c + + cc c + in the rest of the paper , we shall present some numerical results for the wg method when applied to a challenging case of high wave numbers . in fig .[ fig.solu2d ] and [ fig.solu2d_linear ] , the wg numerical solutions are plotted against the exact solution of the helmholtz problem . herewe take a wave number and mesh size which is relatively a coarse mesh .with such a coarse mesh , the wg method can still capture the fast oscillation of the solution .however , the numerically predicted magnitude of the oscillation is slightly damped for waves away from the center when piecewise constant elements are employed in the wg method .such damping can be seen in a trace plot along -axis or . 
to see this, we consider an even worse case with and .the result is shown in the first chart of fig .[ fig.solu1d ] .we note that the numerical solution is excellent around the center of the region , but it gets worse as one moves closer to the boundary .if we choose a smaller mesh size , the visual difference between the exact and wg solutions becomes very small , as illustrate in fig .[ fig.solu1d ] .if we further choose a mesh size , the exact solution and the wg approximation look very close to each other .this indicates an excellent convergence of the wg method when the mesh is refined .in addition to mesh refinement , one may also obtain a fast convergence by using high order elements in the wg method .figure [ fig.solu1d_linear ] illustrates a trace plot for the case of and when piecewise linear elements are employed in the wg method .it can be seen that the computational result with this relatively coarse mesh captures both the fast oscillation and the magnitude of the exact solution very well .the present numerical experiments indicate that the wg method as introduced in is a very promising numerical technique for solving the helmholtz equations with large wave numbers .this finite element method is robust , efficient , and easy to implement . on the other hand , a theoretical investigation for the wg methodshould be conducted by taking into account some useful features of the helmholtz equation when special test functions are used .it would also be valuable to test the performance of the wg method when high order finite elements are employed to the helmholtz equations with large wave numbers in two and three dimensional spaces . finally , it is appropriate to clarify some differences and connections between the wg method and other discontinuous finite element methods for solving the helmholtz equation .discontinuous functions are used to approximate the helmholtz equation in many other finite element methods such as discontinuous galerkin ( dg ) methods and hybrid discontinuous galerkin ( hdg ) methods . however , the wg method and the hdg method are fundamentally different in concept and formulation .the hdg method is formulated by using the standard mixed method approach for the usual system of first order equations , while the key to the wg is the use of the discrete weak differential operators . for a second order ellipticproblem , these two methods share the same feature by approximating first order derivatives or fluxes through a formula that was commonly employed in the mixed finite element method . for high order partial differential equations ( pdes ) ,the wg method is greatly different from the hdg .consider the biharmonic equation as an example .the first step of the hdg formulation is to rewrite the fourth order equation to four first order equations . in contrast, the wg formulation for the biharmonic equation can be derived directly from the variational form of the biharmonic equation by replacing the laplacian operator by a weak laplacian and adding a parameter free stabilizer .it should be emphasized that the concept of weak derivatives makes the wg a widely applicable numerical technique for a large variety of pdes which we shall report in forthcoming papers . 
for the helmholtz equation studied in this paper , the wg method andthe hdg method yield the same variational form for the homogeneous helmholtz equation with a constant in ( [ pde ] ) .however , the wg discretization differs from the hdg discretization for an inhomogeneous media problem with being a spatial function of and .moreover , the wg method has an advantage over the hdg method when the coefficient is degenerated .i. babuka , f. ihlenburg , e.t .paik , s.a .sauter , a generalized finite element method for solving the helmholtz equation in two dimensions with minimal pollution ._ computer methods in applied mechanics and engineering _ 1995 ; * 128 * : 325 - 359 .i. babuka , s.a .sauter , is the pollution effect of the fem avoidable for the helmholtz equation considering high wave number ?_ siam journal on numerical analysis _ 1997 ; * 34 * : 2392 - 2423 . reprinted in _ siam review _ 2000 ; * 42 * : 451 - 484 . f. ihlenburg , i. babuka , dispersion analysis and error estimation of galerkin finite element methods for the helmholtz equation ._ international journal for numerical methods in engineering _ 1995 ; * 38 * : 3745 - 3774 .f. ihlenburg , i. babuka , finite element solution of the helmholtz equation with high wavenumber part ii : the --version of the fem ._ siam journal of numerical analysis _ 1997 ; * 34 * : 315 - 358 . | a weak galerkin ( wg ) method is introduced and numerically tested for the helmholtz equation . this method is flexible by using discontinuous piecewise polynomials and retains the mass conservation property . at the same time , the wg finite element formulation is symmetric and parameter free . several test scenarios are designed for a numerical investigation on the accuracy , convergence , and robustness of the wg method in both inhomogeneous and homogeneous media over convex and non - convex domains . challenging problems with high wave numbers are also examined . our numerical experiments indicate that the weak galerkin is a finite element technique that is easy to implement , and provides very accurate and robust numerical solutions for the helmholtz problem with high wave numbers . galerkin finite element methods , discrete gradient , helmholtz equation , large wave numbers , weak galerkin primary , 65n15 , 65n30 , 76d07 ; secondary , 35b45 , 35j50 |
in recent years much progress has been made in the field of freeform surface design without the assumption of symmetries .the goal of these design methods is the solution of the so called inverse problem of nonimaging optics .this means that for given _ arbitrary _ source and target intensities and one or more freeform surfaces have to be calculated , which map the intensities onto each other . especially the design of _ continuous _ freeform surfaces , on which we will concentrate on in the following , is a challenging problem and of great interest for practical applications .+ the first successful method at calculating a continuous freeform surface utilizing a complex target intensity was developed by ries and muschaweck , but unfortunately the numerical method was not published .their approach is able to handle the far field design problem for single freeform surfaces illuminated by a point source .+ nowadays , many other methods have been developed by different research groups . a quite popular approachare the so - called monge - ampre methods .they are based on the modelling of the design problem by a nonlinear partial differential equation of monge - ampre type and solving it with sophisticated numerical techniques .these methods are able to handle the design problem in the far field for intensity control of point sources and collimated beams as well as intensity and phase control with double freeform surfaces . +another popular approach for the single freeform surface design with point sources is the supporting ellipsoids method developed by oliker . with this methoda freeform mirror is constructed by putting a point source in the focal point of a unification of ellipoids , whereby every ellipsoid has a different position of the second focal point on the target plane to build the required intensity pattern .the challenge of this method is the calculation of a smooth freeform surface by the unification of the ellipsoids .therefore it was further developed by other research groups and generalized to calculate freeform lenses .it can handle the far field as well as the near field design problem .+ also quite often used are ray - mapping techniques , which are frequently based on the calculation of a ray mapping between the source and the target intensity and a subsequent construction of the freeform surface .the aim and challenging part of these methods is to find an _ integrable _ ray mapping , which allows the calculation of a _ continuous _ surface .+ the approach we will concentrate on is a subgroup of the ray - mapping techniques , which are called optimal mass transport ( omt ) methods .these gained some interest in recent years and are partly based on the mathematical concept of optimal mass transport as explained in the following paragraph .they can handle the design problem of a single and double freeform surfaces for intensity control as well as double freeform surfaces for intensity and phase control .the connection between mass transport and freeform design was also discussed in . +this approach for the freeform surface design consists of two separate steps . in the first step a ray mapping betweenthe given input and output intensities is calculated via omt . in the second stepthe freeform surface is constructed with the help of the law of refraction / reflection and the well - known integrability condition , which ensures the continuity of the surface .+ the difference between the omt methods mentioned above is the second step . 
in first approach by buerle _the freeform is constructed by an optimization procedure , while the second approach by feng _et al . _ uses a simultaneous point - by - point construction method to design a double freeform surface .+ since these attempts seem to be quite successful but do not give theoretical insights about the integrability of the omt map , we want to clarify this point in our work for a single freeform illuminated by a collimated beam .this will be done by deriving a condition for an integrable map and showing that it can be fulfilled ( approximately ) by the omt map .based on our findings , we present an efficent and easy - to - implement numerical freeform surface construction technique differing from the previously published omt methods . to do so this paper is structured as follows .+ in section [ sec : design method ] after a short introduction to the omt and a presentation of its basic properties , we will derive from the law of reflection / refraction and the integrability condition a _ general _ condition for an integrable ray mapping and its corresponding surface for collimated input beams .it will be shown that in a small - angle approximation this condition can be fulfilled by using an omt mapping and therefore the freeform surface design process indeed decouples into the two steps described above .thereby it will be shown that the freeform surface can be constructed from a linear advection equation with appropriate boundary conditions . in section [ sec : numerical algorithm ] we then argue that the advection equation for the freeform construction can be solved by simple integrations , which is different to the omt freeform design methods mentioned above and implies the approximate integrability over a wide range of freeform - target plane distances . the efficiency of this approach is then demonstrated in section [ sec : examples ] by applying it to two challenging design examples , followed by a discussion of our results in section [ sec : conclusion ] .the problem statement of omt , also called the monge - kantorovich problem , is as follows : two positive density functions and with have to be mapped onto each other according to the jacobi equation with a smooth , bijective mapping .if is defined as the set of mappings fulfilling equation ( [ eq:2 ] ) , we are searching for a mapping minimizing the transport cost according to the kantorovich - wasserstein distance whereby denotes the mapping for which the integral has its minimal value .this mapping , which is defined by ( [ eq:1 ] ) , ( [ eq:2 ] ) and ( [ eq:3 ] ) , has the useful property that it is unique and it is characterized by a vanishing curl .the latter property will be important for our findings in the next subsection .+ in the special case of freeform surface design considered here , the densities and correspond to the source and target intensities with the units ( see fig . [fig : geometrie ] ) . therefore equation ( [ eq:1 ] ) describes a global and equation ( [ eq:2 ] ) a local energy conservation .+ for the numerical examples in section [ sec : examples ] , we have implemented the omt method developed by sulman _it provides a good compromise between an easy implementation and an efficient mapping calculation and is thus sufficient for our test purposes .+ the result of this design process step is therefore the mapping , but we have to keep in mind , that it is not obvious at this point , whether the omt mapping is integrable and if it is , for which lens - target distances ( see fig . 
[fig : geometrie ] ) this is the case .this point will be clarified in the next subsection . in the following ,we want to derive a differential equation for the direct calculation of a freeform surface for a given _ general _ ray mapping .our derivations will lead to a condition an _ integrable _ mapping and its corresponding freeform surface have to fulfill .as we will show , this condition can be fulfilled approximately over a wide range of lens - target distances by the omt mapping defined in the previous subsection .+ to do so , two basic equations are considered . on the one hand , for an incoming beam described by the ray direction vector field and the refracted vector field , the law of refraction with the refractive indices of the lens and of the surrounding medium , has to be fulfilled . on the other hand , we want to ensure the continuity of the surface by the well - known integrability condition since the collimated beam as well as can be expressed in terms of the unknown freeform surface and the given ray mapping ( see fig .[ fig : geometrie ] ) : the equations ( [ eq:4 ] ) and ( [ eq:5 ] ) represent a differential equation for . plugging ( [ eq:4 ] ) into equation ( [ eq:5 ] ) the integrability condition can be written in the form ( see appendix a ) \}_z } { \mathbf{n}\cdot\mathbf{s}_2 } + \mathbf{s}_2(\nabla \times \mathbf{s}_3),\ ] ] which holds for a _ general _ ray mapping .equation ( [ eq:7 ] ) is organized in a way that only the left - hand side ( lhs ) depends on the derivatives of .equation ( [ eq:7 ] ) takes a more familiar form by inserting the vector fields ( [ eq:6 ] ) , which leads to }{\mathbf{n}\cdot \mathbf{s}_2}-(z_t- z(x , y ) ) \nabla \mathbf{v},\ ] ] with the velocity field , the identity vector and .this equation is a semilinear two dimensional advection equation , whereby the unknown surface corresponds to conserved transport quantity and the right - hand side ( rhs ) to a source term .+ in principle one could try to solve equation ( [ eq:8 ] ) after applying suitable boundary conditions , but as it will be demonstrated , it is not appropriate for finding a continuous freeform surface .this can be seen easily by considering the condition that the normalized vector field ( [ eq:4 ] ) has to be equal to the gradient of the surface : plugging this relation into the lhs of ( [ eq:8 ] ) , we get , which can only be fulfilled if the rhs vanishs : }{\mathbf{n}\cdot \mathbf{s}_2}-(z_t- z(x , y ) ) \nabla \mathbf{v } \stackrel{!}{\equiv}0.\ ] ] the importance of this condition is due to the fact that is has to hold for _ every _ integrable ray mapping . since and therefore it reflects the nature of the law of refraction , that according to the definitions ( [ eq:6 ] ) and ( [ eq:9 ] ) the vectors , and have to lie in the same plane . + we now know , that the source term of ( [ eq:8 ] ) has to vanish , but we are still left with the question , if we can find a way to fullfil condition ( [ eq:10 ] ) .the main task is obviously to find a ray mapping for which relation ( [ eq:10 ] ) holds , which is nontrivial , since it couples the mapping with the unknown function . 
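Before turning to the role of the OMT map in fulfilling this condition, the transport step itself, the global and local energy conservation stated above, is easy to illustrate in one dimension, where the optimal map reduces to the monotone rearrangement of the cumulative energies. The sketch below uses made-up source and target intensities and only verifies the 1D Jacobian equation I1(x) = I2(u(x)) u'(x) up to interpolation error; the two-dimensional maps used later in this paper are instead computed with the parabolic Monge-Kantorovich method of Sulman et al. cited above.

```python
import numpy as np

def cumtrapz(y, x):
    """Cumulative trapezoidal integral with a leading zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

# illustrative (made-up) source and target intensities on [0, 1]
x = np.linspace(0.0, 1.0, 2001)
I1 = np.ones_like(x)                             # uniform collimated input
I2 = 1.0 + 0.8 * np.sin(2.0 * np.pi * x) ** 2    # modulated target
F1, F2 = cumtrapz(I1, x), cumtrapz(I2, x)
I2 *= F1[-1] / F2[-1]                            # enforce global energy conservation
F2 *= F1[-1] / F2[-1]

# 1D optimal transport map: monotone rearrangement u = F2^{-1}(F1)
u = np.interp(F1, F2, x)

# local energy conservation (the 1D Jacobian equation): I1(x) = I2(u(x)) * u'(x)
du = np.gradient(u, x)
residual = I1 - np.interp(u, x, I2) * du
print("max |I1 - I2(u(x)) u'(x)| =", np.abs(residual).max())
```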
+but if we use the omt mapping , it follows from its vanishing curl that .if we use in addition to that , the small - angle approximation \ ] ] between the surface normal and the outgoing ray , we see that the condition ( [ eq:10 ] ) can be fulfilled approximately by using an omt map .because of the fact that is locally proportional to in contrast to the rhs , ( [ eq:11 ] ) can be interpreted as a far field approximation .this also implies , that the integrability of omt map is only asymptotically exact .+ hence for an omt mapping and the small - angle approximation , we get our final equation which has to be solved to get the required freeform surface .+ if we want to solve a linear advection equation like ( [ eq:12 ] ) , we have to know the function on the inflow part of the boundary , where the velocity field points into the integration area .because of energy conservation , this area is defined as and therefore its inflow part by with the outward boundary normal . together with this boundary condition equation ( [ eq:12 ] ) has at most one solution .+ in our case the boundary conditions can be deduced in the following way .first , we have to realize , that for an incoming collimated beam the boundary of the freeform surface can only determine the tangential deflection of a ray refracted at the boundary .the normal deflection is determined by the inner parts of the surface .therefore it makes sense to parameterize the boundary by a parameter and define the local coordinate system at each point of the boundary by the vectors(see fig .[ fig : boundary ] ) since only determines the tangential deflection , it is sufficient to consider the law of refraction ( [ eq:4 ] ) in the tangential plane spanned by and . hence , we can interpret the boundary value calculation as a two dimensional problem which allows us to derive a differential equation for the boundary values by the two dimensional equivalent of equation ( [ eq:9 ] ) projected on the plane where path length was introduced for dimensional reasons . from thiswe get : which reduces in the far field to whereby the position of the surface in space compared to the target plane is fixed by integration constant . since ( [ eq:12 ] ) itself can be interpreted as a far field approximation , as explained above , equation ( [ eq:16 ] )seems more suitable for our purposes .it provides us with a simple way of calculating the boundary values .the only degree of freedom left is the integration constant .equation ( [ eq:16 ] ) will build the basis of the numerical algorithm for solving equation ( [ eq:12 ] ) presented in the next section .+ at the end of this section , we want to note that ( [ eq:12 ] ) and ( [ eq:16 ] ) can be derived analogously for freeform mirrors by replacing ( [ eq:4 ] ) by the law of reflection and keeping in mind that . is parameterized by . 
at each point of the boundary the local coordinate systemis spanned by the tangential vector and normal vector to the boundary as well as the unit vector .since the boundary values of the freeform surface only determine the tangential deflection of the rays hitting the boundary , the projection of the law of refraction ( [ eq:4 ] ) on the plane can be used for the calculation of the boundary values.,width=207 ]we could solve equation ( [ eq:12 ] ) by standard computational fluid dynamic approaches , like finite volume methods , which are appropriate for the numerical treatment of linear advection equations .based on the nature of equation ( [ eq:16 ] ) , a different approach is proposed in the following . + considering ( [ eq:16 ] ), we recognize that the boundary values are calculated by the velocity field itself .this is in contrast to the usual fluid dynamical framework and allows us to separate into an arbitrary number of subareas for each of which we can calculate the boundary values by ( [ eq:16 ] ) .therefore , the freeform surface can be calculated on each subarea and the solution on by their unification .this implies that the freeform surface can be constructed by an integration of the omt map along arbitrary paths on , which characterizes the integrability of ray mappings . thus according to ( [ eq:12 ] ) and ( [ eq:16 ] )the omt map is approximately integrable as long as ( [ eq:11 ] ) holds .+ hence , the most convenient way to get the solution of ( [ eq:12 ] ) seems to be a line - by - line integration of ( [ eq:16 ] ) , which along the - and -direction is equivalent to equation ( [ eq:9 ] ) in a far field approximation .one possible way of integrating ( [ eq:16 ] ) is shown as an example in fig .[ fig : integration ] .thus , only the integration constant of one integral has to be fixed from which the others follow automatically . +the proposed approach has the useful feature that we do not need to parameterize the boundary , which allows the calculation of freeform surfaces with complex boundary shapes .the efficiency of the line - by - line integration approach is shown in the next chapter for two challenging design examples . ) by the simple integration of ( [ eq:16 ] ) along straight lines .first the value is fixed and used for the integration along the green line .the values of on the green line serve as starting values for the line - by - line integration along the red lines in the orthogonal direction. then the blue lines are integrated by using the values of on the last red line.,width=207 ]to show the efficiency of the algorithm , we want to apply it to two design examples . in the first one , we will calculate a freeform lens that maps a collimated beam of uniform intensity on the logo of the institute of applied physics ( iap ) in jena with a resolution of 500 x 500 pixels ( see fig .[ fig : iap ] ) .it shows strong intensity gradients between the letters and the background . to omit a division by zero within the implemented omt algorithm we have to use a background intensity for the input and output intensities . 
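The construction step therefore amounts to integrating a known gradient field for z along straight lines, first across one boundary row and then down every column, exactly as in the integration scheme of the figure. The following sketch implements this generic line-by-line reconstruction and verifies it on a synthetic surface; in the actual design the gradient field would be obtained from the OMT map through the far-field relation ( [ eq:16 ] ), whose constants are not reproduced here, so the test gradient below is an assumed analytic profile rather than the paper's formula.

```python
import numpy as np

def build_surface(gx, gy, x, y, z0=0.0):
    """
    Line-by-line reconstruction of z(x, y) from a gradient field (gx, gy):
    fix z at one corner, integrate along the first row (x-direction), then
    integrate every column (y-direction) starting from that row.
    gx, gy : arrays of shape (ny, nx) holding dz/dx and dz/dy on the grid.
    """
    ny, nx = gx.shape
    z = np.zeros((ny, nx))
    dx, dy = np.diff(x), np.diff(y)
    z[0, 0] = z0
    # first row: cumulative trapezoidal integration of dz/dx
    z[0, 1:] = z0 + np.cumsum(0.5 * (gx[0, 1:] + gx[0, :-1]) * dx)
    # all columns: cumulative trapezoidal integration of dz/dy
    z[1:, :] = z[0, :] + np.cumsum(0.5 * (gy[1:, :] + gy[:-1, :]) * dy[:, None],
                                   axis=0)
    return z

# synthetic test: reconstruct a known smooth surface from its analytic gradient
x = np.linspace(-0.5, 0.5, 201)
y = np.linspace(-0.5, 0.5, 201)
X, Y = np.meshgrid(x, y)
z_exact = 0.05 * np.exp(-4.0 * (X**2 + Y**2))         # made-up "freeform" profile
gx = -0.4 * X * np.exp(-4.0 * (X**2 + Y**2))          # analytic dz/dx
gy = -0.4 * Y * np.exp(-4.0 * (X**2 + Y**2))          # analytic dz/dy
z = build_surface(gx, gy, x, y, z0=z_exact[0, 0])
print("max reconstruction error:", np.abs(z - z_exact).max())
```

Because only cumulative trapezoidal sums are involved, the cost is linear in the number of grid points, which is consistent with the sub-second construction times reported for the 500 x 500 examples in the next section.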
for an appropriate speed of convergence the background intensityis set to per cent of the maximum intensity .the second example , which shows smooth intensity variations and a lot of details , is the well - known picture of lena with a resolution of 500 x 500 pixels ( see fig .[ fig : iap ] ) .+ since the freeforms were calculated by integrating equation ( [ eq:16 ] ) , the specific characteristics of both pictures do not have any influence on the speed of the lens construction step explained in section [ sec : numerical algorithm ] , but they increase the mapping calculation time . +the pictures both have a square format .hence , it is convenient to integrate first along the upper side of the square region and then line - by - line in the orthogonal direction with the starting values given by the first integration .therefore we have to solve 501 integrals , which took in both cases less than one second in matlab on an intel core i3 at 2.4ghz with 16 gb ram .this time has to be added up with the mapping calculation time , which strongly depends on the implemented method and the specific features of the picture .+ the integration constant on the upper left side of the integration area was chosen to be .values of and were used for the refractive indices , and the source - target distance was chosen to be .the input and output beam as well as the freeform lens had side length of one .since every spatial value is normalized the results are scalable .+ at this point we want to note that according to ( [ eq:16 ] ) the validity of the approximation ( [ eq:11 ] ) can simply be checked by scaling the numerical results with , which of course has to be done for each intensity and configuration individualy . for our examplesthe quality of the illumination patterns produced by the raytracing did not change significantly even for distances between the lens and the target plane smaller than the side length of the lens .+ in both cases the calculated freeform lens was imported as a grid sag surface into zemax to verify our results by a raytracing simulation .the imported lens data are interpolated by zemax automatically .the results can be seen in fig .[ fig : iap ] . +we presented an efficient numerical method for the single freeform surface design for the shaping of collimated beams .it is based on the derivation of the condition ( [ eq:10 ] ) for integrable mappings by combining the law of reflaction / refraction and the well - known integrability condition in a suitable way .we showed that the condition can be fulfilled in the small - angle approximation ( [ eq:11 ] ) by using a ray mapping calculated by optimal mass transport .equation ( [ eq:11 ] ) therefore represents a quantitative estimate for the applicability of the omt map .this serves as a theoretical basis for the decoupling of the design process into two separate steps : the calculation of the omt mapping and the construction of the freeform surface with a linear advection equation . on the basis of the finding of appropriate boundary conditions for the advection equation, we presented a simple numerical algorithm for the surface construction by solving the standard integrals ( [ eq:16 ] ) , which differs from the previously published omt freeform design methods .+ besides its simplicity , accuracy and quickness , a useful feature of the construction technique is the independence of the freeform boundary shape ( see fig .[ fig : integration ] ) . 
by using a proper method for the calculation of the omt mapping ,this allows for example the calculation of a freeform for disconnected intensities or source and target intensties with two concave boundaries , which is in general a nontrivial problem , but important for applications .+ the results imply the approximate integrability of the omt map over a wide range of freeform - target plane distances and gives the omt freeform design methods a theoretical basis for collimated input beams .especially the condition ( [ eq:10 ] ) is interesting , also for other ray mapping techniques than omt methods , since it has to hold for every integrable mapping .hence , it opens up new possibilities for nearfield calculations as well as generalizations to it for point sources and double freeforms , which are currently under work .we want to derive equation ( [ eq:8 ] ) from the law of refraction ( [ eq:4 ] ) and the integrability condition ( [ eq:5 ] ) .since the curl of the incident field vanishs , plugging ( [ eq:4 ] ) into ( [ eq:5 ] ) gives with \\ \nonumber\ ] ] it follows }_{=[\mathbf{s}_2(\nabla \times \mathbf{s}_2)]\mathbf{s}_2-|\mathbf{s}_2 |^2 ( \nabla \times \mathbf{s}_2 ) } + \mathbf{s}_2 \times [ ( \mathbf{s}_2\nabla)\mathbf{s}_2 ] \}\ ] ] and we can write ( [ eq : a.1 ] ) as \mathbf{s}_2 + \mathbf{s}_2 \times [ ( \mathbf{s}_2\nabla)\mathbf{s}_2 ] \right\ } = 0.\ ] ] inserting ( [ eq:4 ] ) and using and leads to + n_1 \{\mathbf{s}_2 \times[ ( \mathbf{s}_2\nabla)\mathbf{s}_2 ] \}_z = 0\ ] ] and with it follows \}_z } { \mathbf{n}\cdot\mathbf{s}_2 } + \mathbf{s}_2(\nabla \times \mathbf{s}_3).\ ] ] using the definition and ( [ eq:6 ] ) the terms in equation ( [ eq : a.5 ] ) can be written as and \}_z & = ( \mathbf{s}_2)_x \cdot [ ( \mathbf{s}_2 \cdot \nabla)\mathbf{s}_2]_y - ( \mathbf{s}_2)_y \cdot [ ( \mathbf{s}_2 \cdot \nabla)\mathbf{s}_2]_x \\ & = \begin{pmatrix } -(\mathbf{s}_2)_y \\( \mathbf{s}_2)_x \end{pmatrix}\cdot \left[(\mathbf{s}_2 \cdot \nabla ) \begin{pmatrix } ( \mathbf{s}_2)_x \\( \mathbf{s}_2)_y \end{pmatrix } \right ] \\ & = \mathbf{v}\cdot \left[\left(\mathbf{v}^{\perp}\cdot \nabla \right)\mathbf{v}^{\perp}\right ] \end{split}\ ] ] and \\ & = -(z_t - z(x , y))\nabla \mathbf{v}. \end{split}\ ] ]the authors thank m. esslinger , and m. tessmer for valuable discussions , r. hambach and s. schmidt for valuable discussions and comments on the manuscript , c. liu and d. lokanathan for the help with the zemax implementation and d. musick for the spelling and grammar check .we also acknowledge the federal ministry of education and research germany for financial support through the project kosimo ( fkz:031pt609x ) .r. wu , y. zhang , m.m .sulman , z. zheng , p. bentez , and j.c .miano , `` initial design with l2 monge - kantorovich theory for the monge ampre equation method in freeform surface illumination design , '' opt . express * 22*(13 ) , 1616116177 ( 2014 ) v.i .oliker , `` mathematical aspects of design of beam shaping surfaces in geometrical optics , '' in _ trends in nonlinear analysis _ , m. kirkilionis , s. kromker , r. rannacher , and f. tomi , eds .( springer - verlag , 2003 ) , pp .193 - 222 .v. oliker , and b. cherkasskiy , `` controlling light with freeform optics : recent progress in computational methods for optical design of freeform lenses with prescribed irradiance properties , '' proc .spie * 9191 * , 919105919105 - 7 ( 2014 ) . 
| the efficient design of continuous freeform surfaces , which transform a given source into an arbitrary target intensity , remains a challenging problem . a popular approach are ray - mapping methods , where first a ray mapping between the intensities is calculated and in a subsequent step the surface is constructed . the challenging part hereby is the to find an _ integrable _ mapping ensuring a _ continuous _ surface . based on the law of reflection / refraction and the well - known integrability condition , we derive a general condition for the surface and ray mapping for a collimated input beam . it is shown that in a small - angle approximation a proper mapping can be calculated via optimal mass transport . we show that the surface can be constructed by solving a linear advection equation with appropriate boundary conditions . the results imply that the optimal mass transport mapping is approximately integrable over a wide range of distances between the freeform and the target plane and offer an efficient way to construct the surface by solving standard integrals . the efficiency is demonstrated by applying it to two challenging design examples . pacs numbers : : ` 42.15.-i , 42.15.eq ` . |
since the introduction of the k nearest neighbor ( knn ) method by fix and hodges in 1951 a lot of different variants of it have appeared in order to make it suitable to different scenarios .the most notable improvements were done in terms of adaptive distance metrics , fast access via space partitioning ( packed r * trees , kd - trees , x - trees , spy - tec ) , knowledge base ( prototype ) pruning ( , ) or classification based on sensitive distributed data ( ,, ) .an overview over the state - of - the - art in nearest neighbor techniques is given in .the continuing richness of investigation work into nearest neighbor can be explained with the omnipresence of cbr ( case based reasoning ) type of problems or just from the practical point of view of its massive parallelizability or simply populartiy .after all nearest neighbor is conceptually easy to understand - many students get to learn the nearest neighbor as the first classifier .all of the beforehand mentioned advances evolve around the question of how to optimize the knn s distance measure for retrieving .the method s apparent laziness might be the reason why a fast preoptimization of has not been paid a lot attention to .the proper choice of is an important factor for achieving maximum performance of a knn .however , as we will show , conventional optimization via cross - validation or bootstrapping is slow and promises potential for being sped up .therefore , we will devote this paper to the concept of fast optimization .there is work addressing this issue in an alternative way by introducing _incremental _ knn classifiers based on different types of trees . this class of nearest neighbors attempts to eliminate the influence of k by choosing the right ad hoc .the rationale behind this method is that for classification tasks the exact majority of a specific class label is not necessarily interesting .it is only interesting when nearest neighbor is used as a density estimator . in other casesit is generally enough to feel safe about which class label rules the nearest set .this class of nearest neighbor starts by polling a minimal amount of nearest neighbors .then it analyzes the labels and if the retrieved collection of labels is indecisive it will poll more nearest neighbors until it considers the collection decisive enough .naturally , the method does not scale to small values of , because e.g. a will never be indecisive .this is a problem because we know from experiments that small are often optimal .incremental nearest neighbors have their strength in very large databases where typical queries do not need to compute all relative distances . during a lifetime of a large database some distancesmight not be computed at all .the lazy distance computation make incremental nearest neighbor methods ideal candidates for real time tasks operating on large volatile data .however , in a cross validation setup which needs to compute all distances incremental knns can not play out their strengths . because of the restrictions and intended use we exempted incremental methods from investigation in this paper .we organize this paper in three parts .in the first part we will study the options to make the estimation of as fast as possible , in the second part we will experimentally compare the result with the conventional approach and in the last part we will draw a conclusion .the knn is commonly considered a classifier from the area of supervised learning theory . 
in this theorythere exists a training function that delivers a model based on a set of options , a matrix of example values and a vector of labels of respective size ( ) . in turn , the model is used in a classification function that is presented with a matrix of new examples and its task is to deliver a vector of new labels ( ) . and must be related but there is no restriction on what the model can be . it can be a set of complex items , a matrix , a vector or just a single value , indeed . from the perspective of a human the purpose of a modelis to make predictions about the future . in order to be able to do thisthe human brain requires a simplification of the world . only with the simplification of the world to a reduced number of variables and rulesit can compute future states faster than they occur . in case of knn the model of the data and of the smoothing parameter .this means , that the function is an identity function between model and parameters .this conflicts with the common notion of a model because the production of models commonly implies reduction . however , the collected data already are a reduction of the world ! from a practical point of viewthey can be considered as representative model states of the world and the data in the model is used to predict the class variable of a new vector before it is actually recorded .this fits perfectly well with the original notion of models .however , the model is only fixed , when is fixed .according to the framework , fixing the model ( getting the right value for ) is the job of the function .many training techniques in machine learning use optimization strategies developed for real numbers and open parameter spaces .this is not suitable for as it is an integer value and has known left and right limits : an ideal candidate for full search .most frameworks for pattern recognition offer macro optimization for the remaing model parameters that express themself in the initial training options .the structural compatibility of the knn with the pattern recognition frameworks macro optimization functions seduces the users very often to macro optimize .the consequence of this is that knn must compute distances repetitively as it can not assume that specific vectors will simply exchange their role between training and testing in the future . in case of knnthis is exceptionally regrettable .what does macro optimization mean for the computational complexity ?here we assume - but without loose of generality - that the dataset is of size ( -rows in matrix ) and can be exactly divided into equally sized partitions ready to be rearranged into different train and test setups . since everything is being recomputed the computational complexity for this kind of cross - validation of for a brute knn is . is the size of the tested range of and since we are considering full search we accept that depends on the size of the dataset and the number of folds .this means that the .the scan of nearest sets yields a partial complexity of .hence , for full search the total complexity is . 
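The macro-optimization pattern whose complexity is estimated above, namely re-running a complete cross-validation of a black-box kNN for every candidate k, looks like the following snippet. scikit-learn is used here only as a familiar stand-in for a generic pattern-recognition framework; the experiments reported later in this paper are based on the ANN C++ library. Every call to cross_val_score refits the classifier and recomputes the fold-wise distances from scratch, which is exactly the redundancy discussed in the text.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
folds = 10
# every candidate k triggers a full re-computation of all fold-wise distances
scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y,
                             cv=folds).mean()
          for k in range(1, len(X) - len(X) // folds + 1)}
best_k = max(scores, key=scores.get)
print("best k:", best_k, "cross-validated accuracy:", round(scores[best_k], 3))
```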
in order to reduce this high complexityit is necessary to optimize knn within the train function as it has all necessary information about the relationships among the examples and the labels .this means that should become where is a vector of partition indices of the kind .now , the train function can utilize the fact that no new data will arrive during the training and all possible distance requests can be computed in advance .distances within the same partition need not be computed as they will be never requested .these fields can be set to infinity ( alternatively they can be filled up with the largest value found in the matrix + 1 ) .the lower triangle is symmetrical to the upper triangle of the matrix because vectors between two points have the same norm .the distance matrix has a structure as shown in figure [ fig : initial_matrix ] .the size of is .by we mean the component _ distance _ and by we mean the associated _label_. alone this redesign causes the complexity of the distance computations to get reduced to .however this benefit is achieved at the expense of higher memory use .the brute knn has a space complexity of , now the space complexity has risen to .the next step is to sort the vectors horizontally according to their distance .although collecting the best solutions would be faster for a single run it means for a range of that you effectively obtain the insertion sort .since there exist faster sorting algorithms we choose to sort but by using a different algorithm .the fastest algorithm for doing so is the quick sort .its average complexity is . in the worst case scenariothe sorting complexity of this method is . in that case rows will be sorted with .this means that the worst time complexity so far is and average case is . in order to obtain nearest neighbors for each vector indexed by the rowa counting matrix is initialized with zeros . is the number of symbols or classes . for each row in and and for the columns in the counters for the specific class labelis increased .more precisely , for every row and for every tested the counters are updated by .the complexity of this operation is . for the overall method this adds up to . in parallel the level of correct classificationmust be computed because after every modification of the state for the smaller is lost .therefore a matrix for recording the number of correct classfications is required .how is this number computed ? at every round of the nearest neighbor candidate computations contains in each of its rows a vector that tells how many labels of specific kind are in the nearest neighbors set .the classification label is .the complexity for this operation is .this simple method is ambiguous by nature , as there can be many labels that are represented by the same amount of vectors in a nearby set .computer implementations prefer to return the symbol with the smalest coding .however , it is possible to have a shadow matrix that is the sum of the distances observed for each class label in the set so far .the rule for computing is the same as for with the difference that instead of adding ones to the matrix you add distances . when symbol frequency is ambiguous ( argmax returns more than one value ) it is possible to use to find which samples are closer overall . because of the specific interest into fast optimization the simple argmax processing is used .now , every is compared for equality with ( ground truth ) and the binary result is added to .the will return best . 
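the procedure described above can be condensed into a short sketch. the following is an illustrative reimplementation in python, not the authors' code: all pairwise distances are computed once, entries belonging to the same partition are set to infinity, each row is sorted once, and the class counters are updated incrementally so that the accuracy of every k from 1 up to k_max is obtained in a single pass. the distance-based tie-breaking via the shadow matrix is omitted for brevity, matching the simple argmax processing used above.

```python
# illustrative reimplementation of the fast k scan described above.
# X: (n, d) data, y: (n,) integer class labels, part: (n,) fold index per row.
import numpy as np

def fast_k_scan(X, y, part, k_max):
    X, y, part = np.asarray(X), np.asarray(y), np.asarray(part)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # all pairwise distances, once
    D[part[:, None] == part[None, :]] = np.inf                  # never matched within the own fold
    order = np.argsort(D, axis=1)                               # each row sorted by distance, once
    sorted_labels = y[order]                                    # labels in nearest-first order
    counts = np.zeros((n, y.max() + 1), dtype=int)              # the counting matrix of the text
    correct_per_k = np.zeros(k_max, dtype=int)
    # note: k_max must stay below the size of the smallest training split,
    # otherwise the infinite same-fold entries would be reached
    for k in range(k_max):                                      # add the (k+1)-th neighbor
        counts[np.arange(n), sorted_labels[:, k]] += 1
        pred = np.argmax(counts, axis=1)                        # ties go to the smallest label code
        correct_per_k[k] = np.sum(pred == y)
    best_k = int(np.argmax(correct_per_k)) + 1
    return best_k, correct_per_k / n                            # best k and accuracy for every k
```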
is obtained by averaging .considering all parts of the algorithm together the overall complexity is method for fast computation ( autoknn ) was tested against three other algorithms from the ann library 1.1 : brute , kd - tree and bd - tree knn with default settings .the autoknn and its competitors performed a complete cross validation run on the ad , diabetes , gene , glass , heart , heartc , horse , ionosphere , iris , mushrooms , soybean , statlog australian , statlog german , statlog heart , statlog sat , statlog segment statlog shuttle , statlog vehicle , thyroid , waveform and wine datasets with 3 , 5 , 10 and 20 folds .the goal of the experiment was the measurement of the time required to complete the full course of testing different .the data was separated into stratified partitions which were used in different configurations in order to obtain a training and a testing set .the autoknn computes the classification results for all while the other algorithms are bound to use a logarithmic search . by logarithmic searchthe following schema is meant : .this schema is practically motivated and rational under the assumption that the influence of additional labels on the result diminishes with higher values of .practical consideration is primarily test time .example : while autoknn required 15,35s for a complete scan based on the _ ad _ dataset , on same data exact brute knn needed 353,7s in logarithmic mode and 19595,7s in full mode .the use of the logarithmic mode makes results with exactly the same values impossible .however , the differences in resulting and thus in accuracy were absolutely negligible so that the results are directly comparable nonetheless. we added the experiment times for all databases up to a total for each cross validation size .the results are shown in the figure and the table under [ fig : result ] .time measurements were performed on a amd phenom ii 965 with 8 gb of ram with a linux 2.6.35 kernel .the algorithms are implemented in c / c++ and were compiled with gcc 4.4.5 with o3 option .only core algorithm operation was measured and all time for additional i / o was ignored . for best comparability , ann library sources were statically included .the nearest neighbor approach is considered user friendly and is frequently used for data mining , classification and regression tasks .it is embedded into many automatic environments that make use of knn s flexibility .although knn has been used , analyzed and advanced for almost six decades a repeating question can not be answered by current literature : what is the fastest way to estimate the right value for and what are the expenses for doing so .the approach chosen here is to move the esimation away from the meta framework right into the training function .the advantage of this is that additional information about the data can be made .this additional information allows to precompute the distances among all vectors without waste and to reuse them numerous times . from this design change which is known to practitioners but not discussed in literature a reduction in time complexitycan be observed from to in average case .the experiments show , that this has significant impact on the speed of the estimation task .the comparison between kd - tree knn and the proposed approach proves moreover that having a better time complexity saves practically more time than an efficient distance measure for this task .the cost of this improvement is a higher space complexity ( now ) . 
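for completeness, a schedule of the kind used for the competing algorithms can be generated as below; the exact logarithmic schema is lost in the extracted text, so the doubling rule shown here is an assumption rather than the one actually used in the experiments.

```python
# hypothetical doubling schedule for k; treat the exact rule as an assumption.
def log_schedule(k_max):
    ks, k = [], 1
    while k <= k_max:
        ks.append(k)
        k *= 2
    return ks

print(log_schedule(100))   # [1, 2, 4, 8, 16, 32, 64]
```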
in order to esimtate the practical impact of this complexity exchange we studied the contents of the uci repository .the uci should be a reasonable crossover of the problems people face in real life . out of 162 datasets we found that 90% of them have less than 50k examples , 80% of them have less than 10k examples and half of the uci s datasets has less than 1000 examples ( for exact distribution see fig .[ fig : uci ] ) .these sizes can be easily handled on higher class commodity computers .this leads to the conclusion that turning in space complexity for time complexity is a good choice most of the time .future implementations should offer an integrated searching .the results also show that the so found values for can be transfered not only to other exact knn but also to approximate knn working on kd - tree and bd - tree models .this work has been made possible through the funding of the paren ( pattern recognition and engineering ) project by the bmbf ( federal ministry of education and research , germany ) .00 fix , e. , hodges , j.l . _ discriminatory analysis , nonparametric discrimination : consistency properties_. technical report 4 , usaf school of aviation medicine , randolph field , texas , 1951 .carlotta domeniconi , dimitrios gunopulos , jing peng ._ adaptive metric nearest neighbor classification_. cvpr , vol . 1 ,pp.1517 ( 2000 ) jing peng , douglas r. heisterkamp , h. k. dai ._ adaptive kernel metric nearest neighbor classification_. icpr , vol .3 , pp.30033 ( 2002 ) t. deselaers , r. paredes , e. vidal , h. ney . _learning weighted distances for relevance feedback in image retrieval_. icpr 2008 , tampa , florida , usa ( 08/12/2008 ) i. kamel , c. faloutsos ._ on packing r - trees_. proceedings of the 2nd conference on information and knowledge management ( cikm ) , washington dc ( 1993 ) thomas h. cormen , charles e. leiserson , ronald l. rivest ._ introduction to algorithms_. ch .10 , mit press and mcgraw - hill ( 2009 ) s. berchtold , d. keim , h .-_ the x - tree : an index structure for high - dimensional data_. in proceedings of 22th international conference on very large databases ( vldb96 ) , pp 2839 , morgan kaufmann ( 1996 ) dong - ho lee , hyoung - joo kim ._ an efficient technique for nearest - neighbor query processing on the spy - tec_. ieee transactions on knowledge and data engineering , vol .15 , no . 6 , pp . 1472 - 1486 , ieee educational activities department , piscataway , nj , usa ( 2003 ) jose salvador - sanchez , filiberto pla , francesco j. ferri . _ prototype selection for the nearest - neighbor rule through proximity graphs_. prl , vol . 18 , no . 6 , pp .507 - 513 ( 1997 ) u. lipowezky ._ selection of the optimal prototype subset for 1-nn classification _ prl , vol .907 - 918 ( 1998 ) keith price bibliography - nearest neighbor literature overview david w. aha . _the omnipresence of case - based reasoning in science and application_. journal on knowledge - based systems , vol 11 , pp .261 - 273 ( 1998 ) young , b. , bhatnagar , r. : secure k - nn algorithm for distributed databases .university of cincinnati ( 2006 ) shaneck , m. , yongdae , k. , kumar , v. : privacy preserving nearest neighbor search . dept . of computer science minneapolis ( 2006 ) kantarcoglu , m. , clifton ,c. : privately computing a distributed -nn classifier .lncs , vol .3202 , 279290 ( 2004 ) mount , d. m. 
: approximate nearest neighbor library 1.1 , department of computer science and institute for advanced computer studies , university of maryland ( 2010 ) uci machine learning repository : pattern recognition and engineering ( paren ) , dfki , bmbf project : ( 2008 - 2010 ) janick v. frasch and aleksander lodwich and faisal shafait and thomas m. breuel _ a bayes - true data generator for evaluation of supervised and unsupervised learning methods_. pattern recognition letters , volume 32:11 , p.1523 - 1531 , 2011 , doi : 10.1016/j.patrec.2011.04.010the following diagrams are results from knn based on natural and synthetic datasets .synthetic datasets were obtained using wgks the standard deviation was estimated based on a 10x cross - validation .the diagrams are non linear . sections of little change are compressed , hence x - axis are discontinuous . | the k nearest neighbors ( knn ) method has received much attention in the past decades , where some theoretical bounds on its performance were identified and where practical optimizations were proposed for making it work fairly well in high dimensional spaces and on large datasets . from countless experiments of the past it became widely accepted that the value of has a significant impact on the performance of this method . however , the efficient optimization of this parameter has not received so much attention in literature . today , the most common approach is to cross - validate or bootstrap this value for all values in question . this approach forces distances to be recomputed many times , even if efficient methods are used . hence , estimating the optimal can become expensive even on modern systems . frequently , this circumstance leads to a sparse manual search of . in this paper we want to point out that a systematic and thorough estimation of the parameter can be performed efficiently . the discussed approach relies on large matrices , but we want to argue , that in practice a higher space complexity is often much less of a problem than repetitive distance computations . |
presently , modern surgery has at its disposal a great variety of different surgical techniques to heat biological tissue in a localized and safe way .all these techniques are based on specific applicators , _i.e. _ devices which extract ( cryosurgery ) or introduce heat ( laser , radiofrequency current , microwave or ultrasound treatments ) .theoretical heat modeling is a very cheap and fast methodology to study the thermal performance of these applicators .in fact , a lot of previous work has been conducted to model these procedures by using the bioheat equation as governing equation ( see , and references therein ) .this equation is based on the classical fourier theory of heat conduction and is widely used for modeling the heating of biological tissue .nevertheless , it has been hypothesized that for heat transfers on very small time scales the classical model will fail , and an alternative thermal wave theory with a finite thermal propagation speed could be more suitable to describe these phenomena . for the aforementioned reasons, we are lead to modeling the surgical heating of biological tissue by means of the hyperbolic heat transfer equation , and then compare these alternative results with those obtained from the standard fourier theory ( implying the parabolic heat transfer equation ) . in summary ,the main theme of this work is to present in depth the mathematical and physical background for a general discussion of heat models related to heating of biological tissue by means of energy applicators , and then , in particular , point out the physical differences between the classical fourier theory and the hyperbolic wave theory .moreover , as an illustrative example , we give some numerical results of the analytical modeling for radiofrequency heating ( _ rfh _ ) and for laser heating , each applied to the cornea in order to correct refractive errors .both surgical techniques have in common that they may involve high heat transfers during very short exposure times .this paper is organized as follows . in sec .[ heatmodels ] we will set up the mathematical groundwork for heat models related to _rfh _ and laser heating with a special emphasis on parabolic and hyperbolic heat models .then , in sec . [ applications ] we present the results for typical laser and _ rfh _ interventions in order to substantiate the physical discrepancies between both heat models in concrete examples .we will also comment on the domain of applicability of both theories and their main features .finally , sec .[ conclusions ] will present the conclusions and give an outlook on interesting future work .the most fundamental relation to model heat transfer is the generally valid , _i.e. _ model - independent , equation for thermal current conservation with a given internal heat source : here , denotes the thermal flux and is the temperature at point in the domain at time . as usual ,thermal conductivity is denoted by and diffusivity by , where is the volumetric heat capacity , being the density and the specific heat of the material under consideration . fig .[ currentpic ] gives a schematic view of thermal current conservation .fourier s law of heat conduction constitutes the foundation of all classical heat models .it proposes that the heat flux is proportional to the negative of the temperature gradient ( see fig .[ fourierpic ] ) : however , in this relation temporal changes instantaneously affect heat flux and temperature gradient . 
as an immediate consequenceany perturbations in classical heat models are propagated with infinite speed . assuming that thermal current conservation ( [ current ] ) and fourier s law ( [ fourier ] ) hold , directly yields the classical , _i.e. _ parabolic heat transfer equation ( _ phte _ ) with heat sources : the standard bioheat equation was introduced by pennes and is based on the parabolic heat transfer model , identifying the following explicit contributions for the heat sources in a biological system where the subindex denotes a surgical heat source ( _ e.g. _ laser or radiofrequency treatment ) , refers to blood perfusion , and to any source related to metabolic activity . with the advancement of modern surgery ,medical treatments involve progressively smaller time scales and higher energy fluxes .for example , for radiofrequency and laser surgery _phte _ models could become inappropriate .the reason is that on small time scales ] , something which is entirely absent in the much simpler expression for the fourier flux in eq .( [ fourier ] ) . exhibiting these features ,the hyperbolic heat transfer model with its finite thermal propagation appears to provide an ideal framework to describe surgical interventions such as laser ablation ( a procedure named as laser thermokeratoplasty , or short _ ltk _ ) and _ rfh _ of the cornea ( _ ck _ , conductive keratoplasty ) , which use very short and high - energetic pulsations ( typical time periods are and , respectively ) .therefore , in the following section , we will focus on applications of heat models in surgical techniques , in particular _rfh _ and laser surgery applied to the cornea , and thereby develop a realistic model for the heating of biological tissue by employing the _ hhte _ as previously derived in eq .( [ hhte ] ) .there exist numerous processes in which great amounts of heat are applied to materials in very short exposure times .this section will focus on the modeling of heating biological tissue as it occurs during medical intervention with surgical radiofrequency or laser devices to the cornea .laser heating is a process which implies tissue heating caused by absorbing the optical energy of a high - energy laser beam ( _ ltk _ ) . on the other hand , radiofrequency heating ( _ rfh _ )is ultimately based on ohm s law , which states that currents flowing through a resistor generate heat .it is essentially ionic motion in the tissue which will provoke biological heating aimed at producing any medical effects .both are surgical techniques which may not only involve small time scales but also high energy fluxes , which makes them an important group of applications in which differences between parabolic and hyperbolic models could have great effects . as we have previously mentioned in sec .[ heatmodels ] , these type of processes represent non - equilibrium processes on small time scales ( _ i.e. _ compared to the relaxation time ) , since the system requires considerably more time to reach the equilibrium state after the initial thermal energy input . for this reason , _ phte _ models , with their infinite thermal propagation speed , may not provide an appropriate description for the underlying physical structure , and it may be necessary to rely on _ hhte _ models instead . among the many surgical procedures in which laser heating is employed , we focus our attention on corneal laser heating , also referred to as laser thermokeratoplasty ( _ ltk _ ) . 
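the qualitative difference between the two models is easy to see numerically. the sketch below is our own illustration, not the paper's analytical solution, and all parameter values are placeholders rather than the cornea data used later. it integrates the source-free one-dimensional phte and hhte from the same initial temperature spike: the hyperbolic disturbance stays confined within a distance c*t of the spike, with the finite speed c = sqrt(alpha/tau), whereas the parabolic model smooths the spike over the whole domain immediately.

```python
# illustrative finite-difference comparison of the parabolic (phte) and
# hyperbolic (hhte) heat equations in one dimension without sources.
# our own sketch; parameter values are placeholders.
import numpy as np

alpha = 1.4e-7                   # thermal diffusivity [m^2/s]  (assumed)
tau   = 1.0                      # relaxation time [s]          (assumed)
c     = np.sqrt(alpha / tau)     # finite propagation speed of the hhte
L, nx = 5e-3, 501
dx    = L / (nx - 1)
dt    = 0.4 * min(dx / c, dx**2 / (2.0 * alpha))   # keeps both explicit schemes stable
x     = np.linspace(0.0, L, nx)

T_par = np.exp(-((x - L / 2) / 1e-4) ** 2)   # same initial temperature spike for both fields
T_hyp = T_par.copy()
T_hyp_old = T_hyp.copy()                     # zero initial rate of change

def lap(T):
    out = np.zeros_like(T)
    out[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    return out

for _ in range(5000):
    # phte:  dT/dt = alpha * d2T/dx2
    T_par = T_par + dt * alpha * lap(T_par)
    # hhte:  tau * d2T/dt2 + dT/dt = alpha * d2T/dx2
    T_new = (2.0 * T_hyp - T_hyp_old
             + (dt**2 / tau) * (alpha * lap(T_hyp) - (T_hyp - T_hyp_old) / dt))
    T_hyp_old, T_hyp = T_hyp, T_new

# after the loop the hyperbolic disturbance is still confined to roughly
# |x - L/2| < c * t, whereas the parabolic field has been smoothed everywhere
```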
in the case of laser heating, we consider a theoretical model consisting of a semi - infinite fragment of homogeneous isotropic biological tissue in which the laser beam falls on the entire tissue surface .[ laserpic ] depicts the chosen model geometry , in which we consider a one - dimensional model with the -axis parallel to the direction of the incident laser beam . above the surface , for , free thermal convectionis supposed to cool the tissue surface .as we want to study the problem from the point of view of the _ hhte _ model , we have to start off from the governing equation provided by eq .( [ hhte ] ) . for this one - dimensional laser heating model, we have to solve the corresponding heat transfer equation for penetration depth including the appropriate heat - source term and boundary conditions . neglecting blood perfusion and metabolic activity in eq .( [ bioheat ] ) , and hence , the heat source will only have contributions from .this surgical source should be obtained from the beer - lambert law , which empirically states that for radiation the intensity decreases exponentially with penetration depth . including a factor with the temporal dependence to model an energy pulse of duration , this yields ,\ ] ] where denotes the dimensionless fresnel surface reflectance , is the absorption coefficient , and is the incident energy flux at the tissue surface . as usual, denotes the heaviside function . after combining eqs .( [ hhte ] ) and ( [ beer ] ) , the final form of the governing equation is } , \end{array}\ ] ] where is the dirac delta function .the corresponding initial boundary conditions are given by \lim\limits_{x\to\infty } t(x , t)=t_0 & & \qquad\forall t>0 \\[.25 cm ] \displaystyle{\partial t\over\partial x}(0,t ) = \displaystyle { { h\over k}\left(\tau \frac{\partial t}{\partial t}(0,t)+t(0,t)-t_a\right ) } & & \qquad\forall t>0 \label{coolflux } \end{array}\ ] ] where is the initial temperature and is the ambient temperature .furthermore , is the thermal convection constant at the interface tissue given by newton s law of cooling , which states that the convective flux of an object is proportional to the difference between its own temperature and the ambient temperature : .\ ] ] since at the interface tissue with the heat flux is given by the newton s law of cooling , the last condition of eq .( [ coolflux ] ) is obtained by imposing this law onto the relation for the hyperbolic heat flux eq .( [ hypflux ] ) ( see also ) .the explicit analytical solution of this problem has been obtained in ref . and is mainly based on the use of laplace transforms. it also contains for comparison the fully analytical _solution as an application to the thermokeratoplasty technique . 
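as a quick illustration of the source term just introduced, the snippet below evaluates a beer-lambert volumetric heat source gated by a rectangular pulse. the prefactor mu_a*(1 - R_f)*I_0*exp(-mu_a*x) follows the standard form suggested by the description above and should be checked against eq. ([beer]); the incident flux used in the example call is an arbitrary placeholder.

```python
# hedged sketch of the beer-lambert volumetric heat source described above;
# check the prefactor against eq. ([beer]) of the paper.
import numpy as np

def beer_lambert_source(x, t, I0, mu_a, R_f, t_pulse):
    """volumetric heat source S(x, t) [w/m^3] for a rectangular laser pulse."""
    spatial = mu_a * (1.0 - R_f) * I0 * np.exp(-mu_a * x)
    temporal = np.where((t >= 0.0) & (t <= t_pulse), 1.0, 0.0)   # H(t) - H(t - t_pulse)
    return spatial * temporal

# example with the cornea-like absorption and reflectance quoted in the next
# section; the incident flux I0 is an arbitrary placeholder
x = np.linspace(0.0, 2e-3, 5)    # depth [m]
print(beer_lambert_source(x, t=1e-3, I0=4e7, mu_a=2000.0, R_f=0.024, t_pulse=2e-3))
```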
for the numerical estimates ,well - established thermal and optical properties of the cornea were used .all essential physical constants for the model are summarized in tab .[ corneaparam ] .[ corneaparam ] density & conductivity & diffusivity & convection + [ kg m & [ w m k ] & [ m s ] & [ w m k ] + & & & 20 + fresnel reflectance & absorption coefficient + & [ m ] + 0.024 & 2000 + .35 cm figure [ laserabl ] shows the numerical estimates for temperatures at various tissue penetration depths obtained as a function of time .this figure represents the hyperbolic ( solid line ) and parabolic ( dashed line ) temperature evolution at the four locations and mm for the thermokeratoplasty technique .we can mainly find two differences between both solutions : firstly , according to the parabolic model , for the values and mm the maximum temperature has already been reached and is steadily decreasing . for and mm ,the maximum temperature has not yet been reached and is initially increasing .for the _ hhte _ predictions , we observe that for all penetration depths in some part of the plot ( or the entire plot ) the temperature is increasing .the great differences in temperatures between both models ( overall at positions closer to the surface and mm ) is even more important when the energy application is pulsed , since in every new pulse the difference between the initial temperature could become greater and greater .secondly , we can confirm that the behavior of both solutions is very different . at values and mm, we notice a delay in the abrupt temperature drop associated with switching off the laser beam .this delay is related to the finite thermal propagation speed in the _ hhte _ model . using in eq .( [ speed ] ) the numerical value for diffusivity ( _ viz .[ corneaparam ] ) and for the relaxation time predicts for the speed of these steps , which exactly agrees with the known finite speed of the thermal wave in the cornea .this fact demonstrates the wave character of the _ hhte _ model . for depths and mm , however , these drops are not observed , because they occur outside the range of the time interval considered . radiofrequency heating ( _ rfh _ ) is a surgical procedure broadly employed in many clinical areas such as the elimination of cardiac arrhythmias , the destruction of tumors , the treatment of gastroesophageal reflux disease and the heating of the cornea for refractive surgery . in the remainder of this sectionwe will focus on a model of the corneal _ rfh _ treatment .the schematic diagram of the model geometry is shown in fig . [ modelpic ] . here, a spherical electrode of radius is completely embedded and in close contact with the biological tissue , which has infinite dimension . in this case , we can also use a one - dimensional model ( being the spatial variable ) , since the geometry under consideration displays radial symmetry . 
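before turning to the rf results, the finite propagation speed invoked in the laser discussion above can be reproduced in one line from c = sqrt(alpha/tau). the diffusivity and relaxation time are partly lost in the extracted table, so the numbers below are placeholders to be replaced by the paper's cornea data.

```python
# quick check of the finite thermal propagation speed c = sqrt(alpha/tau);
# both values below are assumed placeholders, not the paper's cornea data.
from math import sqrt

alpha = 1.4e-7   # thermal diffusivity [m^2/s]  (assumed, water-like tissue)
tau   = 1.0      # thermal relaxation time [s]  (assumed placeholder)
print("thermal wave speed c =", sqrt(alpha / tau), "m/s")
```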
again , assuming in the bioheat equation ( [ bioheat ] ) , the governing equation results from the combination of eq .( [ hhte ] ) and the explicit form of the ( surgical ) heat source which in the case of _ rfh _ is a product of a radial and a temporal part so that the total applied power outside of the electrode is where is the usual solid angle subtended by a surface .thus , the power ( flux ) per unit solid angle , , satisfies the usual intensity law .note that is included to model a non - pulsed source .as a result , for the corneal _ rfh _ the governing equation and the boundary conditions are : \ ] ] \lim\limits_{r\to\infty } t(r , t)=t_0 & & \qquad\forall t>0 \\[.25 cm ] \displaystyle { { \partial^2t\over\partial t^2}(r_0,t ) + { 1\over\tau}\,{\partial t\over\partial t}(r_0,t ) = { 3k\over\rho_0 c_0 r_0\tau}\,{\partial t\over\partial r}(r_0,t ) } & & \qquad\forall t>0 \label{electroflux } \end{array}\ ] ] the last condition in eqs .( [ electroflux ] ) contains the density and specific heat of the electrode ( where ) , given by and , respectively .this condition is based on the fact that in the interior and on the surface of the spherical electrode it must hold which just states that the heat flux spreading into the biological tissue is due to the change of the thermal energy content of the electrode . applying gauss law to eq .( [ eleccons ] ) and integrating over the entire volume yields where we have assumed that the electrode has relatively small radius and sufficiently high thermal conductivity , which makes the electrode act like a punctual heat source . in the derivation of eq .( [ rfhcond ] ) , this justifies taking to be constant and pulling out of the volume integral . finally , by substituting the expression for the hyperbolic heat flux eq .( [ hypflux ] ) into eq .( [ rfhcond ] ) and using , where is the usual radial unit vector , one readily obtains the last condition for the electrode in eqs .( [ electroflux ] ) .the analytical solution of this problem and the corresponding _ phte _ solution have been obtained in ref . as an application for corneal _rfh_. we have visualized the data generated from ref . in fig .[ rfhpic ] in order to discuss the main characteristics of the model .figure [ rfhpic ] represents the hyperbolic ( solid lines ) and parabolic ( dashed lines ) temperature profiles along the radial axis for the times and ms .similar to the laser - heating case , for small times and locations near the electrode surface the temperatures from the _ hhte _ are greater than from the _ phte_. this fact also becomes important when instead of a non - pulsed _ rfh _application a pulsed power is applied .moreover , the presence of pronounced peaks in fig . [ rfhpic ] reveals the wave character of the _ hhte _ model and its prediction of a finite heat conduction speed .in this paper , we have outlined the fundamentals and main differences between the parabolic and hyperbolic model of heat conduction .these differences encourage the use of the _ hhte _ approach in processes in which great amounts of heat are transferred to any material in very short times .specifically , laser and radiofrequency heating are two surgical techniques of this type of processes . through the application of _ hhte _ to these surgical techniques, we have shown the main characteristics of this model and its differences with _predictions . at the moment, we are working on the analytical modeling of the hyperbolic bioheat equation including a source term for blood perfusion ( _ viz . 
_ eq .( [ bioheat ] ) ). this will be less important for non - perfused tissues ( such as the cornea ) , but relevant for well - perfused organs such as _ e.g. _ the liver .on the other hand , we are studying the implications of the hyperbolic heat equation for the case of pulsed _ rf _ applications .energy pulses for surgical procedures are being employed in such different areas as conductive keratoplasty ( _ ck _ ) and _ rfh _ to destroy tumors . however , we are well aware of the restrictions of purely analytical models , mainly due to the lack of mathematical tractability . for this reason, we are planning to develop alternative theoretical models based on numerical techniques ( such as finite elements ) in order to use the hyperbolic bioheat equation in models with more complicated geometry ( more realistic electrode and tissue geometries with irregular boundaries ) and also varying thermal characteristics .f. manns , d. borja , j.m .parel , w. smiddy and w. culbertson , semianalytical thermal model for subablative laser heating of homogeneous nonperfused biological tissue : application to laser thermokeratoplasty , _ journal of biomedical optics _ * 8*(2 ) , 288297 , ( 2003 ) .m. trujillo , m.j .rivera , j.a .lpez molina and e.j .berjano , analytical thermal - optic model for laser heating of biological tissue using the hyperbolic heat transfer equation , submitted to _ mathematical medicine and biology _ , ( 2008 ) .berjano , j.l .ali and j. saiz , modeling for radio - frequency conductive keratoplasty : implications for the maximum temperature reached in the cornea , _ physiological measurement _ * 26*(3 ) , 15772 , ( 2005 ) .a. erez and a. shitzer , controlled destruction and temperature distributions in biological tissues subjected to monoactive electrocoagulation , _ journal of biomechanical engineering _ * 102 * , 4249 , ( 1980 ) .lpez molina , m.j .rivera , m. trujillo and e.j .berjano , effect of the thermal wave in radiofrequency ablation modeling : an analytical study , _ physics in medicine and biology _ * 53*(5 ) , 14471462 , ( 2008 ) .lpez molina , m.j .rivera , m. trujillo , f. burdo , j.l .lequerica , f. hornero and e.j .berjano , assessment of hyperbolic heat transfer equation in theoretical modeling for radiofrequency heating techniques , _ the open biomedical engineering journal _ * 2 * , 2227 , ( 2008 ) . in a physical system ( simply - connected domain ) is related to both , temperature change and divergence of the heat flux .the coefficient gives the volumetric heat capacity in terms of thermal conductivity and diffusivity . ] andcross - sectional surface area .the corresponding flux , thermal energy per time and area , is proportional to the temperature gradient .the sign indicates that heat flows from to . ] andambient temperature .the laser pulse has a duration of with an incident energy flux . the represented curves for the _ hhte _ ( solid lines ) and fourier ( dashed lines ) model correspond to penetration depths .the relaxation time is . , width=491 ] , and ms of a non - pulsed _ rf _application with total power and duration .the initial temperature is and the electrode radius .the relaxation time is taken ., width=491 ] | in modern surgery , a multitude of minimally intrusive operational techniques are used which are based on the punctual heating of target zones of human tissue via laser or radio - frequency currents . 
traditionally, these processes are modeled by the bioheat equation introduced by pennes, which rests on fourier's theory of heat conduction. we present an alternative and more realistic model established by the hyperbolic equation of heat transfer. to demonstrate some features and advantages of our proposed method, we apply the obtained results to different types of tissue heating with high energy fluxes, in particular radiofrequency heating and pulsed laser treatment of the cornea to correct refractive errors. hopefully, the results of our approach help to refine surgical interventions in this novel field of medical treatment. keywords: heat models, fourier heat equation, parabolic heat equation, bioheat equation, radiofrequency surgery, laser ablation |
sonoluminescence (sl) is an intriguing phenomenon which consists of light emission by small collapsing bubbles inside liquids. such bubbles are created by ultrasonic waves when the pressure of the liquid is reduced relative to the pressure of the gas present in the medium. in general, the appearance of a cavity within a liquid implies the existence of a surface (the wall) dividing the region into two parts, each one occupied by a fluid: the inner cavity consisting of a gas and/or liquid vapor, and the liquid portion outside. the bubble initially expands to a maximum volume and then collapses. at the final stages of the collapse, the gas inside the bubble radiates light. it is observed that the light emission is enhanced when atoms of a noble gas are present inside the bubble. in the 90s, gaitan et al. obtained in the laboratory the necessary conditions to create and trap a single bubble a few microns in size, levitating in a flask of water under the action of a strong, stationary-wave sound field and periodically emitting flashes of light in each acoustic cycle. the trapping of a single sonoluminescent bubble in the liquid, yielding what is called single-bubble sonoluminescence (sbsl), and the production of repeated cycles of expansion and contraction, excited by ultrasonic acoustic waves, allowed a more accurate study of the phenomenon. this stability of the bubble made possible more detailed studies of the duration of the flash of light and the size of the bubble. the conversion of the energy of sound waves into flashes of light occurring in bubbles a few microns in size is an interesting field of research, given the difficulties inherent to the process, the inadequacy of some of the theories and models proposed, and the experimental limitations. despite the existence of a wide variety of theories and models, the phenomenon is not completely explained, and there are still many open questions. among them, we can cite the heating mechanism of the gas inside the bubble; the process of light emission; the role of the temperature of the liquid in the intensity of the emitted radiation; and the reason why water is one of the ideal fluids for observing the phenomenon. up to now, the phenomenon has been extensively studied, but its detailed mechanisms still remain unclear. the study of sl involves topics of physics such as hydrodynamics and thermodynamics, besides electromagnetism, statistical mechanics and atomic physics if one wants to go into the emission mechanism. the dynamics of the bubble can be described by the navier-stokes equation, provided the initial state of the bubble is defined by a set of physical parameters. the ratio between the energy of one photon emitted in sbsl and the energy of one atom vibrating in the sound wave which gave rise to it is typically of order , which shows the high concentration of energy achieved in this effect. the temperature in the interior of the bubble can reach thousands of degrees kelvin during the collapse phase.
in this workwe carry out a modeling of sbsl hydrodynamics , using the radius of the bubble as the variable of interest .there are several models that describe the time evolution of the radius of the bubble .the most common are : the rayleigh - plesset model , the herring - trilling model , and the keller - miksis model .all of these models are derived from the navier - stokes equation , using different simplifying assumptions .rayleigh - plesset equation describes the behaviour of compressible or incompressible fluids , and can therefore be used to compare the effects of the compressibility of the liquid on the time evolution of the bubble radius .other physical parameters like the liquid viscosity , the properties of the gas inside the bubble , and the superficial tension of the wall can be taken into account in these models . in order to compare the effects of these factors ,we define in this work an useful parameter , the _ damping factor _ , which is defined as the ratio between the first and second highest values of the bubble radius .this parameter is relevant , since it compares two values of the radius : one before and the other one after the light emission .a detailed study of the bubble behaviour , including the variation of the wall speed as a function of time , is also carried out .this work is addressed mainly to ( under)graduate students and teachers .the subject requires the domain of calculus and many concepts from various fields of physics at intermediate level. the work can be useful in physics courses at university level , as well as an illustrative example of numerical calculus applied to a highly nonlinear phenomenon .it may also be valuable as an introductory - level text on sbsl addressed to young researchers .the work is the result of a research project in sonoluminescence carried out together with undergraduate students .the behaviour of a nonrelativistic fluid is described by the navier - stokes equation , which is valid at each point of the fluid and can be written as where is the fluid mass density , is the velocity field inside the fluid , is the resultant of body forces ( e.g. , gravity ) per unit volume of the fluid , is the pressure field inside the fluid and is the fluid viscosity .the term on the left - hand side of the above equation is the material derivative of the velocity of the fluid element .the material derivative is given by the operator where the first term on the right - hand side is the time derivative with respect to a fixed reference point of space ( the euler derivative ) , and the second term represents the changes of the field velocity along the movement of the fluid .the quantity on the right - hand side stands for the acceleration originated from the body forces acting on the fluid element , such as gravity or electromagnetic forces , for example .the second and third terms represent respectively the hydrostatic force and the viscous force , both per unit volume .the form navier - stokes equation is written above means that we are considering the fluid as newtonian .now , we remember that the flow inside and around the bubble is restricted to the radial direction , in other words , the problem exhibits spherical symmetry , which in fact is valid even beyond the neighborhood of the bubble , provided the shape of the flask is spherical , a typical experimental situation .thus , we take into account only the expansion and contraction motion of the bubble s radius . 
besides , we assume as a first approach that the compressibility of the liquid is much smaller than that of the gas inside the bubble . in this case , one derives from eq .( [ 1 ] ) the rayleigh - plesset equation ( a dot stands for one time derivative , , etc ) : where is the bubble s radius , is the variable gas pressure inside the bubble ( is assumed to be uniform in our model ) , is the pressure of the liquid measured at any remote point from the bubble ( typically , atm ) , is the driven acoustic pressure at the point where the bubble is placed , and is the liquid surface tension at the bubble wall . is assumed to be sinusoidal and starting an expansion cycle in , that is , being the amplitude of the driven pressure and the ultrasound angular frequency in resonance with the natural oscillations of the flask , such that the driven pressure generates a stationary ultrasound wave that traps the bubble at its center , on a pressure antinode .often , the effects of the compressibility of a liquid can be neglected in many problems of hydrodynamics . however , in the case of sbsl this approach is no longer justified , because a large amount of the acoustic energy driven to the bubble is emitted back from it to the liquid , in the form of a spherical shock wave ( only a small amount is in fact converted into light ! ) , which obviously could not exist in an incompressible medium ( the acoustical wave emitted by the bubble is experimentally important , since its detection by a hydrophone signals the presence of the trapped bubble at the center of the spherical flask ) .it follows that when the compressibility of the liquid is considered , a new term is added to the right - hand side of eq .( [ 3 ] ) , leading to the modified rayleigh - plesset equation where is the speed of sound in the liquid ( which henceforth we will assume to be water ) .we adopt a van der walls equation of state for describing the gas pressure inside the bubble , which reads where is the static bubble radius , that is , the ambient bubble radius when it is not acoustically forced , is the characteristic van der waals hard - core radius of the gas inside the bubble and is the ratio between the specific heats of the gas at constant pressure and at constant volume ( the adiabatic index ) .so , the gas pressure varies with time only by means of the bubble radius , .it was assumed in eq .( [ 3.2 ] ) that the gas undergoes a so fast cycle of expansion and collapse that it is adiabatic. however , a more accurate analysis allows one to conclude that the expansion is approximately isotermic ( ) and only the final part of the collapse is indeed adiabatic . our simplified model here consider the whole cycle as adiabatic . 
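the equations above are straightforward to integrate with standard ode machinery. the sketch below is written in python rather than the matlab routines used by the authors, and its parameter values are indicative placeholders rather than the exact entries of table [table1]; the hard-core radius and the inclusion of the usual viscous term 4*mu*Rdot/R are assumptions of this sketch. it integrates the incompressible rayleigh-plesset model of eq. ([3]) with the van der waals gas pressure of eq. ([3.2]) and sinusoidal driving, and then evaluates the damping factor introduced earlier as the ratio of the first radius maximum to the second.

```python
# hedged sketch: incompressible rayleigh-plesset model, eq. ([3]), with the
# van der waals gas pressure of eq. ([3.2]) and sinusoidal driving.
# all numbers are indicative placeholders (si units), not table [table1].
import numpy as np
from scipy.integrate import solve_ivp

rho, sigma, mu = 1000.0, 0.0725, 1.0e-3      # water: density, surface tension, viscosity
p0, pa, f      = 1.013e5, 1.35e5, 26.5e3     # static pressure, drive amplitude, drive frequency
omega          = 2.0 * np.pi * f
R0, a, gamma   = 4.5e-6, 0.5e-6, 5.0 / 3.0   # ambient radius, hard-core radius, argon adiabatic index

def p_gas(R):
    """van der waals gas pressure inside the bubble, eq. ([3.2])."""
    return (p0 + 2.0 * sigma / R0) * ((R0**3 - a**3) / (R**3 - a**3)) ** gamma

def rhs(t, y):
    R, Rdot = y
    p_liquid = p0 - pa * np.sin(omega * t)   # driven liquid pressure far from the bubble
    Rddot = ((p_gas(R) - p_liquid - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / rho
             - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

# integrate two acoustic cycles; the collapse is stiff, so use tight tolerances
# (reduce max_step further if the solver produces nans near the collapse)
sol = solve_ivp(rhs, (0.0, 2.0 / f), [R0, 0.0], method="LSODA",
                rtol=1e-9, atol=1e-12, max_step=2.0 / f / 20000)
R = sol.y[0]

# damping factor: first radius maximum (main expansion) divided by the second
# maximum (first afterbounce), following the definition in the introduction
peaks = R[1:-1][(R[1:-1] > R[:-2]) & (R[1:-1] > R[2:])]
if len(peaks) >= 2:
    print("damping factor =", peaks[0] / peaks[1])
```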
in eq .( [ 4 ] ) , the time derivative of the gas pressure is explicitly given by equations ( [ 3 ] ) and ( [ 4 ] ) are second - order differential equations for the radius , which have no analytical solution .so , in order to solve numerically these equations for a given set of system parameters , we used the mathematical program matlab ( the corresponding program codes can be requested by email to any of the authors ) .let us first consider the case in which the compressibility of the liquid is neglected .figure [ fig:100 ] shows the solution of eq .( [ 3 ] ) for a given set of parameters describing some properties of the gas and the liquid , and at forcing pressure atm .the parameters used are shown in table [ table1 ] , where the noble gas argon is considered because it is well - known in sbsl literature that , after some cycles , it is the only remaining constituent of the initial air bubble , due to chemical reactions that the other components participate , which carry them to the water outside . the solution shown in fig .[ fig:100 ] is qualitatively similar to the experimental results , but quantitatively very different : the experimental afterbounces are much smaller than those that appear in fig . [ fig:100 ] .this is in agreement with the fact that indeed a considerable amount of energy is lost in the first contraction ( collapse ) , which , as we have already pointed out , is due to sound radiation by the bubble , an effect that is impossible theoretically under the assumptions that led to eq .( [ 3 ] ) ..[table1 ] parameters used in the numerical simulation .[ cols= " < , < " , ] we can see from table [ table2 ] that the superficial tension strongly affects the dynamics of the bubble , when compared with the other two properties considered here , since it substantially reduces the damping factor . according to the experimental results existing in the literature for sbsl , the damping factor lies between the two extreme values shown in the table , and closer to the lower values , thus pointing out the importance of including the surface tension in the calculations .in this work we have presented a model describing a single bubble sonoluminescence .we obtained solutions for the equations of the model in the cases of compressible and incompressible liquids . 
comparing figs .[ fig:100 ] and [ fig:200 ] with the experimental results , we show that the modified rayleigh - plesset equation valid for a compressible liquid , given by eq .( [ 4 ] ) , describes the bubble dynamics with a better accuracy , which is due to the experimentally observed acoustic radiation from the bubble , a fact that demands the liquid compressibility to be different from zero .we introduced the damping factor in order to compare the effects of some properties of the system on the dissipation of energy of the bubble after the collapse ( first compression ) .we pointed out that the superficial tension of the liquid can not be neglected in any model describing sbsl dynamics , as we summarized in table [ table2 ] .it is worth to mention that chemical effects was not considered in the present work .the chemical factors clearly affect the thickness of the bubble wall , thereby also contributing to the generation of a shock wave .of course , a more realistic treatment must include such kind of ingredient .also , as mentioned above , the process that the bubble gas undergoes is not adiabatic during the whole cycle , as we considered here , but instead isotermic during the expansion , which affects the value of the exponent . however , as pointed out in ref . , just modifying the numerical calculation in order to interpolate between and is not sufficient to completely describe the behaviour of in all of its details .matlab routines that generates our results can be requested by email to any of the authors and we hope that the simulations presented here will be valuable for teachers , ( under)graduate students and young researchers in order to have a first contact with single - bubble sonoluminescence and some of its remarkable features .the authors thank the brazilian foundations cnpq , faperj and capes for the financial support . | sonoluminescence ( sl ) is the phenomenon in which acoustic energy is ( partially ) transformed into light . it may occur by means of many or just one bubble of gas inside a liquid medium , giving rise to the terms multi - bubble- and single - bubble sonoluminescence ( mbsl and sbsl ) . in the last years some models have been proposed to explain this phenomenon , but there is still no complete theory for the light emission mechanism ( especially in the case of sbsl ) . in this work , we will not address this more complicated particular issue , but only present a simple model describing the dynamical behaviour of the sonoluminescent bubble , in the sbsl case . using simple numerical techniques within the software matlab , we discuss solutions considering various possibilities for some of the parameters involved : liquid compressibility , superficial tension , viscosity , and type of gas . the model may be used as an introductory study of sonoluminescence in physics courses at undergraduate or graduate levels , as well as a quite clarifying example of a physical system exhibiting large nonlinearity . |
random matrices arise in many mathematical contexts , and it is natural to ask about the properties that such matrices satisfy . if we choose a matrix with integer entries at random , for example , we would like to know the probability that it has a particular integer as an eigenvalue , or an integer eigenvalue at all . similarly ,if we choose a matrix with real entries at random , we would like to know the probability that it has a real eigenvalue in a particular interval .certainly the answer depends on the probability distribution from which the matrix entries are drawn . in this paper , we are primarily concerned with uniform distribution , so for both integer - valued and real - valued cases we must restrict the entries to a bounded interval . in an earlier paper , the authors show that random matrices of integers almost never have integer eigenvalues .an explicit calculation by hetzel , liew , and morrison shows that a matrix with entries independently chosen uniformly from ] by and ( where is the natural logarithm ) . then for any integer between and , where the implied constant is absolute . on the other hand , if then is empty .[ theorem : particular integer eigenvalue ] we remark that the function is continuous and , with the exception of the points of infinite slope at , differentiable everywhere ( even at , if we imagine that is defined to be 0 when ) .notice that equation is technically not an asymptotic formula when is extremely close to , because then the value of can have order of magnitude or smaller , making the `` main term '' no bigger than the error term . however , equation is truly an asymptotic formula for , where is any function tending to infinity ( the exponent arises because approaches 0 cubically as tends to 2 from below ) . by summing the formula over all possible values of , we obtain an asymptotic formula for .we defer the details of the proof to section [ enumeration ] .let .the probability that a randomly chosen matrix in has integer eigenvalues is asymptotically .more precisely , [ corollary : all integer eigenvalues ] if has eigenvalue , then the scaled matrix has eigenvalue , which is the argument of that appears on the right - hand side of .thus one interpretation of theorem [ theorem : particular integer eigenvalue ] is that for large , the rational eigenvalues of tend to be distributed like the function .note that the entries of are sampled uniformly from a discrete , evenly - spaced subset of ] .let )} ] ; the answer , perhaps surprisingly , is no .the next theorem provides this latter distribution .define to be the density function for real eigenvalues of matrices in )} ] , then the expected number of eigenvalues of in the interval ]we are conditioning on having real eigenvalues , which occurs with probability ( this can be obtained by integrating , analogously to the proof of corollary [ corollary : all integer eigenvalues ] ; the computation by hetzel , liew , and morrison is more direct ) .( normalized ) versus ( normalized),width=384 ] note that the distribution is bimodal , having its maxima at thus , a random matrix in )} ] having rational eigenvalues , since any matrix with real eigenvalues is a small perturbation from one with rational eigenvalues . 
that this is not true for shows that the eigenvalue distribution of is not purely the result of magnitude considerations butalso encodes some of the arithmetic structure of the integers up to .we remark that theorem [ theorem : particular integer eigenvalue ] can also be obtained from a powerful result of katznelson .let be a convex body containing the origin in , and embed the set of integer matrices as lattice points .then theorem 1 of gives an asymptotic formula for the number of singular integer matrices inside the dilate .taking ^ 4 ] .similarly , the complex eigenvalues converge to the `` circular law '' predicted by girko , namely the uniform distribution on the unit disk centered at the origin .very recently , tao and vu have shown that the circular law is universal : one can replace the gaussian distribution by an arbitrary distribution with mean 0 and variance 1 .similar results have been established for random symmetric matrices with entries independently chosen from a gaussian distribution ( the `` wigner law '' ) or from other distributions . those who are interested in the connections between analytic number theory and random matrix theory might wonder whether those connections are related to the present paper .the matrices in that context , however , are selected from classical matrix groups , such as the group of hermitian matrices , randomly according to the haar measures on the groups .the relationship to our results is therefore minimal .we begin with some elementary observations about matrices that will simplify our computations .the first lemma explains why the functions and are supported only on ] is bounded in absolute value by 2 .[ lemma : baby gershgorin ] we invoke gershgorin s `` circle theorem '' , a standard result in spectral theory : let be an matrix , and let denote the disk of radius around the complex number .then gershgorin s theorem says that all of the eigenvalues of must lie in the union of the disks in particular , if all of the entries of are bounded in absolute value by , then all the eigenvalues are bounded in absolute value by .the key to the precise enumeration of is the simple structure of singular integer matrices : for any singular matrix , either at least two entries of equal zero , or else there exist nonzero integers with such that moreover , this representation of is unique up to replacing each of by its negative . [ baby.enumeration ] if one of the entries of equals zero , then a second one must equal zero as well for the determinant to vanish .otherwise , given with none of the equal to zero , define , and set and , so that . since is singular , the second row of must be a multiple of the first row that is , there exists a real number such that and .since and are relatively prime , moreover , must in fact be an integer .this argument shows that every such matrix has one such representation .if is another such representation , then implies , which shows that ; the equalities , , and follow quickly . for a matrix define it is easily seen that this is the discriminant of the characteristic polynomial of .we record the following elementary facts , which will be useful in the proof of lemma [ lemma : double eigenvalue ] and proposition [ f_w proposition ] .let be a matrix with real entries .a. has repeated eigenvalues if and only if . b. has real eigenvalues if and only if . c. if and only if has two real eigenvalues of opposite sign .d. if and , then the eigenvalues of have the same sign as . 
[ discriminant ]let denote the eigenvalues of , so that , , and , each of which is real . parts ( a ) , ( b ) and ( d ) follow immediately from these observations , and part ( c ) from the fact that if are complex .the next lemma gives a bound for the probability of a matrix having repeated eigenvalues .it is natural to expect this probability to converge to 0 as increases , and indeed such a result was obtained in for matrices of arbitrary size .we give a simple proof of a stronger bound for the case , as well as an analogous qualitative statement for real matrices which will be helpful in the proof of theorem [ theorem : distribution of real eigenvalues ] . the number of matrices in with a repeated eigenvalue is for every .the probability that a random matrix in )} ]this is easily seen to be a zero - probability event , as is the event that . for matrices in , we enumerate how many can satisfy . if then there are choices for ; otherwise there are at most choices if and no choices otherwise .( here is the number - of - divisors function ; the factor of 2 comes from the fact that and can be positive or negative , while the `` at most '' is due to the fact that not all factorizations of result in two factors not exceeding . )therefore the number of matrices in with a repeated eigenvalue is at most where the inequality follows from and the well - known fact that for any ( see for instance ) .let be the mbius function , characterized by the identity the well - known dirichlet series identity is valid for ( see , for example , ( * ? ? ?* corollary 1.10 ) ) .in particular , , and we can estimate the tail of this series ( using ) to obtain the quantitative estimate for nonzero integers , and parameters , , define the function then where the implied constant is independent of and .[ initial count and mu insertion ] fix an integer , and let , so that is singular . by lemma [ baby.enumeration ] , either at least two entries of equal zero , or else has exactly two representations of the form . in the former case ,there are choices for each of the two potentially nonzero entries , hence such matrices in total ( even taking into account the several different choices of which two entries are nonzero ) . in the latter case , there are exactly two corresponding quadruples of integers as in lemma [ baby.enumeration ] . taking into accountthat each entry of must be at most in absolute value , we deduce that where is defined as above .because of the symmetries , we have the only term in the sum where is the term , and for all other terms we can invoke the additional symmetry , valid by switching the roles of and in the definition of .we obtain where the last step used the fact that . using the characteristic property of the mbius function, we can write the last expression as as claimed . let and be integers with , and let and be integers with .then where [ converting from n to cd lemma ] we have since and are positive , we can rewrite this product as where we have used and to slightly simplify the inequalities .the first factor on the right - hand side of equation is if this expression is positive , and 0 otherwise ; it is thus precisely .similarly , the second factor on the right - hand side of equation is ( note that this expression is always positive under the hypotheses of the lemma ) , which is simply . 
multiplying these two factors yields the lemma follows upon noting that both and are by definition , so that the second summand becomes simply , and the term may be subsumed into since .we have already used the trivial estimate provided .we will also use , without further comment , the estimates and these estimates ( also valid for ) follow readily from comparison to the integrals and .most of the technical work in proving theorem [ theorem : particular integer eigenvalue ] lies in establishing an estimate on a sum of the form for a fixed .the following proposition provides an asymptotic formula for this sum ; we defer the proof until the next section . assuming this proposition , though, we can complete the proof of theorem [ theorem : particular integer eigenvalue ] , as well as corollary [ corollary : all integer eigenvalues ] .let and be real numbers , and let and be the functions defined in equation .then where was defined in equation .[ key proposition ] the functions and defined in equation are homogeneous of degree in the variables and , so that lemma [ converting from n to cd lemma ] implies inserting this formula into the conclusion of lemma [ initial count and mu insertion ] yields we bound the first error term by summing over to obtain so that we have the estimate we now apply proposition [ key proposition ] to obtain where we have used equation and the fact that and are convergent ( so the partial sums are uniformly bounded ) . note that for any , if one eigenvalue is an integer then they both are ( since the trace of is an integer ) .thus if we add up the cardinalities of all of the , we get twice the cardinality of , except that matrices with repeated eigenvalues only get counted once . however , the number of such matrices is by lemma [ lemma : double eigenvalue ] .therefore the sum is a riemann sum of a function of bounded variation , so this becomes the corollary then follows from the straightforward computation of the integral , noting that .it remains to prove proposition [ key proposition ] . recallingthat the functions and defined in equation are formed by combinations of minima and maxima , we need to separate our arguments into several cases depending on the range of .the following lemma addresses a sum that occurs in two of these cases ( and ) .note that because of the presence of terms like in the formula for , we need to exercise some caution near .let and be real numbers , with . then [ lemma where lower summation limit could be 1 ] suppose first that .then the sum in question is which establishes the lemma in this case . on the other hand , if then the sum in question is we subtract from the main term and compensate in the error term to obtain since we are working with the assumption that . because the function is increasing on the interval and bounded on the interval ] , then the product is a random variable whose distribution function is for ] . for , we easily check that .thus is distributed on ] .the lemma follows upon computing .it will also be helpful to define the following functions , which are symmetric in and : to prove theorem [ theorem : distribution of real eigenvalues ] , we first consider the distribution function associated to the density . for a random matrix in )} ]is closed under negation , it is clear that , so it suffices to compute for ] .let be fixed in the range ] .clearly the eigenvalues of less than correspond to the negative ( real ) eigenvalues of . 
by lemma[ lemma : double eigenvalue ] , we are free to exclude the null set where is singular or has repeated eigenvalues . outside of this null set , has exactly one negative eigenvalue if and only if , by lemma [ discriminant](c ) . likewise by lemma [ discriminant](d ), has exactly two negative eigenvalues if and only if and and .we thus have : we may express this probability as the average value where for fixed and , ( here denotes the indicator function of the indicated relation ) . to complete the proof it suffices to show that equals the function defined in equation .the probabilities appearing in equation are effectively given by lemma [ product distribution ] .however , there is some case - checking involved in applying this lemma , since the value of , say , depends on whether , , or .we make some observations to reduce the number of cases we need to examine .note that is bounded between 0 and 1 for any ] prescribed by lemma [ product distribution ] . from the identity we see also that .thus is never lower than , and we need only consider whether ( in which case ) .we therefore have and inserting these two evaluations into the formula , we obtain it can be verified that this last expression is indeed equal to the right - hand side of the definition of . since , to finish the proof of theorem [ theorem : distribution of real eigenvalues ] it therefore suffices to prove that equals the formula given in equation .proposition [ f_w proposition ] expresses as an integral , of a function that is independent of , over the square ^ 2 $ ] .since the region varies continuously with , we can compute the derivative by an appropriate line integral around the boundary of .indeed , by the fundamental theorem of calculus , we have where we have used the symmetry to reduce the integral to just the top and bottom edges of ( where and , respectively ) . for this range of , the line intersects the bottom edge of at , while the hyperbola intersects the top edge at .thus by the definition of , equation becomes the following elementary antiderivatives , which are readily obtained by substitution and integration by parts , follow for any fixed nonzero real number from the definitions , , and of , , and : therefore in this case ( after some algebraic simplification ) , which verifies the first case of theorem [ theorem : distribution of real eigenvalues ] .( note that the integrands really are continuous , despite terms that look like , because the function is continuous at 0 ; hence evaluating the integrals by antiderivatives is valid . ) now , the line does not intersect , while the hyperbola intersects the top edge at .thus by the definition of and the antiderivative of , equation becomes which verifies the second case of theorem [ theorem : distribution of real eigenvalues ] .as before , the line does not intersect , while the hyperbola intersects the bottom edge at . thus by the definition of and the antiderivative of , equation becomes which verifies the third case of theorem [ theorem : distribution of real eigenvalues ] .one could also use the same method to extract the individual distributions of the greater and lesser eigenvalues of : for instance , eliminating the factor of 2 from equation would yield an expression for the distribution of just the lesser eigenvalue of . | random matrices arise in many mathematical contexts , and it is natural to ask about the properties that such matrices satisfy . 
if we choose a matrix with integer entries at random , for example , what is the probability that it will have a particular integer as an eigenvalue , or an integer eigenvalue at all ? if we choose a matrix with real entries at random , what is the probability that it will have a real eigenvalue in a particular interval ? the purpose of this paper is to resolve these questions , once they are made suitably precise , in the setting of matrices . |
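As a small empirical companion to the integer-eigenvalue counts discussed above (an illustrative brute-force sketch, not code from the paper), one can enumerate all 2x2 matrices with entries in {-n, ..., n} and test for an integer eigenvalue directly: since the trace is an integer, such a matrix has an integer eigenvalue exactly when the discriminant (a+d)^2 - 4(ad-bc) is a non-negative perfect square.

```python
# Brute-force fraction of 2x2 integer matrices, entries in {-n, ..., n},
# that have at least one integer eigenvalue.
import itertools
import math

def integer_eigenvalue_fraction(n):
    hits, total = 0, 0
    for a, b, c, d in itertools.product(range(-n, n + 1), repeat=4):
        disc = (a + d) ** 2 - 4 * (a * d - b * c)
        if disc >= 0 and math.isqrt(disc) ** 2 == disc:
            hits += 1
        total += 1
    return hits / total

for n in (2, 4, 8, 16):
    print(n, integer_eigenvalue_fraction(n))
```

The printed fractions shrink toward zero as n grows, in line with the qualitative message of the results above; the precise asymptotic constant is not reproduced here since the corresponding formulas did not survive extraction.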
the protein constituents of the cytoskeleton of eukaryotic cells can be divided broadly into the following three categories : ( i)_filamentous _ proteins , ( ii ) _ motor _ proteins , and ( iii)_accessory _ proteins .the three classes of filamentous proteins which form the main scaffolding of the cytoskeleton , are : ( a)_actin _ , ( b)_microtubule _ , and ( c)_intermediate filaments_. the three superfamilies of motor proteins are : ( i)_myosin _ superfamily , ( ii)_kinesin _ superfamily , and ( iii)_dynein _ superfamily .microtubules serve as tracks for kinesins and dyneins. .the main cellular mission of the molecular motor kinesin-5 is to cross - link anti - parallel microtubules and to slide them apart , thus playing a critical role during bipolar spindle formation .kinesin-5 is absolutely essential for the dynamic assembly and function of the mitotic and meiotic spindles . in this work ,i have closely followed the paper by peskin and oster .the molecular motor kinesin-5 is a four headed motor with two heads on each side .fig-[schematic ] shows a schematic representation of the model we have developed for this motor . in our model, we assume the two heads of the motor on one side to be joined at a hinge .similarly , the other two heads on the other side are also joined by another hinge .these two hinges are then linked by a flexible connector .we also assume that the two heads on each side are free to rotate about their respective hinge .the two heads can also pass each other freely .however there is one restriction on their relative displacement .the maximum separation between two heads on one side can be equal to the separation between two discrete binding sites on the microtubule .this separation , which is a constant , is denoted by .we assume that the microtubules are aligned anti - parallel because kinesin-5 has a much stronger preferences for cross - linking anti - parallel overlapped microtubules than parallel ones .the end directed stepping activity of kinesin-5 slides microtubules away from each other .fig.[pair ] shows schematically the model of one pair of heads on a microtubule .the factors included in the model which are responsible for the motor to move forward are : ( i ) the back head has a higher probability to detach than the front head .( ii ) the attached head prefers a geometry in which it leans in the forward direction i.e towards the end of the microtubule .the first of these factors ensures that the back head detaches and the second one ensures that it reattaches as a front head .when both the heads are attached to the microtubules then the back head is in a more favourable orientation than the front one .therefore , atp hydrolysis and subsequent detachment is more likely at the back head .once the back head detaches , the front head relieves itself of the strain by leaning forward and swinging the detached back head forward .hence this is the process by which the motor moves forward .the coordinates of the two heads are taken to be and .the hinge is located halfway betwween them .the coordinate of the hinge , is therefore and the restriction dictates let [ cols="<,^ , < " , ] = probability that the free head binds behind the bound head .each head is also assigned a number which signifies its state .if it is attached to the track it is assigned and if it is detached from the track it is assigned .the state corresponds to both the heads of the corresponding side completely detached from the track and that implies the termination of the walk .so we only consider 
the states , and .fig.[trans ] shows the possible transitions and their rates . in order to monitor the motion of one pair of heads on a microtubule and the velocity of this motion , it is simpler and adequate to monitor the motion of the mid - point of the heads .the average velocity with which the mid - point moves on the microtubule can be taken as the average velocity with which the microtubule slides past the motor heads .we define , coordinate of the mid - point , which is identified as follows : position of the hinge in the state position of he bound head in the state so when both the heads are attached then the mid - point is halfway between the two atpase sites .as soon as one head detaches , the mid - point shifts to the location of the bound head i.e one of the two binding sites on either side of the previous location of the mid - point .so from fig.[pair ] we can see that the midpoint jumps and this is the step size of the motion of the mid - point .a state diagram showing the possible steps of the mid - point and their rates is shown in fig.[step ] .it is assumed that the binding sites are labelled by half - integer indices and the mid - points between them are , consequently , labelled by integer indices .so when both the heads are attached , the mid - point is at an integer site and when only one head is attached , the mid - point is at a half - integer site .so using the idea of peskin and oster explanined above , the motion of a pair of heads on one side can be simplified to the motion of the mid - point of the head alone .similarly , the same thing can be done for the other pair of heads as well .so instead of monitoring the motion of four heads we can monitor the motion of the two mid - points like a two - headed motor .now these two mid - points are coupled together because they are parts of the same 4-headed kinsesin .this constraint introduces a restriction on the difference between the coordinates of the mid - points .if and are the coordinates of the mid - points then restriction is .so this ensures that if the mid - point is at a site on a microtubule then cross - linking can take place only through sites and on the other microtubule .similarly , if the mid - point on a microtubule is at a site , then the possible cross - linking sites on the other microtubule are and .the orientations allowed are shown in fig.[orientations ] .let probabilty of finding the mid - point at at time . from fig.[step ]we can write the following master equations : we define and .\end{aligned}\ ] ] similarly , \ ] ] in the steady state .so from equations ( [ m0derivative ] ) and ( [ n0derivative ] ) we get also from normalisation we have , solving equations ( [ ss1 ] ) and ( [ ss2 ] ) , we get mean position = \ ] ] \ ] ] if we define as the average speed of the mid - point then .\ ] ] using the above relation and equations ( [ diffint ] ) and ( [ diffhalfint ] ) we can write \\ & & + \sum^{\infty}_{j=-\infty } \left(j+\frac{1}{2}\right)[\beta_bp_j+\beta_fp_{j+1}- { \alpha}p_{j+\frac{1}{2}}]\end{aligned}\ ] ] using and , we get . 
plugging in the expressions for and from equations ( [ m0 ] ) and ( [ n0 ] ) , we get \label{v}\end{aligned}\ ] ] where , now we couple the motion of the two mid - points which move on their mt tracks with calculated above .the motion of a mid - point counts only if it is attached to a microtubule and cross - linked to the other microtubule .so the relative velocity due to the motion of a mid - point on a microtubule is equal to multiplied to the probability of a mid - point being found at any particular site on that microtubule , which is then multiplied by the probability of a mid - point being found at all the corresponding possible cross - linking sites , and then summed over all the sites of the first microtubule .this prescription can be mathematically expressed as , \label{velocity}\end{aligned}\ ] ] here and are the velocities of the mid - points on the first and second microtubules respectively .similarly , the superscripts on the probabilities also refer to the the microtubules .but , since there is no functional difference between the two microtubules , these indices can be dropped .then we have \label{velocity1}\ ] ] we define three new quantities , plugging these expressions in equation ( [ velocity1 ] ) , now , using equation ( [ diffhalfint ] ) \ ] ] using the definitions of equations ( [ m ] ) and ( [ n ] ) \label{nderivative}\ ] ] similarly , using equation ( [ diffint ] ) \ ] ] using the definitions of equations ( [ m ] ) and ( [ q ] ) \label{qderivative}\ ] ] in the steady state , so we have , &=&0 \label{final1}\\ 2[\alpha m + \alpha(1-p)m-(\beta_f+\beta_b)q]&=&0\label{final2}\end{aligned}\ ] ] also since the variables , and capture all the cross - linking probabilities ( fig.[orientations ] ) so we can write , solving equations ( [ final1 ] ) , ( [ final2 ] ) and ( [ final3 ] ) we get , using equations ( [ final3 ] ) and ( [ mvalue ] ) , we get the final expression for as , \left(1+\frac{2\alpha(\beta_f+\beta_b)}{2\alpha(\beta_f+\beta_b)+(\beta_f+\beta_b)^2+\alpha^2(2-p)}\right)\label{finalvelocity2}\end{aligned}\ ] ]from equation ( [ mvalue ] ) it is obvious that .therefore , which is also consistent with our intuitive expectation . since in ourmodel cross - linking is allowed through sites next to the adjacent sites on microtubules , hence the two mid - points move with respect to each other .this factor actually increases the relative velocity above .the microtubules slide with respect to the mid - points with veocity and the mid - points move with respect to each other which increases the overall relative velocity .next we consider a few special cases to establish that our results are consitent with physical intuition .if the probability of the detachment of the back head and the front head is the same and the probability of the free head binding infront and behind the attached head is also the same , then in that case , using equations ( [ sc1 ] ) and ( [ sc2 ] ) in equations ( [ v ] ) and ( [ finalvelocity2 ] ) , we get and respectively , which is expected on physical grounds . if possible orientations for cross - linking are restricted by imposing the condition that cross - linking is allowed only through adjacent sites on two microtubules , then the relative positions of the mid - points with respect to each other do not change. 
the relative velocity of sliding will simply be the sum of the velocities with which each mid - point hops on the microtubule .the velocites are added because of the anti - parallel orientations of the microtubules .mathematically , in this case and hence , using equation ( [ sc3 ] ) in equation ( [ finalvelocity ] ) we get which agrees with the above prediction .kruse et al. calculated the relative velocity of sliding between two filaments due to the action of a two headed motor . in this work the length of the filaments , which is denoted by is assumed to be finite . it is assumed that cross - linking can take place only through adjacent sites of the two microtubules .a variable is defined , which is the differnce between the coordinates of the minus ends of the microtubules .thus overlaps can take palce if . the average velocity of sliding as a function of in units of monomers per unit time is where is the probability of cross - linking .when averaged over all possible values of the final expression for average relative velocity is now , equation ( [ kruse2 ] ) can be used for our system by taking the limit of and replacing by . in that case , which is consistent with equation ( [ finalvelocity2 ] ) because in the special case cross - linking was allowed only through adjacent sites on two microtubules .computer simulations as well as analytical studies related to continuum models of motor induced sliding of microtubules and actin filaments have been reported .these studies say that two headed bipolar motors slide anti - parallel filaments with a velocity that is twice that of the free motor velocity . in our case , the motion of kinesin-5 is reduced to a motion of a two headed motor using the model of mid - points . in our case cross - linkingis allowed through adjacent sites on microtubules or sites next to adjacent sites on microtubules . in the continuumlimit the distance between two sites is negligible , so it can be said that all the cross - linking that takes place is effectively through adjacent sites only , as in the continuum limit the adjacent site and the next to adjacent site can be deemed to be the same . hence all the cross - linking can be deemed to be through adjacent sites .in such a case as we calculated in equation ( [ sc4 ] ) our realtive velocity of sliding is equal to , where is the velocity with which a mid - point slides a filament .if is taken analogous to the free motor velocity of a two headed motor , then our result in the continuum limit agrees with earlier studies done with continuum models .i sincerely thank debashish chowdhury for introducing me to molecular motors , and for useful suggestions .i also thank him for his comments on earlier versions of this manuscript . | kinesin-5 , also known as eg5 in vertebrates is a processive motor with 4 heads , which moves on filamentous tracks called microtubules . the basic function of kinesin-5 is to slide apart two anti - parallel microtubules by simultaneously walking on both the microtubules . we develop an analytical expression for the steady - state relative velocity of this sliding in terms of the rates of attachments and detachments of motor heads with the atpase sites on the microtubules . we first analyse the motion of one pair of motor heads on one microtubule and then couple it to the motion of the other pair of motor heads of the same motor on the second microtubule to get the relative velocity of sliding . |
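As an illustrative companion to the mid-point model developed above, the sketch below runs a direct stochastic simulation of a single mid-point and compares its long-run drift with a steady-state estimate. The transition rates encode one reading of the model consistent with the surviving fragments of the master equations: from an integer site (both heads bound) the mid-point hops forward by half a site at the back-head detachment rate beta_b and backward at the front-head detachment rate beta_f; from a half-integer site (one head bound) the free head rebinds at total rate alpha, in front with probability p and behind with probability 1-p. The closed-form value printed as v_theory is derived from these assumed rates and should not be read as the paper's own expression.

```python
# Stochastic simulation of the mid-point random walk (positions in units of
# the binding-site spacing, so each hop is +/- 1/2).
import random

def simulate_midpoint(alpha, beta_f, beta_b, p, steps=200_000, seed=0):
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    both_bound = True                     # integer site <-> both heads attached
    for _ in range(steps):
        if both_bound:
            rate = beta_f + beta_b
            t += rng.expovariate(rate)
            x += 0.5 if rng.random() < beta_b / rate else -0.5
        else:
            t += rng.expovariate(alpha)
            x += 0.5 if rng.random() < p else -0.5
        both_bound = not both_bound       # every hop toggles the parity of the site
    return x / t

alpha, beta_f, beta_b, p = 2.0, 0.5, 1.5, 0.8
v_sim = simulate_midpoint(alpha, beta_f, beta_b, p)
# steady-state drift implied by the assumed rates (derived here, not quoted):
v_theory = alpha * (p * (beta_f + beta_b) - beta_f) / (alpha + beta_f + beta_b)
print(v_sim, v_theory)
```

For beta_f = beta_b and p = 1/2 both numbers vanish, matching the first special case discussed above, and the simulated drift agrees with v_theory for generic rate choices.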